000 04039nam a22005535i 4500
001 978-3-319-19947-4
003 DE-He213
005 20200421112549.0
007 cr nn 008mamaa
008 150925s2016 gw | s |||| 0|eng d
020 _a9783319199474
_9978-3-319-19947-4
024 7 _a10.1007/978-3-319-19947-4
_2doi
050 4 _aQA76.9.U83
050 4 _aQA76.9.H85
072 7 _aUYZG
_2bicssc
072 7 _aCOM070000
_2bisacsh
082 0 4 _a005.437
_223
082 0 4 _a004.019
_223
245 1 0 _aContext Aware Human-Robot and Human-Agent Interaction
_h[electronic resource] /
_cedited by Nadia Magnenat-Thalmann, Junsong Yuan, Daniel Thalmann, Bum-Jae You.
264 1 _aCham :
_bSpringer International Publishing :
_bImprint: Springer,
_c2016.
300 _aXIII, 298 p. 143 illus.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aHuman-Computer Interaction Series,
_x1571-5035
505 0 _aPreface -- Introduction -- Part I User Understanding through Multisensory Perception -- Face and Facial Expressions Recognition and Analysis -- Body Movement Analysis and Recognition -- Sound Source Localization and Tracking -- Modelling Conversation -- Part II Facial and Body Modelling Animation -- Personalized Body Modelling -- Parameterized Facial Modelling and Animation -- Motion Based Learning -- Responsive Motion Generation -- Shared Object Manipulation -- Part III Modelling Human Behaviours -- Modelling Personality, Mood and Emotions -- Motion Control for Social Behaviours -- Multiple Virtual Humans Interactions -- Multi-Modal and Multi-Party Social Interactions.
520 _aThis is the first book to describe how Autonomous Virtual Humans and Social Robots can interact with real people, be aware of the environment around them, and react to various situations. Researchers from around the world present the main techniques for tracking and analysing humans and their behaviour, and contemplate the potential for these virtual humans and robots to replace or stand in for their human counterparts, tackling areas such as awareness of and reaction to real-world stimuli, and using the same modalities as humans do: verbal and body gestures, facial expressions and gaze, to aid seamless human-computer interaction (HCI). The research presented in this volume is split into three sections: • User Understanding through Multisensory Perception: deals with the analysis and recognition of a given situation or stimuli, addressing issues of facial recognition, body gestures and sound localization. • Facial and Body Modelling Animation: presents the methods used in modelling and animating faces and bodies to generate realistic motion. • Modelling Human Behaviours: presents the behavioural aspects of virtual humans and social robots when interacting and reacting to real humans and each other. Context Aware Human-Robot and Human-Agent Interaction would be of great use to students, academics and industry specialists in areas such as Robotics, HCI and Computer Graphics.
650 0 _aComputer science.
650 0 _aUser interfaces (Computer systems).
650 0 _aArtificial intelligence.
650 0 _aComputer graphics.
650 1 4 _aComputer Science.
650 2 4 _aUser Interfaces and Human Computer Interaction.
650 2 4 _aComputer Imaging, Vision, Pattern Recognition and Graphics.
650 2 4 _aArtificial Intelligence (incl. Robotics).
700 1 _aMagnenat-Thalmann, Nadia.
_eeditor.
700 1 _aYuan, Junsong.
_eeditor.
700 1 _aThalmann, Daniel.
_eeditor.
700 1 _aYou, Bum-Jae.
_eeditor.
710 2 _aSpringerLink (Online service)
773 0 _tSpringer eBooks
776 0 8 _iPrinted edition:
_z9783319199467
830 0 _aHuman-Computer Interaction Series,
_x1571-5035
856 4 0 _uhttp://dx.doi.org/10.1007/978-3-319-19947-4
912 _aZDB-2-SCS
942 _cEBK
999 _c58771
_d58771