Context aware human-robot and human-agent interaction [electronic resource] /
Record Type:
Bibliographic - Language material, printed : Monograph/item
Dewey Class. No.:
629.892
Title/Author:
Context aware human-robot and human-agent interaction / edited by Nadia Magnenat-Thalmann ... [et al.].
Added Author:
Magnenat-Thalmann, Nadia.
Publisher:
Cham : Springer International Publishing, 2016.
Description:
xiii, 298 p. : ill., digital ; 24 cm.
Contained By:
Springer eBooks
Subject:
Robotics.
Subject:
Human-robot interaction.
Subject:
Artificial intelligence.
Subject:
Computer Science.
Subject:
User Interfaces and Human Computer Interaction.
Subject:
Computer Imaging, Vision, Pattern Recognition and Graphics.
Subject:
Artificial Intelligence (incl. Robotics)
ISBN:
9783319199474
ISBN:
9783319199467
Contents Note:
Preface -- Introduction -- Part I User Understanding through Multisensory Perception -- Face and Facial Expressions Recognition and Analysis -- Body Movement Analysis and Recognition -- Sound Source Localization and Tracking -- Modelling Conversation -- Part II Facial and Body Modelling Animation -- Personalized Body Modelling -- Parameterized Facial modelling and Animation -- Motion Based Learning -- Responsive Motion Generation -- Shared Object Manipulation -- Part III Modelling Human Behaviours -- Modelling Personality, Mood and Emotions -- Motion Control for Social Behaviours -- Multiple Virtual Humans Interactions -- Multi-Modal and Multi-Party Social Interactions.
Summary:
This is the first book to describe how Autonomous Virtual Humans and Social Robots can interact with real people, be aware of the environment around them, and react to various situations. Researchers from around the world present the main techniques for tracking and analysing humans and their behaviour, and contemplate the potential for these virtual humans and robots to replace or stand in for their human counterparts, tackling areas such as awareness and reactions to real-world stimuli and using the same modalities as humans do: verbal and body gestures, facial expressions and gaze to aid seamless human-computer interaction (HCI). The research presented in this volume is split into three sections: User Understanding through Multisensory Perception: deals with the analysis and recognition of a given situation or stimuli, addressing issues of facial recognition, body gestures and sound localization. Facial and Body Modelling Animation: presents the methods used in modelling and animating faces and bodies to generate realistic motion. Modelling Human Behaviours: presents the behavioural aspects of virtual humans and social robots when interacting and reacting to real humans and each other. Context Aware Human-Robot and Human-Agent Interaction would be of great use to students, academics and industry specialists in areas like Robotics, HCI, and Computer Graphics.
Electronic Resource:
http://dx.doi.org/10.1007/978-3-319-19947-4
Context aware human-robot and human-agent interaction [electronic resource] / edited by Nadia Magnenat-Thalmann ... [et al.]. - Cham : Springer International Publishing, 2016. - xiii, 298 p. : ill., digital ; 24 cm. - (Human-computer interaction series, 1571-5035).
ISBN: 9783319199474
Standard No.: 10.1007/978-3-319-19947-4 (doi)
Subjects--Topical Terms: Robotics.
LC Class. No.: TJ211
Dewey Class. No.: 629.892
LDR  03073nam a2200325 a 4500
001  454665
003  DE-He213
005  20160725143558.0
006  m d
007  cr nn 008maaau
008  161227s2016 gw s 0 eng d
020  $a 9783319199474 $q (electronic bk.)
020  $a 9783319199467 $q (paper)
024  7  $a 10.1007/978-3-319-19947-4 $2 doi
035  $a 978-3-319-19947-4
040  $a GP $c GP
041  0  $a eng
050  4  $a TJ211
072  7  $a UYZG $2 bicssc
072  7  $a COM070000 $2 bisacsh
082  04 $a 629.892 $2 23
090  $a TJ211 $b .C761 2016
245  00 $a Context aware human-robot and human-agent interaction $h [electronic resource] / $c edited by Nadia Magnenat-Thalmann ... [et al.].
260  $a Cham : $b Springer International Publishing : $b Imprint: Springer, $c 2016.
300  $a xiii, 298 p. : $b ill., digital ; $c 24 cm.
490  1  $a Human-computer interaction series, $x 1571-5035
505  0  $a Preface -- Introduction -- Part I User Understanding through Multisensory Perception -- Face and Facial Expressions Recognition and Analysis -- Body Movement Analysis and Recognition -- Sound Source Localization and Tracking -- Modelling Conversation -- Part II Facial and Body Modelling Animation -- Personalized Body Modelling -- Parameterized Facial modelling and Animation -- Motion Based Learning -- Responsive Motion Generation -- Shared Object Manipulation -- Part III Modelling Human Behaviours -- Modelling Personality, Mood and Emotions -- Motion Control for Social Behaviours -- Multiple Virtual Humans Interactions -- Multi-Modal and Multi-Party Social Interactions.
520  $a This is the first book to describe how Autonomous Virtual Humans and Social Robots can interact with real people, be aware of the environment around them, and react to various situations. Researchers from around the world present the main techniques for tracking and analysing humans and their behaviour, and contemplate the potential for these virtual humans and robots to replace or stand in for their human counterparts, tackling areas such as awareness and reactions to real-world stimuli and using the same modalities as humans do: verbal and body gestures, facial expressions and gaze to aid seamless human-computer interaction (HCI). The research presented in this volume is split into three sections: User Understanding through Multisensory Perception: deals with the analysis and recognition of a given situation or stimuli, addressing issues of facial recognition, body gestures and sound localization. Facial and Body Modelling Animation: presents the methods used in modelling and animating faces and bodies to generate realistic motion. Modelling Human Behaviours: presents the behavioural aspects of virtual humans and social robots when interacting and reacting to real humans and each other. Context Aware Human-Robot and Human-Agent Interaction would be of great use to students, academics and industry specialists in areas like Robotics, HCI, and Computer Graphics.
650  0  $a Robotics. $3 175953
650  0  $a Human-robot interaction. $3 492121
650  0  $a Artificial intelligence. $3 172060
650  14 $a Computer Science. $3 423143
650  24 $a User Interfaces and Human Computer Interaction. $3 464000
650  24 $a Computer Imaging, Vision, Pattern Recognition and Graphics. $3 465964
650  24 $a Artificial Intelligence (incl. Robotics) $3 463642
700  1  $a Magnenat-Thalmann, Nadia. $3 613963
710  2  $a SpringerLink (Online service) $3 463450
773  0  $t Springer eBooks
830  0  $a Human-computer interaction series. $3 467563
856  40 $u http://dx.doi.org/10.1007/978-3-319-19947-4
950  $a Computer Science (Springer-11645)
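The MARC record above is a list of fields, each with a three-digit tag, up to two indicator characters, and `$`-prefixed subfields. A minimal sketch of how such a record can be modelled and queried in code (the tuple layout and `subfield_values` helper below are illustrative choices, not a real MARC library API; the sample data is taken from this record):

```python
# Represent a few MARC fields from the record above as
# (tag, indicators, subfields) tuples; subfields are (code, value) pairs.
fields = [
    ("020", "  ", [("a", "9783319199474"), ("q", "(electronic bk.)")]),
    ("020", "  ", [("a", "9783319199467"), ("q", "(paper)")]),
    ("245", "00", [("a", "Context aware human-robot and human-agent interaction"),
                   ("h", "[electronic resource] /"),
                   ("c", "edited by Nadia Magnenat-Thalmann ... [et al.].")]),
    ("856", "40", [("u", "http://dx.doi.org/10.1007/978-3-319-19947-4")]),
]

def subfield_values(record, tag, code):
    """Collect every value of subfield `code` across all repeats of `tag`."""
    return [value
            for t, _ind, subs in record if t == tag
            for c, value in subs if c == code]

# 020 is repeatable: one field per ISBN (electronic and paper here).
print(subfield_values(fields, "020", "a"))  # ['9783319199474', '9783319199467']
```

Because tags like 020 and 650 are repeatable in MARC 21, the helper returns a list rather than a single value; a lookup for a non-repeatable tag such as 245 simply yields a one-element list.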