Talk to the Virtual Hands: Self-Animated Avatars Improve Communication in Head-Mounted Display Virtual Environments

Written by Scott Christley et al. on October 12, 2011 – 9:00 pm

by Trevor J. Dodds, Betty J. Mohler, Heinrich H. Bülthoff

Background

When we talk to one another face-to-face, body gestures accompany our speech. Motion tracking technology enables us to include body gestures in avatar-mediated communication by mapping a person's movements onto their own 3D avatar in real time, so that the avatar is self-animated. We conducted two experiments to investigate (a) whether head-mounted display virtual reality is useful for researching the influence of body gestures in communication, and (b) whether body gestures are used to help communicate the meaning of a word. Participants worked in pairs and played a communication game in which one person had to describe the meanings of words to the other.
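The real-time mapping described above can be illustrated with a minimal sketch: each frame, the rotations of the user's tracked joints are copied onto the matching joints of the avatar's skeleton, so the avatar mirrors the user's gestures. All class and joint names below are hypothetical and not taken from the study's actual system.

```python
# Minimal sketch of self-animation: tracked joint rotations are copied onto
# the avatar's skeleton every frame, so the avatar mirrors the user's gestures.
from dataclasses import dataclass, field

@dataclass
class Joint:
    name: str
    rotation: tuple = (0.0, 0.0, 0.0)  # Euler angles in degrees

@dataclass
class Skeleton:
    joints: dict = field(default_factory=dict)

    def set_rotation(self, name, rotation):
        self.joints.setdefault(name, Joint(name)).rotation = rotation

def retarget(tracked: Skeleton, avatar: Skeleton) -> None:
    """Copy each tracked joint's rotation onto the matching avatar joint."""
    for name, joint in tracked.joints.items():
        avatar.set_rotation(name, joint.rotation)

# One "frame" of self-animation: the user raises a hand, the avatar follows.
user = Skeleton()
user.set_rotation("right_wrist", (0.0, 0.0, 90.0))
avatar = Skeleton()
retarget(user, avatar)
print(avatar.joints["right_wrist"].rotation)  # (0.0, 0.0, 90.0)
```

In a real system the tracked rotations would come from a motion capture device and the copy would run once per rendered frame; the static-avatar conditions in the experiments correspond to simply not running this retargeting step.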

Principal Findings

In experiment 1, participants used significantly more hand gestures and successfully described significantly more words when nonverbal communication was available to both participants (i.e., both the describing and the guessing avatars were self-animated, rather than held in a static neutral pose). Participants ‘passed’ (gave up describing) significantly more words when they were talking to a static avatar, with no nonverbal feedback available. In experiment 2, participants' performance was significantly worse when they were talking to an avatar with a prerecorded listening animation than to an avatar animated by their partner's real movements. In both experiments, participants used significantly more hand gestures when they played the game in the real world.

Conclusions

Taken together, the studies show (a) how virtual reality can be used to systematically study the influence of body gestures; (b) that it is important for nonverbal communication to be bidirectional (real nonverbal feedback in addition to nonverbal communication from the describing participant); and (c) that participants use different amounts of body gestures with and without the head-mounted display. We discuss possible explanations for this difference and ideas for future investigation.


Posted in Computer Science
