WestminsterResearch

Real time multimodal interaction with animated virtual human

Jin, Li and Wen, Zhigang (2006) Real time multimodal interaction with animated virtual human. In: Banissi, Ebad and Burkhard, Remo Aslak and Ursyn, Anna and Zhang, Jian J. and Bannatyne, Mark and Maple, Carsten and Cowell, Andrew J. and Tian, Gui Yun and Hou, Ming, (eds.) Proceedings of the Information Visualization (IV'06). IEEE, Los Alamitos, USA, pp. 557-562. ISBN 0769526020


Official URL: http://dx.doi.org/10.1109/IV.2006.88

Abstract

This paper describes the design and implementation of a real-time animation framework in which an animated virtual human is capable of multimodal interaction with a human user. The animation system consists of several functional components, namely perception, behaviour generation, and motion generation. The virtual human agent in the system has a complex underlying geometric structure with multiple degrees of freedom (DOFs). It relies on a virtual perception system to capture information from its environment and responds to the human user's commands with a combination of non-verbal behaviours, including co-verbal gestures, posture, body motions, and simple utterances. A language processing module is incorporated to interpret the user's commands. In particular, an efficient motion generation method has been developed that combines motion-captured data with parameterized actions generated in real time, producing variations in the agent's behaviour that depend on its momentary emotional state.
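
The abstract's blending idea can be illustrated with a minimal sketch: per-joint angles from captured motion are interpolated with procedurally generated angles, with the blend weight driven by an emotional-state parameter. The names below (Pose, blend_pose, emotion_weight, arousal) are hypothetical illustrations, not the paper's API; a production system would typically interpolate joint rotations as quaternions rather than raw angles.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Joint angles (radians) per degree of freedom, keyed by joint name."""
    angles: dict

def blend_pose(captured: Pose, parameterized: Pose, w: float) -> Pose:
    """Linearly interpolate per joint; w=0 -> pure mocap, w=1 -> pure procedural."""
    w = max(0.0, min(1.0, w))
    blended = {
        joint: (1.0 - w) * captured.angles[joint]
               + w * parameterized.angles.get(joint, captured.angles[joint])
        for joint in captured.angles
    }
    return Pose(blended)

def emotion_weight(arousal: float) -> float:
    """Map a momentary arousal level in [0, 1] to a blend weight (assumed mapping)."""
    return 0.5 * arousal  # higher arousal -> larger procedural variation

# Example: a calm agent (low arousal) stays close to the captured motion.
mocap = Pose({"elbow": 0.8, "shoulder": 0.3})
procedural = Pose({"elbow": 1.2, "shoulder": 0.6})
print(blend_pose(mocap, procedural, emotion_weight(arousal=0.2)).angles)
```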

Item Type: Book Section
Research Community: University of Westminster > Electronics and Computer Science, School of
ID Code: 3611
Deposited On: 01 Mar 2007
Last Modified: 11 Aug 2010 15:31
