Real time multimodal interaction with animated virtual human

Jin, Li and Wen, Zhigang (2006) Real time multimodal interaction with animated virtual human. In: Proceedings of the Information Visualization (IV'06). IEEE, Los Alamitos, USA, pp. 557-562. ISBN 0769526020

This paper describes the design and implementation of a real-time animation framework in which an animated virtual human is capable of performing multimodal interactions with a human user. The animation system consists of several functional components, namely perception, behaviour generation, and motion generation. The virtual human agent in the system has a complex underlying geometric structure with multiple degrees of freedom (DOFs). It relies on a virtual perception system to capture information from its environment, and it responds to the human user's commands with a combination of non-verbal behaviours including co-verbal gestures, posture, body motions and simple utterances. A language processing module is incorporated to interpret the user's commands. In particular, an efficient motion generation method has been developed that combines motion-captured data with parameterized actions generated in real time, producing variations in the agent's behaviour depending on its momentary emotional state.
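The abstract's motion generation idea, combining captured motion with parameterized actions modulated by emotional state, can be illustrated with a minimal sketch. The function and data below are hypothetical (the paper does not publish its blending formula); this assumes poses are joint-angle vectors and a linear blend weighted by an emotion intensity in [0, 1].

```python
def blend_pose(mocap_pose, param_pose, emotion_intensity):
    """Hypothetical sketch: linearly blend a motion-captured pose with a
    parameterized pose. Each pose is a list of joint angles (radians);
    emotion_intensity in [0, 1] controls how far the agent's momentary
    emotional state pulls the motion away from the captured data toward
    the procedurally generated action.
    """
    w = max(0.0, min(1.0, emotion_intensity))  # clamp the blend weight
    return [(1.0 - w) * m + w * p for m, p in zip(mocap_pose, param_pose)]

# Example: a neutral captured pose nudged halfway toward an "excited"
# parameterized pose (illustrative joint angles only).
mocap = [0.0, 0.5, 1.0]
param = [0.2, 0.9, 1.4]
blended = blend_pose(mocap, param, 0.5)
```

Evaluating such a blend per frame keeps the method cheap enough for real-time use, which is consistent with the efficiency claim in the abstract.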

Item Type: Book Section
Subjects: University of Westminster > Science and Technology > Electronics and Computer Science, School of (No longer in use)
Depositing User: Miss Nina Watts
Date Deposited: 01 Mar 2007
Last Modified: 11 Aug 2010 14:31