Wright, Alexa and Linney, Alf and Evans, Alun and Lincoln, Mike (2007) Conversation piece: a speech-based interactive art installation. [Show/Exhibition]
A speech-based interactive art installation creating an intelligent room that can hold conversations with its occupants. Wright's project poses the fundamental question: is meaningful social interaction with a machine possible? An art/science collaboration with UCL and the University of Edinburgh.

Conversation Piece is a speech-based interactive art installation (approx. 10m x 12m) that enables individual audience members to apparently converse with the gallery space itself (see portfolio/URL for details). The work built on Wright's previous art/science research to explore, through artistic practice, issues of interactivity and response in spoken language. The installation created a transparent interface between the virtual and the 'real' using a synthesised spoken voice and concealed microphone arrays. Conversation Piece follows in the tradition of chatbots such as ELIZA and Jabberwacky, although by using a spoken voice and intelligently constructed conversations it significantly improves the user's sense of identification and communication with the machine.

The installation is broadly accessible and aims to raise questions about human/machine interaction of equal interest to scientists, technologists and art audiences. The work was tested on audiences at several key stages of development, both to inform further development of the technology and the artificial intelligence, and to gauge audience response. It was publicly exhibited (alongside nine other artists) at Augsburg, a key venue for this field, where Wright and Lincoln also presented their research findings. Other presentations include Toronto (May 2007) and UCL (April and November 2007).

Conversation Piece represents a major body of interdisciplinary research in computer-generated speech and language recognition, carried out in collaboration with UCL and the Department of Speech and Language at the University of Edinburgh (a world leader in speech technologies).
The project was enabled through an AHRC Arts Science Fellowship and a Wellcome Trust Sci-Art production award. It is the first collaboration on this scale between an artist, a scientist and a cutting-edge computer technologist, and it formed an AHRC case study of good practice in art-science collaborations.
Additional Information: Also shown at the 4th Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms, Brno, Czech Republic (28th-30th June 2007).
Uncontrolled Keywords: interdisciplinary, human-computer interaction, speech, art-science, transparent interface
Subjects: University of Westminster > Media, Arts and Design
Depositing User: Miss Nina Watts
Date Deposited: 07 May 2008 13:51
Last Modified: 20 Aug 2008 13:11