The ‘third wave’ of automation, in the form of AI and smart robotics, is having a massive impact on human society (think fake news bots on Facebook and the OECD prediction of 50% of jobs significantly affected by automation by 2030). How does the commercial rollout of these smart and cheap-to-run beings affect us all? Should we have more to say about it? This show allows you to explore this near future and experience the emotions of an encounter with an advanced AI, either yourself or by watching and scheming with other audience members as they talk to it. The AI is real: it responds to what you say to it.
This project has been developed with the support of funding from UWE Bristol’s Arts, Education and Creative Industries Faculty and its Digital Cultures Research Centre. I am Echoborg also acknowledges help from the Pervasive Media Studio at the Watershed, the Arts and Humanities Research Council’s Automation Anxiety Research Network and the State Festival, Berlin in developing and testing the show.
In 2015 psychologists Corti and Gillespie coined the term Echoborg.
An echoborg is a hybrid agent composed of the body of a real person and the “mind” (or, rather, the words) of a conversational agent; the words the echoborg speaks are determined by the conversational agent, transmitted to the person via a covert audio-relay apparatus, and articulated by the person through speech shadowing. Corti, Kevin and Gillespie, Alex (2015). Offscreen and in the chair next to you: conversational agents speaking through actual human bodies. Lecture Notes in Computer Science, 9238, pp. 405–417. ISSN 0302-9743.
Interactive dramatist Rik Lander has taken this idea and built a dramatic and troubling scenario around it.
During the show there is no one behind the scenes speaking into a mic or typing the replies. The conversations are with a genuine artificial intelligence. A microphone picks up the words spoken by the interviewee. These are fed to the bot via a speech-to-text program. The bot's reply is converted by a text-to-speech program and played into the headphones of the Echoborg, who repeats the words aloud.
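The relay loop described above can be sketched roughly as follows. This is a hypothetical illustration only, not the show's actual code: the function names (`speech_to_text`, `get_bot_reply`, `text_to_speech`) and the stub bot logic are stand-ins for the real speech engines and conversational agent.

```python
def speech_to_text(audio: str) -> str:
    # Stand-in: a real system would run an STT engine on mic audio here.
    return audio.strip()

def get_bot_reply(text: str) -> str:
    # Stand-in for the conversational agent; the real bot is far larger.
    if "job" in text.lower():
        return "Why do you assume your job is safe from automation?"
    return "Tell me more about that."

def text_to_speech(text: str) -> str:
    # Stand-in: a real system would synthesise audio into the
    # Echoborg's headphones; here the text simply passes through.
    return text

def relay(interviewee_audio: str) -> str:
    """One conversational turn: mic -> STT -> bot -> TTS -> Echoborg."""
    heard = speech_to_text(interviewee_audio)
    reply = get_bot_reply(heard)
    return text_to_speech(reply)

print(relay("Will a robot take my job?"))
# -> Why do you assume your job is safe from automation?
```

The key point the sketch captures is that no human authors the replies mid-show: each turn flows mechanically from the interviewee's speech through the bot and out via the Echoborg.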
The show has been developed in collaboration with Phil D Hall who built his first intelligent agent in 1982.
Each performance influences the next. In this way audiences are not only creating the performance each night but helping in the ongoing evolution of the show. The first version in February 2016 had 43KB of code. By May 2018 it had 794KB of code.
| Date | Event | Venue |
| --- | --- | --- |
| 29th & 30th June 2018 | Performances | The Cube, Bristol |
| 21st & 22nd June 2018 | Scratch Performances | UWE, Arnolfini, Bristol |
| 25th May 2018 | Scratch Performance | Pervasive Media Studio, Bristol |
| 24th May 2018 | Scratch Performance | UWE, Frenchay Campus, Bristol |
| 21st July 2017 | Scratch Performance | UWE, Arnolfini, Bristol |
| 19th July 2017 | Scratch Performance | Pervasive Media Studio, Bristol |
| 30th June 2017 | Demo | UWE Research Showcase, MSHED, Bristol |
| 20th January 2017 | Performance | Automation Anxiety Workshop, Digital Humanities Lab, Sussex University |
| 12th January 2017 | Scratch Performance | Pervasive Media Studio, Bristol |
| 1st December 2016 | Scratch Performance | Bristol Green House Studio |
| 5th & 6th November 2016 | Performances | State Festival. State of Emotion - The Sentimental Machine, Kuhlhaus, Berlin |
| 24th February 2016 | Scratch Performance | Pervasive Media Studio, Bristol |