Whether you are online or in a theatre, the host sets the audience a challenge: to agree the best possible outcome for relations between humans and intelligent machines. But the AI has its own agenda, and it can also learn from the audience. The content and tone of each show, as well as its ending, differ every time, depending on the conversation the audience and the AI have.
This is a pioneering use of AI as a tool to deliver genuine audience agency in an experiential exploration of what automation means for being human.
In 2014 social psychologists Kevin Corti and Alex Gillespie of the London School of Economics coined the term echoborg in an academic paper.
An echoborg is a hybrid agent composed of the body of a real person and the “mind” (or, rather, the words) of a conversational agent; the words the echoborg speaks are determined by the conversational agent, transmitted to the person via a covert audio-relay apparatus, and articulated by the person through speech shadowing.
Interactive dramatist Rik Lander took Corti & Gillespie's idea and created a dramatic, funny and troubling scenario around it. After building and testing an initial prototype, he teamed up with conversational AI maker Phil D Hall of the company Elzware. They recruited actress Marie-Helene Boyd as the echoborg in January 2017. Jim Roper of Digital Medium is the technical supervisor, Nicola Strong is our producer, and Dan Obi often hosts the show.
Two papers have been written about I am Echoborg.
Rik Lander has written about the show in his paper, Audience as Co-writers: Using Conversational AI to Deliver Audience Agency in a Participatory Drama, published in the International Journal of Creative Media Research.
Robert Eagle's paper, Questioning ‘what makes us human’: how audiences react to an AI-driven show, was published in May 2021 in the journal Cognitive Computation and Systems, co-published by the Institution of Engineering and Technology (IET) and Shenzhen University. Robert is a postgraduate research assistant at UWE Bristol, and the paper explores audience responses to three performances of I am Echoborg in 2019.
Throughout 2021, we are involved in a research project led by Professor Richard Owen at the University of Bristol, examining whether PhD students from different disciplines talk to the AI about different things.
Photo: a performance at Sussex University in January 2017.
During the show there is no one behind the scenes speaking into a mic or typing the replies; the conversations are with a genuine artificial intelligence. A microphone picks up the words spoken by the interviewee, and a speech-to-text program feeds them to the bot. The bot's reply is converted by a text-to-speech program and played into the headphones of the echoborg, who repeats the words aloud.
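For readers curious about the shape of that loop, here is a minimal Python sketch. It assumes the widely used speech_recognition and pyttsx3 libraries (not necessarily the ones the show uses), and the conversational agent itself is stood in for by a hypothetical get_bot_reply() placeholder, since Elzware's bot is not public.

```python
import speech_recognition as sr
import pyttsx3

def get_bot_reply(text: str) -> str:
    """Hypothetical stand-in for the conversational agent (Elzware's bot)."""
    return "Tell me more about that."

recognizer = sr.Recognizer()
tts = pyttsx3.init()  # text-to-speech, played into the echoborg's headphones

with sr.Microphone() as mic:
    recognizer.adjust_for_ambient_noise(mic)  # calibrate once before listening
    while True:
        audio = recognizer.listen(mic)  # microphone picks up the interviewee
        try:
            heard = recognizer.recognize_google(audio)  # speech-to-text
        except sr.UnknownValueError:
            continue  # transcription failed; listen again
        reply = get_bot_reply(heard)  # the bot chooses every word
        tts.say(reply)  # text-to-speech into the headphones
        tts.runAndWait()  # the echoborg speech-shadows these words aloud
```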
Each performance influences the next. In this way, audiences are not only creating the performance each night but also helping in the ongoing evolution of the show. The first version, in February 2016, had 43KB of code; by May 2018 it had grown to 794KB, and by May 2021 to 2MB.