Title
Learning as Performance: Autoencoding and Generating Dance Movements in Real Time
Abstract
We describe the technology behind a performance where human dancers interact with an "artificial" performer projected on a screen. The system learns movement patterns from the human dancers in real time. It can also generate novel movement sequences that go beyond what it has been taught, thereby serving as a source of inspiration for the human dancers, challenging their habits and habitual boundaries and enabling a mutual exchange of movement ideas. Central to the performance concept is that the system's learning process is perceptible to the audience. To this end, an autoencoder neural network is trained in real time on motion data captured live on stage. As training proceeds, a "pose map" emerges that the system explores in a kind of improvisational state. We show how this method is applied in the performance and share observations and lessons learned in the process.
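To make the approach concrete, the sketch below shows one possible shape such a system could take: a small autoencoder that is updated incrementally as poses arrive from motion capture, with a low-dimensional latent space playing the role of the "pose map" that can later be sampled to generate novel poses. The pose dimensionality, network sizes, and function names are illustrative assumptions, not the implementation used in the performance.

import torch
import torch.nn as nn

POSE_DIM = 45    # hypothetical: e.g. 15 joints x 3D coordinates
LATENT_DIM = 2   # a 2D latent space makes the "pose map" easy to visualize

class PoseAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(POSE_DIM, 64), nn.Tanh(),
            nn.Linear(64, LATENT_DIM))
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.Tanh(),
            nn.Linear(64, POSE_DIM))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = PoseAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(pose):
    # One incremental update on a single pose vector captured live on stage.
    x = torch.as_tensor(pose, dtype=torch.float32).unsqueeze(0)
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)
    loss.backward()
    optimizer.step()
    return loss.item()

def improvise(z):
    # Decode a point in the latent "pose map" into a pose, allowing the
    # system to explore regions beyond the poses it was trained on.
    with torch.no_grad():
        return model.decoder(torch.as_tensor(z, dtype=torch.float32)).numpy()

Because training happens one pose at a time, the reconstructions (and thus the projected performer's movements) visibly improve over the course of the performance, which is what makes the learning process perceptible to the audience.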
Bio
Alexander Berman is an artist and software developer based in Gothenburg, working at the intersection of art, technology and research.