When
Thematic Session 2: Software and Synthesis (Monday, 15:40)
Abstract
From vocal deepfakes used to commit crimes to artificial voice actors, data-driven generative machine learning is challenging the embodied situation of vocal identity. As part of my ongoing doctoral research into the uncanny embodiment of generative AI, I am developing musical and theatrical performances that use vocal doppelgangers and voice transformations, and addressing dataset creation as an artistic practice.
My recent work on this theme includes a year-long generative radio broadcast featuring an artificial BroadCaster: a bespoke synthesis instrument that combines fine-tuned GPT, TTS, and RAVE models to create an ongoing vocal performance, one that morphs in real time between human and non-human speech and whose models are continuously fine-tuned throughout the year with audience contributions.
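As a purely illustrative aside, the sketch below shows one way such a GPT → TTS → RAVE chain could be wired in Python. It is not the BroadCaster's actual implementation: the checkpoints (gpt2, a Coqui TTS voice), the sample rate, and the RAVE model path are assumptions, and the RAVE model is assumed to be a TorchScript export exposing encode()/decode(), as produced by the RAVE repository's export tooling.

```python
# Minimal sketch, assuming off-the-shelf models: text generation -> speech
# synthesis -> timbre transformation through a RAVE latent space.
import torch
import torchaudio
from transformers import pipeline   # GPT-style text generation
from TTS.api import TTS             # Coqui TTS for text-to-speech

# 1. Generate a line of "broadcast" text with a (possibly fine-tuned) language model.
generator = pipeline("text-generation", model="gpt2")  # placeholder checkpoint
script = generator("Good evening, listeners.", max_new_tokens=40)[0]["generated_text"]

# 2. Render the text to speech with a placeholder voice model.
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")
wav = torch.tensor(tts.tts(text=script)).unsqueeze(0)   # shape: (1, samples)

# 3. Resample and pass the speech through a RAVE model's latent space,
#    nudging the latents so the voice drifts away from the human original.
RAVE_SR = 48000                                          # assumed model rate
wav = torchaudio.functional.resample(wav, tts.synthesizer.output_sample_rate, RAVE_SR)
rave = torch.jit.load("rave_voice_model.ts")             # hypothetical export
with torch.no_grad():
    z = rave.encode(wav.unsqueeze(0))    # (batch, 1, samples) -> latent codes
    z = z + 0.5 * torch.randn_like(z)    # perturb toward non-human timbres
    out = rave.decode(z).squeeze(0)      # latent codes -> audio

torchaudio.save("broadcast_fragment.wav", out, RAVE_SR)
```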
Currently I am building an extensive annotated vocal dataset from my own voice, with the goal of creating a controllable, extended, and expressive voice synthesis model that builds on current TTS methods. I would like to present both of these projects as works in progress, and to use the opportunity to discuss the conceptual scaffold grounding the research, gain insight from others, and test out my own emergent concepts.
Bio
Jonathan Chaim Reus (b. US) is a transmedia musician known for his use of expanded electronic instrumentation in live and theatrical contexts. He is a co-founder of the instrument inventors initiative [iii] in The Hague, Netherlands, and of Netherlands Coding Live [nl_cl], and a recipient of a Fulbright Fellowship for his work on electronic instruments at the former Studio for Electro-Instrumental Music [STEIM]. He is an affiliate of the Intelligent Instruments Lab (Reykjavik) and a PhD candidate in the interdisciplinary Sensation and Perception to Awareness doctoral programme at the University of Sussex.