Abstract
Dance can express a wide range of emotions. As virtual agents and robots increasingly become part of our daily lives, the need for them to convey emotion and intent effectively grows. When trained to dance, to what extent can AI learn to model the tacit mappings between sound and motion? In this talk we explore the creative capacity of a generative model trained on 3D motion capture recordings of improvised dance.
Bio
Wallace is a PhD fellow at the RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion at the University of Oslo.
She has a background in music production and an MSc in Informatics, and her academic interests lie at the intersection of art and science.
Her current research centres on sound-motion mappings using 3D motion capture, generative machine learning, and the use of AI as a tool for pursuing and understanding creativity.