When
Thematic Session 4: Creativity and Expressivity (Tuesday, 11:05)
Abstract
Controlling a musical instrument is a one-way operation that translates human gestures into musical sound. But is it really? Performing on most musical instruments involves haptic, visual, and aural feedback between (at least) a musician, their instrument, and a sonic result.
Embodied generative AI, where gestural information serves as both a data source and a generative target, provides opportunities to enhance, disrupt, or otherwise play with these relationships. Such interventions could result in intelligent musical instruments that open up new possibilities and challenges for music and sound practitioners.
This talk considers the potential of embodied musical AI to manipulate the feedback networks within musical systems. In particular, generative AI has the capacity to produce alternative or continued musical gestures that can be translated into novel physical, visual, or sonic feedback. Examples are drawn from recently developed intelligent musical instruments, such as those built with the Interactive Musical Prediction System (IMPS) and other generative AI systems used in real-time performance. The designs of these systems will be characterised in terms of their interventions in the feedback networks present in existing musical practice. The results of these interventions in music making will frame suggestions for future embodied musical AI designs.
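As a rough sketch of the kind of interaction loop described above (not IMPS itself), the Python below shows a minimal sense-predict-respond cycle: incoming gestures are buffered, and when the performer pauses, a model continues the gesture and routes the prediction back as feedback. The names `run_loop`, `read_gesture`, `send_feedback`, and `ToyPredictor` are hypothetical; a system like IMPS would replace the toy extrapolation with a trained sequence model and connect the callables to a real instrument (for instance over OSC).

```python
import time
from collections import deque


class ToyPredictor:
    """Hypothetical stand-in for a learned gesture model (e.g. a mixture
    density RNN). Proposes the next (time, value) sample by naive linear
    extrapolation of the two most recent samples."""

    def predict_next(self, history):
        (t0, v0), (t1, v1) = history[-2], history[-1]
        dt = max(t1 - t0, 1e-3)
        # Continue the most recent trend one step into the future.
        return (t1 + dt, v1 + (v1 - v0))


def run_loop(read_gesture, send_feedback, pause_threshold=0.2):
    """Minimal sense-predict-respond cycle.

    read_gesture() returns a (timestamp, value) sample, or None while the
    performer is idle; send_feedback(sample) routes a predicted gesture back
    into the instrument as sound, light, or motion. Both callables are
    placeholders for whatever transport a real instrument would use.
    """
    model = ToyPredictor()
    history = deque(maxlen=64)
    last_input = time.monotonic()

    while True:
        sample = read_gesture()
        if sample is not None:
            # Performer is playing: record the gesture and stay quiet.
            history.append(sample)
            last_input = time.monotonic()
        elif len(history) >= 2 and time.monotonic() - last_input > pause_threshold:
            # Performer has paused: let the model continue the gesture and
            # feed the prediction back as physical, visual, or sonic feedback.
            predicted = model.predict_next(history)
            history.append(predicted)
            send_feedback(predicted)
        time.sleep(0.01)
```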
Bio
Charles Martin is a computer scientist specialising in music technology, creative AI and human-computer interaction at The Australian National University, Canberra. Charles develops musical apps such as MicroJam and PhaseRings, researches creative AI, and performs music with Ensemble Metatone and Andromeda is Coming. At the ANU, Charles teaches creative computing and leads research into intelligent musical instruments. His lab’s focus is on developing new intelligent instruments, performing new music with them, and bringing them to a broad audience of musicians and performers.