Abstracts

Abstracts and bios for the International Sonic Design Seminar.

Aksnes, Hallgjerd: Sonic Objects and Motor-mimetic Cognition: Insights, Tensions, and Emotive Affordances

The notions of sonic objects and motor-mimetic cognition play a central role in Rolf Inge Godøy's work (e.g. 1984, 2006, 2010, 2018, 2021), which draws upon an interdisciplinary array of sources including Edmund Husserl (1917), Pierre Schaeffer (1967-77), gestalt theory, James J. Gibson (1979), embodied cognition, and the motor theory of perception. During the past decades, Godøy and followers have focused increasingly on links between music and body motion (Godøy & Leman 2010; Godøy 2021; Jensenius 2009; Jensenius & Erdem 2022). This paper points out that there are inherent tensions between the notions of sonic objects and motor-mimetic cognition, as sonic objects are grasped by means of explicit memory, whereas motor-mimetic cognition belongs to the domain of implicit (procedural) memory, which resists objectification. An analogous distinction has been made by Martin Heidegger (1927), who distinguishes between “cognition” (Erkennen) and “dealing with” (Umgang), for which Hubert Dreyfus (1991) introduces the corresponding terms “representational intentionality” and “absorbed intentionality”. The paper also focuses on associations between music, motion, and emotion, understood in terms of Susanne K. Langer's notion of forms of feeling (1953) and Daniel Stern's notion of vitality affects (1985). Godøy's analysis of gestural affordances in Brahms' Hungarian Dance No. 5, exemplified by Charlie Chaplin in the barber scene from The Great Dictator, might thus be complemented by the emotive meanings and vitality affects the music also affords. Godøy's example of “the fast back-and-forth hand movements of applying the shaving cream to the client's face” reflects the sense of urgency created by the corresponding flurry of notes in the music; and the later “protracted upward shaving gestures with the razor similar to sustained crescendo sounds (probably up-bow, i.e. from tip to frog movements in the strings in the crescendos)” (Godøy & Leman 2010: 104-105) reflect the vitality affect of surging (Stern 1985: 54), with the richness of emotive meanings conveyed by this musical gesture.

Hallgjerd Aksnes is Professor of musicology at the University of Oslo. In addition to musicology, she has a background in medicine (preclinical studies) and comparative literature from the University of Oslo, as well as philosophy studies with Mark Johnson and Maxine Sheets-Johnstone at the University of Oregon. Her research focuses especially on problems related to musical meaning, music analysis, Norwegian music history, cognitive semantics, and GIM therapy. During 2008-13 Aksnes led the NRC FRIHUM project “Music, Motion, and Emotion: Theoretical and Psychological Implications of Musical Embodiment” (MME), where her own subproject focused on GIM therapy, in collaboration with the GIM therapist Svein Fuglestad.

Bader, Rolf: Acoustic Metamaterials for Sonically Designing Musical Instruments and Room Acoustics

Acoustic metamaterials have properties not found in natural materials such as wood or metal, e.g. negative stiffness, a negative Young's modulus, or a negative refraction index, and can exhibit cloaking or extreme damping. Building musical instruments with metamaterials allows the construction of guitar, violin, or piano bodies, or the design of percussion or wind instruments, with complex absorption spectra consisting of freely designable band-gap structures. Arbitrary sonic designs are therefore possible. As acoustic metamaterials are characterized by complex geometries rather than complex materials, suitable designs can easily be built by instrument builders or musicologists. Two examples are presented: a metamaterial drum allowing varied articulation as a mixture of a band-gap and a regular spectrum, and a metamaterial wall showing broadband, highly efficient, low-frequency sound absorption for sonically designing concert spaces or recording rooms.
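To make the idea of a band gap concrete, the textbook one-dimensional diatomic mass-spring chain (a toy stand-in for the periodic geometries of acoustic metamaterials, not one of the designs presented here) already exhibits a frequency band in which no waves propagate; all parameter values below are arbitrary:

```python
import numpy as np

# Dispersion relation of a 1D chain of alternating masses m1 > m2
# coupled by springs of stiffness K. The two solution branches
# (acoustic and optical) are separated by a band gap: frequencies
# between them cannot propagate through the lattice.
K, m1, m2 = 1.0, 2.0, 1.0
q = np.linspace(1e-6, np.pi / 2, 500)  # wavenumber x lattice spacing

s = 1.0 / m1 + 1.0 / m2
root = np.sqrt(s**2 - 4.0 * np.sin(q)**2 / (m1 * m2))
acoustic = np.sqrt(K * (s - root))  # lower branch
optical = np.sqrt(K * (s + root))   # upper branch

gap_low = acoustic.max()   # sqrt(2K/m1) at the zone boundary
gap_high = optical.min()   # sqrt(2K/m2) at the zone boundary
# Frequencies between gap_low and gap_high fall in the band gap.
```

In a metamaterial design, the unit-cell geometry plays the role of the mass ratio here, which is what lets a builder place such gaps freely in the absorption spectrum.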

Rolf Bader studied Systematic Musicology, Physics, Ethnology, and Historical Musicology at the University of Hamburg, where he obtained his PhD and Habilitation. He has been Professor of Systematic Musicology at the Institute of Systematic Musicology, University of Hamburg, since 2007. His major fields of research are musical acoustics and musical signal processing, musical hardware and software development, music ethnology, music psychology, and philosophy of music, on which he has published several books and papers. He was a visiting scholar at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University in 2005-6. He has also worked as a professional musician, composer, and artist, running recording studios, working as a music journalist, leading exhibitions, and running a cinema. Since 1999 he has conducted ethnomusicological fieldwork in Bali, Nepal, Thailand, Cambodia, Myanmar, Sri Lanka, China, and India.

Christodoulou, Anna-Maria: The evolution of electroacoustic music in a live SuperCollider composition

From acousmatic music to serialism, and from Musique Concrète to elektronische Musik, there have been numerous techniques for sound generation and manipulation. Following the electroacoustic principle of a new sound being generated from a previous one, I use SuperCollider as a tool to create a live-coded composition that attempts to represent the evolution of electroacoustic music. The musicological aspects of the different composition techniques of this style will be explored through their integration into the algorithmic synthesis, as well as through an attempt to identify the elements of these techniques that distinguish and unite them.

Anna-Maria Christodoulou is an MA student at the National and Kapodistrian University of Athens and is currently an Erasmus Trainee and Research Assistant Intern at RITMO, working on the MIRAGE project. Her research focuses mainly on computational music analysis.

Coca, Andres: Embodied sound design of robots to reflect desired personality traits

In terms of embodied sonic design, robots present a tantalising opportunity. Beyond the representations found in films and games, the level of familiarity among wider society is comparatively low, despite there being an estimated twelve million industrial robots in the world, considerably more than the population of Sweden. This figure will only grow as companion and medical robots become ever more commonplace. Human-Robot Interaction (HRI) is a well-established academic discipline, as there is an inherent expectation that communication will flow in both directions. There is no need for each interaction to be identical; unique experiences are a distinct possibility. Each aspect of a robot, from its movements through to verbalisation, can be tailored according to the individual and the context. A robot performing a task at two o'clock in the afternoon does not need to sound the same as at two o'clock in the morning. Similarly, observed and unobserved actions can differ. The most important aspect in terms of sound design is that of matching end users' expectations, to ensure continued, confident use. By facilitating real-time customisation, it is possible to cycle through auditory cues in order to establish the optimal configuration for different contexts and users. This allows configuration according to the desired personality, whether servant or companion. Reassuring, sociable, evocative sounds can be incorporated more regularly if the desired interaction is that of a friend. If the robot is seen as a tool, then minimal, artificial, confirmatory sounds would be incorporated, with the approach that the robot is heard only when absolutely necessary. Both options can be available concurrently, with the choice of sonic cues made according to how the robot is addressed, and by whom. Replicating an intuitive colleague or friend will ensure that only the desired auditory cues are experienced.

With a background in experimental physics, and after working as a performing musician for years, Andres Coca started exploring sound design in 2018. This transition was sparked by an interest in the subtlety with which sound can emotionally impact an audience (film, TV, etc) or a product's user. He is now studying for a part-time master’s degree in Sound Design at Edinburgh Napier University, where he is researching Human-Robot Interaction.

Crowe, Gemma: The Listening Body: Sound and the Sensory Apprehension of Movement

[NB: the sound examples in the presentation work best with headphones.] Sound acts as an extension of the body, created by movement and received as vibration. I am focused on the removal of a visual representation of the body as a template, to instead facilitate an embodied experience. As an embodied practitioner I create immersive sound and media installations derived from recordings of my own moving body. My background in dance informs how I approach the listening subject through embodied sonic design. Treated as material, the movement of sound depicts the presence of a body in motion through sensory illusion. I physically manipulate the recording of sound to perceptually rematerialize the moving physical form during playback. Sound can be experienced by the whole body through binaural and ambisonic recordings, using contact microphones and transducers, as well as a source-blocking technique which creates what I call Sound Shadows. These techniques encourage the listener to perceive sound and space with the same awareness that situates the body. The perceived physical interaction within the reception of this sound is akin to a kinesthetic projection and is an engagement in spatial thinking, activating mirror neurons and kinesthetic empathy. Creating awareness through physical attunement can regulate systems out of balance by offering the embodiment of alternative states: shifting how one thinks and feels in a particular setting. Inhabiting the body in a new way empowers us to live differently within conditions that we can't control. My research seeks to recognize the listener's unique perspective through their individual body. I use technology specifically as a way in through channels already affectively seared into us through digital dependence. My artistic research includes a study of movement and cognition, spatial thinking, embodiment and community care, perception, phenomenology, and philosophy of the virtual.

Gemma Crowe is a new media and movement artist working with the mediated human image: records of our bodily existence. Living and working in Vancouver, Canada, Crowe creates community and connectivity through embodiment and discursive practices. She works from the belief that art is an ideal vehicle for exploring ways of knowing beyond cognition by encouraging a felt sense through aesthetic affect. Crowe is currently exploring the illusory potential of spatial sound and vibration, as well as how to apprehend features of the world by listening and generating a felt sense of the physical body in space.

Dahl, Sofia: Connecting theories of music performance and expression for sonic design

The past decades have brought several theories and frameworks related to embodiment, such as musical gesture, emotional expression, and communication. Departing from current work on real-time music feedback for movement rehabilitation, I will attempt to connect earlier work on models of music performance and expression with embodied metaphors, and give examples of how these frameworks can be used for sonic design in real-time interactions.

Sofia Dahl holds a PhD in Speech and Music Communication from KTH Royal Institute of Technology, Sweden. As associate professor at Aalborg University, her primary research field is embodied music cognition, including the extraction of relevant movement characteristics from motion capture data. A recent focus in her work is the use of music as augmented feedback during movement rehabilitation. Dr. Dahl serves on the steering committee of the Nordic Sound and Music Computing University Hub, funded by NordForsk, is co-director of the Augmented Cognition Lab, and currently serves on the Executive Council of the European Society for the Cognitive Sciences of Music (ESCOM).

Erdem, Cagri: Designing Music Interactions Between Human Performers and Artificial Agents

This presentation will focus on embodied perspectives in developing artificial agents for sound and music interaction. Following an overview of musical AI through the lenses of experimental arts and embodied music cognition, I will structure my talk around five interactive music systems I have developed and performed with over the years. While all these systems used various technologies for sensing music-related body movement, Rolf Inge Godøy's extensive work on constraint-based sound-motion objects provided the primary conceptual devices for these projects, which are the focus of this talk. The muscle-based audio-effects controller Biostomp explores the creative agency related to the non-stationarity and noisiness of biological control signals akin to sound-producing actions. The shared music–dance piece Vrengt demonstrates the musical possibilities of sonic microinteraction and provides a conceptual model of co-performance. The muscle-based instrument RAW implements various AI techniques to explore chaotic instrumental behavior and automated interaction with the ensemble. A novel empirical study sheds light on how guitar players transform biomechanical energy into sound; the collected multimodal dataset is used as part of a modeling framework for “air performance.” The co-adaptive audiovisual instrument CAVI uses generative modeling to automate live sound processing and investigates expert improvisers' varying sense of agency. All in all, I will stress the importance of embodied perspectives when developing musical AI systems and advocate an entwined artistic–scientific research model for interdisciplinary studies on performing arts, AI, and embodied music cognition.

Çağrı Erdem (he/him) is an interdisciplinary artist and researcher specializing in musical artificial intelligence (AI), improvisation studies, and human-computer interaction. As a Ph.D. fellow with the RITMO centre at the University of Oslo, he expanded his research by combining theories and methods from the performing arts, computer science, and music cognition. His dissertation, "Controlling or Being Controlled? Exploring Embodiment, Agency and Artificial Intelligence in Interactive Music Performance," investigated real-time sound and music control among human performers and artificial agents, with a particular focus on situating AI in an embodied, co-adaptive music performance context implemented as several interactive systems.

Gibet, Sylvie: A Grammar of Conducting Expressive Gestures

Most studies of conducting gestures focus on the gestures made by the dominant hand, i.e., the beating gestures that indicate the structural and temporal organization of the musical piece (tempo, rhythm) and provide precise, efficient, and unambiguous indications for the musicians. Gestures performed by the non-dominant hand show other aspects of music performance and interpretation, including variations in dynamics and intensity, musical phrasing or articulation, accentuation, entrances and endings, sound quality and color, and, more generally, musical intent and expression. In our study we are concerned with the latter expressive gestures performed by orchestral conductors. Following the hypothesis that there is a set of meaningful gestures shared by conductors (Meier, 2009), we propose a grammar of expressive gestures that draws directly from the grammatical foundations of sign languages (SL). SL gestures indeed share some common properties with conducting gestures, as both are visual-gestural languages. Both exploit iconic dynamics to describe metaphorical entities and to manipulate them. They also use the body space and the so-called signing space, making possible the evolution of these entities along the narrative or the musical discourse. We will show how this grammar, which is based on phonetic patterns composed of basic phonetic elements, proposes a lexical and syntactic structuring capable of efficiently expressing various signs from a limited number of parameters (through composition, derivation, and inflection). This grammar facilitates the recognition of gestures and their expressiveness, and offers the possibility of a written transcription of gestural performances and of the learning of conducting gestures. The presentation will be illustrated by examples of conductors' gestures analyzed in the light of this grammar, as well as other examples drawn from a repertoire of expressive gestures captured in our team.

Sylvie Gibet graduated with a PhD in Computer Science in 1987 from the Institut National Polytechnique de Grenoble (INPG) in France, where she studied haptic gestures for controlling sound synthesis. She then held a research position in Cognitive Science at the University of Geneva in 1989 and a postdoctoral position at the University of California, San Diego (Computer Music Experiment lab.) in 1990. She was an assistant professor at the University of Paris-Sud in France between 1992 and 2000, before becoming a professor at the University of South Brittany, where she joined the IRISA Laboratory. Her research focuses on the modeling, analysis, and generation of expressive gesture. She has mainly studied expert gestures with strong semantics, such as sign language gestures, but also musical gestures such as percussion or conducting gestures, and emotional theatrical movements.

Godøy, Rolf Inge: Generic motion components for sonic design

Sonic design comprises both musical craftsmanship and aesthetic reflection and may include elements of not only recent technologies for sound synthesis and processing but also of traditional schemes for sound generation as encountered in orchestration and/or in the shaping of sound on individual musical instruments or the human voice. Common to many instances of sonic design is having concurrent motion components that blend with acoustic features, making sonic design into a multimodal topic. Yet we may often be lacking suitable concepts for differentiating these motion components, and the aim of this paper is to present ongoing work in developing a more systematic scheme for detecting and handling these components, be that in analytic contexts or in creative sonic design tasks.

Rolf Inge Godøy is a professor of music theory at the University of Oslo and has taught and supervised several generations of students in a broad range of musicological sub-disciplines. During most of his academic career, he has developed Pierre Schaeffer's concept of the sonic object into his embodied sound and music theory. Over the last decades, he has also been central in establishing the fourMs Lab as a world-leading infrastructure for research into music-related body motion. He is a principal investigator at RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion, a Norwegian Centre of Excellence, until his retirement.

Halmrast, Tor: From a squeak to something general

In one of the first nice discussions I had with Rolf Inge, he asked: "Why does the squeak of a door sound like a trombone?" Lacking a good definition of Sound Design, we could start with the way nature itself "designs" all kinds of oscillations: the stock market, sound, and fashion trends like skirt length. When something is pushed too far, it bounces back to a bit behind where it started, and then it moves "forwards" again. Nature is all about filtering. Adding a delayed version of a broadband impulse to itself gives a comb filter. Nature has never summed up sine waves! Such a filter approach is the background for modal synthesis of musical instruments, but is often forgotten in the broader sense of Sonic Design. Using this approach, we see that, for instance, coloration in room acoustics and in the acoustics of musical instruments is, by nature, the same. If the comb filter "matches" the critical bandwidth, it is perceived as a coloration of a broadband signal. Adding feedback, physical or electronic, the peaks in the comb gradually become narrower, turning the comb upside down, as for musical instruments with clearer overtones. These overtones are sometimes harmonic, but in fact all real strings etc. have a small degree of inharmonicity. This leads us to another important aspect of Sound Design: the perception of pitch. Mankind has reached the moon, but we still do not understand how we perceive pitch. Pitch is not just a frequency analysis! We must include other methods, like cepstrum, (auto)correlation, and harmonicity/spectral entropy, in the evaluation of Sound Design.
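The comb-filter and feedback behaviour described above can be sketched in a few lines of Python; the delay lengths and gains below are arbitrary illustrative values, not ones from the talk:

```python
import numpy as np

def feedforward_comb(x, delay, g=1.0):
    """Add a delayed, scaled copy of the signal to itself.
    Spectral notches appear at odd multiples of fs / (2 * delay)."""
    y = np.copy(x)
    y[delay:] += g * x[:-delay]
    return y

def feedback_comb(x, delay, g=0.9):
    """Recirculating delay: as |g| approaches 1, the comb peaks
    become narrower, approaching the harmonic resonances of a
    string or other clearly pitched resonator."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n] + (g * y[n - delay] if n >= delay else 0.0)
    return y

# A unit impulse through the feedback comb gives a decaying
# impulse train: the same structure underlies string-like
# resonances and room-acoustic coloration.
fs = 44100
impulse = np.zeros(fs // 10)
impulse[0] = 1.0
out = feedback_comb(impulse, delay=100, g=0.95)
```

With the gain close to 1, the feedback comb is essentially the resonator of a plucked-string model, one way of hearing why coloration in rooms and in instruments is the same filtering phenomenon.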

Tor Halmrast is a composer and acoustician, as well as associate professor II (em.) at the Department of Musicology, University of Oslo, and the Norwegian Academy of Music. He received the Prix Italia (European Broadcasting Union) for RadiOpera. Throughout his career, he has done acoustic design for concert halls and studios.

Haugen, Mari Romarheim: Metrical Shapes

It is commonly understood that the experience of musical rhythm encompasses the interaction between sonic rhythm and endogenous reference structures such as meter. Meter is often described and understood as points in time, or durations between such points. In this presentation, I argue that musical meter also has a shape. Essential to this perspective is Professor Rolf Inge Godøy's motor-mimetic perspective on music perception (e.g., 2003, 2006, 2010) and musical shape cognition (2019). Godøy refers to motor theories of perception and highlights that when we experience music, we mentally imitate sound-producing actions related to the sounds we perceive. Such imitated actions can be directly related to playing an instrument or imitative of sonic shapes (for example, sustained, impulsive, or iterative sounds), including corresponding simulated actions with similar shapes. We perceive these action–sound relationships as meaningful units due to multimodal perception. I propose something similar concerning meter perception: namely, that we perceive and make sense of musical meter based on our previous musical experiences involving meter-related bodily motion. A point of departure is the notion that we learn and shape meter through periodic body motion during musical experiences. In other words, we do not perceive the meter as one thing and meter-related movements as something else; instead, we understand meter–motion relationships as meaningful wholes. The meter-related motion is integral to the perceived meter—they are the same. Meter thus has a shape that relates to the embodied sensations of these movements. Also crucial is the notion that musical meter is conditioned by musical culture—how one usually moves—as well as by each person's embodied experience with the culture's and genre's intrinsic ways of moving. I will exemplify this embodied perspective on meter via music genres wherein music and dance are intrinsically related.

Mari Romarheim Haugen received a Ph.D. in music cognition from the University of Oslo, Norway. She is currently a guest researcher and a former postdoctoral fellow at the RITMO centre. Her research focuses on music and motion, rhythm production and perception, and music and dance. She is particularly interested in music genres where music and dance are intrinsically related and the role of our previous embodied experiences and musical enculturation in rhythm perception. Haugen is also a trained vocalist and music teacher from The Royal Academy of Music, Denmark.

Holbrook, Ulf A. S.: Sonic design and spatial morphology

Sonic design has been proposed as an interface for studying musical sound; however, from its basis in Pierre Schaeffer's theories of the sound object, the importance of sonic design extends beyond mere musical sound into spatial features and significations. The sound object is a multidimensional unit, meaning it is ontologically complex and can carry multiple significations at the same time. This multidimensional model can be described by its main features, as well as broken down into sub-features, sub-sub-features, and so on. This presentation will take the core idea of sonic design and Schaeffer's theories of the sound object and extend this notion of a multidimensional entity into morphologies of space. Drawing on sound design, acoustics, geography, and questions of landscape, a proposal for an extension of the sound object will be offered, with the aim of discussing how we can apply sonic design to analyses that range from acoustic and shape-based features to historically and culturally significant features.

Ulf A. S. Holbrook works with sound in a variety of media, including composition, improvisation, installations, text, and academic research, with an interest in representations of space and place through sound, explored through spatial audio, ecoacoustics, geospatial research, and computational methods.

Holopainen, Risto: Excitations and resonances: misinterpreted actions in Neon Meditations

The distinction between the concrete and abstract poles of music, as discussed by Schaeffer, is perhaps most easily identified in instrumental music, where the score represents the abstract side and the interpretation and timbral particularities of instrumentation represent the concrete side. In electroacoustic music this divide is often less clear-cut, with the abstract and concrete poles intertwined. Nevertheless, the creative process can sometimes be separated into a phase of composition followed by a phase of sound design or timbral refinement. This has been the approach in some of my electroacoustic pieces (e.g. Kolmogorov Variations), where a raw signal is first generated by digital or analog synthesis and then submitted to acoustic processing. In the performance work Neon Meditations (in collaboration with visual artist Per Hess), there is a similar separation between a sound-producing system, in this case an analog modular synthesizer, and a final stage of sound shaping where the sound is distributed through exciters attached to vibrating objects. This separation roughly corresponds to that of excitation and resonance, or of a source and a filter. The instrument is performed simultaneously by two performers in a way that easily causes confusion about action and sound; it is often hard to tell whose gesture causes a certain timbral change. It is a somewhat inconvenient, and therefore all the more interesting, example with which to illustrate the motor action perspective and the notion of integrated sound-motion objects as proposed by Godøy. Even the idea of acousmatic listening can be worthwhile to revisit in the coloured light of this unusual instrument.

Risto Holopainen studied composition at the Norwegian State Academy before taking up musicology. His PhD project Self-organised Sound with Autonomous Instruments was completed at the University of Oslo in 2012. Since then he has returned to composing electroacoustic, radiophonic, and instrumental works, collaborated with dancers and visual artists, published essays and a novel, engaged in performance works, printmaking and video works.

Kelkar, Tejaswinee: The Gestural Sonorous Other

In the work of Professor Rolf Inge Godøy, there is the insinuation that sonic design and gestural sonorous objects, as embodied extensions of Schaeffer's conceptual toolbox, could relate to, inform, or enhance our understanding or study of concrete aspects of non-Western music. In this paper, I want to discuss how the gestural sonorous object relates to music in oral traditions. I will briefly discuss the ideological lineages of Schaeffer's project at the time of French decolonization, and how these ideas cross over into the concept of sound objects. Furthermore, I will discuss how Professor Godøy's contributions may be applied to concrete analyses of orally discoursed and orally passed-down musical traditions. Coming specifically from the perspective of North Indian classical singing, I will discuss long histories of disseminating musical ideas through ‘gestural-sonorous-objects’ that may or may not look like those discussed in musique concrète, but that find resonances with them.

Tejaswinee Kelkar is a researcher in music and motion, and a singer. She holds an adjunct professor position at the University of Oslo and works as a data analyst at Universal Music Norway. She finished her PhD with the RITMO Centre of Excellence at the University of Oslo in November 2019. Her research interests are melodic cognition, motion capture, and musical-cultural analysis, with a focus on how aspects of melodic perception are illustrated through multimodality and linguistic prosody. As a musician, her focus is on retelling traditional music from her experiences of migration, mediated by machine. She is interested in understanding the use of the voice as a disembodied and distorted object. She trained in North Indian classical singing from a young age and, later, in Western classical composition. In addition to being a vocalist, she plays the harmonium and other instruments. In her free time, she loves to swim and sketch.

Kobb, Christina: A Motion Capture Story from Facebook to the New York Times

Rolf Inge called me one day, wondering if I could be his lab rat for research on piano technique. I happily agreed, adding the question of whether we could also compare my earlier modern piano technique with my newly restored piano technique of the early 19th century. In this way, my practice study on reconstructing the piano technique of the Viennese fortepiano was confirmed in the MoCap Lab, and, through the magic of social media and internet publication, our collaboration story ended up in The New York Times in July 2015. This is a tribute to Rolf Inge, including some piano pieces from the project that brought us both an unprecedented number of 'likes' and clicks!

Christina Kobb is finally finishing her PhD (Piano Playing in Beethoven’s Vienna. Reconstructing the technique, exploring its practical application) after a few years of delays following the New York Times publication and other life events. The notable delays include her solo debut in Carnegie Hall, New York, in 2017, live broadcasts for the radio station All Classical Portland (OR), and invitations to the Ira F. Brilliant Center for Beethoven Studies (CA), Harvard University, and Duke University. Other life events include moving between four countries, getting married, getting a real job, and resigning to do more performance and independent research.

Kozak, Mariusz: Music and Enactive Time Design: Glitches, (Dis-)Orientations, and the Ethics of Hesitation

In this talk I explore the seemingly incontrovertible claim that music is a temporal artform by examining the relationship between time, embodiment, and affectivity. I argue that music is temporal not because it unfolds in time, or because it takes time as its vector, or even because it has the capacity to alter our sense of temporal flow. Rather, music is a temporal artform because it is constituted by a situated body moving in relation to sound. Like other artforms, music isolates, extends, and intensifies the dynamical patterns of our everyday engagement with our environment. This engagement folds ongoing processes into the body’s own temporal frame, giving them a past, a present, and a future. Since we are fundamentally animate creatures, this folding-into is achieved through a moving body: it is enacted. In this view, musical time is a system of affects—intensive qualities—that define the animate nature of our openness to the world. Musical time is something felt in a physical, kinesthetic sense through a body that responds to sound by intelligibly coordinating its movements in different ways. Music, in other words, participates in what I call "enactive time design." Because of its intimate link with the moving body, music has the capacity to orient, reorient, and even disorient our temporal frame by altering the physicality of our engagement with sound. Extending Rolf Inge Godøy's work on the phenomenology of time in relation to physical gestures, I identify some ways in which these (re-/dis-)orientations can happen, and offer suggestions for how this capacity can shore up music analysis.

Mariusz Kozak is an Associate Professor of Music at Columbia University and the author of Enacting Musical Time: The Bodily Experience of New Music. His research centers on the relationship between music, cognition, and the body. His articles have appeared in the Journal of Music Theory, Music Theory Spectrum, and Music Theory Online, among others. In 2020 he was one of the featured speakers at the Plenary Session of the Annual Meeting of the Society for Music Theory. His current projects include meter and rhythm in progressive metal, the role of musical affect in creating interpersonal relationships, and a book on the history of the cognitive science of music in the twentieth century.

Lartillot, Olivier; Godøy, Rolf Inge; Christodoulou, Anna-Maria: Computational detection and characterisation of sonic shapes: Towards a Toolbox des objets sonores

Computational detection and analysis of sound objects is of high importance for both musicology and sound design. Yet Music Information Retrieval technologies have so far focused mostly on transcribing music into notes in the classical sense, whereas we are interested in detecting sound objects and their feature categories, as suggested by Pierre Schaeffer’s typology and morphology of sound objects (1966), which reflect basic sound-producing action types. We propose a signal-processing approach to segmentation based on tracking salient characteristics over time and, dually, on Gestalt-based segmentation decisions triggered by changes. Tracking of pitched sound relies on partial tracking, whereas the analysis of noisy sound requires tracking larger frequency bands, possibly varying over time. The resulting sound objects are then described according to Schaeffer’s taxonomy and morphology, expressed first as numerical descriptors, each related to one type of taxonomy (percussive/sustained/iterative; stable/moving vs unclear pitch) or morphology (such as grain). This multidimensional feature representation is further divided into discrete categories related to the different classes of sounds. The typological and morphological categorisation is driven by the theoretical and experimental framework of morphodynamical theory. We first experiment on isolated sounds from the Solfège des objets sonores—which features a large variety of sound sources—before considering more complex configurations featuring successions of sound objects without silence, or simultaneous sound objects. Analytical results are visualised as graphical representations aimed at both musicological and music-pedagogical purposes. This will be applied to the graphical description of, and browsing within, large music catalogues. The application of the analytical descriptions to music creation is also investigated.
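The change-based segmentation idea can be illustrated with a minimal sketch: given per-frame feature vectors (standing in here for the pitch or frequency-band trackings described above), a new sound object is assumed to start wherever the frame-to-frame feature distance exceeds a threshold. All names and values below are illustrative, not part of the actual toolbox.

```python
# Minimal sketch of Gestalt-style segmentation by change detection:
# boundaries are placed where consecutive feature frames differ strongly.

def feature_distance(a, b):
    """Euclidean distance between two feature frames."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def segment_boundaries(frames, threshold):
    """Return frame indices where a new sound object is assumed to start."""
    boundaries = [0]
    for i in range(1, len(frames)):
        if feature_distance(frames[i - 1], frames[i]) > threshold:
            boundaries.append(i)
    return boundaries

# Toy feature sequence: a stable region, an abrupt change, another stable region.
frames = [(1.0, 0.2)] * 5 + [(4.0, 0.9)] * 5
print(segment_boundaries(frames, threshold=1.0))  # → [0, 5]
```

In a real system the threshold would itself be adaptive, but the dual logic of tracking-plus-change-detection is the same.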

Olivier Lartillot is a researcher at RITMO, working in the fields of computational music and sound analysis and artificial intelligence. He obtained funding from the Research Council of Norway under the FRIPRO-IKTPLUSS program, for a project called MIRAGE - A Comprehensive AI-Based System for Advanced Music Analysis (2020-2023). He designed MIRtoolbox, a reference tool for music feature extraction from audio. He also works on symbolic music analysis, notably sequential pattern mining. In the context of his 5-year Academy of Finland research fellowship, he conceived the MiningSuite, an analytical framework that combines audio and symbolic research.

Anna-Maria Christodoulou is an MA student at the National and Kapodistrian University of Athens, and is currently an Erasmus Trainee and Research Assistant Intern at RITMO, working on the MIRAGE project. Her research focuses mainly on computational music analysis.

Leman, Marc: Sonic design and the power-theory of music

Over the last decades, empirical musicology has been evolving at a pace that matched new developments in technology (cf. motion capture, all kinds of sensors, 3D-audio rendering, augmented reality). Underneath this development is the musicological foundation of a “power-theory” of music, developed by a generation of musicologists, including Rolf Inge Godøy, which served as the backbone for securing large national and EU research projects at different musicology labs in Europe. This “power-theory” of music, with roots in historical phenomenology and cybernetics, can inspire sound designers, especially in view of biofeedback systems that could be useful for music making, music education, multimedia interactions (e.g. gaming), rehabilitation, and other applications. In this talk I will focus on the pillars of this “power-theory” of music, which I characterise in terms of (i) predictive coding, (ii) embodiment, and (iii) expression/emotion/gesture. I explain why the dynamic concept of homeostatic (co-)regulation of processes is important for building biofeedback systems and why these systems are important in our future techno-culture. Rather than focusing on the effects of musical power, I focus on the challenge of having sound designs that fully integrate with the “power-theory” and its application in biofeedback systems.

Marc Leman is Methusalem research professor in Systematic Musicology, director of IPEM, the Institute for Psychoacoustics and Electronic Music, Dept. of Musicology, and former head of the department of Art History, Musicology, and Theatre Studies at Ghent University, Ghent, Belgium. He founded the ASIL (Arts and Science Interaction Lab) at the KROOK (inaugurated in 2017). His research is about (embodied) music interaction and associated epistemological and methodological issues. He has > 450 publications, among which several monographs (e.g. “Embodied music cognition and mediation technology”, MIT Press, 2007; “The expressive moment”, MIT Press, 2016) and co-edited works (e.g. “Musical gestures: sound, movement, and meaning”, Routledge, 2010; "The Routledge companion to embodied music interaction", 2017).

Migotti, Léo: Embodied pitch while walking to music

The interface between music and movement has long been investigated with a focus on timing. For instance, people are able to spontaneously coordinate their movements with the musical beat when walking to music (Styns et al., 2007). However, the relationship between the spectral dimension of music, i.e. parameters such as pitch, timbre, or harmonic structure, and the physical features of movements such as walking has received less attention. It has nevertheless been argued that there are profound cognitive mappings between musical and physical features, and that music gets its meaning from our ability to form mental representations of body movements from it (Godøy et al. 2010). In this presentation, we present recent findings from an experiment in which participants were asked to walk on a treadmill, on the beat, to piano sounds randomly changing in pitch. We found that pitch height was systematically encoded in participants’ gait patterns, supporting the idea that mental representations triggered by music are partly grounded in our embodied cognition of its components. In particular, we found that steps are heavier and longer when co-occurring with lower-pitched sounds than with higher-pitched sounds. These findings suggest that pitch cognition is at least partially embodied, as listeners display spontaneous motor reactions to pitch, and provide insights as to how the embodied encoding of such sound features might be used in sonic design.

After obtaining a secondary degree in harp and Music Studies, a master's degree in Cognitive Science, and a master’s degree in Philosophy in 2019, Léo Migotti started a PhD on music and dance semantics under the co-supervision of Philippe Schlenker (CNRS, NYU) and Jean-Julien Aucouturier (CNRS, Femto-ST) at the Institut Jean Nicod at the Ecole Normale Supérieure in Paris. He is particularly interested in the formalization of musical meaning, applying tools and methods from linguistics to the analysis of music and its interaction with dance and movement more generally.

Louhivuori, Jukka: Wearable musical instrument in rehabilitation

The aim of the paper is to describe a wearable musical instrument intended for use in rehabilitation processes in health care, and the feedback we have received from patients, therapists, and medical doctors. The wearable instrument can take the form of a glove or a cuff. It has 18 sensors that give music/sound feedback. The sensors are touch-sensitive and have an aftertouch feature. The parameters of all sensors can be adjusted according to the needs of the rehabilitation process. In addition to the touch sensors, the instrument has a gyroscope, an accelerometer and a magnetometer, which are used for measuring movements of the body, such as the upper and lower limbs, head, etc. The instrument gives music/sound feedback related to these body movements. In addition, the application supporting the use of the instrument has an animation of a human body that follows the measured movements in real time, and it collects data from both the physical sensors and the body movements to a cloud service. The aim of the biofeedback is to motivate patients to perform an optimal number of movement repetitions. Music/sound feedback is also used to guide the patient to make movements correctly. The third aim is to reduce pain related to rehabilitation movements. Preliminary results have shown that patients experienced the use of music/sound as positively supporting rehabilitation processes. In rare cases, music/sound was experienced more as a disturbing stimulus. Patients' feedback depends on their musical background. Music/sound should be chosen according to musical preferences, the type of rehabilitation movement (speed, extent) and specific aspects of the injury (target level of the movement vs a level not to be exceeded). The sensitivity of the sensors was experienced as a special and useful feature, giving feedback even on movements that are almost invisible. This feature gives encouraging feedback to patients about the progress of rehabilitation.
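As a rough sketch of how such per-sensor adjustability might work (the function, thresholds and scaling below are hypothetical, not the actual device's API), a normalised touch reading can be mapped to a note-on decision plus an aftertouch value, with a patient-specific threshold and sensitivity:

```python
# Hypothetical mapping from one touch sensor to sound feedback.
# Pressure below the onset threshold is silent; pressure beyond it
# is scaled into a 0..127 aftertouch value, MIDI-style.

def map_sensor(pressure, threshold=0.1, sensitivity=1.0):
    """Map a normalised pressure reading (0..1) to (note_on, aftertouch)."""
    if pressure < threshold:
        return (False, 0)
    after = min(127, int((pressure - threshold) * sensitivity * 127 / (1 - threshold)))
    return (True, after)

# With sensitivity raised, a barely visible movement still produces feedback.
print(map_sensor(0.12, threshold=0.1, sensitivity=8.0))  # → (True, 22)
```

Raising `sensitivity` per patient is what makes almost-invisible movements audible, as described above.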

Jukka Louhivuori is professor (emeritus) at the University of Jyväskylä. He has chaired national and international music societies, such as the Society for Musicology (Finland) and the Society for Music Education (Finland). He has acted as chief editor of Musicae Scientiae, and as president and permanent secretary of ESCOM. He has organized several national and international conferences on ethnomusicology and cognitive music psychology. His main research topics have been cross-cultural cognitive ethnomusicology, music and well-being and, most recently, new musical interfaces for rehabilitation in healthcare.

Minafra, Annamaria: Different attitudes of expressive movement awareness in professional musicians through a phenomenological approach

This paper aims to explore professional musicians’ awareness of the expressive bodily movements that assist them in phrasing. Expert musicians who practice for years achieve a high degree of movement automation, fluency, and automaticity (Davidson, 2011), embedding expressive bodily movements within technical movements (Davidson, 2005). Although sensorimotor and auditory feedback, merged with perception and cognition, constitute fundamental elements of musical experience (Godøy, 2011), musicians may act through unconscious or pre-reflective self-awareness (Petitmengin et al., 2017), automatising movements without consciously controlling them (Holgersen, 2010). Three case studies, part of a larger research project, are presented here to explore musicians’ awareness of expressive movement while performing. Data were collected through a phenomenological approach using semi-structured interviews, observation, and audiovisual materials. A violinist, a guitarist and a pianist carried out three tasks, each corresponding to a phenomenological reduction (Vermersch, 2002). The musicians were asked to perform three times, from memory, an easy, slow piece of music which they had previously chosen. The first time, the piece was performed with no intervention. For the second, the musicians mentally rehearsed the piece (Ross, 1985) before playing it again. For the third, they simulated the movements of playing (Godøy et al., 2006) without their instruments before performing their pieces. The musicians verbalised their feelings after each task. All activities and performances were video-recorded, and the data were analysed in terms of verbal and nonverbal responses. The findings on the musicians' expressive bodily movements show three different attitudes: the violinist seemed unaware of the expressive movements he performed; the guitarist gradually reduced each expressive movement to achieve optimal performance; and the pianist’s expressive movements were unsynchronised with her musical intentions, giving the impression that she had embedded those movements during practice as part of her motor programs.

Annamaria Minafra received her PhD in Philosophy of Music Education in 2019 at the UCL Institute of Education in London (UK). Taking an empirical phenomenological approach, her research focuses on the body-mind relationship both in professional musicians and in beginner violin-group players. She graduated in viola in 1993 and in Philosophy of Education in 2010. She directed the Doron Association in Florence, Italy, for ten years, developing teaching experience and collaborating with local institutions such as state schools and the Florence municipality. She has collaborated with other Italian associations, organizing orchestra workshops for children and teachers, and was a member of the Pedagogic Committee of the Italian Association of Music Schools (AIdSM). She has presented selected findings from her research at national and international conferences and published part of her research in English. She currently teaches in a music teacher training program at the Conservatory of Music “N. Piccinni” in Bari (IT).

Sagesser, Marcel Zaes: Rendering Collected Ephemera Audible: Sonic Design as Attending to Multiplicity

The author’s “#otherbeats” [otherbeats.net] is an artistic research project on the web about, and made with, participatory sound recordings. The author organized the collected ephemera and made them playable, not only as a way of creating an unconventional sound archive, but also of designing a sonic world that attends to them. The project’s principal mode of organization is a “crossfader,” which lets users create their own mix by scrolling across the website. Josh Kun writes, “The crossfader is an anti-assimilation technology . . . [it] chooses the mix over the melt, the many over the one” (1) – an approach to which “#otherbeats” is deeply indebted, since it makes room for every single ephemeron to be heard while also crafting the possibility of an “ensemble” of several sound sources. In this paper/chapter, the author discusses tools for sonic design based on rendering sound collections audible while maintaining what these recordings index. He contrasts the analysis of his own piece “#otherbeats” with other sound works in web audio, such as Nicola di Croce’s, and argues that sonic design (in the sense of constructing new sound worlds with found materials) and social practice (in the sense of conducting artistic work with participatory ephemera), when combined, result in sound worlds that afford a listening modality which may teach the user – while they play with the ephemera – to attend to the sociocultural multidimensionality of the collection. The author argues that this modality, an affordance of sonic-design-as-research, is important as it may carry over to our increasingly ubiquitous-technology-informed listening situations in everyday life. (1) Kun, “On Loop and in the Crossfade: Music in the Age of Mass Persistence,” CTM Magazine 2019.

Marcel Zaes Sagesser, also known under the artist name Marcel Zaes, is Assistant Professor in the School of Design at Southern University of Science and Technology (China), and an artist and researcher in sound, digital media and music composition. He holds a PhD in Computer Music and Multimedia from Brown University. Both his research and his artistic practice focus on the manifold ways in which humans craft their relationships with sounding technologies. His work is located at the intersection of sonic materiality, the technologies of sound (re)production, digital rhythm machines, and popular culture. As an artist, he often deploys rhythm machines to craft moments of togetherness, hesitation, and doubt. Producing installations, concert works, and video works, he has an extensive international gallery and concert record, has published twelve music albums to date, has been awarded a number of grants and art prizes, and has repeatedly been an artist in residence. https://marcelzaes.com

Stover, Chris: Cecil Taylor's Gestural Designs

“As gesture jazz became…” So suggested Black radical pianist Cecil Taylor in the prose-poem album note he wrote to function alongside his 1966 recording Unit Structures. The notion that jazz emerged as gesture aligns with a provocative allusion by Black novelist Nathaniel Mackey (2010) that jazz “alchemizes” its often banal sources (for example, the American popular songs that form much of its repertoire), not merely transforming them but recoding their fundamental syntactic structure, opening them onto new expressive planes. Taylor seems to have pursued such a project of recoding in his early engagements with this repertoire. He recorded only a few such songs, none after 1962, and these could easily be interpreted as ironic—we hear jarring syntactic juxtapositions and vertiginous torsions, oscillating between the playful and the violent. I read Taylor’s engagement with jazz as part of a holistic approach to sonic design that would by the mid-1960s manifest as the highly idiosyncratic gestural language he would be recognized for, in which shape and density become the primary syntactic referents rather than pitch and rhythm. In this paper, I will consider the development of Taylor’s improvisational language as a continuation of the jazz tradition, rather than in the rupturous terms in which it is usually described. More topically, I will consider it in terms of a gestural strategy involving four techniques that contribute to how sonic design functions for Taylor, which I call permutating, displacing, thickening–pruning, and encroaching. (The active verbs are important, and link to aspects of a generalized Black Atlantic semiotics theorized by Henry Louis Gates and others.) I will analyze how these four techniques contribute in tandem to the ways two Taylor performances (Cole Porter’s “Love for Sale” and Haggart and Burke’s “What’s New”) unfold.

Chris Stover is an improvising trombonist, scholar, and Senior Lecturer of Music Studies and Research at Queensland Conservatorium, Griffith University. He is currently completing two books: one for Oxford University Press on temporal processes in African and Afro-diasporic music, and one for Routledge on reimagining music theory learning and teaching.

Thomas, Denez: Modifying audio parameters as we move: a site-specific approach to sonic design.

While artistic practices that refuse traditional places of listening are numerous, there is a certain gap in the tools available to artists who devote themselves to site-specific sound creation. The main purpose of this presentation is to share several years of experience developing such tools for artistic practices ranging from sound installations to soundwalks, as well as other transdisciplinary practices involving the body. The chosen approach takes into account the heritage of Pierre Schaeffer's work, but also that of soundscape studies and the philosophy of listening. The sound work is considered as a set of interdependent processes on which the artist intervenes in real time, changing audio parameters in a constant back-and-forth between listening and moving around the place. When working on a multi-channel sound installation, this practical approach gives greater importance to the sound space, which becomes an integral part of the piece. Developing tools for site-specific sound creation is challenging: they must cover enough functionality not to restrict the artist (for example in terms of sound synthesis and audio signal processing), but also allow rapid prototyping and ease of use, so that more time can be spent experimenting on site. The methodology we propose puts aside the idea of creating a single tool that would meet all these criteria. On the contrary, we are interested in simplifying the process of designing custom tools made for a single site-specific sound work. For instance, in the case of a geolocated soundwalk, artists can build a custom smartphone application allowing them to virtually place listening points on their path and design the way the sounds will be perceived by listeners as they approach each location. This presentation will address both philosophical and technical aspects of this particular type of sonic design.
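The geolocated listening-point idea can be sketched in a few lines (the function names and the linear fade are illustrative assumptions, not the actual toolchain described above): the gain of a virtual sound source is derived from the great-circle distance between the listener's GPS position and the point.

```python
import math

# Illustrative sketch of a geolocated listening point: a sound's gain
# rises as the listener approaches its virtual location.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS coordinates."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def gain_at(listener, point, radius_m=50.0):
    """Linear fade from 1.0 at the point down to 0.0 at radius_m away."""
    d = haversine_m(*listener, *point)
    return max(0.0, 1.0 - d / radius_m)

point = (48.8566, 2.3522)                  # hypothetical listening point
print(gain_at((48.8566, 2.3522), point))   # listener at the point → 1.0
print(gain_at((48.86, 2.36), point))       # several hundred metres away → 0.0
```

A real soundwalk engine would also handle fade curves, overlap between points, and heading, but the core spatial logic is this distance-to-gain mapping.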

Denez Thomas is a PhD student in musicology at the University of Rennes 2 (Rennes, France) and a software engineer at Orbe (Paris, France). His work, in the field of Sound and Music Computing, focuses on the development of site-specific audio creation tools. He teaches creative coding and exhibition techniques to students of the Digital Creation master's program at the University of Rennes 2 and organizes workshops around practices such as computer music, installations and sound walks. In his thesis, he approaches these disciplines through the lens of place and its relationship to listening.

Upham, Finn: Broadcast Signals that Enable Sustained Concurrent Action

What is the purpose of musical sounds? Distinct from other species of primates, humans developed a suite of perceptual sensitivities that allow us to recognise and engage in musical behaviours, principally through sound. Specific ranges of temporal and harmonic regularities trigger modes of sensory processing that privilege signal information essential for interpreting and making music. Whatever our modern uses of relative pitch perception and metrical entrainment, the evolved capacities supporting this cultural technology circumscribe its initial use. I argue that music perception is designed to be engaged by broadcast signals that enable sustained concurrent action (BSESCA). This criterion defines a niche for music distinct from other human activities, balancing involuntary perception against practiced voluntary response, and firmly demarcates what counts as music on these perceptual grounds. Outside of a time with recording technology and amplification, what we hear is the sound being made right now, in this moment, by people near enough for the energy of their actions to reach our ears. An open-ended medium of communication, sound travels beyond our line of sight. While speech quickly loses intelligibility in noisy environments, acoustically produced musical signals travel far, with temporal regularities that support interpretation despite acoustic distortion and distraction. This robustness and a lower information rate leave room for action planning within the metrical, harmonic, and affective parameters the signal defines. Further insights from developmental science, comparative musicology, psychoacoustics, and information theory combine to suggest that an active objective may be a more efficient explanation for music-cognitive capacities, even if it is not a prominent component of the most common musical experiences today.

A graduate of McGill University and New York University, Finn Upham is currently a postdoctoral researcher at the University of Oslo’s RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion. They have studied the real time experience of music listeners and audiences through continuous ratings, psychophysiological measurements, movement, and social media data. At RITMO, they are working on the empirical evaluation of temporal relationships between rhythms in musical signals and the oscillatory systems of the human body.

van Noorden, Leon: My 50-year stroll through Sonic Design and Embodiment

Over a period of more than 50 years I have been dabbling in the soundscape of Sonic Design. I designed a few sonics with the purpose of understanding music, its perception, and its purposes. On this stroll it was always pleasant to meet Rolf Inge. My first sonic design is the “ABA gallop rhythm”, which has become the basic paradigm for a new field: Auditory Scene Analysis. Although it was not yet common at that time to think of perception as embodied, I secretly thought that the temporal coherence boundary was determined by the maximum speed with which you can move the pitch of your voice up and down. My second major pre-embodiment investigation was on the resonance in the perception of the musical pulse, together with Dirk Moelants. There I had the private suspicion that this resonance was related to the human locomotion system. This only became obvious when I started to work at IPEM (where embodiment had meanwhile become a basic tenet), with Frederik Styns and Bart Moens on “Walking on Music” and with Prof. Marek Franek in Hradec Králové on walking in different urban environments. We also did a major experiment on the development of the 2 Hz resonance in children between 3 and 11 years old. For the last 6 years I have been interested in music and relaxation. In the medical department of Unicauca, Popayán, Colombia, we had two groups of 12 students who relaxed for 90 minutes in supine position, one with relaxing music and the other in silence. We measured heartbeat, respiration, and mini movements. Besides the finding that tempo transitions in music can cause cardiorespiratory coordination mode switches, the more surprising and clear result is that women react to music with their heart, and men with their respiration. A result for further investigation and confirmation.

Leon van Noorden received his PhD in 1975 and was a visiting professor at the Institute for Psychoacoustics and Electronic Music (IPEM), Ghent University, from 2005 to 2021.

Toiviainen, Petri: Analyzing kinematics of complex embodied music interaction

During musical activities we employ body movement to interact with the sonic events in music, with other people, and/or with musical instruments. Examples of such interactions comprise rhythmic and social entrainment as well as various types of communicative and expressive gestures. The kinematics of these kinds of interactions can be complex: they can be simultaneously multidimensional (e.g. different body parts moving independently of each other), multilevel (e.g. entrainment to different metrical levels), and/or multiperson (e.g. group dance, collective movement of an audience). Moreover, such interactions are often nonstationary, comprising movement patterns that vary over time. Commonly used methods for quantifying embodied music interaction can usually tackle a subset of these sources of complexity but fall short of dealing with all of them simultaneously. In my talk I will summarize some of our recent work that aims to understand the spatiotemporal characteristics of such complex embodied musical interaction, both intra- and intermodal, using various novel data-driven methods based on time-frequency analysis and latent space projections.

Petri Toiviainen is professor of music at the University of Jyväskylä.

Varoutsos, Georgios: Sounding Belfast During Covid-19: Lockdowns

During the global pandemic, countries and cities became literal ghost towns. Once bustling with human-generated sounds, they now offer enhanced perspectives on urban and natural sound environments. By recording multiple points during the lockdowns, we can chronologically experience the differences imposed by Covid-19 restrictions and how these spaces changed from the start of the pandemic. The recordings focus on the periods of Lockdowns 1-3, the Exit Strategies (Summers 2020 and 2021), and the Removed Restrictions. This project reflects, through an auditory and sonic-art perspective, how the city sounds without the presence of humans, or without the normal density of humans, in popular public spaces of Belfast and a few locations in Montreal. A total of 87 audio soundscapes across 26 locations have been documented at this time. PhD Research Link: https://georgiosvaroutsos.com/covid-19/

Georgios Varoutsos is a sonic artist and researcher from Montreal, Canada. He is currently completing his PhD studies in Music at the Sonic Arts Research Centre (SARC) at Queen’s University Belfast, Northern Ireland. He has graduated with a Master’s in Research, Pass with Distinction, in Arts & Humanities with a focus in Sonic Arts at Queen’s University Belfast. He has also completed a BFA with Distinction in Electroacoustic Studies and a BA in Anthropology, both from Concordia University in Montreal. He is currently working on research consisting of urban arts, sound studies, sonic arts, and socially engaged arts. Website: https://georgiosvaroutsos.com/

Visi, Federico; Schramm, Rodrigo; Frodin, Kerstin; Unander-Scharin, Åsa; Östersjö, Stefan: Empirical Analysis of Gestural Sonic Objects Combining Qualitative and Quantitative Methods

This paper reports on a multimodal study of a performance of Fragmente2, a choreomusical composition based on the Japanese composer Makoto Shinohara’s solo piece for tenor recorder, Fragmente (1968). We carried out a multimodal recording of a full performance of the piece and obtained synchronised audio, video, motion, and electromyogram (EMG) data describing the body movements of the performers. We then added qualitative annotations related to gestural sonic objects in order to analyse which audio and movement features best predict the qualities of the performed sonic objects. We devised a method to annotate the qualities of gestural sonic objects in an audiovisual recording of a music performance. Firstly, the performance is segmented by identifying salient events occurring on the meso timescale (approx. 0.5-5 s). These gestural sonic objects are then analysed further to annotate qualities in the gestural and sonic domains. For each modality, iterative, sustained, and impulsive components are annotated, thus describing how each gestural sonic object is structured. We used the Random Forest machine learning algorithm to rank which of the 42 features extracted from the collected audio, movement, and muscular activation data are the most important for predicting the qualities annotated within each object. For this performance of Fragmente2, results show that, for the sound modality, statistical descriptors of audio pitch and loudness ranked highest, immediately followed by descriptors of the finger-flexor EMG. For the gestural modality, descriptors of body centre of mass, weight shifting, and contraction index ranked highest, together with the variance of audio loudness. The work of Professor Rolf Inge Godøy strongly influenced the conceptual framework of the study. In particular, the concept of the gestural sonic object is informed by Godøy’s embodied extension of Schaeffer’s notion of the sonic object. Godøy’s work also inspired our interdisciplinary methodological approach combining creative and analytic perspectives.
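The feature-ranking step can be illustrated in spirit with a simplified stand-in (this is not the authors' Random Forest pipeline): score each feature by the best classification accuracy a single-threshold "stump" achieves on it, then rank features by that score. All names and data below are invented for illustration.

```python
# Simplified stand-in for Random Forest feature-importance ranking:
# each feature is scored by the best one-threshold classifier on it.

def stump_score(values, labels):
    """Best accuracy of a one-threshold classifier on a single feature."""
    best = 0.0
    for t in values:
        for sign in (1, -1):  # try both threshold directions
            preds = [1 if sign * v >= sign * t else 0 for v in values]
            acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
            best = max(best, acc)
    return best

def rank_features(columns, labels):
    """columns: list of feature columns; returns indices sorted best-first."""
    scores = [stump_score(col, labels) for col in columns]
    return sorted(range(len(columns)), key=lambda i: -scores[i])

# Toy data: feature 0 tracks the label perfectly, feature 1 is noise.
labels = [0, 0, 0, 0, 1, 1, 1, 1]
informative = [0.1, 0.2, 0.15, 0.05, 0.9, 0.8, 0.95, 0.85]
noise = [0.84, 0.76, 0.42, 0.26, 0.51, 0.40, 0.78, 0.30]
print(rank_features([informative, noise], labels))  # → [0, 1]
```

Random Forest importance additionally averages over many randomised trees and accounts for feature interactions, but the output has the same shape: a ranking of the 42 features by predictive value.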

Federico Visi (he/they) is a researcher, composer and performer based in Berlin, Germany. He carried out his doctoral research on instrumental music and body movement at the Interdisciplinary Centre for Computer Music Research (ICCMR), University of Plymouth, UK. He currently teaches and carries out research on embodiment in network performance at the Universität der Künste Berlin and at Luleå University of Technology, where he is part of the “GEMM))) Gesture Embodiment and Machines in Music” research cluster. Under the moniker AQAXA, they have released an EP combining conventional electronic music production techniques with the exploration of personal sonic memories by means of body movement and machine learning algorithms.

Wanderley, Marcelo: Gestures and Instrument Design

Rolf Inge Godøy's studies on embodiment and gestures are integral to his interest in sonic design and are the part of his work that most influenced me. From studies of "air-instruments" and gesture classifications to the notion of co-articulation, his work has proposed fundamental concepts that help one better understand the richness and complexity of music performance. In this talk, I will present recent results from studies of digital musical instruments (DMIs), including the design and evaluation of DMIs for expert musical applications. First, I will review recent work on the use of DMIs by professionals (Sullivan, Guastavino and Wanderley, in press) and highlight factors that contribute to the use of these instruments over long periods. I will then discuss the design of two new DMIs, the Slapbox and the t-Tree. The Slapbox, designed by Boettcher and Sullivan, is a responsive, self-contained instrument (gesture capture, mapping and sound synthesis in the same device) that allows for a variety of percussive interactions. The t-Tree, by Kirby and Buser, is a docking station/hub for t-Stick controllers (Malloch and Wanderley 2007). It allows several controllers to share the same embedded sound processing/synthesis capabilities and to create individual or group performances using the instrument. Apart from its use as a hub for collaborative performances, the t-Tree attempts to extend the usable life of DMIs by allowing different versions of the t-Stick to seamlessly connect to the same synthesizer, eliminating the need for the version-specific mapping/sound synthesis software used over more than 15 years (Nieva et al. 2018), much of which has become obsolete.

Marcelo Mortensen Wanderley holds a Ph.D. degree from the Université Pierre et Marie Curie (Paris VI), France, in acoustics, signal processing, and computer science applied to music. His interdisciplinary research focuses on the development of novel interfaces for music performance. He has authored and co-authored several dozen scientific and technological publications on NIME, including the development of open databases on sensor and actuator technologies for musical applications – the SensorWiki.org project. In 2000, he co-edited the first English-language research reference in this area, Trends in Gestural Control of Music (Wanderley & Battier, 2000). In 2003, he chaired the second International Conference on New Interfaces for Musical Expression (NIME03), and in 2006, he co-wrote the first textbook on this subject, New Digital Musical Instruments: Control and Interaction Beyond the Keyboard (Miranda & Wanderley, 2006). In September 2016 he was appointed a member of Computer Music Journal’s Editorial Advisory Board. He is a senior member of the ACM and of the IEEE.

Wang, Yichen: Sonic Design in Augmented Reality

In the field of sound and music computing, sonic design is afforded both by the specific technology in use and by design motivations. Applications range from mobile music computing systems to augmented/virtual instruments and expressions, interactive installations, and more. Recent developments in augmented reality (AR) technologies, such as head-mounted displays (HMDs), advance sonic practices in the computer music research community. With free-hand gesturing and a large enclosed field of view, one can immerse oneself in this digital environment and interact with virtual sonic elements in ways that differ from physical experience. Furthermore, in contrast to virtual reality, the ability to still access reality allows for co-experiences and co-creations. AR thus presents a new digital medium for computational artists' sonic practices. In this seminar, I will present my experience and insights into sonic design in the augmented reality environment. I will start with an overview of existing sonic applications using augmented reality technology, specifically works that incorporate HMD devices. I will then present three of my works: Sonic Sculpture, Sonic Sculptural Staircase, and Cubing Sound, which progressively explore different sonic design ideas such as soundscape and new musical expression. Drawing on formal evaluation and improvisation, I conclude with reflections on creating sonic experiences in an AR environment and on how this medium could benefit sonic design for practitioners in a broader context.

Yichen is currently a PhD candidate in computer science at The Australian National University. Her work focuses on new creative sonic expression in Augmented Reality, exploring the relationship between technology and art(s).

Published Apr. 15, 2022 9:27 AM - Last modified Feb. 19, 2024 1:08 PM