-
-
-
Jensenius, Alexander Refsum
(2025).
What happens in the body when you stand still?
Professor Alexander Refsum Jensenius will talk about his decade-long exploration of human micromotion. Motion data from the 365 standstill sessions he carried out during 2023 reveals lots of biomechanical noise, but also some interesting signals.
-
-
Guo, Jinyue
(2024).
Comparing Four 360-Degree Cameras for Spatial Video Recording and Analysis.
-
Nielsen, Nanette & Martin, Remy Richard
(2024).
Affective framing, care, and (en)action in musical encounters.
-
Leske, Sabine Liliana; Endestad, Tor; Volehaugen, Vegard Akselsson; Foldal, Maja Dyhre; Blenkmann, Alejandro Omar & Solbakk, Anne-Kristin
(2024).
Predicting the Beat Bin: Beta Oscillations Predict the Envelope Sharpness in a Rhythmic Sequence.
-
Grane, Venke Arntsberg; Endestad, Tor; Støver, Isak Elling August; Leske, Sabine Liliana & Solbakk, Anne-Kristin
(2024).
Executive Function in a Treatment-Naive ADHD Cohort Diagnosed in Adulthood.
Background: Attention Deficit Hyperactivity Disorder (ADHD) is an early onset neurodevelopmental condition presenting with diverse cognitive/behavioral impairments that persist into adulthood for half of those affected.
Objective: To examine whether unmedicated adults show general or specific reductions in core executive functions (EFs).
Method: Performance on EF-tasks was assessed in adult patients with ADHD, Combined type (n=36) and in healthy controls (n=34), matched on gender, age, and education level. The tasks tapped memory span/working memory (Digit Span), interference control/response inhibition (Color Word Interference Test; CWIT-Inhibition), set-shifting/switching (CWIT-Switching; Trail Making Test [TMT]), and abstract reasoning (Wisconsin Card Sorting Test; [WCST]).
Results: There was no group difference in immediate memory span, but the patients performed significantly worse than controls when there was an additional demand on working memory. Statistically controlling for individual differences in information processing speed (using an independent reaction time measure) did not alter the result. Patients performed inferiorly on basic psychomotor speed conditions of the TMT, but the most pronounced group difference appeared on the set-shifting condition. The CWIT-Inhibition condition did not distinguish the groups, but patients had a near-significant tendency to perform more poorly when a concurrent demand on rapid set-switching was introduced. They completed fewer card-sorting categories on the WCST than controls, with more errors overall. There was no significant difference in perseverative errors, but patients committed more non-perseverative errors and failures to maintain set, indicating more random choices and/or losing track of the current sorting principle.
Conclusion: ADHD-related reductions of attention maintenance, switching, and working memory, but not inhibitory control, support the literature indicating that ADHD in adulthood is neither associated with specific deficits in inhibitory control nor with a general executive impairment. Accordingly, clinical assessment should span a range of EF tests, including the core control functions studied here.
-
Lucas Bravo, Pedro Pablo
(2024).
Swarmalators: Sound and Music System.
This repository contains a Unity and a Max project to replicate the work from the paper "Interactive Sonification of 3D Swarmalators" presented at NIME 2024. The Swarmalators model was introduced in the paper "Oscillators that sync and swarm".
-
Brøvig, Ragnhild
(2024).
Boklansering: Parody in the Age of Remix: Mashups vs. the Takedown (MIT Press).
-
-
Talseth, Thomas & Brøvig, Ragnhild
(2024).
Kommentar til innlegget "Refser NRK for å la Gåte-låt konkurrere i Melodi Grand Prix".
[Newspaper].
VG.
-
-
-
-
-
Jensenius, Alexander Refsum
(2024).
Sound Actions: Conceptualizing Musical Instruments.
On Tuesday, 6 August, at 14:30, Alexander Refsum Jensenius, Professor of Music Technology and Director of the RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion at the University of Oslo, will give the talk "Sound Actions: Conceptualizing Musical Instruments".
In the talk, which will be held in English, he will present some highlights from his book "Sound Actions: Conceptualizing Musical Instruments". This includes a discussion of the differences between acoustic and electro-acoustic instruments and how today's instruments are not just "sound makers" but increasingly "music makers". He will illustrate this shift with several of his own new instruments for musical expression (NIME).
-
Jensenius, Alexander Refsum
(2024).
The assessment of researchers is changing – how will it impact your career?
Changes are happening in the world of research assessment, such as the recognition of a broader set of competencies as merits and a better balance between quantitative and qualitative goals. In Norway, for example, Universities Norway presented the NOR-CAM report in 2021, which sparked a movement for reform. As an early career researcher, it's crucial to understand how these changes may impact your research career. In this talk, Jensenius will discuss the evolving landscape of research assessment and what it means for you.
-
Jensenius, Alexander Refsum; Danielsen, Anne; Kvammen, Daniel & Tollefsbøl, Sofie
(2024).
Musikksnakk: Musikk i urolige tider.
Research shows that at concerts we feel a sense of community with strangers. For a brief moment, the music brings us together. How can music also unite us in troubled times? What is it about music in particular that unites us? Join a music talk with the artists Daniel Kvammen and FIEH vocalist Sofie Tollefsbøl, together with music researcher Anne Danielsen. Music professor Alexander Refsum Jensenius will lead the conversation with questions on the topic – perhaps they will answer yours too? The conversation is aimed at an audience without an academic background in the subject.
-
Jensenius, Alexander Refsum & Bochynska, Agata
(2024).
Opphavsrettslige utfordringer ved overgangen til FAIR forskningsdata ved UiO.
-
Brøvig, Ragnhild
(2024).
Exploring the Intersection of AI, Ethics, and Music.
-
Brøvig, Ragnhild & Aareskjold-Drecker, Jon Marius
(2024).
Hey Siri, what are the royalty splits of the song you wrote for me?
-
Jónsson, Bjørn Thór; Erdem, Çağrı; Fasciani, Stefano & Glette, Kyrre
(2024).
Towards Sound Innovation Engines Using Pattern-Producing Networks and Audio Graphs.
This study draws on the challenges that composers and sound designers face in creating and refining new tools to achieve their musical goals. Utilising evolutionary processes to promote diversity and foster serendipitous discoveries, we propose to automate the search through uncharted sonic spaces for sound discovery. We argue that such diversity promoting algorithms can bridge a technological gap between the theoretical realisation and practical accessibility of sounds. Specifically, in this paper we describe a system for generative sound synthesis using a combination of Quality Diversity (QD) algorithms and a discriminative model, inspired by the Innovation Engine algorithm. The study explores different configurations of the generative system and investigates the interplay between the chosen sound synthesis approach and the discriminative model. The results indicate that a combination of Compositional Pattern Producing Network (CPPN) + Digital Signal Processing (DSP) graphs coupled with Multi-dimensional Archive of Phenotypic Elites (MAP-Elites) and a deep learning classifier can generate a substantial variety of synthetic sounds. The study concludes by presenting the generated sound objects through an online explorer and as rendered sound files. Furthermore, in the context of music composition, we present an experimental application that showcases the creative potential of our discovered sounds.
-
Saplacan, Diana
(2024).
Human Robot Interaction: Studies with Users.
-
Saplacan, Diana
(2024).
Qualitative Observational Video-Based Study on Perceived Privacy in Social Robots’ Based on Robots Appearances.
-
Jensenius, Alexander Refsum
(2024).
Musikk, Data og KI.
Music is one of the most complex forms of human communication and is therefore well suited for exploring artificial intelligence. The presentation describes how music researchers, psychologists, and computer scientists work together at RITMO to understand more about rhythm, time, and motion in humans and machines.
-
Jensenius, Alexander Refsum
(2024).
Fostering the emergence of new research data careers.
Equipping future graduates, researchers, and society at large with the skills needed to support the digital transition is becoming a priority on European, national, and institutional agendas. Research data management (RDM) and FAIR data are part of this skillset, and research data careers are increasingly in demand in both the public and private sectors. At the organisational level, the availability of staff with data competencies is crucial to support the implementation of FAIR RDM practices and, ultimately, to foster the transition towards Open Science. Data collected by the European University Association show, for example, how universities are creating dedicated research data support services and hiring specific support staff, but significant disparities exist between countries and institutions. RDM responsibilities still fall to existing members of staff. In many cases, technical skills are only partially available and new dedicated staff are required. Universities that have hired specific research data support roles may still have problems meeting the growing demand for research data expertise. Within this context, a major challenge is the absence of a shared recognition and definition of research management professional profiles, despite recent progress made at the European level through ERA Action 17 on research management. This session will address needs, challenges, and opportunities related to the emergence of new research data careers, including the identification of key skills, clear career paths, and their integration into research assessment systems. It will do so by showcasing best practices and reflecting on ways forward with a panel of experts representing different actors, i.e. university leaders, research data practitioners, and policymakers.
-
-
Bishop, Laura & Kwak, Dongho
(2024).
Ignoring a noisy metronome during dyadic drumming.
-
Marín Bucio, Diego Antonio & Polak, Rainer
(2024).
Exploring motion capture systems in dance research: a case study of djembe dance from West Africa.
-
Tørresen, Jim
(2024).
Ethical and Regulatory Perspectives of Robotics and Automation.
-
Marin Bucio, Diego Antonio
(2024).
Can an AI dance?
The reflections and results presented in this paper originate from the experience of designing an artificial intelligence dancer and the subsequent co-creation of human-AI dance in the "Dancing Embryo" project (Marin, Wallace and 6A9). The research revolves around three ethnographies that capture the collaborative process between humans and machines to create and perform dance. This research goes beyond mere technological innovation to become a profound philosophical enquiry, questioning the very nature and limits of the art of dance.
-
Bernhardt, Emil
(2024).
The Beauty and/of the Beat: Expressive Regularity in Schubert.
-
Lartillot, Olivier
(2024).
Successes and challenges of computational approaches for audio and music analysis and for predicting music-evoked emotion.
Background
Decades of research in computational sound and music analysis have led to a large range of analysis tools offering rich and diverse descriptions of music, although a large part of the subtlety of music remains out of reach. These descriptors are used to establish computational models predicting perceived or induced emotion directly from music. Although the models can predict a significant amount of the variability of emotions measured experimentally (Panda et al., 2023), further progress seems hard to achieve, probably due to the subtlety of music and of the mechanisms underlying the evocation of emotion from music.
Aims
An extensive but synthetic panorama of computational research in sound and music analysis as well as emotion prediction from music is presented. Core challenges are highlighted and prospective ways forward are suggested.
Main contribution
For each separate music dimension (dynamics, timbre, rhythm, tonality and mode, motifs, phrasing, structure and form), a synthetic panorama of the state of the art is evoked, highlighting strengths and challenges as well as indicating how particular sound and music features have been found to correlate with rated emotions. The various strategies for modelling emotional reactions to audio and musical features are presented and discussed.
One common general analytical approach carries out a broad and approximate analysis of the audio recording based on simple mathematical models, describing individual audio or musical characteristics numerically. It is suggested that such a loose approach might tend to drift away from commonly understood musical processes and to generate artefacts. This vindicates a more traditional musicological approach based on a focus on the score or approximations of it – through automated transcription if necessary – and a reconstruction of the types of traditional representations commonly studied in musicology. I also argue for the need to closely reflect the way humans listen to and understand music, inspired by a cognitive perspective. Guided by these insights, I sketch the idea of a complex system made of interdependent modules, founded on sequential pattern inference and activation scores not based on statistical sampling.
I also suggest perspectives for the improvement of computational prediction of emotions evoked by music.
Discussion and conclusion
Further improvements of computational music analysis methods, as well as emotion prediction, seem to call for a change of modelling paradigm.
References
R. Panda, R. Malheiro, R. Paiva, "Audio Features for Music Emotion Recognition: A Survey", IEEE Transactions on Affective Computing, 14(1), 68-88, 2023.
-
Lartillot, Olivier
(2024).
Introduction to the MiningSuite toolbox.
-
Lartillot, Olivier
(2024).
KI-verktøy for håndtering, transkribering og analyse av musikkarkiver.
I present a range of tools developed in collaboration with the National Library of Norway. AudioSegmentor automatically splits tape recordings into individual pieces of music; this tool simplified the digitisation of the Norwegian folk music collection. We use advanced deep learning methods to create a state-of-the-art automatic music transcription system, MusScribe, first fine-tuned for the Hardanger fiddle and now made available to music archive professionals for a broad range of music. I also discuss our ongoing progress in the automated musicological analysis of folk music pieces and large collections.
-
Ziegler, Michelle; Sudo, Marina; Akkermann, Miriam & Lartillot, Olivier
(2024).
Towards Collaborative Analysis: Kaija Saariaho’s IO.
-
Jónsson, Bjørn Thór; Erdem, Cagri; Fasciani, Stefano & Glette, Kyrre
(2024).
Cultivating Open-Earedness with Sound Objects discovered by Open-Ended Evolutionary Systems.
Interaction with generative systems can face the choice of generalising towards a middle ground or diverging towards novelty. Efforts have been made in the domain of sounds to enable divergent exploration in search of interesting discoveries. Those efforts have been confined by pre-trained models and single environments. We are building on those efforts to enable autonomous discovery of sonic landscapes. Furthermore, we draw inspiration from research on open-ended evolution to continuously provide evolutionary processes with new opportunities for sonic discoveries. Exposure to autonomously discovered sound objects can elevate openness to sonic experiences, which in turn offers inspiring opportunities for creative work involving sounds.
-
Jónsson, Bjørn Thór
(2024).
Phytobenthos 1.
A playlist of livestream recordings during several nights of stochastic sequencing through sets of sounds found during runs with different configurations of quality diversity search.
-
Solli, Sandra; Danielsen, Anne; Leske, Sabine Liliana; Blenkmann, Alejandro Omar; Doelling, Keith & Solbakk, Anne-Kristin
(2024).
Rhythm-based temporal expectations: Unique contributions of predictability and periodicity.
-
Asko, Olgerta; Volehaugen, Vegard Akselsson; Leske, Sabine Liliana; Funderud, Ingrid; Llorens, Ana?s & Ivanovic, Jugoslav
(2024).
Predictive encoding of deviant tone sequences in the human prefrontal cortex.
-
Volehaugen, Vegard Akselsson; Leske, Sabine Liliana; Funderud, Ingerid; Rezende Carvalho, Vinicius; Endestad, Tor & Solbakk, Anne-Kristin
(2024).
Unheard Surprises: Attention-Dependent Neocortical Dynamics Following Unexpected Omissions Revealed by Intracranial EEG.
-
Laczko, Balint
(2024).
Two-part guest lecture about spatial audio and Ambisonics for MCT students.
-
Blenkmann, Alejandro Omar
(2024).
Audiopred Project: Neurophysiological mechanisms of auditory predictions.
-
Laczko, Balint
(2024).
Code repository for "Synth Maps: Mapping The Non-Proportional Relationships Between Synthesizer Parameters and Synthesized Sound".
Parameter Mapping (PM) is probably the most used design approach in sonification. However, the relationship between a synthesizer’s input parameters and the perceptual distribution of its output sounds might not be proportional, limiting its ability to convey relationships within the source data in the sound. This study evaluates a basic Frequency Modulation (FM) synthesis module with perceptually motivated descriptors, measures of spectral energy distribution, and latent embeddings of pre-trained audio representation models. We demonstrate how these metrics do not indicate straightforward relationships between synthesis parameters and perceived sound. This is done using interactive audiovisual scatter plots—Synth Maps—that can be used to explore the sound distribution of the synthesizer and qualitatively evaluate how well the different representations align with human perception. Links to the code and the interactive Synth Maps are available.
-
Basiński, Krzysztof; Domżalski, Tomasz & Blenkmann, Alejandro Omar
(2024).
The effect of harmonicity on mismatch negativity responses to different auditory features.
-
Asko, Olgerta; Volehaugen, Vegard Akselsson; Leske, Sabine Liliana; Funderud, Ingrid; Llorens, Anaïs & Ivanovic, Jugoslav
(2024).
Predictive encoding of deviant tone sequences in the human prefrontal cortex.
The ability to use predictive information to guide perception and action relies heavily on the prefrontal cortex (PFC), yet the involvement of its subregions in predictive processes remains unclear. Recent perspectives propose that the orbitofrontal cortex (OFC) generates predictions about perceptual events, actions, and their outcomes while the lateral prefrontal cortex (LPFC) is involved in prospective functions, which support predictive processes, such as selective attention, working memory, response preparation or inhibition. To further delineate the roles of these PFC areas in predictive processing, we investigated whether lesions would impair the ability to build predictions of future events and detect deviations from expected regularities. We used an auditory deviance detection task, in which the structural regularities of played tones were controlled at two hierarchical levels by rules defined at a local (i.e., between tones within sequences) and global (i.e., between sequences) level.
We have recently shown that OFC lesions affect detecting prediction violations at two hierarchical levels of rule abstraction, i.e., altered MMN and P3a to local and simultaneous local + global prediction violations (https://doi.org/10.7554/eLife.86386). Now, we focus on the task's predictive aspect and present the latest results showing the involvement of PFC subregions in anticipation of deviances informed by implicit predictive information.
Behavioral data shows that deviance expectancy induced faster deviance detection in healthy adults (n=22), suggesting that participants track a state space representation of the task and anticipate upcoming deviant sequences.
The analysis of EEG data from patients with focal lesions to the OFC (n = 12) or LPFC (n = 10), and SEEG from the same areas in patients with epilepsy (n = 7), revealed interesting differences. Healthy adults (n = 15) showed modulations of the Contingent Negative Variation (CNV) – a marker of anticipatory activity - tracking the expectancy of deviant tone sequences. However, patients with OFC lesions lacked CNV sensitivity to the predictive context, while patients with LPFC lesions showed moderate sensitivity compared to healthy adults. These results were further supported by intracranial recordings, which revealed expectancy modulation of the high-frequency broadband signal from electrodes in OFC and LPFC, with an earlier latency of activity modulation for the OFC and a later one for the LPFC.
Altogether, the complementary approach from behavioral, intracerebral EEG, scalp EEG, and causal lesion data provides compelling evidence for the distinct engagement of the two prefrontal areas in predicting future events and signaling deviations.
-
Laczko, Balint & Jensenius, Alexander Refsum
(2024).
Poster for "Synth Maps: Mapping The Non-Proportional Relationships Between Synthesizer Parameters and Synthesized Sound".
Parameter Mapping (PM) is probably the most used design approach in sonification. However, the relationship between a synthesizer’s input parameters and the perceptual distribution of its output sounds might not be proportional, limiting its ability to convey relationships within the source data in the sound. This study evaluates a basic Frequency Modulation (FM) synthesis module with perceptually motivated descriptors, measures of spectral energy distribution, and latent embeddings of pre-trained audio representation models. We demonstrate how these metrics do not indicate straightforward relationships between synthesis parameters and perceived sound. This is done using interactive audiovisual scatter plots—Synth Maps—that can be used to explore the sound distribution of the synthesizer and qualitatively evaluate how well the different representations align with human perception. Links to the code and the interactive Synth Maps are available.
-
Jensenius, Alexander Refsum
(2024).
From air guitar to self-playing guitars.
What can air guitar performance tell us about people's musical experience, and how does it relate to real guitar performance? Alexander Refsum Jensenius will talk about his decade-long research into music-related body motion of both performers and perceivers. He will also explain how this has informed new performance paradigms, including the self-playing guitars that will be showcased at the festival.
Alexander Refsum Jensenius is a professor of music technology at the University of Oslo and Director of RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion. He studies how and why people move to music and uses this knowledge to create new music with untraditional instruments. He is widely published, including the books Sound Actions and A NIME Reader.
-
Jensenius, Alexander Refsum
(2024).
Video Visualization and Analysis.
In this workshop, I will introduce video visualization as a method for understanding more about music-related body motion. Examples will be given of various methods implemented in the standalone application VideoAnalysis and the Musical Gestures Toolbox for Python.
-
-
Jensenius, Alexander Refsum
(2024).
20 years of concert research at the University of Oslo.
In my talk I will give an overview of the concert research conducted in the fourMs Lab at the University of Oslo from the early 2000s to today. Over the years, we have explored and refined numerous data capture methods, from qualitative observation studies, interviews, and diaries to motion capture and physiological sensing. At the core has always been the attempt to shed light on the complexity of music performance. This includes understanding more about the subtleties of performers' sound-producing actions, sound-facilitating motion, and communicative and expressive gestures. It also includes the intricacies of interpersonal synchronization. Over the years, we have been able to expand from studying duos, trios, and quartets to full orchestras. Today, we have lots of data, some answers, and even more questions than when we started. An excellent starting point for future research.
-
-
Jensenius, Alexander Refsum
(2024).
Labprat #3: NM i stillstand.
Can you stand still to your favourite song? Try it yourself and win 1000 NOK!
People often say it is impossible not to move to music, but is that true?
On Wednesday 3 April you can test yourself when Professor Alexander Refsum Jensenius – also known as Professor Standstill – invites you to the "Norwegian Championship of Standstill" here at Popsenteret.
The winner is announced the same evening at LAB.prat #3 with Alexander himself! There you will also learn more about what actually happens in the body when we listen to music.
As usual, the evening is hosted by facilitator and "MC" Dr. Kjell Andreas Oddekalv, also known as "Dr. Kjell" (or, as he likes to say himself, all of Norway's Kjelledegge) from the hip-hop orchestra Sinsenfist. Together with Alexander, he invites you to an informal conversation and Q&A about body rhythms and how they are affected by our surroundings.
Between the standstill competition and LAB.prat, Popsenteret is open and you are welcome to visit our exhibition and everything it has to offer!
-
Jensenius, Alexander Refsum
(2024).
20 Years of Piano Research at the University of Oslo.
In this lecture-recital, I will present piano-related research from the Department of Musicology over the last twenty years. I will also reflect on my role in this history, both as an artist and scientist. Finally, I will scrutinize the department's new Disklavier while performing various exploratory etudes.
-
Jensenius, Alexander Refsum
(2024).
Embodied music-related design.
Abrahamson et al. (2022) recently called for a merging of Embodied Design-Based Research and Learning Analytics to establish a coherent and integrated focus on Multimodal Learning Analytics of Embodied Design. In Spring 2022, members of EDRL and selected international collaborators of the lab participated in “Rhythm Rising,” a workshop week hosted at University of Oslo’s RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion. The workshop featured activities for graduate students to learn the scientific research methodologies of gathering physical, physiological, and neurobiological data from study participants engaged in interactive learning of STEM content. The activities combined the respective expertise of Abrahamson (learning sciences) and Jensenius (embodied music cognition and technology) to investigate sensorimotor micro-processes hypothesized to form the cognitive basis of conceptual understandings, such as hand- and eye actions leading to the emergence of mathematical insight. Whereas the Oslo workshop spurred great enthusiasm among the graduate students, its duration only allowed time for initial data collection. Therefore, we would like to regather in Spring 2024 to continue our collaborative work and to share insights about data analysis, visualization, and interpretation. Concurrently, we’ll develop ideas for future joint research projects.
-
Jensenius, Alexander Refsum
(2024).
The Ambient project at RITMO.
The AMBIENT project aims to study how the elements of an environment influence people's bodily behaviors and how they feel about its rhythms. This will be done by studying how different auditory and visual stimuli combine to create rhythms in various settings.
-
Jensenius, Alexander Refsum; Vo, Synne; Kelkar, Tejaswinee & Kjus, Yngvar
(2024).
Musikksnakk: Musikk på Spotify - hvordan funker algoritmene?
Why do record labels want artists to make TikToks to promote their music? What determines which music recommendations you get on Spotify? And how do the labels use your data to generate clicks and listens? Join a conversation about algorithms on apps like TikTok and Spotify - and how they shape your taste in music!
Joining the discussion are:
- Synne Vo. She is an artist who broke through on TikTok and actively uses the platform to promote her music. She joins the panel to share her experiences with the industry and the apps.
- Yngvar Kjus. He is a professor of music and media at UiO and has done extensive research on popular music, music production, and the music industry.
- Tejaswinee Kelkar. She is a singer and researcher in music and motion. She has previously worked as a data analyst at Universal Music Norway and at the RITMO Centre of Excellence at the University of Oslo.
The conversation is moderated by Alexander Refsum Jensenius. He is a professor of music at the University of Oslo and Director of RITMO - Centre for Interdisciplinary Studies in Rhythm, Time and Motion. He is constantly trying to understand more about how and why people move to music.
-
Jensenius, Alexander Refsum
(2024).
Mock PhD Interview.
The objective of the interview mockup is to provide an example of what a PhD interview looks like. We want to provide a safe space to ask questions to an experienced interviewer and to understand how to better prepare for the interview if you're applying to PhD positions in other countries.
-
Jensenius, Alexander Refsum
(2024).
Mock PhD Interview.
The objective of the interview mockup is to provide an example of what a PhD interview looks like. We want to provide a safe space to ask questions to an experienced interviewer and to understand how to better prepare for the interview if you're applying to PhD positions in other countries.
LatAm BISH Bash is a series of meetings and networking events that connect engineers, researchers, students, and companies working on speech, acoustics, and audio processing.
This time, we will have a PhD mockup interview conducted by Alexander Jensenius, who is a professor of music technology and Director of RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion.
-
-
Rezende Carvalho, Vinicius; Collavini, Santiago; Kochen, Silvia; Solbakk, Anne-Kristin & Blenkmann, Alejandro Omar
(2024).
Single-neuron responses to a multifeature oddball paradigm.
-
Jensenius, Alexander Refsum & Laczko, Balint
(2024).
Video Visualization.
This workshop is targeted at students and researchers working with video recordings. You will learn to use MG Toolbox, a Python package with numerous tools for visualizing and analyzing video recordings. This includes visualization techniques such as motion videos, motion history images, and motiongrams; techniques that, in different ways, allow for looking at video recordings from different temporal and spatial perspectives. It also includes some basic computer vision analysis, such as extracting quantity and centroid of motion, and using such features in analysis. MG Toolbox for Python is a collection of high-level modules for generating all of the above-mentioned visualizations and analyses. This toolbox was initially developed to analyze music-related body motion but is equally helpful for other disciplines working with video recordings of humans, such as linguistics, psychology, medicine, and educational sciences.
-
Lartillot, Olivier
(2024).
Harmonizing Tradition with Technology: Enhancing Norwegian Folk Music through Computational Innovation.
Show summary
My work involves developing computational tools to safeguard and elevate the cultural significance of music repertoires, with a focus on a cooperative project with the National Library of Norway related to their collection of Norwegian folk music. Our first phase centered on transforming unstructured audio tapes into a systematic dataset of melodies while ensuring its access and longevity through efficient data management and linking with other catalogues.
Our core activity involves transcribing audio recordings into scores, comparing the traditional manual method with our modern attempts at automation. By providing detailed performance notation and close alignment between scores and audio recordings, we aim to improve comprehension and overall accessibility, as well as enable a more advanced structuring of the collection.
Challenges arose when incorporating this music into the International Inventory of Musical Sources (RISM) database because the 'incipit' concept is ill-suited to genres like Hardanger fiddle folk music; we suggest innovative generalisations of this concept. Moreover, we are creating techniques to digitally dissect the musical corpus, aiming to extract key features of each tune. This initiative not only serves as an alternative to incipits but also provides novel metadata formats, increasing usability and connectivity within the collection and with other databases.
-
Monstad, Lars Løberg & Lartillot, Olivier
(2024).
muScribe: a new transcription service for music professionals.
-
Lartillot, Olivier
(2024).
MIRAGE Closing Seminar: Digitisation and computer-aided music analysis of folk music.
Show summary
One aim of the MIRAGE project is to conceive new technologies for better accessing, understanding, and appreciating music, with a particular focus on Norwegian folk music. This seminar presents what has been achieved during the four years of the project, leading in particular to the digital version of the Norwegian Catalogue of Folk Music. We are also conceiving tools to automatically transcribe audio recordings of folk music. More advanced musicological applications are discussed as well. To conclude, we introduce the new spin-off project, muScribe, aimed at developing transcription services for a broad range of music beyond folk music, in a first stage tailored to professional organisations such as archives, publishers, and producers.
-
Johansson, Mats Sigvard & Lartillot, Olivier
(2024).
Automated transcription of Hardanger fiddle music: Tracking the beats.
-
Thedens, Hans-Hinrich & Lartillot, Olivier
(2024).
The Norwegian Catalogue of Folk Music Online.
-
Jensenius, Alexander Refsum & Jerve, Karoline Ruderaas
(2024).
Verdens største musikkeksperiment.
[Business/trade/industry journal].
Ballade.
Show summary
Tonight, NRK's popular-science radio programme Abels tårn, KORK, and the research project MusicLab meet to measure what happens between musicians and audience members when they are exposed to music.
-
Saplacan, Diana
(2024).
Social Robots and Socially Assistive Robots (SARs) within Elderly Care: Lessons Learned So Far.
-
Vuoskoski, Jonna Katariina; Treider, John Melvin Gudnyson & Huron, David
(2024).
The attribution of virtual agency to music predicts liking.
-
Vuoskoski, Jonna Katariina & Stupacher, Jan
(2024).
Investigating internal motor simulation in response to music stimuli with varying degrees of rhythmic complexity.
-
Lartillot, Olivier
(2024).
Real-time MIRAGE visualisation of Bartók's first quartet, first movement.
-
Lartillot, Olivier
(2024).
Overview of the MIRAGE project.
-
Monstad, Lars L?berg & Lartillot, Olivier
(2024).
Automated transcription of Hardanger fiddle music: Detecting the notes.
-
Oddekalv, Kjell Andreas
(2024).
Humaniorafestivalen 2024, Inn i historien: Dr. Kjell presenterer: 40 år med norsk hiphop | Bjørnholt.
-
Oddekalv, Kjell Andreas
(2024).
Vi skriv på tog, og vi skriv på tog - pitch.
-
Oddekalv, Kjell Andreas
(2024).
Humaniorafestivalen 2024, Inn i historien: Dr. Kjell presenterer: 40 år med norsk hiphop | Stovner.
-
Oddekalv, Kjell Andreas & Laeng, Bruno
(2024).
LAB.prat #2: Professor Bruno Laeng - Vi lytter med øynene.
-
Oddekalv, Kjell Andreas
(2024).
The Sound of the crew in rap: Rapping chimeras, illusory posses and other fantastical creatures summoned in the studio and cipher.
-
Oddekalv, Kjell Andreas; Bjørkheim, Terje; Sørli, Anders Ruud; Ugstad, Magnus; Hole, Erik & Walderhaug, Bendik
(2024).
Sinsenfist på Carls.
-
Oddekalv, Kjell Andreas & Swarbrick, Dana
(2024).
LAB.prat #1: Dr. Dana Swarbrick || Dana & The Monsters.
-
Oddekalv, Kjell Andreas & Jensenius, Alexander Refsum
(2024).
LAB.prat #3 og "NM i stillstand": Kan man stå stille til musikk?
-
Jensenius, Alexander Refsum & Danielsen, Anne
(2024).
Tverrfaglighet: 40-grupper til besvær.
Uniforum.
ISSN 1891-5825.
Show summary
We are positive about multi- and interdisciplinary study programmes and think 40-credit groups are a good idea. The structure is in place, but the implementation is deficient. At times it is hard to believe that we work at the same institution.
-
Vuoskoski, Jonna Katariina
(2024).
Some of our favourite songs make us sad, which may be why we like them.
[Internet].
https://www.newscientist.com/article/2426284-some-of-our-fav.
-
Danielsen, Anne
(2024).
There’s more to timing than time: P-centers, beat bins and groove in musical microrhythm.
Show summary
How does the dynamic shape of a sound affect its perceived microtiming? In the TIME project, we studied basic aspects of musical microrhythm, exploring both stimulus features and the participants’ enculturated expertise via perception experiments, observational studies of how musicians produce particular microrhythms, and ethnographic studies of musicians’ descriptions of microrhythm. Collectively, we show that altering the microstructure of a sound (“what” the sound is) changes its perceived temporal location (“when” it occurs). Specifically, there are systematic effects of core acoustic factors (duration, attack) on perceived timing. Microrhythmic features in longer and more complex sounds can also give rise to different perceptions of the same sound. Our results shed light on conflicting results regarding the effect of microtiming on the “grooviness” of a rhythm.
-
Tørresen, Jim
(2024).
Kunstig intelligens og forskningsetiske vurderinger.
-
Jensenius, Alexander Refsum
(2024).
Muskelmusikk.
Show summary
What happens in the muscles when we try to stand still? How can we make music from the body? During the break at Forsker Grand Prix, I will entertain with a stage show in which I explore interactive muscle armbands and a music glove.
-
Serdar Göksülük, Bilge
(2024).
Remote Dance Improvisation Through Advanced Telematic Technologies.
-
Norstein, Emma Stensby; Yasui, Kotaro; Kano, Takeshi; Glette, Kyrre & Ishiguro, Akio
(2024).
A bio-inspired decentralized approach to multi-morphology control.
Show summary
Traditional robot controllers are usually optimized for a specific robot design and will often fail if the robot's morphology is changed. This can be a challenge for robustness to damage, for self-reconfiguring robots, or in morphology design search algorithms, where each new robot design requires re-learning the controller. Here, we take a strongly bio-inspired approach, drawing on the versatility of myriapod locomotion, to create a multi-morphology controller. We propose a simple decentralized controller model which can, without any change in parameters, adapt to various centipede-like morphologies and display different behaviors based on changes in morphology and environment. The approach shows potential for robot design and could be useful for understanding mechanisms of animal locomotion.
-
Glette, Kyrre
(2024).
Bio-inspiration and divergent search algorithms for robotics and sound exploration.
-
Solbakk, Anne-Kristin; Hope, Mikael; Solli, Sandra; Leske, Sabine Liliana; Foldal, Maja Dyhre & Blenkmann, Alejandro Omar
(2024).
Research seminar.
-
Câmara, Guilherme Schmidt
(2024).
Looking for the perfect JND: In search of more ecological thresholds for the perception of microrhythm in groove-based music.
Show summary
There is currently a gap in microrhythm research regarding the extent to which we perceive nuances in the timing of complex acoustic stimuli in realistic musical contexts. Classic studies tend to investigate the so-called just-noticeable difference (JND) thresholds of timing discrimination in non- or quasi-rhythmic contexts, and generally use non-musical sound stimuli such as clicks or sine waves. Findings from these studies show that we can discriminate minute timing irregularities between such simple sounds with a high degree of precision – as low as 2 milliseconds for onset asynchronies between tones (Hirsh 1959, Zera & Green 1993). In more recent decades, studies have incorporated musical sounds into discrimination experiments, though these are often still synthesized and at best tend to resemble quasi-musical/rhythmic contexts. Even so, these have revealed similarly impressive acuity results, as well as important effects of tempo (IOI) and degree of musical training on timing JNDs (Frane & Shams 2017; Friberg & Sundberg 1995). To our knowledge, however, none have yet focused attention on JND thresholds in more realistic musical contexts. As such, the extent to which results derived from non- or quasi-musical experimental settings translate to our perception of microrhythmic nuances in real groove-based music – that is, highly multilayered ensembles featuring a range of complex instrumental sounds and rhythmic patterns – remains somewhat poorly understood.
In this talk, I will present an overview of some of the abovementioned salient literature on perceptual thresholds of microtiming, with a focus on asynchrony (beat delay/anticipation) and anisochrony (swing). In addition, I will present some preliminary results from our own ongoing series of JND experiments, which seek to generate more ecologically valid perceptual heuristics for microrhythm in simple yet realistic groove-based musical contexts. Results from pilot experiments on a standard funk pattern (modelled on James Brown's Soul Power) already indicate that JNDs for simple detection of asynchrony in a given instrument layer (guitar, bass, drums [hi-hat, kick, snare]) exceed those predicted by the previous literature. This suggests that perhaps we are not as sensitive to certain forms of microrhythmic nuance in realistic musical contexts as previously thought. They also point to important differences in JND thresholds between musicians and non-musicians – individuals without musical training appear to be significantly less sensitive to microrhythmic nuances than musicians – as well as between percussive and stringed instruments – we appear to be more sensitive to asynchronies produced by sharper, impulsive drum sounds than to wider, smoother ones such as those of the electric bass and guitar. These latter findings in particular add to the growing awareness in the field of microrhythm studies that sound-related features related to timbre are fundamental to the perception and production of timing in groove-based contexts (Câmara et al. 2020a; 2020b). Different methodological approaches and challenges will also be discussed, with a focus on how different procedures/tasks (e.g. directly comparing two grooves – one with, and one without, asynchrony – and identifying the one with asynchrony [2AFC], as opposed to listening to one groove and then judging whether asynchronies were present or not [Yes/No]) can further affect JND timing thresholds and ultimately provide quite different answers as to what extent 'microrhythm matters' perceptually to us as listeners.
-
Serdar Göksülük, Bilge
(2024).
Immersive Technologies and Their Implications in Theatre for Young Audiences: Challenges and Opportunities.
-
Tørresen, Jim
(2024).
Vil vi ha roboter til å hjelpe oss når vi trenger hjelp?
-
Serdar Göksülük, Bilge
(2024).
Remote Intercorporeality Through Telematic Technologies.
-
Serdar Göksülük, Bilge
(2024).
Conducting Semi-Structured Dance Research in Motion Capture Labs.
-
Serdar Göksülük, Bilge; Tidemann, Aleksander & Jensenius, Alexander Refsum
(2024).
Telematic Testing: One Performance in Three Locations.
-
Glette, Kyrre; Ellefsen, Kai Olav; Norstein, Emma Stensby & de Bruin, Ege
(2024).
Automatic Design of Robot Bodies and Brains with Evolutionary Algorithms - Tutorial.
Show summary
The evolution of robot bodies and brains allows researchers to investigate which building blocks are interesting for evolving Artificial Life, and how controllers and morphologies can be shaped together for automated robot design. This tutorial aims to introduce evolution of robot body and control, and some of the key challenges one faces when doing experiments in Evolutionary Robotics. These include finding good ways to represent robots (genotypic encodings), challenges related to co-optimizing morphology and control, how environments shape body and control, and selecting the right physical substrate for evolving robots.
After introducing these challenges and showing relevant examples from our own and other labs’ research, we will present a short demo of how to run Evolutionary Robotics experiments in practice, with the Unity ML-Agents framework.
-
Solbakk, Anne-Kristin & Jensenius, Alexander Refsum
(2024).
Research Ethics and Legal Perspectives.
-
Tørresen, Jim
(2024).
From Adaptation of the Robot Body and Control Using Rapid-Prototyping to Human–Robot Interaction with TIAGo.
-
Solbakk, Anne-Kristin
(2024).
Inhibitory control and impulsive actions in ADHD.
-
Blenkmann, Alejandro Omar
(2024).
Electrophysiological correlates of auditory regularity expectations and violations at short and long temporal scales: Studies in intracranial EEG and prefrontal cortex lesion patients.
-
Blenkmann, Alejandro Omar; Volehaugen, Vegard Akselsson; Rezende Carvalho, Vinicius; Leske, Sabine Liliana; Llorens, Anais & Funderud, Ingrid
[Show all 14 contributors for this article]
(2024).
An intracranial EEG study on auditory deviance detection.
-
Polak, Rainer
(2024).
Embedded Audiency: Performing as Audiencing at Music-Dance Circle Events in Mali.
-
Fleckenstein, Abbigail Marie; Vuoskoski, Jonna Katariina & Saarikallio, Suvi
(2024).
Being Musically Moved.
-
Fleckenstein, Abbigail Marie; Vuoskoski, Jonna Katariina & Saarikallio, Suvi
(2024).
Being Musically Moved.
-
Danielsen, Anne
(2024).
Interdisciplinary music research: gains and challenges.
Show summary
Recent years have seen a steady increase in calls for interdisciplinary approaches to research from politicians, university administrators, and public and private funding agencies alike. Interdisciplinary research is needed, it is claimed, to solve many of the foundational crises faced by societies today. While interdisciplinary research holds great promise for large-scale problem-solving, it is also bedeviled by obstacles at the institutional and individual level that monodisciplinary research does not face to the same extent, such as insufficient infrastructure, organizational barriers, lower employability, and few well-established publication channels. Sometimes even more challenging, however, are the different research traditions of the disciplines involved, which might adhere to profoundly different methodological traditions, lack shared criteria for quality assessment, and even disagree regarding what counts as science.
In this talk, I will address the gains and challenges of working across radically different disciplines in music research, sharing my experience from three highly interdisciplinary projects: the RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion; the MusicLab Copenhagen research concert; and the TIME project on musical microrhythm.
-
Lucas Bravo, Pedro Pablo
(2024).
Csound vs. ChucK: Sound Generation for XR Multi-Agent Audio Systems in the Meta Quest 3 using the Unity Game Engine.
Show summary
Extended Reality (XR) technologies, particularly headsets like the Meta Quest 3, are revolutionizing the field of immersive sound and music applications by offering new depths of user experience. As such, the Unity game engine emerges as a preferred platform for building such auditory environments. As part of its capabilities, Unity allows the programming of sound generation through a low-level digital signal processing API, which requires specialized knowledge and significant effort for development. However, wrappers that integrate Unity with programming languages for sound synthesis can facilitate the implementation of this task. In this work, we focus on applications for the Meta Quest 3 involving multiple spatialized audio sources; such applications can be framed as XR multi-agent audio systems. We consider two wrappers, CsoundUnity and Chunity, featuring Csound and ChucK programming languages. We test and analyze these wrappers in a minimal XR application, varying the number of audio sources to measure the performance of both tools in two device environments: the development machine and the Meta Quest 3. We found that CsoundUnity performs better in the headset, but Chunity performs better in the development machine. We discuss the advantages, limitations, and computational issues found on both wrappers, as well as the criteria for choosing them to develop XR multi-agent audio applications in Unity.
-
Lucas Bravo, Pedro Pablo
(2024).
Human-Swarm Interactive Music Systems' Examples.
Show summary
These examples use software for visualization of and interaction with simple 3D elements that can communicate with external applications through OSC messages. We call it the Human-Swarm Interactive Music System App (HS-ims app). It is intended for multi-agent systems with a user-interaction focus, especially for sound and music applications. There are two examples: a decentralized application for "attractors and repellers", and "swarmalators". Both are implemented in Max 8, Python, and/or C++.
-
Vrasdonk, Atilla Juliana; Keller, Peter E.; Endestad, Tor & Vuoskoski, Jonna Katariina
(2024).
The influence of improvisational freedom on flow in flamenco duos.
-
D'Amario, Sara & Bishop, Laura
(2024).
Self-Reported Experiences of Musical Togetherness in Music Ensembles.
-
D'Amario, Sara
(2024).
Cardiac coupling of orchestral musicians and audience members during orchestra performances.
-
Bishop, Laura & D'Amario, Sara
(2024).
Methods tracking four-hand piano performances.
-
Blenkmann, Alejandro Omar
(2024).
The role of the Orbitofrontal Cortex in building predictions and detecting violations.
-
Lucas Bravo, Pedro Pablo
(2024).
Sound Engines Comparison Platform: Csound vs ChucK for XR.
Show summary
Repository for replication of the work "Csound vs ChucK: Sound Generation for XR Multi-Agent Audio Systems in the Meta Quest 3 using the Unity Game Engine". This product can be extended to additional applications regarding sound generation in Extended Reality (XR).
-
Lucas Bravo, Pedro Pablo
(2024).
XR Human-Swarm Interactive Music System.
Show summary
This project is an Interactive Music System (IMS) that uses Mixed Reality (MR) and spatial-audio technologies for a multi-track looping session, with every track represented by an "Agent": an entity embodied as a sound source in space, visualized as a virtual sphere.
-
Lucas Bravo, Pedro Pablo
(2024).
Self-Assembly and Synchronization: Crafting Music with Multi-Agent Embodied Oscillators.
Show summary
This paper proposes a self-assembly algorithm that generates rhythmic music. It uses multiple pulsed oscillators embedded in cube-shaped agents in a virtual 3D space. When these units connect with each other, their oscillators synchronize, triggering regular sound events that produce musical notes whose sound dynamics change based on the size of the structures formed. This study examines the synchronization time of these oscillators and the emergent properties of the structures formed during the algorithm's execution. Moreover, the resulting sound, determined by multiple interactions among agents, is analyzed in the time and frequency domains from its signal. The results show that the synchronization time slightly increases when more agents participate, although with high variability. Also, a quasi-regular pattern of increase and decrease in the number of structures over time is observed. Additionally, the signal analysis illustrates the effect of the self-assembly strategy in terms of rhythmical patterns and sound energy over time. We discuss these results and the potential applications of this multi-agent approach in the sound and music field.
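The abstract describes agents whose pulsed oscillators fall into step once they connect. As a rough, hypothetical illustration of that general phenomenon (not the paper's actual pulse-based rule), here is the classic identical-frequency Kuramoto model, in which mutual phase coupling pulls a group of oscillators into synchrony:

```python
import math

def kuramoto(phases, coupling=1.0, dt=0.05, steps=500):
    """Identical-frequency Kuramoto model: every oscillator's phase is
    pulled toward the others; with positive coupling the group locks."""
    phases = list(phases)
    n = len(phases)
    for _ in range(steps):
        # coupling term for each oscillator, computed from current phases
        deltas = [
            coupling / n * sum(math.sin(pj - pi) for pj in phases)
            for pi in phases
        ]
        # forward-Euler update (natural frequencies are all zero here)
        phases = [pi + dt * d for pi, d in zip(phases, deltas)]
    return phases

start = [0.0, 0.5, 1.0, 1.5, 2.0]   # phases spread over ~2 radians
final = kuramoto(start)
spread = max(final) - min(final)     # shrinks toward 0 as the group locks
```

Because the coupling term is antisymmetric, the mean phase is conserved while the spread decays exponentially, which is the same qualitative mechanism that lets connected units trigger regular, synchronized sound events.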
-
Lucas Bravo, Pedro Pablo
(2024).
Interactive Sonification of 3D Swarmalators.
Show summary
This paper explores the sound and music possibilities obtained from the sonification of a swarm of coupled oscillators moving in a virtual space called "Swarmalators". We describe the design and implementation of a Human-Swarm Interactive Music System based on the 3D version of the Swarmalator model, which is used for signal analysis of the overall sound output in terms of scalability; that is, the effect of varying the number of agents in a swarm system. We also study the behaviour of autonomous swarmalators in the presence of one user-controlled agent, which we call the interactive swarmalator. We observed that sound frequencies barely deviate from their initial values when there are few agents, but they diverge significantly in a highly dense swarm. Additionally, with the inclusion of the interactive swarmalator, the group's behaviour tends to adjust towards it. We use these results to explore the potential of swarmalators in music performance under various scenarios. Finally, we discuss opportunities and challenges to use the Swarmalator model for sound and music systems.
-
Tørresen, Jim
(2024).
Ethical, Legal and Technical Challenges and Considerations.
-
Polak, Rainer & Jacoby, Nori
(2024).
Biological Constraints and Cultural Possibilities in Rhythm Perception.
-
Rezende Carvalho, Vinicius; Collavini, Santiago; Kochen, Silvia; Solbakk, Anne-Kristin & Blenkmann, Alejandro Omar
(2024).
Human single-neuron responses to a local-global oddball paradigm.
-
Nielsen, Nanette
(2024).
Improvisation as praxis: music as a form-of-life.
-
Leske, Sabine Liliana; Støver, Isak Elling August; Solbakk, Anne-Kristin; Endestad, Tor; Kam, Julia & Grane, Venke Arntsberg
(2024).
Behavioral and Electrophysiological Markers of Altered Mind Wandering and Sustained Attention in Adult ADHD.
Show summary
Background: Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder that often persists into adulthood. Difficulties with executive control and sustained attention are key characteristics of the disorder that often lead to thoughts that are unrelated to the current task, i.e., mind wandering.
Objective: Investigate behavioral and neurophysiological correlates of sustained attention and mind wandering in adults with ADHD not on medication when examined.
Method: Sustained attention and the propensity to mind-wander were investigated in ADHD patients (n=17) and in healthy controls (n=17), matched on gender, age, and IQ. Participants performed an auditory oddball task (participant feedback: On or Off Task), which required continuous manual responses to standard and target tones, while electroencephalography (EEG) was measured. Sustained attentional control was additionally examined with the Test of Variables of Attention (T.O.V.A.). General IQ was estimated with a reduced version of the Wechsler Adult Intelligence Scale, 4th edition (WAIS-IV).
Results: ADHD patients reported significantly more episodes of mind wandering (Off Task), exhibited reduced target detection accuracy and more Off Task impulsive responses compared to controls.
Patients showed a significantly reduced P300 for the target-standard difference components, for both On and Off Task conditions. This target-standard P300 difference was less modulated between On versus Off Task conditions in patients compared to controls. In the T.O.V.A. test, patients committed significantly more Commission Errors, but did not differ from controls on other variables (Attention Comparison score, Reaction Time variability and latency, Omission Errors).
Conclusion: In comparison to controls, patients showed deteriorated behavioral performance, more episodes of mind wandering, and reduced P300 in a sustained attention task. The P300 decrease likely reflects impaired control of attentional resources allocated to the task, which has an impact on high-level cognitive processing abilities, while early sensory-related stimulus processing is intact.
-
Jensenius, Alexander Refsum
(2024).
Vurderinger i akademiske karriereløp.
Show summary
In 2021, UHR's working group on open assessment prepared a guide for assessment in academic career paths, NOR-CAM. Other initiatives for assessing academic careers also exist, including the European Coalition for Advancing Research Assessment (CoARA). But what is the value of these assessment guides? Who are they for, and what are they meant to achieve? And which assessment guides will become important in the years to come?
-
Jensenius, Alexander Refsum
(2024).
Interdisciplinarity.
-
Jensenius, Alexander Refsum
(2024).
Hjernen i sentrum: Kunst.
Show summary
Why are some people musical and others not? How is it that art can strike us so powerfully, and so differently! Different artistic expressions such as music, painting, literature, dance, and theatre come without an answer key and are interpreted very differently from person to person. Is it the brain that governs this? It is obvious that our brain is active, not passive, when we experience art. Why is that? Do artistic experiences make for good brain exercise? Is art important for brain health?
-
Jensenius, Alexander Refsum
(2024).
Tverr faglighet? Muligheter og utfordringer med fler- og tverrfaglighet.
Show summary
Interdisciplinarity is often mentioned in celebratory speeches and grant applications, but what is the reality? In this presentation, Professor Alexander Refsum Jensenius will discuss his own experiences with multi- and interdisciplinary research projects.
He will also present how RITMO works to develop a research culture and project proposals across established academic disciplines.
-
Brøvig, Ragnhild & Aareskjold-Drecker, Jon Marius
(2024).
Hey Siri, can you write me a chipmunk soul track? A snapshot of AI tools currently used in music production.
-
Brøvig, Ragnhild & Stevenson, Alex
(2024).
Performing Experimental Hip-Hop: Abstract Orchestra's Cover of Madvillain's "Meat Grinder".
-
Brøvig, Ragnhild
(2024).
Boklansering: Parody in the Age of Remix: Mashups vs. the Takedown (MIT Press).
-
Polak, Rainer; Pearson, Lara & Horlor, Samuel
(2024).
Audiency Beyond the Concert Hall: An Interaction-Based, Music-Theoretical Approach.
-
Polak, Rainer; Holzapfel, Andre & Paschalidou, Stella
(2024).
Motion capture in the field: three reports of hardships in data collection and processing.
-
Tørresen, Jim
(2024).
Sensing and Understanding Humans by a Robot – and vice versa.
-
Marin Bucio, Diego Antonio
(2024).
Dancing Embryo: Danza y co-creatividad humano-IA.
-
Solli, Sandra; Danielsen, Anne; Leske, Sabine Liliana; Blenkmann, Alejandro Omar; Doelling, Keith & Solbakk, Anne-Kristin
[Show all 7 contributors for this article]
(2024).
Rhythm-based temporal expectations: Unique contributions of predictability and periodicity.
Show summary
Flexibly adapting to our dynamic surroundings requires anticipating upcoming events and focusing our attention accordingly. Rhythmic patterns of sensory input offer valuable cues for these temporal expectations and facilitate perceptual processing. However, a gap in understanding persists regarding how rhythms outside of periodic structures influence perception.
Our study aimed to delineate the distinct roles of predictability and periodicity in rhythm-based expectations. Participants completed a pitch-identification task preceded by different rhythm types: periodic predictable, aperiodic predictable, and aperiodic unpredictable. By manipulating the timing of the target sound, we observed how auditory sensitivity was modulated by the target position in the different rhythm conditions.
The results revealed a clear behavioral benefit of predictable rhythms, regardless of their periodicity. Interestingly, we also observed an additional effect of periodicity. While both periodic and aperiodic predictable rhythms improved overall sensitivity, only the periodic rhythm seemed to induce an entrained sensitivity pattern, wherein sensitivity peaked in synchrony with the expected continuation of the rhythm.
The recorded event-related brain potentials further supported these findings. The target-evoked P3b, possibly a neural marker of attention allocation, mirrored the sensitivity patterns. This supports our hypothesis that perceptual sensitivity is modulated by temporal attention guided by rhythm-based expectations. Furthermore, the effect of rhythm predictability seems to operate through climbing neural activity (similar to the CNV), reflecting preparation for the target. The effect of periodicity is likely related to more precise temporal expectations and could possibly involve neural entrainment.
Our findings suggest that predictability and periodicity influence perception via distinct mechanisms.
-
Danielsen, Anne
(2024).
Musikalsk rytme, rytmeforskning og hva den kan brukes til.
-
Jensenius, Alexander Refsum
(2024).
NOR-CAM as an enabler for flexible academic career paths in and out of Norway.
Show summary
This webinar is the fourth in the series and will address the proposal on attractive careers in academia. The European Commission's starting point is that international work on education and quality development in higher education is not supported and valued to the extent necessary in academic careers, and that this is an obstacle to the development of European higher education.
What does this proposal entail, and how does it look from the perspective of European and Norwegian universities? How does the European process for developing academic careers relate to what is happening in Norway?
-
Godøy, Rolf Inge
(2024).
Motormimetic cognition of sound-motion objects in music.
Show summary
The focus in my talk is on how experiences of body motion contribute to listeners’ perception of meaning in music. The term ‘meaning’ can here be understood as sensations of distinct and significant events evoked in our minds in listening to (or imagining) music, events that may range from the unremarkable and immediately forgotten everyday happenings to the highly remarkable and engaging, i.e. extending from basic sound features (e.g. identifying a sound as made by strumming on a guitar) to high-level affective and/or narrative associations (e.g. identifying a sound made on a guitar as the James Bond chord). However, the ambition of this talk is limited to some basic features of meaning perception, and to fragments of music in the approximately 0.5 to 5 seconds duration range, to what we call sound-motion objects, and correlated motor sensations in this perceptual process, what we call motormimetic cognition. The duration range of sound-motion objects is optimal for focus on significant features such as overall ‘sound’, style, sense of motion and affect, and is a compromise between local and more global occurrences of meaning in music.
There can be little doubt that music can make listeners move, or evoke motion sensations in the minds of listeners. In the past couple of decades, we have seen a surge of publications on music-related body motion, predominantly on whole-body motion such as in dance, walking, and sports, as well as on musicians’ communicative motion, but less on the smaller-scale sound-producing effector motion, e.g. that of fingers, hands, arms, and the vocal apparatus, in various contexts of performance, of expressivity, and of articulation. The contention in this talk is that such smaller-scale sound-producing effector motion is not only crucial in shaping the output sound, but is actually integral to our perceptions of music, and hence, focusing on such motion could help us understand some basic workings of meaning formation in music.
We may call the approach presented here concrete in the sense that it focuses on actual sound-producing motion and actual resultant output sound features, rather than on abstract Western music notation concepts. The inherited notation-oriented conceptual apparatus posits discrete pitches and durations as the point of departure for meaning formation, whereas the motormimetic approach posits the more holistic sound-producing motion and the resultant holistic sound events as primordial. This means that any sound event will be embedded in some sound-producing motion trajectory, and also that such motion trajectories are integral to our images of the music (e.g. hearing a ferocious drum fill evoking imagery of energetic hand and mallet motion, or hearing soft and slow string music evoking imagery of slow bow motion). Using available technologies and methods for motion capture, motion analysis, and motion feature representation, as well as means for analyzing and representing continuous, non-symbolic sound features, it is now possible to gain more detailed knowledge of the relationships between sound-producing motion and salient perceptual features of both sound and motion. It is also possible to make holistic representations of temporally distributed features such as dynamic, timbral, pitch-related, textural, and articulatory features as shapes, given that shapes are holistic and concrete, whereas symbols are punctual and abstract.
-
Jensenius, Alexander Refsum; Rønning, Anne-Birgitte; Haug, Dag Trygve Truslew & Sæther, Steinar Andreas
(2024).
Frokostmøte: Humaniora og infrastruktur.
Not even a humanities researcher manages entirely on their own. But what infrastructure do we need for humanities research? Infrastructures come in many shapes and sizes, and we talk about them more and more, especially when the conversation turns to the digital shift. So we ask: what are the infrastructures of the humanities? What differences and similarities exist between the various disciplines at the Faculty of Humanities (HF)? How can we best ensure that the necessary infrastructure is in place? To tackle these questions, we have assembled a panel of experienced researchers and teachers from different HF disciplines, all of whom also have experience from leadership roles and positions that matter for how HF and UiO approach infrastructure.
-
Jensenius, Alexander Refsum; Edwards, Peter; Klungnes, Kristina Mariell Dulsrud; Berg, Anna & Jenssen, Kjell Runar
(2024).
Musikksnakk: Filmmusikk.
Music creates atmosphere in films. But how does film music manage to move us so much? And where does the story of why music is used to create a particular mood begin?
Picture a shark swimming toward an unsuspecting bather, in silence. What about Frodo and Sam clawing their way up Mount Doom to the sound of... nothing? Or Katniss Everdeen riding a flaming chariot through the Capitol, without thundering drums and majestic horns? A bit dull, right?
Music is important in film for creating a certain mood. But how did it come to be that way? Is it just to make us feel, or is there a history behind film music?
-
Jensenius, Alexander Refsum
(2024).
Can doing nothing tell us everything?
Can doing nothing tell us everything? Meet Professor Alexander Refsum Jensenius, a music researcher exploring the deep connections between sound, space, and the human body. Through his fascinating studies on stillness and motion, Alexander has discovered surprising insights into how we interact with our environment.
-
Jensenius, Alexander Refsum
(2024).
The assessment of researchers is changing – how will it impact your career?
Changes are happening in the world of research assessment, for example by recognizing a broader range of competencies as merits and striking a better balance between quantitative and qualitative goals. In Norway, for example, Universities Norway presented the NOR-CAM report in 2021, which sparked a movement for reform. As an early career researcher, it's crucial to understand how these changes may impact your research career. In this talk, Jensenius will discuss the evolving landscape of research assessment and what it means for you.
-
Jensenius, Alexander Refsum
(2024).
Åpen forskning muliggjør forskningsnær utdanning.
Khrono.no.
ISSN 1894-8995.
Even though we like to say that we engage in research-based education, the organization of research and education tends to be placed in separate silos, writes Alexander Refsum Jensenius.
-
Jensenius, Alexander Refsum
(2024).
Hvordan kan åpen forskning lede til åpen utdanning? Og omvendt?
Openness and academic freedom are cornerstones of a well-functioning research system. We must preserve this as we build a coherent national research system that also includes classified and restricted research. How can we build a research system that is as open as possible and as closed as necessary? Open research means that research is made available and shared by researchers, institutions, sectors, and across national borders. Little attention has been paid to the positive effects open research can have on education. How can we encourage more research-based education, while also raising the quality of the research?
-
Oddekalv, Kjell Andreas
(2024).
“I’m sorry y’all, I often drift – I’m talking gift”: Microrhythmic analysis of rap – categorization, malleability and structural bothness.
-
Oddekalv, Kjell Andreas; Bjørkheim, Terje; Sørli, Anders Ruud; Ugstad, Magnus; Hole, Erik & Walderhaug, Bendik
(2024).
Sinsenfist julekonsert 2024.
-
Oddekalv, Kjell Andreas
(2024).
Panelsamtale: Fra mainstream til opprør til mainstream igjen – Hiphop 40 år i Norge.
-
Jensenius, Alexander Refsum; Wendt, Kaja Kathrine; Ski-Berg, Veronica & Slette, Aslaug Louise
(2024).
S1E3 Forskerkarrierer - i tall og matriser.
[Internet].
Podcast.
In the third episode of NIFU's podcast series Kunnskapsfloker, we talk about research careers. What exactly is a 'research career', and where in society do we find researchers? Developing good research careers is high on the agenda both in Norway and in Europe. New frameworks for career development are currently being developed, along with statistical indicators documenting how research careers evolve over time. But how can the research system facilitate diverse research careers? Guests in this episode are Kaja Kathrine Wendt from SSB/NIFU and Alexander Refsum Jensenius from UiO. Hosts are Veronica Ski-Berg and Aslaug Louise Slette.
-
Christodoulou, Anna-Maria & Jensenius, Alexander Refsum
(2024).
Navigating Challenges in Multimodal Music Data Management for AI Systems.
-
Gholamipour-Shirazi, Azarmidokht & Mossige, Joachim
(2024).
Impact of Mixing on Flavor and Aroma Development in Fermented Foods.
arXiv.
doi:
10.48550/arXiv.2412.10190.
-
Vestre, Katharina; Mossige, Joachim; Løvvik, Ole Martin & Jemterud, Torkild
(2024).
Abels tårn 12.1.2024.
[Radio].
Abels tårn NRK P2.
-
Quiroga-Martinez, David R.; Blenkmann, Alejandro O.; Endestad, Tor; Solbakk, Anne-Kristin; Kim-McManus, Olivia & Willie, John T.
(2024).
Enhanced frontotemporal theta and alpha connectivity during the mental manipulation of musical sounds.
-
D'Amario, Sara & Bishop, Laura
(2024).
Self-Reported Experiences of Musical Togetherness in Music Ensembles.
-
Blenkmann, Alejandro Omar; Leske, Sabine Liliana; Llorens, Ana?s; Lin, Jack J.; Chang, Edward & Brunner, Peter
(2024).
Novel tools for the anatomical registration of intracranial electrodes.
-
Blenkmann, Alejandro Omar
(2024).
Current challenges in human EEG/iEEG/SUA.
-
Riaz, Maham
(2024).
Comparing Spatial Audio Recordings from Commercially Available 360-degree Video Cameras.
This paper investigates the spatial audio recording capabilities of various commercially available 360-degree cameras (GoPro MAX, Insta360 X3, Garmin VIRB 360, and Ricoh Theta S). A dedicated ambisonics audio recorder (Zoom H3VR) was used for comparison. Six action sequences were performed around the recording setup, including impulsive and continuous vocal and non-vocal stimuli. The audio streams were extracted from the videos and compared using spectrograms and anglegrams. The anglegrams show adequate localization in ambisonic recordings from the GoPro MAX and Zoom H3VR. All cameras feature undocumented noise reduction and audio enhancement algorithms, use different types of audio compression, and have limited audio export options. This makes it challenging to use the spatial audio data reliably for research purposes.
-
Upham, Finn
(2024).
Målinger av orkestermusikerne.
Video demonstration of cardiac and respiratory measurements from an orchestra. Originally to be played during Lydo 2024 concert series by the Stavanger Symphony Orchestra.
-
von Arnim, Hugh Alexander & Kelkar, Tejaswinee
(2024).
The Shapeshifter: Motion Capture and Interactive Dance for Co-constructing the Body.
-
Upham, Finn
(2024).
Heart Rate consistency and Heart Rate Variability constraints in orchestral musicians across performances.
-
Upham, Finn
(2024).
SUSY Limits.
Surrogate Synchrony (SUSY) is a strategy for identifying shared information in parallel recordings of the same type of signal from people in an interactive context. It was originally applied to motion trajectories of people in dialogue (specifically therapy sessions), but it has since been applied to many types of measurements taken of people in musical interaction conditions as well. Most recently used in a Scientific Reports paper (Tschacher et al., 2023) on physiological and motion measurements of an audience during a classical music concert, it has been applied in similar contexts several times over, without a reckoning of how synchronised the shared information must be in order to be counted (Seibert et al., 2019; Tschacher et al., 2019). From my experience with similar measurements and other approaches to shared information in time series, I am very concerned that this method is missing much of what readers (and perhaps some authors) assume it is capturing in these musical contexts.
The current notebook covers only dyadic SUSY with average non-absolute-valued Fisher's Z-transformed cross-correlations, though many of the consequences generalise to the multivariate derivative (M. What I am trying to show is not an opinion about where this technique could be useful; these are the mathematical facts of what kinds of "synchrony" it is capable of assessing. As it stands, there are surely some published false negatives from using this method without sensitivity to what it cannot see.
-
Esterhazy, Rachelle; von Arnim, Hugh Alexander & Damsa, Crina I.
(2024).
Multimodal learning analytics to explore key moments of interdisciplinary knowledge-construction.
-
Marin Bucio, Diego Antonio
(2024).
Embodying the artificial: a multimodal human-machine performance.
-
Marin Bucio, Diego Antonio
(2024).
Dance in the unequal world of High Tech.
-
Serdar Göksülük, Bilge
(2024).
Immersive Technologies in TYA: Bodily Concerns, Challenges and Opportunities.
-
Wallace, Benedikte
(2024).
Imitation or Innovation? Translating Features of Expressive Motion from Humans to Robots.
-
Abrahamsson, Liv Merve Akca; Bishop, Laura; Vuoskoski, Jonna Katariina & Laeng, Bruno
(2024).
Are human voices ‘special’ in the way we attend to them?
-
Christodoulou, Anna-Maria; Dutta, Sagar; Lartillot, Olivier; Glette, Kyrre & Jensenius, Alexander Refsum
(2024).
Exploring Convolutional Neural Network Models for Multimodal Classification of Expressive Piano Performance.
-
Rezende Carvalho, Vinicius
(2024).
Da trajetória acadêmica à experiência no exterior (From academic trajectory to experience abroad).
-
Nielsen, Nanette
(2024).
Enacting musical aesthetics: the embodied experience of live music.
-
Brøvig, Ragnhild & Grydeland, Ivar
(2024).
Love your Latency and the Glitching Spatiotemporal Condition.
-
Brøvig, Ragnhild
(2023).
Different approaches to qualitative methods.
-
Brøvig, Ragnhild
(2023).
Machine Rhythms.
-
Blenkmann, Alejandro Omar; Asko, Olgerta; Volehaugen, Vegard; Foldal, Maja Dyhre; Solli, Sandra & Leske, Sabine Liliana
(2023).
Auditory perception, memory, and predictions.
-
Jensenius, Alexander Refsum
(2023).
Tverrfaglig forskning på rytme, tid og bevegelse.
RITMO is a unique Centre of Excellence (SFF) because of its radically interdisciplinary structure. How does that work in practice?
-
Jensenius, Alexander Refsum
(2023).
Introducing MusicLab.
In 2021, one of the world’s finest string quartets, The Danish String Quartet (DSQ), and a large team of international researchers based at RITMO co-hosted MusicLab Copenhagen – a groundbreaking event where DSQ performed their best repertoire while researchers experimented with, measured, and analyzed the experiences and behavior of musicians and audience. Some of the questions we tried to answer were: Do we become one grand “we” when absorbed in music together? How do we synchronize our bodily rhythms with the music during a concert? As an innovative musical and scientific format, the concert has been widely reported and was named “Event of the Year” by the Danish National Broadcasting Corporation (DR P2). Now, the researchers have completed their analyses, and we are excited to share findings in a hybrid launch event.
-
Pleiss, Martin Peter
(2023).
The affective cycle of orientation in unfamiliar contexts within an aesthetic Virtual Reality environment.
Art has been proposed as an opportunity (Noë) to observe the ‘strange’ (Gallagher) in the quest for phenomenological descriptions. These fringe forms of factual variations help illuminate experiential structures (Merleau-Ponty).
This paper presents:
1) an observed, shared cycle of relationships between perceptive actions, affect, and degrees of familiarity with a novel and hard-to-grasp Virtual Reality artwork.
This description results from 2) a concrete experimental methodology for utilising the combination of factual variations and experiential artworks in a phenomenological project.
As part of the analysis of my PhD project, this paper describes an experience of orientation that we subjected experiment participants to. The experiment used a multimodal VR artwork, which features a very abstract and unexpectedly interactable world, devoid of apparent contexts, symbolisms, or real-world references. While intended as an aesthetic experience by its creators, the artwork is rich in its underlying governing laws of physics, its visual and auditory design, and its interaction dynamics.
The resulting ‘strange’ experience made it possible to observe the changing relationship of how-it-mattered: the VR world appearing as the unknown initially and then gradually becoming familiar as something through an action-centred being-with its objects. Different stages of affect become apparent, outlined as an ‘affective cycle of orientation’. Further, the paper describes an observable spectrum in the quality of orienting actions and their respective intentions and stances. At one pole of this spectrum, actions serve a mediating and enabling function; the other end falls within a more classical definition of ‘affordances’ (Gibson). I will discuss this with respect to gestures and habits (Merleau-Ponty). Finally, the paper showcases the self as more than just re-acting, as having self-enabling capacities for intentional actions by being enacted, embedded, and embodied in an aesthetic experience.
-
Brøvig, Ragnhild
(2023).
My way to becoming a full professor.
-
Brøvig, Ragnhild & Furunes, Marit Johanne
(2023).
Karriereløpsprogrammet på RITMO med mentorordning.
-
Jensenius, Alexander Refsum
(2023).
Still Standing: The effects of sound and music on people standing still.
Throughout 2023, I have been standing still for ten minutes around noon every day, in a different room each day. This project follows a decade-long exploration of human micromotion from both artistic and scientific perspectives. In the talk, I will present results from the annual Norwegian Championships of Standstill, where we have studied the influence of music on people's micromotion. I will also talk about how micromotion can be used in interactive music systems, allowing for the conscious and unconscious control of musical sounds.
-
Godøy, Rolf Inge
(2023).
Exploring sound-motion links in motormimetic cognition.
The focus of my talk is on the intimate links between sensations of sound and of motion in music, summarized in the expression motormimetic cognition. The purpose of coining this neologism was to give a name to the mental re-enactment (in some cases, also as overt, visible body motion) of sound-related motion in listening to, or merely imagining, musical sound, and typically, as re-enactments of assumed sound-producing body motion, but also of more overall sensations of energy and/or affect.
My motivation for exploring this topic was a number of personal, introspection-based experiences of sound-producing body motion sensations when listening to music, or when merely imagining music. After quite extensive readings in various domains of the cognitive sciences, it dawned on me that other people might have similar motion sensations when listening to, or merely imagining, music. When I published papers on motormimetic cognition in musical experience, the response of the music cognition community was quite varied. In the last couple of decades, however, with the growing popularity of so-called embodied cognition in the cognitive sciences, it has become more accepted that there are indeed extensive links between perception and body motion in most, perhaps all, domains of human behavior. Yet there are, needless to say, still very many outstanding questions as to what we mean by embodied cognition in music, and in my opinion we seem in particular to lack detailed and systematic knowledge of how such embodied elements play out in very concrete musical features. This is the aim of my presentation: to give an account of how the fusion of sound and motion can be explored in more detail.
One leading idea here is that there are constraints in sound production, both of instruments and sound-producing body motion, concerning biomechanics as well as motor control, and that we may enhance our understanding of motormimetic cognition in music by studying such constraints, first of all in performance, but also in improvisation and composition. This will include constraints and affordances of motion and body postures associated with patterns of textures, rhythm, various figures, ornaments, contours, spectral and formantic shapes, as well as the associated sense of effort and affect.
The basic idea here is to regard musical sound as intimately linked with sensations of motion, to the extent that we may actually perceive salient musical features as multimodal phenomena, e.g. in the case of a drum fill where sensations of drum sound and hand/arm motion are totally fused. Recognizing the extent of this multimodal fusion of sound and motion in music perception should then have consequences for how we think about various theoretical and practical music-related activities, i.e. encourage us to think about a work of music as just as much a choreography of sound-producing motion as a sequence of sounds.
-
Jensenius, Alexander Refsum
(2023).
Musikk og kunstig intelligens.
-
Oddekalv, Kjell Andreas
(2023).
Sounding Same/Sounding Other: Creative, practical and aesthetic aspects of ad libs and ‘backtracks’ in rap.
-
Oddekalv, Kjell Andreas; Bjørkheim, Terje; Sørli, Anders Ruud; Ugstad, Magnus; Hole, Erik & Walderhaug, Bendik
(2023).
Sinsenfist på Parkteateret - med Horny Horns og Fister Sisters - Support louilexus & Åse.
-
Oddekalv, Kjell Andreas
(2023).
Flow, layering and rupture in composite auditory streams.
-
Oddekalv, Kjell Andreas; Bjørkheim, Terje; Sørli, Anders Ruud; Ugstad, Magnus; Hole, Erik & Walderhaug, Bendik
(2023).
Sinsenfist på Strynefestivalen.
-
Karbasi, Seyed Mojtaba; Jensenius, Alexander Refsum; Godøy, Rolf Inge & Tørresen, Jim
(2023).
Exploring Emerging Drumming Patterns in a Chaotic Dynamical System using ZRob.
ZRob is a robotic system designed for playing a snare drum. The robot is constructed with a passive flexible spring-based joint inspired by the human hand. This paper describes a study exploring rhythmic patterns by exploiting the chaotic dynamics of two ZRobs. In the experiment, we explored the control configurations of each arm by trying to create unpredictable patterns. Over 200 samples were recorded and analyzed. We show how the chaotic dynamics of ZRob can be used to create new drumming patterns.
-
Bukvic, Ivica Ico; Jensenius, Alexander Refsum; Wittman, Hollis & Masu, Raul
(2023).
Implementing the new template for NIME music proceedings with the community.
We will analyze a possible new template for NIME submissions that would simplify the integration of NIME music performances into COMPEL, a database that facilitates navigation across different categories (pieces, persons, instruments). The template emerged from a workshop run last year at NIME on the structure of COMPEL and the process of entering all performances presented that year. Building on that workshop, we expect to improve the template and validate it with the community.
-
Oddekalv, Kjell Andreas; Sørli, Anders Ruud; Ugstad, Magnus; Hole, Erik & Walderhaug, Bendik
(2023).
Nova nedstrippa - Sinsenfist.
[Radio].
Radio Nova.
-
Karbasi, Seyed Mojtaba
(2023).
Reinforcement Learning for Curious Systems.
-
Martin, Remy Richard & Bernhardt, Emil
(2023).
Entrainment, free will, and musicking: an enactivist perspective.
-
Hope, Mikael; Spiech, Connor & Bégel, Valentin
(2023).
Synchronizing at Slower Tempi Increases Pupil Activity Compared With One’s Own Spontaneous Motor Tempo.
-
Nielsen, Nanette; Martin, Remy Richard & Bernhardt, Emil
(2023).
Entrainment, free will, and musicking: an enactivist perspective.
-
Oddekalv, Kjell Andreas
(2023).
A Norwegian emcee/scholar – Theorizing rap flow from the outside and inside.
-
Oddekalv, Kjell Andreas; Bjørkheim, Terje; Sørli, Anders Ruud; Ugstad, Magnus; Hole, Erik & Walderhaug, Bendik
(2023).
Sommeren for ti år sia: "Strekker meg", "Hvite sneakers", "Storeslem".
-
Oddekalv, Kjell Andreas
(2023).
On Analysing Hip-Hop/Rap: Doing Hip-Hop Scholarship in a hip-hop way.
-
Oddekalv, Kjell Andreas
(2023).
Weak Alternatives …and their presence making shit dope.
-
Swarbrick, Dana & Vuoskoski, Jonna Katariina
(2023).
Exploring the Relationship Between Experiences of Awe, Being Moved, and Social Connectedness in Concert Audiences.
-
Swarbrick, Dana
(2023).
Being in Concert: Fostering Togetherness in Audiences.
-
Swarbrick, Dana; Danielsen, Anne; Jensenius, Alexander Refsum & Vuoskoski, Jonna Katariina
(2023).
The Effects of “Feeling Moved” and “Groove” On Standstill.
-
Swarbrick, Dana
(2023).
The Effects of Music on Climbing.
-
Swarbrick, Dana
(2023).
Les Effets du Musique sur Grimper.
-
Jensenius, Alexander Refsum
(2023).
Observing spaces while standing still.
Throughout 2023, I stand still for ten minutes around noon every day, in a different room each day. This project follows a decade-long exploration of human micromotion from both artistic and scientific perspectives. Previously, I have been interested in the impact of music. Now, I am listening to ventilation systems, elevators, and people walking and talking, and reflecting on how they influence my body and mind. The aim is to understand more about the rhythms of the environment.
-
Jensenius, Alexander Refsum
(2023).
The assessment of researchers is changing – how will it impact your career?
Changes are happening in the world of research assessment, for example by recognizing a broader range of competencies as merits and striking a better balance between quantitative and qualitative goals. In Norway, for example, Universities Norway presented the NOR-CAM report in 2021, which sparked a movement for reform. As an early career researcher, it's crucial to understand how these changes may impact your research career. In this talk, Jensenius will discuss the evolving landscape of research assessment and what it means for you.
-
Masu, Raul; Morreale, Fabio & Jensenius, Alexander Refsum
(2023).
The O in NIME: Reflecting on the Importance of Reusing and Repurposing Old Musical Instruments.
In this paper, we reflect on the focus on “newness” in NIME research and practice and argue that there is a missing O (for “Old”) in framing our academic discourse. A systematic review of last year’s conference proceedings reveals that most papers do, indeed, present new instruments, interfaces, or pieces of technology. Comparably few papers focus on the prolongation of existing NIMEs. Our meta-analysis identifies four main categories from these papers: (1) reuse, (2) update, (3) complement, and (4) long-term engagement. We discuss how focusing more on these four types of NIME development and engagement can be seen as an approach to increase sustainability.
-
Câmara, Guilherme Schmidt; Danielsen, Anne & Oddekalv, Kjell Andreas
(2023).
Funky rhythms – broken beats! Kulturelle og estetiske perspektiver på groove-basert musikk.
-
Oddekalv, Kjell Andreas; Bjørkheim, Terje; Sørli, Anders Ruud; Ugstad, Magnus; Hole, Erik & Walderhaug, Bendik
(2023).
Sinsenfist på Stødt.
-
Oddekalv, Kjell Andreas
(2023).
Project: Chimera – Postdoctoral project: overview, examples, loose thoughts. HHRIG meeting presentation.
-
Oddekalv, Kjell Andreas
(2023).
'Them bars really ain't hittin' like a play fight': Analysing weak alternative lineations and ambiguous lineation in relation to metrical structure in rap flows.
-
Jensenius, Alexander Refsum
(2023).
Innovasjon og åpen forskning.
-
Lartillot, Olivier & Monstad, Lars L?berg
(2023).
MIRAGE - A Comprehensive AI-Based System for Advanced Music Analysis.
-
Blenkmann, Alejandro Omar & Agrawal, Rahul Omprakash
(2023).
Intracranial Electrode Localization workshop.
-
Jensenius, Alexander Refsum
(2023).
Exploring Human Micromotion Through Standing Still.
Moving slowly likely puts us into a special state of mind. Subjective reports from various practices including dance, Tai Chi and walking meditation suggest that slow movements can bring participants into a special state involving increased relaxation and awareness. Interestingly, relatively little research has been performed specifically to understand the underlying mechanisms and the possible applications of human slow movement. One reason might be that slow movements are not common in day-to-day life: when we want to move – for example, to pick up our cup of coffee – we usually want to do it now. Some evidence suggests that humans tend to avoid moving slowly in different tasks, for example, when improvising movements together. The goal of this meeting is to bring together scholars and practitioners interested in slow movement, and to foster interdisciplinary research on this somewhat neglected topic.
-
Jensenius, Alexander Refsum
(2023).
Sound Actions - Conceptualizing Musical Instruments.
-
Jensenius, Alexander Refsum
(2023).
Explorations of human micromotion through standing still.
Throughout 2023, I will stand still for ten minutes around noon every day, in a different room each day. The aim is to collect data about my micromotion and compare it to the qualities of the environment. This project follows a decade-long exploration of human micromotion from both artistic and scientific perspectives. In the talk, I will present results from the annual Norwegian Championships of Standstill, where we have studied the influence of music on people's micromotion. I will also talk about how micromotion can be used in interactive music systems, allowing for conscious and unconscious control of musical sounds.
-
Jensenius, Alexander Refsum
(2023).
Sound Actions: Conceptualizing Musical Instruments.
How do new technologies change how we perform and perceive music? What happens when composers build instruments, performers write code, perceivers become producers, and instruments play themselves? These are questions addressed in the new book by Professor Alexander Refsum Jensenius: Sound Actions: Conceptualizing Musical Instruments published by the MIT Press.
-
Vuoskoski, Jonna Katariina & Swarbrick, Dana
(2023).
Moving together: Exploring the relationship between emotions, connectedness, and motion in concert audiences.
Music is able to evoke experiences of being moved and a sense of social connectedness in audiences – even in the context of streamed concerts and recorded music. The present study set out to investigate audiences’ emotional experiences and amount of movement in a classical string quartet concert, which was attended by both a live (N=91) and a livestreaming (N=45) audience. The results revealed that both audiences felt similarly connected to the performers, while the live audience felt more connected to other audience members than the livestreaming audience. Reports of ‘being moved’ and awe were influenced more by the piece of music than by the listening context, and the live audience demonstrated distinct motion patterns in response to different musical pieces. The amount of audience movement was also associated with the degree of connectedness experienced towards other audience members. In a follow-up online experiment, 189 participants continuously rated their experience of being moved while watching a recording of the Beethoven string quartet performance from the main concert experiment. Cross-correlations between the continuous ratings, musical features, and audience movement patterns were analysed. Overall, the findings demonstrate that the degree of connectedness experienced towards other audience members is modulated by shared presence as well as the amount of audience movement, while experiences of ‘feeling moved’ and awe are influenced by the music itself.
-
Polak, Rainer
(2023).
Embedded audiency: Performing as audiencing at music-dance events in Mali.
-
Polak, Rainer
(2023).
Cultural plasticity of cognitive constraints on rhythm perception in listeners from Mali: An interdisciplinary approach.
-
Mossige, Joachim
(2023).
Paneldeltaker på Abels tårn.
-
Vuoskoski, Jonna Katariina & Peltola, Henna-Riikka
(2023).
Who hates (some) music, and why? Explaining individual differences in the intensity of music-induced aversion.
-
Lucas Bravo, Pedro Pablo
(2023).
Sonic Explorations for 3D Swarmalators.
A swarmalator is a type of self-organizing system where agents or particles interact with each other through local rules. The term "swarmalator" is a combination of "swarm," which refers to a group of agents, and "oscillator," which refers to a system that exhibits periodic behavior. In a swarmalator system, each agent has an internal oscillator that determines its behavior, and the agents interact with their neighbors, affecting each other's oscillations and leading to synchronization. This synchronization can result in collective behaviors such as coordinated motion or pattern formation. Both the phase dynamics and spatial dynamics of the oscillators are coupled in swarmalators. Swarmalators are related to the concept of entrainment, which refers to the synchronization of rhythmic patterns in biological or physical systems.
Swarmalator systems can be used to model entrainment and the emergence of collective behavior in natural systems. Sonic and musical properties can be explored using the parameters involved in swarmalators, leading to interesting self-organized compositions and emergent behaviors capable of interacting with humans in a synchronized environment. Some of these sonic mappings will be presented for a 3D version of swarmalators, and future directions for interactive music systems based on synchronized swarms will be discussed.
-
Polak, Rainer
(2023).
Stephen Blum: Theory for Ethnomusicology (Panel).
-
Polak, Rainer; Pearson, Lara & Horlor, Sam
(2023).
Theorizing audiency.
-
Christodoulou, Anna-Maria; Lartillot, Olivier & Anagnostopoulou, Christina
(2023).
Computational Analysis of Greek Folk Music of the Aegean.
-
Lartillot, Olivier
(2023).
Towards a Comprehensive Modelling Framework for Computational Music Transcription/Analysis.
Show summary
Computational music analysis, still in its infancy and lacking overarching reliable tools, can at the same time be seen as a promising approach to fulfilling core epistemological needs. Analysis in the audio domain, although it approaches music in its entirety, is doomed to superficiality if it does not fully embrace the underlying symbolic system; this requires a complete automated transcription and a scaffolding of metrical, modal/harmonic, voicing and formal structures on top of the layers of elementary events (such as notes). Automated transcription makes it possible to overcome the polarity between sound and music notation, providing an interfacing semiotic system that combines the advantages of both domains and surpasses the limitations of traditional approaches based on graphic representations. Deep learning and signal processing approaches for the discretisation of the continuous signal are compared and discussed. A multi-dimensional music transcription and analysis framework (where the two tasks are in fact deeply intertwined) must take into account the far-reaching interdependencies between dimensions, for instance between motivic and metrical analysis. We present an attempt to build such a comprehensive framework, founded on general musical and cognitive principles, and to build music analysis capabilities through a combination of simple and general operators. The validity of the analyses is addressed in close discussion with music experts. The potential capability to produce valid analyses for a very large corpus of music would make such a complex system a relevant blueprint for a cognitive modelling of music understanding.
We try to address a large diversity of music cultures and their specific challenges: among others, maqam modes (with Mondher Ayari), Norwegian Hardanger fiddle rhythm (with Mats Johansson and Hans-Hinrich Thedens), djembe drumming from Mali (with Rainer Polak) or electroacoustic music (Towards a Toolbox des objets musicaux, with Rolf Inge Godøy). We aim to make the framework fully transparent, collaborative and open.
-
Ranjan, Snehal; Hari, Kancharla Aditya; Vuoskoski, Jonna Katariina & Alluri, Vinoo
(2023).
Sad songs say so much: Analyzing moving music shared online.
-
Asko, Olgerta; Solbakk, Anne-Kristin; Leske, Sabine Liliana; Meling, Torstein Ragnar; Knight, Robert T. & Endestad, Tor
[Show all 7 contributors for this article]
(2023).
Orbitofrontal lesion impacts formation of auditory expectations.
Show summary
Current findings of orbitofrontal cortex (OFC) function suggest that this region might have a role in the generation of prediction error signals associated with top-down expectation of upcoming stimuli. We investigated the impact of lesions to the OFC on the Contingent Negative Variation (CNV), an electrophysiological marker of cognitive expectation and time perception. Twelve OFC patients and fifteen healthy controls performed an auditory local-global paradigm while brain electrical activity was recorded. The structural regularities of the tones were controlled at two hierarchical levels by rules defined at a local (i.e., between tones within sequences) level with a short timescale and at a global (i.e., between sequences) level with a longer timescale. At the global level, deviant tone sequences were interspersed among standard tone sequences in a pseudorandom order, rendering some deviant sequences more anticipated than others. We found that healthy controls exhibited CNV build-up before the occurrence of deviant sequences. The CNV drift rate was modulated by the expectancy of deviant sequences (i.e., the higher the expectancy, the higher the CNV drift rate), reflecting their ability to anticipate when a deviant tone sequence would occur. However, patients with OFC lesions did not show CNV drift modulations by the expectancy of the deviant tone sequences, indicating impaired anticipation of these upcoming events. These findings suggest involvement of the OFC in generating auditory expectations based on the contextual and temporal structure of the task.
-
Solbakk, Anne-Kristin; Blenkmann, Alejandro Omar; Leske, Sabine Liliana & Endestad, Tor
(2023).
Orbitofrontal lesion impacts formation of auditory expectations.
-
Jensenius, Alexander Refsum
(2023).
The researcher's perspective.
Show summary
This autumn, the draft Strategy for Norwegian scholarly publishing after 2024 has been out for public consultation. The strategy offers recommendations to researchers, research-performing institutions, research funders, and public authorities. In this seminar we invite one of those who drafted the strategy, Vidar Røeggen from Universitets- og høgskolerådet, to talk about the work on the report, the feedback that has come in, and how he envisions the future publishing landscape. The floor then passes to Alexander Jensenius (UiO, NOR-CAM), Johanne Raade (UiT), and Marte Qvenild (NFR), who will discuss how they see the future of open publishing after 2024 from the perspectives of a researcher, an institution, and a funder, respectively. Do they see challenges other than those the new strategy attempts to address?
-
Riaz, Maham
(2023).
An Investigation of Supervised Learning in Music Mood Classification for Audio and MIDI.
Show summary
This study aims to use supervised learning – specifically, support vector machines – as a tool for a music mood classification task. Four audio and MIDI datasets, each containing over four hundred files, were composed for use in the training and testing processes. Mood classes were formed according to the valence-arousal plane, resulting in the following: happy, sad, relaxed, and tense. Additional runs were also conducted with linear discriminant analysis, a dimensionality reduction technique commonly used to improve the performance of the classifier. The relevant audio and MIDI features were carefully selected for extraction. MIDI datasets for the same music generated better classification results than corresponding audio datasets. Furthermore, when music is composed with each mood associated with a particular key instead of mixed keys, the classification accuracy is higher.
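The four mood classes come from the quadrants of the valence-arousal plane. A minimal sketch of that quadrant mapping, with a hypothetical function name (the study itself trains support vector machines on extracted features rather than thresholding valence and arousal directly):

```python
def mood_from_va(valence, arousal):
    """Map a (valence, arousal) pair in [-1, 1]^2 to one of the four
    mood classes used in the study (one class per quadrant)."""
    if valence >= 0:
        return "happy" if arousal >= 0 else "relaxed"
    return "tense" if arousal >= 0 else "sad"

mood_from_va(0.7, 0.5)    # "happy": positive valence, high arousal
mood_from_va(-0.7, -0.5)  # "sad": negative valence, low arousal
```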
-
Martin, Remy Richard; Cross, Ian; Upham, Finn; Bishop, Laura; Sørbø, Solveig & Øland, Frederik
(2023).
What can one learn from more naturalistic concert research?
-
Guo, Jinyue
(2023).
Automatic Recognition of Cascaded Guitar Effects.
-
Riaz, Maham
(2023).
Sound Design in Unity: Immersive Audio for Virtual Reality Storytelling.
Show summary
Research talk on sound design for games and immersive environments. The Unity game engine is used for environmental modeling. The Oculus Spatializer plugin provides control over binaural spatialization with native head related transfer functions (HRTF). Game scenes included C# scripts, which accounted for intermittent emitters (randomly triggered sounds of nature, critters and birds), crossfades, occlusion and raycasting. In the mixing stage, mixer groups, mixer snapshots, snapshot triggers, SFX reverb sends, and low/high-pass filters were some of the tools demonstrated.
-
Bishop, Laura & Upham, Finn
(2023).
Bodies in Concert.
Show summary
Increasingly, research on music performance is moving out of controlled laboratory settings and into concert halls, where there are opportunities to explore how performance unfolds in high-arousal conditions and how performers and audiences interact. In this session, we will present findings from a series of live research concerts that we carried out with the Stavanger Symphony Orchestra. The orchestra performed the same program of classical repertoire for four audiences of schoolchildren and an audience of families. Orchestra members wore sensors that collected cardiac activity, respiration, and body motion data, and the conductor additionally wore a full-body motion capture suit and eye-tracking glasses. Audience members in some of the concerts were invited to wear reflective wristbands, and wristband motion was captured using infrared video recording. We will begin the session with a discussion of the scientific and methodological challenges that arose during the project, in particular relating to the large scale of data capture (>50 musicians and hundreds of audience members), the visible nature of research that is carried out on a concert stage, and the development of procedures for aligning data from different recording modalities. Next, we will present findings from two lines of analysis that investigate different aspects of behavioural and physiological coordination within the orchestra. One analysis investigates the effects of audience noise and musical roles on coherence in (i) cardiac rate and variability and (ii) respiratory phase and rate. The second analysis investigates the effects of musical demands on synchronization of body sway, bowing, and respiration in string sections. We will conclude the session with an open discussion of how live concert research might be optimized.
-
Ellefsen, Kai Olav
(2023).
Artificial intelligence: The world seen through a machine's eyes.
-
Ellefsen, Kai Olav
(2023).
What is Artificial Intelligence?
-
Upham, Finn
(2023).
Using Metrically-Entrained Tapping to Align Mobile Phone Sensor Measurements from In-Person and Livestream Concert Attendees.
Show summary
Music is often made and enjoyed in large groups, but simultaneously capturing measurements from dozens or hundreds of people is technically difficult. When measurements are not constrained to wired or continuously connected wireless systems, we can record much bigger groups, potentially taking advantage of the wearable sensors in our phones, watches, and more dedicated devices. However, aligning measurements captured by independent devices is not always possible, particularly to a precision relevant for music research. Phone clocks differ and update sporadically, wearable device clocks drift, and for online broadcast performances, exposure times can vary by tens of seconds across the remote audience. Many measurement devices that are not open to digital synchronisation triggers still include accelerometers; with a suitable protocol, participant movement can be used to embed synchronisation cues in accelerometry measurements for alignment regardless of clock times. In this paper, we present a tapping synchronisation protocol that has been used to align measurements from phones worn by audience members and a variety of sensors worn by a symphony orchestra. Alignment with the embedded cues demonstrates the necessity of such a protocol, correcting offsets of more than 700 ms for devices supposedly initialised with the same computer clock, and over 10 s for online audience participants. Audience tapping performance improved cell phone measurement alignment to a median of 100 ms offset, and professional musicians' tapping improved alignment precision to around 40 ms. While the temporal precision achieved with entrained tapping is not quite good enough for some types of analyses, this improvement over uncorrected measurements opens a new range of group coordination measurement and analysis options.
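The core idea of recovering a clock offset from an embedded movement cue can be sketched by cross-correlating two recordings of the same tap sequence. This toy NumPy example, with synthetic data, is an illustration of the alignment principle only, not the paper's actual protocol:

```python
import numpy as np

def estimate_offset(ref, sig, fs):
    """Estimate the lag (in seconds) of `sig` relative to `ref` by
    cross-correlating two recordings of the same tap sequence,
    both sampled at rate fs (Hz)."""
    ref = ref - ref.mean()
    sig = sig - sig.mean()
    corr = np.correlate(sig, ref, mode="full")
    lag = int(corr.argmax()) - (len(ref) - 1)
    return lag / fs

fs = 100.0                                             # 100 Hz accelerometer
rng = np.random.default_rng(0)
taps = (rng.random(1000) < 0.05).astype(float)         # irregular 10 s tap train
delayed = np.concatenate([np.zeros(70), taps])[:1000]  # simulate 0.7 s clock offset
offset = estimate_offset(taps, delayed, fs)            # recovers ~0.7 s
```

An irregular (non-metronomic) cue is what makes the correlation peak unambiguous; a strictly periodic tap train would produce near-identical peaks at every multiple of its period.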
-
Upham, Finn & Christophersen, Bjørn Morten
(2023).
Bodies in Concert: RITMO project with the Stavanger symfoniorkester.
-
Laczko, Balint
(2023).
Fluid.jit.plotter: a Max abstraction for plotting and querying millions of points fast using the Fluid Corpus Manipulation library.
-
Tørresen, Jim
(2023).
What is Robotics?
-
Lartillot, Olivier
(2023).
Music Therapy Toolbox, and prospects.
-
Tørresen, Jim
(2023).
From Adapting Robot Body and Control to Human–Robot Interaction.
-
Lartillot, Olivier & Monstad, Lars L?berg
(2023).
Computational music analysis: Significance, challenges, and our proposed approach.
Show summary
Music is something that we mostly all appreciate, yet it remains a hidden and enigmatic concept for many of us. Music notation, in the form of music scores, facilitates practicing and enhances the understanding of the richness of musical works. However, acquiring musical scores for any music performance is a tedious and demanding task (called music transcription) that demands considerable proficiency. Hence the interest in computational automation. But music is not just notes; it is also melody, rhythm, themes, timbre, and very subtle aspects such as form. While many of us may not be consciously familiar with these concepts, they still have a subconscious influence on our aesthetic experience. Interestingly, it often happens that the more we consciously understand the underlying language of music, the more we tend to appreciate and enjoy it. Therefore, there is value in creating computational tools that can automate and enhance these types of analyses.
The presenters' past work resulted in the creation of Matlab's MIRtoolbox, which measures a broad range of musical characteristics directly from audio through signal processing techniques. Currently, the MIRAGE project prioritises music transcription (with a particular focus on Norwegian folk music), blending neural-network-based deep learning with conventional rule-based models. Through this project, they highlight the importance of acknowledging the interconnectedness between all musical elements. Additionally, they have crafted animated visualisations to make analyses more accessible to the general public and are aiming to make music transcription technology available to the public, with support from UiO Growth House.
-
Tørresen, Jim
(2023).
What is AI?
-
Keeley, Crocket & Tørresen, Jim
(2023).
Workshop on Ethical Challenges within Artificial Intelligence - From Principles to Practice.
-
Wosch, Thomas; Vobig, Bastian; Lartillot, Olivier & Christodoulou, Anna-Maria
(2023).
HIGH-M (Human Interaction assessment and Generative segmentation in Health and Music).
-
Maidhof, Clemens; Agres, Kat; Fachner, Jörg & Lartillot, Olivier
(2023).
Intra- and inter-brain coupling during music therapy.
-
Monstad, Lars L?berg & Lartillot, Olivier
(2023).
Automatic Transcription of Multi-Instrumental Songs: Integrating Demixing, Harmonic Dilated Convolution, and Joint Beat Tracking.
Show summary
In the rapidly expanding field of music information retrieval (MIR), automatic transcription remains one of the most sought-after capabilities, especially for songs that employ multiple instruments. Musscribe emerges as a state-of-the-art transcription tool that addresses this challenge by integrating three distinct methodologies: demixing, harmonic dilated convolution, and joint beat tracking. Demixing is employed to isolate individual instruments within a song by separating overlapping audio sources, thus ensuring each instrument is transcribed distinctly. Beat tracking is then run as a parallel process to extract the joint beat and downbeat estimations. These processes result in an output MIDI file, which is then quantized using information derived from the beat tracking. As such, this method paves the way for more accurate and sophisticated analyses, bridging the gap between human and machine understanding of music. Together, these methodologies allow us to produce transcriptions that are not only accurate but also highly representative of the original compositions. Preliminary tests and evaluations showcase its potential for transcribing complex musical pieces with high fidelity, outperforming many contemporary tools in the market. This innovative approach not only has implications for music transcription but also for broader applications in audio analysis, remixing, and digital music production. The model has been instrumental in accelerating the composition process for several Norwegian television shows. Moreover, its efficacy can be observed in the Netflix series "A Storm for Christmas." Renowned composer Peter Baden harnessed this tool to enhance his workflow, proving the demand for innovative tools like this in the professional music industry.
-
Tørresen, Jim
(2023).
Robots-assistants that Care about Privacy, Security and Safety.
-
Tørresen, Jim
(2023).
From Adapting Robot Body and Control Using Rapid-Prototyping to Human–Robot Interaction with TIAGo.
-
Riaz, Maham; Upham, Finn; Burnim, Kayla; Bishop, Laura & Jensenius, Alexander Refsum
(2023).
Comparing inertial motion sensors for capturing human micromotion.
Show summary
The paper presents a study of the noise level of accelerometer data from a mobile phone compared to three commercially available IMU-based devices (AX3, Equivital, and Movesense) and a marker-based infrared motion capture system (Qualisys). The sensors are compared in static positions and for measuring human micromotion, with larger motion sequences as reference. The measurements show that all but one of the IMU-based devices capture motion with noise levels well below the magnitude of human micromotion. However, their data and representations differ, so care should be taken when comparing data between devices.
-
Riaz, Maham
(2023).
Using SuperCollider with OSC Commands for Spatial Audio Control in a Multi-Speaker Setup.
Show summary
With the ever-increasing prevalence of technology, its application in various music-related processes, such as music composition and performance, has become increasingly prominent. One fascinating area where technology finds utility is in music performance, offering opportunities for extensive sound exploration and manipulation. In this paper, we introduce an approach utilizing SuperCollider and Open Sound Control (OSC) commands in a multi-speaker setup, enabling spatial audio control for a truly interactive audio spatialization experience. We delve into the musicological dimensions of these distinct methods, examining their integration within a live performance setting to uncover their artistic and expressive potential. By merging technology and musicology, our research aims to unlock new avenues for immersive and captivating musical experiences.
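An OSC message is just a small binary packet: a null-padded address string, a type-tag string, and big-endian arguments. The sketch below builds one in pure Python; the `/pan` address and azimuth parameter are hypothetical examples, and the actual OSC commands used with SuperCollider depend on the synth definitions and responders in the setup:

```python
import struct

def osc_message(address, *args):
    """Build a minimal OSC 1.0 binary message supporting int and float
    arguments (strings null-padded to 4-byte boundaries, big-endian)."""
    def pad(b):
        return b + b"\x00" * (4 - len(b) % 4)   # null-terminate and pad
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)
        else:
            raise TypeError(f"unsupported argument type: {type(a)}")
    return pad(address.encode()) + pad(tags.encode()) + payload

# Hypothetical endpoint: pan a source to azimuth 90 degrees.
msg = osc_message("/pan", 90.0)
```

The resulting bytes can be sent over UDP (e.g. with the `socket` module) to the port SuperCollider is listening on; in practice a library such as python-osc wraps this encoding.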
-
Câmara, Guilherme Schmidt; Spiech, Connor & Danielsen, Anne
(2023).
To asynchrony and beyond: In search of more ecological perceptual heuristics for microrhythmic structures in groove-based music.
Show summary
There is currently a gap in rhythm and timing research regarding how we perceive complex acoustic stimuli in musical contexts. Many studies have investigated timing acuity in non-musical contexts involving simple rhythmic sequences comprised of clicks or sine waves. However, the extent to which these results transfer to our perception of microrhythmic nuances in multilayered musical contexts rife with complex instrumental sounds remains poorly understood. In this talk we will present an overview of a planned series of just-noticeable difference (JND) experiments that will generate ecologically valid perceptual heuristics regarding timing discrimination thresholds. The aim is to investigate the extent to which microrhythmic timing and sonic nuances are perceived in groove-based music and connect these heuristics to the pleasurable urge to move in groove-based contexts, as well as acoustic (e.g., intensity, duration, frequency) and musical features (e.g., tempo, genre), and listener factors (e.g., musical training, stylistic familiarity). Overall, we expect timing thresholds to be higher for polyphonic/musical than for monotonic/non-musical stimuli/contexts and higher for pulse attribution (whether one can perceive a “beat”; Madison & Merker 2002, Psychol Res) than for simple detection of asynchrony and anisochrony (whether one can perceive “rhythmic irregularities”). Thresholds will likely be modulated by intensity (Goebl & Parncutt 2002, ICMPC7), tempo (Friberg & Sundberg 1995, J Acous Soc Am), instrumentation (Danielsen et al. 2019, J Exp Psychol), and genre/stylistic conventions (Câmara & Danielsen 2019, Oxford). Musically trained/stylistically familiar listeners may also display style-typical sensitivity to microrhythmic manipulations (Danielsen et al. 2021, Atten Percept Psychophys; Jakubowski et al. 2022, Cogn).
In terms of subjective experience, we expect that onset asynchrony exaggerations will likely elicit lower pleasure and movement ratings compared to performances with idiomatic timing profiles (Senn et al. 2018, PLoS One). Higher ratings should also be biased in favor of familiar styles (Senn et al. 2021) and rhythmic patterns that do not engender excessive metrical ambiguity are likely to elicit higher ratings (Spiech et al. 2022, preprint; Witek et al. 2014, PLoS One).
-
Câmara, Guilherme Schmidt; Sioros, Georgios; Danielsen, Anne; Nymoen, Kristian & Haugen, Mari Romarheim
(2023).
Sound-producing actions in guitar performance of groove-based microrhythm.
Show summary
This study reports on an experiment that investigated how guitarists signal the intended timing of a rhythmic event in a groove-based context via three different features related to sound-producing motions of impulsive chord strokes (striking velocity, movement duration and fretboard position). 21 expert electric guitarists were instructed to perform a simple rhythmic pattern in three different timing styles—“laid-back,” “on-the-beat,” and “pushed”—in tandem with a metronome. Results revealed systematic differences across participants in the striking velocity and movement duration of chords in the different timing styles. In general, laid-back strokes were played with lower striking velocity and longer movement duration relative to on-the-beat and pushed strokes. No differences in fretboard striking position were found (neither closer to the “bridge” [bottom] nor to the “neck” [head]). Correlations with previously reported audio features of the guitar strokes were also investigated, where lower velocity and longer movement duration generally corresponded with longer acoustic attack duration (signal onset to offset).
-
Bernhardt, Emil
(2023).
What is music?
-
Upham, Finn
(2023).
Breathing Together in Music, a RESPY Workshop.
Show summary
Respiration is a subtle but inescapable element of real time musical experiences, sometimes casually accompanying whatever we are hearing, other times directly involved in the actions of sound generation. This workshop explores respiratory coordination in music listeners and ensemble musicians with respy, a new Python library for evaluating respiration information from single belt chest stretch recordings. Following an introduction to the human respiratory system and breathing in music, the workshop demonstrates how the respy algorithms evaluate phase and breath type, and presents statistical tools for assessing shared information in these features of people listening to or making music together. Rather than only use aggregate statistics such as respiration rate, respy aims to elevate the details of the respiratory sequence to facilitate our exploration of how breathing is involved in musical experiences, second-by-second. Measurable coordination of the respiratory system to musical activities challenges our expectations for interacting oscillatory systems. This session will conclude with a discussion on the different categories of relationships possible between people breathing together in music.
-
Upham, Finn
(2023).
Insight into human respiration through the study of orchestras and audiences.
-
Upham, Finn & Oddekalv, Kjell Andreas
(2023).
Fingers and Tongues: Appreciating Rap Flows through Proprioceptive Interaction in Rhythm Hive.
Show summary
Rhythm games have been studied for their potential to develop interest in music making (Cassidy and Paisley, 2013) and transferable musicianship skills (Richardson and Kim, 2011), but how might they influence players' appreciation for specific musical works? Proprioceptive interaction, a concept by game designer Matt Bloch (Miller, 2017), refers to changes in a game player's perception of music as they practice specific movements to it. By drawing attention to coincidental sounds, players can develop their hearing and appreciation for nuances of production and performance. Many fans of rap enjoy performances in languages they do not speak themselves. Without specific language skills, expertise in rap performance, and/or time to learn lyrics phonetically, their experience of a rap flow is hampered by an inability to imitate and imagine the generative action of performance. Rhythm Hive is a mobile rhythm game based on the music of BTS, Enhypen, and TXT, K-pop groups with substantial followings outside of Korea. Game play presents players with finger choreographies to these groups’ hit songs: tapping sequences to the vocal performances across four to seven positions in a line. For these groups’ many non-rapping and non-Korean-speaking fans, playing Rhythm Hive may offer deeper understanding of performances by rappers like RM, Suga, and J-Hope. Through expert analysis of rap performance, transcriptions of game play, and reflections on the experience of playing Rhythm Hive, we consider shared structure between the prescribed finger choreographies and the rap flows they accompany. We studied rap verses from four BTS songs alongside their Easy and Hard level tapping sequences (vocal versions only) to identify parallels in rhythm, segmentation, repetition, and accents. Easy mode choreographies tend to mark their relationship to rap vocals by hitting the start of lines and then articulating structure with repeated contours tapped on quarter and eighth notes.
Hard mode choreographies tend to hit every rapped syllable and incorporate more gestural flourishes to mark pitch changes, ending and internal rhymes, and interesting breaks from a steady 16th-note flow. Both Easy and Hard tapping sequences consistently follow the rap track when it deviates from a quantized beat. The finger choreographies of Rhythm Hive illuminate rap performances by directing and rewarding players’ attention to details of flows that may otherwise be missed. Game feedback pushes players to replicate delivery microtiming, while spatial patterns underline linguistic and rhythmic structure. Hard mode tapping sequences articulate distinguishing characteristics of specific rap styles, giving players tangible sensitivity to degrees of technicality and nuances of genre. While fans may be motivated to play rhythm games like Rhythm Hive out of a preexisting love of the music and bands, tapping along offers them a chance to attend to, appreciate, and even rehearse key aspects of these rappers’ expert performance choices, regardless of how well they might follow by ear.
-
Ellefsen, Kai Olav
(2023).
Evolutionary Robotics.
-
Laczko, Balint
(2023).
Video poster of the project entitled “The autophagic symphony – Unveiling the final rhythm”.
-
Laczko, Balint
(2023).
Two-part guest lecture about spatial audio and Ambisonics for MCT students.
-
Blenkmann, Alejandro Omar & Puppio, Daniel
(2023).
Interview at "La Mañana de la Radio" FM 97.3.
[Radio].
Neuquén, Argentina.
-
Blenkmann, Alejandro Omar; Ianni, Pablo & Verdugo, Rodrigo
(2023).
Interview for TV show "Salud en Movimiento", Radio Television Neuquén.
[TV].
Neuquén, Argentina.
-
Laczko, Balint
(2023).
Online guest lecture about The Hum - a real-time 3D audiovisual performance in Max.
-
Veenstra, Frank; Norstein, Emma Stensby & Glette, Kyrre
(2023).
Tutorial: Evolving Robot Bodies and Brains with Unity.
Show summary
The evolution of robot bodies and brains allows researchers to investigate which building blocks are interesting for evolving artificial life. Agnostic to the evolutionary approach used, the supplied building blocks influence how artificial organisms will behave. What should these building blocks look like? How should we associate control units with these building blocks? How should we represent the genomes of these robots? In this tutorial we (1) discuss previous approaches to evolving robots and virtual creatures, (2) outline how Unity simulations and Unity's ML-agents package can be used as an interface, and (3) present our approach to evolving bodies and brains using Unity.
There are many existing solutions that are tailored to experimenting with body brain co-optimization and we have been using several simulation approaches to evolve modular robots that are represented by directed trees (directed acyclic graphs). Since evolving bodies can be relatively complex, we give participants an overview of existing methods and invite the participants to get some guided hands-on experience using the Unity ML-Agents for evolving robots. The Unity ML-Agents toolkit is an open-source toolkit for game developers, AI researchers, and hobbyists that can be used to train agents using various AI methods. Similar to OpenAI gym, it supplies a Python API through which one can optimize agents in a variety of environments. The Unity ML-Agents toolkit provides an easy-to-use interface that is flexible enough to allow for quick design iterations for evolving robot bodies and brains.
This tutorial is aimed at researchers who are interested in simulating the evolution of the bodies and brains of robots. The tutorial will provide an overview of existing approaches to evolving bodies and brains of robots, and demonstrate how to design and incorporate control units, morphological components, environments and objectives. Participants will learn how to use Unity ML-Agents as a tool with evolutionary algorithms and how to incorporate their own robotic modules for evolving robots.
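The optimization loop that drives body/brain evolution can be sketched independently of the simulator. The toy Python example below is a minimal elitist evolutionary algorithm over real-valued genomes; in the tutorial's actual setting the fitness function would come from evaluating a robot in a Unity ML-Agents environment, which is only stood in for here by a simple numeric objective:

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, gens=30, sigma=0.1, seed=1):
    """Minimal elitist evolutionary loop over real-valued genomes,
    a toy stand-in for body/brain parameters evaluated in simulation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)       # rank by fitness
        parents = pop[: pop_size // 2]            # keep the better half
        children = [[g + rng.gauss(0, sigma) for g in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                  # elitism: parents survive
    return max(pop, key=fitness)

# Toy fitness: genomes whose genes are close to 1 score higher.
best = evolve(lambda g: -sum((x - 1) ** 2 for x in g))
```

Swapping the lambda for a function that runs a rollout in a simulated environment and returns, say, distance travelled would turn the same loop into a (very basic) robot-controller optimizer.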
-
Blenkmann, Alejandro Omar
(2023).
Altered hierarchical auditory predictive processing after lesions to the orbitofrontal cortex - Quantifying Evoked Responses through Encoded Information.
-
Martin, Remy Richard
(2023).
Aesthetic Resonances: Senses of Self in Rhythm, Musical Time, and Space.
Show summary
Resonance is a rich concept that is receiving significant attention in current psychology, philosophy, and neuroscience. In ecologically-oriented literature its usage centres on perceivers’ adaptive detection of environmental information (Clarke, 2005; Raja, 2019). This is instructive of the modulating—and enhancing—nature of attention, awareness, and action. Distinct from perceptual notions of resonance in ecological psychology, physical understandings and accounts of neural activity appear in several related fields. These typically concern the oscillatory interactions of two systems, including forms of phase locking, synchronisation, and entrainment. Elsewhere the metaphor of acoustic resonance, as manifest in political contexts, is receiving philosophical attention (James, 2019).
Resonance is also a central metaphor in the context of aesthetic subjectivities. Vernacular uses of the term in response to aesthetic entanglements (‘I resonate with this song’; ‘that artwork resonates with me’) are called to mind. Adopting the ecological approach, this paper foregrounds resonance as a means of understanding the relationship between the phenomenology of music reception and underlying perceptual and affective interactions. Particularly important, self-luminous aesthetic resonances – experienced as senses of agency, ownership, affirmation, and affiliation – form the focus of a discussion which draws empirical support from quantitative studies of live music spectatorship and rich reports of ‘private’ music listening gathered through media-stimulated, phenomenological interviews.
-
Laczko, Balint
(2023).
Guest lecture about granular synthesis with onset detection in Max.
-
Martin, Remy Richard
(2023).
Our Aesthetic Categories.
-
Martin, Remy Richard
(2023).
Ultima Listeners.
-
Martin, Remy Richard
(2023).
Sensing Contexts: An Audio Walk Through the Nasjonalmuseet.
-
Tørresen, Jim & Yao, Xin
(2023).
Tutorial: Ethical Risks and Challenges of Computational Intelligence.
-
Lartillot, Olivier; Swarbrick, Dana; Upham, Finn & Cancino-Chacón, Carlos Eduardo
(2023).
Video visualization of a string quartet performance of a Bach Fugue: Design and subjective evaluation.
-
Bishop, Laura; Høffding, Simon; Laeng, Bruno & Lartillot, Olivier
(2023).
Mental effort and expressive interaction in expert and student string quartet performance.
-
Blenkmann, Alejandro Omar
(2023).
Neurophysiological Mechanisms of Human Auditory Predictions: From population- to single neuron recordings.
-
Brøvig, Ragnhild
(2023).
Digitalisering i musikkutdanningen.
-
Brøvig, Ragnhild
(2023).
Arousal, Expectancy Violation, and Pleasure.
-
Pleiss, Martin Peter
(2023).
Action cycles before ‘affordances’ in unfamiliar contexts within a playful Virtual Reality environment.
Show summary
Virtual realities, as playful, encompassing, and somewhat ecological experiences, offer unique opportunities for phenomenological inquiries into subjectivity. This paper presents
1) an observed, shared cycle of relationships between perceptive (inter)actions, affect, and degrees of familiarity in the emergence of affordances while experiencing novel and hard-to-grasp objects and dynamics within a Virtual Reality.
This description results from
2) a concrete experimental methodology that utilises the potential of interactive virtual realities as factual variations for investigating subjectivity, and the phenomenology of aesthetic experiences in particular. Aesthetic experiences have been proposed (Noë) as occasions to observe the ‘strange’ (Gallagher) in the quest for phenomenological descriptions. This paper highlights virtual realities as fringe forms of these factual variations, which help illuminate experiential structures (Merleau-Ponty). As part of the analysis of my PhD project, the paper describes the emergence of affordances in an initially unknown VR environment. My experiment used a multimodal VR artwork featuring a very abstract and unexpectedly inter-actable world, devoid of apparent contexts, symbolisms, or real-world references. While intended as an aesthetic experience by its creators, the artwork is rich and playful in its underlying governing laws, its design, and its interaction dynamics.
The resulting ‘strange’ experience made it possible to observe the changing relationship of how-it-mattered: the unknown world gradually becoming familiar as something through an action-centred being-with its objects. Different stages of affect become apparent, outlined as an ‘affective cycle of orientation’. Further, the paper describes an observable spectrum in the quality of orienting actions and their respective intentions and stances. At one pole of this spectrum, actions serve a mediating and enabling function; the other end falls under a more classical definition of ‘affordances’ (Gibson). I will discuss this in relation to gestures and habits (Merleau-Ponty) and our self-enabling capacities for intentional actions by being enacted, embedded, and embodied (4E-d) in an aesthetic experience.
-
Bonetti, Leonardo; Foldal, Maja Dyhre; Leske, Sabine Liliana; Asko, Olgerta; Volehaugen, Vegard Akselsson & Solli, Sandra
[Show all 7 contributors for this article]
(2023).
Auditory perception, memory, and predictions.
-
Vogt, Yngve; Krauss, Stefan Johannes Karl; Mossige, Joachim; Dysthe, Dag Kristian; Angheluta, Luiza & Jensenius, Alexander Refsum
(2023).
Bereder grunnen for kunstige organer.
[Business/trade/industry journal].
Apollon.
-
Saplacan, Diana
(2023).
Presentation of the paper "Health Professionals’ Views on the Use of Social Robots with Vulnerable Users: A Scenario-Based Qualitative Study Using Story Dialogue Method".
-
Jensenius, Alexander Refsum
(2023).
Oppsummering av arbeidet med opphavsrett og lisenser i QualiFAIR.
Show summary
Researchers often ask how they should handle copyright when collecting data. Who owns the data? Who holds rights, and which rights do you have as a project leader or project participant? Which licenses should you choose when you want to share different kinds of material, such as articles, datasets, source code, images, and audio and video recordings? How can you use other people's material that has no specific license? How can UiO better help students and staff develop an informed relationship to copyright?
-
Saplacan, Diana
(2023).
Introduction on my background and on the University of Oslo, Robotics and Intelligent Systems Research Group, Norway for Human-Robot Interaction Lab, Department of Social Informatics, Kyoto University, Japan.
-
Herrebråden, Henrik; Espeseth, Thomas & Bishop, Laura
(2023).
Cognitive load affects effort, performance, and kinematics in elite and non-elite rowers.
Journal of Sport & Exercise Psychology (JSEP).
ISSN 0895-2779.
45(S1),
p. S83–S83.
doi:
10.1123/jsep.2023-0077.
Show summary
The extent to which elite athletes depend on mental effort and attention to task execution has been a debated topic. Some studies have suggested that motor experts might be relatively unaffected in the face of distraction and that they might even perform better when they attend to extraneous cognitive stimuli (for example in a dual-task paradigm), as compared to single-task conditions where they concentrate fully on a sports task. However, task complexity and participants’ skill levels have so far been relatively modest in most dual-task studies. To address gaps in past research, a multi-method study was conducted using a rowing ergometer task. Participants were nine male elite rowers from the Norwegian national rowing team, preparing for the 2020 Olympic Games in Tokyo, as well as nine male recreational rowers. Participants engaged in three-minute rowing trials of varying task demands, including single-task conditions (focusing on rowing only) and dual-task conditions (focusing on rowing and solving arithmetic problems). Performance and mental effort were measured via ergometer data (i.e., rowing speed values) and eye-tracking measures (i.e., blink rates and pupil size measurements), respectively. Movement kinematics was measured by motion capture technology. The results suggested that adding extraneous cognitive load led to performance decline and increased mental effort across all participants. Both elites and non-elites demonstrated kinematic changes when going from single-task to dual-task performance. That is, kinematic events in participants’ lower-body and upper-body segments became more temporally coupled, and more in line with movement patterns associated with novice athletes when the extraneous cognitive load was added. This study contradicts several past findings and suggests that elite athletes rely on attentional resources to execute fundamental aspects of their performance. Funding source: Research Council of Norway.
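The "temporal coupling" between body segments described above is typically quantified from motion-capture trajectories. One simple proxy is the lag at which the cross-correlation of two movement signals peaks. The sketch below is illustrative only and not the study's actual analysis pipeline; the signal names, sampling rate, and 0.5 Hz stroke rate are assumptions.

```python
import numpy as np

def peak_lag(a, b, fs=100.0):
    """Lag (in seconds) at which the cross-correlation of two zero-mean
    signals peaks; a rough proxy for how tightly two body segments are
    coupled in time. With numpy's correlate convention, a negative
    value means the second signal trails the first."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    xcorr = np.correlate(a, b, mode="full")
    lag_samples = np.argmax(xcorr) - (len(b) - 1)
    return lag_samples / fs

# Synthetic example: trunk motion trailing leg motion by 200 ms
# during a 0.5 Hz rowing stroke cycle, sampled at 100 Hz.
t = np.arange(1000) / 100.0
legs = np.sin(2 * np.pi * 0.5 * t)
trunk = np.sin(2 * np.pi * 0.5 * (t - 0.2))

lag = peak_lag(legs, trunk)   # close to -0.2 s: trunk trails legs
```

Tighter coupling between segments, as reported for the dual-task condition, would show up as lags closer to zero across segment pairs.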
-
Swarbrick, Dana
(2023).
Song Talk Radio: Interview with Dana Swarbrick and Alex Whorms.
-
Jensenius, Alexander Refsum & Rosenberg, Ingvild
(2023).
Unik forskningskonsert.
[Radio].
NRK P1.
-
Jensenius, Alexander Refsum & Burnim, Kayla
(2023).
Forskere inntok Konserthuset.
[Newspaper].
Stavanger Aftenblad.
Show summary
Hundreds of pupils came to hear the Stavanger Symphony Orchestra. While the orchestra played, the musicians, the conductor, and the audience were part of a unique research project.
-
Serdar Göksülük, Bilge
(2023).
Performative Quality of Aesthetics in Bio-Cultural Paradigm.
-
Jensenius, Alexander Refsum & Zürn, Christof
(2023).
Standing still with Alexander Refsum Jensenius.
[Internet].
The Power of Music Thinking.
Show summary
What is the use of standing still for 10 minutes? That is what I asked myself when I saw a post on social media. It was a double picture: one of a man with a mobile phone around his neck displaying some data, and another showing the view he saw at that moment. I learned that he stood there for 10 minutes without any movement, listening to the sound that was already there. There were many pictures like this, and I decided to get in contact.
So, today, we are in Oslo. We speak with Alexander Refsum Jensenius, a professor of music technology at the University of Oslo, a book author, a music researcher and researching musician working in the fields of embodied music cognition and new interfaces for musical expression.
Alexander shares with us his experiences while performing and testing with artistic methods of embodied listening and how people experience music and sound. This goes from experiments with and without the conductor of a Symphony Orchestra to the sounds of our kitchen appliances.
We talk about his motion capture lab, where a person’s exact location and micro-movements can be detected while they hear different kinds of music, and how the researchers can understand what moves them.
Alexander shares insights about the Norwegian Championship of Standstill, in which thousands of people have participated so far; the winner is the person with the lowest average velocity, that is, whoever stands stillest over the set duration.
Alexander explains the interplay of body and mind and reveals some secrets on how to move people, for example, on the dance floor or to calm them down. It all has to do with our bpm, the average heartbeat of about 60 beats a minute.
-
Jensenius, Alexander Refsum; Danielsen, Anne & S?ndergaard, Pia
(2023).
Hvor blir det av UiOs alumni-satsing?
Uniforum.
ISSN 1891-5825.
Show summary
People say on festive occasions that our alumni are a resource. Unfortunately, practice shows that former employees are not just ignored; there are active attempts to remove all traces of their having done research at the institution.
-
Serdar Göksülük, Bilge
(2023).
The Implications of Laban/Bartenieff Movement Studies in the Field of Dance Anthropology.
Show summary
In my presentation, I will discuss the application of LBMS in the field of dance anthropology and provide insights into its future implications. While kinetography (Labanotation) is commonly recognized and used in dance anthropology, the embodied aspects of Laban’s work are often overlooked. Therefore, I will focus on the embodied aspects of LBMS in dance anthropology, rather than just notation. To start, I will provide a general overview of how dance analysis is understood within the discipline of dance anthropology. Then, I will argue how LBMS impacts the discourse of dance analysis. In the second part of my presentation, I will bolster my argument with an example by analyzing a short segment of the Caucasian folk dance 'Zafak' performed by the Nalmes State Folk Dance Company of Adygea. Through this example, I aim to demonstrate how dance analysis using LBMS can contribute to anthropological research.
-
Serdar Göksülük, Bilge
(2023).
Phenomenological Inquiry of Movement as a Methodology in Performing Arts Education.
-
Serdar Göksülük, Bilge
(2023).
Hybrid Format Movement Training Under the Pandemic Measures: A Clash Between Physical and Digital Realm.
-
Serdar Göksülük, Bilge
(2023).
From Ritualistic Dance to Political Act: Embodying Oppositions Through Dancing Halay in Mass Demonstrations of Turkey.
-
Serdar Göksülük, Bilge
(2023).
Embodied Knowledge Production Through Telematics in the Hybrid Realm.
-
Swarbrick, Dana; Palmer, Caroline; Keller, Peter; Clayton, Martin; Henry, Molly & Toiviainen, Petri
(2023).
Entrainment Workshop Panel Discussion.
Show summary
Definitions of entrainment vary across disciplines including mechanics, behavioural psychology, neuroscience, and biology. Generally, entrainment involves the adjustment of rhythmic signals to each other. Neural entrainment and rhythmic entrainment are common terms to distinguish the types of entrainment that occur in the brain or behaviour, respectively. Some use the term emotional entrainment to describe how individuals align their emotions with one another. Can a single definition truly encompass all crucial elements and be used across disciplines, or are these disciplines using the term in ways that are too different from each other to be unified? One general definition from empirical musicology is “the process by which independent rhythmical systems interact with each other” (Clayton, 2012). The importance of this definition is in specifying that the independent systems must generate their own, self-sustaining rhythmic fluctuations, and that entrainment is the process of their interaction and their adjustments, whether both adjust to each other (symmetrical) or one to another (asymmetrical) (ibid.). Coincidental alignment is not necessarily a marker of entrainment processes because measuring alignment does not imply that a system has adjusted to another (ibid.). Instead, measuring adjustments after perturbations may provide stronger evidence for entrainment (ibid.). Many of the measures used to capture entrainment capture some element of alignment; however, they do not necessarily measure the outcomes of perturbations. A panel discussion with experts on entrainment from various disciplines will aim to highlight the successes and shortcomings of the current body of literature on entrainment and how we can improve research and methods on this phenomenon. Questions will probe researchers’ definitions of entrainment and its correspondences and distinctions with other related phenomena including general coordination and synchrony.
Finally, we will aim to highlight the gaps that still exist in the literature and how these can be addressed with the currently available methods.
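The distinction drawn above, mere alignment versus adjustment after a perturbation, can be illustrated with a toy pair of coupled oscillators. This is a generic Kuramoto-style sketch under assumed parameters, not a method used by any of the panelists: two entrained oscillators settle into a stable phase difference, and after one is kicked, the difference relaxes back, which is the kind of evidence for entrainment the abstract describes.

```python
import numpy as np

def simulate_coupled(n_steps=2000, dt=0.01, w1=2.0, w2=2.3, k=1.5,
                     perturb_at=1000, kick=1.0):
    """Two Kuramoto oscillators with symmetric coupling k. Oscillator 1
    receives a sudden phase kick; if the pair is entrained, the phase
    difference relaxes back to its pre-perturbation value."""
    th1, th2 = 0.0, 0.5
    phase_diff = np.empty(n_steps)
    for t in range(n_steps):
        if t == perturb_at:
            th1 += kick                              # the perturbation
        d1 = (w1 + k * np.sin(th2 - th1)) * dt       # simultaneous Euler step
        d2 = (w2 + k * np.sin(th1 - th2)) * dt
        th1 += d1
        th2 += d2
        phase_diff[t] = np.angle(np.exp(1j * (th1 - th2)))  # wrapped difference
    return phase_diff

pd = simulate_coupled()
before = pd[900:1000].mean()    # settled phase difference before the kick
just_after = pd[1001]           # large deviation right after the kick
recovered = pd[1900:].mean()    # relaxed back: evidence of entrainment
```

Alignment alone (the settled `before` value) could be coincidental; the return of `recovered` to `before` after the kick is what demonstrates adjustment.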
-
Vuoskoski, Jonna Katariina
(2023).
Music and the experience of social connection.
Show summary
Musical engagement – ranging from group music-making to solitary music listening – is an inherently social activity that can facilitate communication, understanding, and connection between individuals and groups. The three studies presented in this talk shed light on the social dimension of musical experiences, focusing on the listener’s perspective. The first study explores the characteristics of virtual concerts and their impact on social connection and felt emotions during the COVID-19 pandemic. The second study compares live and livestreamed concerts and their effects on motion, emotion, and the experience of social connectedness. Finally, the third study investigates experiences of feeling moved in response to music listening, and shows that musically evoked experiences of feeling moved are associated with similar patterns of appraisals, physiological sensations, and empathic processes as feeling moved by videos depicting social scenarios. Together, these studies highlight the importance of social connection and empathy in musical experiences, demonstrating that music can serve as a powerful tool for promoting social bonding and experiences of connectedness.
-
Jensenius, Alexander Refsum & Poutaraud, Joachim
(2023).
Video Visualization.
Show summary
This workshop is targeted at students and researchers working with video recordings. Even though the workshop will be based on quantitative tools, the aim is to provide solutions for qualitative research. This includes visualization techniques such as motion videos, motion history images, and motiongrams, which, in different ways, allow for looking at video recordings from different temporal and spatial perspectives. It also includes basic computer vision analysis modules, such as extracting quantity and centroid of motion, and using such features in analysis.
The participants will learn to use the Musical Gestures Toolbox for Python, a collection of high-level modules for easily generating all of the above-mentioned visualizations and analyses. This toolbox was initially developed for analyzing music-related body motion but is equally helpful for other disciplines working with video recordings of humans, such as linguistics, psychology, medicine, and educational sciences.
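The quantity-of-motion and motiongram techniques mentioned above boil down to frame differencing. The sketch below shows the core computation on a synthetic clip; it is not the Musical Gestures Toolbox API (whose function names differ), just a minimal illustration of the idea.

```python
import numpy as np

def quantity_of_motion(frames, threshold=10):
    """Per-frame quantity of motion: the fraction of pixels whose
    absolute difference from the previous frame exceeds a threshold."""
    frames = np.asarray(frames, dtype=np.int16)
    diffs = np.abs(np.diff(frames, axis=0))   # frame-to-frame differences
    moving = diffs > threshold                # binary motion images
    return moving.mean(axis=(1, 2))           # one value per frame pair

def motiongram(frames, threshold=10, axis=2):
    """Collapse each motion image along one spatial axis, giving a
    time-by-space image that summarizes where motion happened."""
    frames = np.asarray(frames, dtype=np.int16)
    moving = (np.abs(np.diff(frames, axis=0)) > threshold).astype(float)
    return moving.mean(axis=axis)             # default: average over columns

# Tiny synthetic clip: a bright block moving across a dark background.
T, H, W = 5, 8, 8
frames = np.zeros((T, H, W))
for t in range(T):
    frames[t, 2:4, t:t + 2] = 255             # block shifts right each frame

qom = quantity_of_motion(frames)              # shape (T-1,)
mgram = motiongram(frames)                    # shape (T-1, H)
```

In real use the frames would come from a grayscale video; the toolbox additionally handles color, filtering, and rendering the results back to image and video files.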
-
Ellefsen, Kai Olav
(2023).
More human robot brains with inspiration from biology, psychology and neuroscience.
-
Danielsen, Anne
(2023).
Decolonizing groove (panel discussion).
-
Danielsen, Anne
(2023).
Ain’t that a groove! Musicological, philosophical and psychological perspectives on groove (keynote).
Show summary
The notion of groove is key to both musicians’ and academics’ discourses on musical rhythm. In this keynote, I will present groove’s historical grounding in African American musical practices and explore its current implications by addressing three distinct understandings of groove: as pattern and performance; as pleasure and “wanting to move”; and as a state of being. I will point out some musical features that seem to be shared among a wide range of groove-based styles, including syncopation and counterrhythm, swing and subdivision, and microrhythmic qualities. Ultimately, I will look at the ways in which the groove experience has been approached in different disciplines, drawing on examples from musicology / ethnomusicology, philosophy, psychology and neuroscience.
-
Glette, Kyrre
(2023).
Adaptive robots through evolutionary algorithms and machine learning.
-
Jensenius, Alexander Refsum
(2023).
Rhythmic Data Science.
Show summary
Rhythm is everywhere, from how we walk, talk, dance and play to telling stories about our past and even predicting the future. Rhythm is key to how we interact with our world. Our heartbeat, nervous system, and other bodily cycles work through rhythm. As such, rhythm is a crucial aspect of human action and perception, and it is in complex interaction with the world's cultural, biological and mechanical rhythms. At RITMO, they research rhythmic phenomena and their complex relationships with the rhythms of human bodies and brains. In the talk, Alexander will present examples of how they record, synchronize, and analyze data of complex, rhythmic human behavior, such as real-world concerts.
-
Jensenius, Alexander Refsum
(2023).
Explorations of human micromotion through standing still.
Show summary
Throughout 2023, I will stand still for ten minutes around noon every day, in a different room each day. The aim is to collect data about my micromotion and compare it to the qualities of the environment. This project follows a decade-long exploration of human micromotion from both artistic and scientific perspectives. In the talk, I will present results from the annual Norwegian Championships of Standstill, where we have studied the influence of music on people's micromotion. I will also talk about how micromotion can be used in interactive music systems, allowing for conscious and unconscious control of musical sounds.
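The standstill competitions mentioned above rank participants by average velocity of a tracked marker: the winner is the person with the lowest value. A minimal sketch of that metric, under the assumption of 3D marker positions in millimetres at 100 Hz (not the actual competition pipeline):

```python
import numpy as np

def average_velocity(positions, fs=100.0):
    """Mean speed (mm/s) of a 3D marker trajectory sampled at fs Hz,
    i.e. the quantity used to rank standstill participants: the
    winner has the lowest value."""
    positions = np.asarray(positions, dtype=float)
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)  # mm per sample
    return steps.mean() * fs                                    # mm per second

# Synthetic 10 s recordings at 100 Hz: tiny jitter around a fixed point,
# and the same jitter plus a slow 20 mm sway in one direction.
rng = np.random.default_rng(1)
still = 1000.0 + rng.normal(0, 0.05, size=(1000, 3))
sway = np.zeros((1000, 3))
sway[:, 0] = 20.0 * np.sin(np.linspace(0, 4 * np.pi, 1000))
swaying = still + sway

v_still = average_velocity(still)    # close to sensor-noise floor
v_sway = average_velocity(swaying)   # clearly larger than v_still
```

Note that even a perfectly "still" person yields a nonzero value, which is exactly the biomechanical noise floor these studies investigate.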
-
Jensenius, Alexander Refsum
(2023).
Sound Actions: An Embodied approach to a Digital Organology.
Show summary
What is an instrument in our increasingly electrified world? In this talk I will present a set of theoretical building blocks from my forthcoming book on "musicking in an electronic world". At the core of the argument is the observation that the introduction of new music technologies has led to an increased separation between action and sound in musical performance. This has happened gradually, with pianos and organs being important early examples of instruments that introduced mechanical components between the performer and resonating objects. Today's network-based instruments represent an extreme case of a spatiotemporal dislocation between action and sound. They challenge our ideas of what an instrument can be, who can perform on them, and how they should be analyzed. In the lecture I will explain how we can use the concepts of action-sound couplings and mappings to structure our thinking about such instruments. This will be used at the heart of a new organology that embraces the qualities of both acoustic and electroacoustic instruments.
-
Jensenius, Alexander Refsum
(2023).
Wishful thinking about CVs: Perspectives from a researcher.
-
Jensenius, Alexander Refsum
(2023).
Conceptualizing Musical Instruments.
Show summary
What is an instrument in our increasingly electrified world? In this talk I will present a set of theoretical building blocks from my forthcoming book on "musicking in an electronic world". At the core of the argument is the observation that the introduction of new music technologies has led to an increased separation between action and sound in musical performance. This has happened gradually, with pianos and organs being important early examples of instruments that introduced mechanical components between the performer and resonating objects. Today's network-based instruments represent an extreme case of a spatiotemporal dislocation between action and sound. They challenge our ideas of what an instrument can be, who can perform on them, and how they should be analyzed. In the lecture I will explain how we can use the concepts of action-sound couplings and mappings to structure our thinking about such instruments. This will be used at the heart of a new organology that embraces the qualities of both acoustic and electroacoustic instruments.
-
Glette, Kyrre
(2023).
Bio-inspiration for robot design and adaptation.
-
Jensenius, Alexander Refsum
(2023).
Conceptualizing Musical Instruments.
Show summary
What is an instrument in our increasingly electrified world? In this talk I will present a set of theoretical building blocks from my recent book "Sound Actions". At the core of the argument is the observation that the introduction of new music technologies has led to an increased separation between action and sound in musical performance. This has happened gradually, with pianos and organs being important early examples of instruments that introduced mechanical components between the performer and resonating objects. Today's network-based instruments represent an extreme case of a spatiotemporal dislocation between action and sound. They challenge our ideas of what an instrument can be, who can perform on them, and how they should be analyzed. In the lecture I will explain how we can use the concepts of action-sound couplings and mappings to structure our thinking about such instruments.
-
Glette, Kyrre
(2023).
Evolutionary and adaptive robotics: from simulation to reality.
-
Jensenius, Alexander Refsum
(2023).
Exploring large datasets of human, music-related standstill.
Show summary
Throughout 2023, I will stand still for ten minutes around noon every day, in a different room each day. The aim is to collect data about my micromotion and compare it to the qualities of the environment. This project follows a decade-long exploration of human micromotion from both artistic and scientific perspectives. In the talk, I will present results from the annual Norwegian Championships of Standstill, where we have studied the influence of music on people's micromotion. I will also talk about how micromotion can be used in interactive music systems, allowing for conscious and unconscious control of musical sounds.
-
Jensenius, Alexander Refsum
(2023).
CV-modul som grunnlag for NOR-CAM.
Show summary
Research assessment is on the agenda as never before. NIFU is therefore hosting an open seminar on the holistic assessment of researchers and research. The backdrop is the new European agreement on the assessment of research and the new Norwegian guide for career assessment of researchers. The seminar is organised in collaboration between NIFU (R-Quest), UHR, and the National Publishing Committee.
-
Jónsson, Bj?rn Thór
(2023).
Live Streams From Evolutionary Search for Sounds.
Show summary
Here we present a web interface for navigating sounds discovered during runs of evolutionary processes. Those runs are performed as a part of investigations into the applicability of quality diversity search for sounds. This audible peek into the collected data supplements statistical analysis. Such a way of communicating the current results is intended to provide an engaging experience of the data. By either listening to automatic playback of the discovered sounds, or interacting with them, for example by changing their parameters, interesting, annoying, pleasing, and perhaps useful artefacts may be discovered, modified and downloaded for use in any creative work. The application can be accessed from desktops or mobile devices at: https://synth.is/exploring-evoruns
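The quality diversity search behind such evolution runs is far richer than can be shown here, but the core MAP-Elites loop can be sketched in a few lines. Everything below (the genome-to-sound mapping, the descriptors, the objective) is a made-up toy, not the synth.is implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sound(genome):
    """Toy genome-to-sound mapping: a short sine whose frequency and
    amplitude come from a two-parameter genome in [0, 1]^2."""
    freq = 100.0 + 900.0 * genome[0]
    t = np.linspace(0.0, 0.1, 800)            # 0.1 s at 8 kHz
    return genome[1] * np.sin(2 * np.pi * freq * t)

def descriptors(sound):
    """Behaviour descriptors: peak level and zero-crossing rate."""
    zcr = np.mean(np.abs(np.diff(np.sign(sound))) > 0)
    return np.abs(sound).max(), zcr

def fitness(sound):
    return -np.mean((np.abs(sound) - 0.5) ** 2)   # arbitrary toy objective

# MAP-Elites: a grid over descriptor space where each cell keeps the
# best ("elite") solution whose descriptors fall into that cell.
BINS = 10
archive = {}
for _ in range(2000):
    if archive and rng.random() < 0.5:
        # variation step: mutate a randomly chosen elite
        elites = list(archive.values())
        _, parent = elites[rng.integers(len(elites))]
        genome = np.clip(parent + rng.normal(0.0, 0.1, 2), 0.0, 1.0)
    else:
        genome = rng.random(2)                    # random exploration
    sound = make_sound(genome)
    d = descriptors(sound)
    cell = (min(int(d[0] * BINS), BINS - 1), min(int(d[1] * BINS), BINS - 1))
    f = fitness(sound)
    if cell not in archive or f > archive[cell][0]:
        archive[cell] = (f, genome)
```

The filled archive is exactly the kind of collection the web interface exposes: one diverse elite sound per descriptor cell, rather than a single optimum.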
-
Jónsson, Bj?rn Thór
(2023).
kromosynth.
Show summary
Sonic design with evolutionary algorithms: The engine behind synth.is and kromosynth-cli for audio waveform breeding with neuro-evolution of pattern producing networks and quality diversity search.
-
Jónsson, Bj?rn Thór
(2023).
Jukebox with research data.
Show summary
Evolution runs explorer: opening up access to current results from the application of quality diversity search algorithms to the discovery of synthesised sounds.
https://synth.is/exploring-evoruns
-
Lindblom, Diana Saplacan; Tørresen, Jim & Hakimi, Nina
(2024).
Dynamic Dimensions of Safety - How robot height and velocity affect human-robot interaction: An explorative study on the concept of perceived safety.
University of Oslo.
-
Kocan, Danielius & Ellefsen, Kai Olav
(2023).
Attention-Guided Explainable Reinforcement Learning: Key State Memorization and Experience-Based Prediction.
Universitetet i Oslo.
-
Taye, Eyosiyas Bisrat & Ellefsen, Kai Olav
(2023).
Accountability Module: Increasing Trust in Reinforcement Learning Agents.
Universitetet i Oslo.
Show summary
Artificial Intelligence requires trust to be fully utilised, and users need to feel safe while using it. Trust, and indirectly a sense of safety, has been overlooked in the pursuit of more accurate or better-performing black-box models. The field of Explainable Artificial Intelligence, as well as current recommendations and regulations around Artificial Intelligence, requires more transparency and accountability from governmental and private institutes. A self-explainable AI that can solve a problem while explaining its reasoning is challenging to develop, and even then it would be unable to explain other AIs that lack self-explanatory abilities; it would likely not function across different problem domains and tasks without extensive knowledge about the model. The solution proposed in this thesis is the Accountability Module. It is meant to function as an external explanatory module that can work with different AI models in different problem domains. The prototype was inspired by accident investigations regarding autonomous vehicles but was created and implemented for a simplified simulation of vehicles driving on a highway. The prototype's goal was to assist an investigator in understanding why a vehicle crashed. The Accountability Module found the main factors in the decision that resulted in an accident. It was also able to help answer whether the outcome was avoidable and whether there were inconsistencies in the agent's logic by examining different cases against each other. The prototype managed to provide useful explanations and assist investigators in understanding and troubleshooting agents. The thesis and the Accountability Module indicate that a similar explanatory module is a robust direction to explore further. The chosen explainability methods and techniques were highly connected to the problem domain and limited by the scope of the thesis. Therefore, a more extensive test of the prototype with different problems needs to be performed to check the system's rigidity and versatility as well as the significance of the results. Nevertheless, in a collaboration between an Accountability Module expert and a domain expert, I expect a modular explainability solution to create more insight into an AI model and its significant incidents.
-
Noori, Farzan Majeed; Tørresen, Jim; Uddin, Md Zia & Riegler, Michael A.
(2023).
Multimodal Deep Learning Approaches for Human Activity recognition.
Universitetet i Oslo.
ISSN 1501-7710.
Full text in Research Archive
Show summary
Smart homes may be beneficial for people of all ages, but this is especially true for those with care needs, such as the elderly. To assist, monitor for emergencies, and provide companionship for the elderly, a substantial amount of research on human activity recognition systems has been conducted. Several algorithms for activity recognition and prediction of future events have been reported in the scientific literature. However, the majority of published research does not address privacy concerns or employ a variety of ambient sensors.
The objective of this thesis is to contribute to the progress in research relevant to activity recognition systems that use sensors that collect less privacy-related information. The following tasks are included in the work: assessment of sensors while keeping privacy concerns in mind, selection of cutting-edge classification methods, and how to fuse the data from multiple sensors. This thesis contributes to making progress on systems for analyzing human activity and state—or vital signs—for application in a mobile robot.
This dissertation examines two topics. First, it examines the privacy concerns associated with having a robot in the home. On a robot, an ultra-wideband (UWB) radar-based sensor and an RGB camera (for ground truth) were installed. An actigraphy device was also worn by the users for heart rate monitoring. The UWB sensor was selected to maintain privacy while monitoring human activities. The second topic under investigation is how to represent data from a single sensor in different ways, that is, how data from multiple representations can be combined. For this purpose, we investigate various representations of a single sensor's data and analyse them using cutting-edge deep learning algorithms.
The contributions provide considerations for equipping a mobile home robot with activity recognition abilities while reducing the amount of privacy-sensitive sensor data. The work also concerns examining the potential privacy restrictions that must be established for the analyzing systems. The thesis contains new methods for combining data from multiple information sources. To achieve our objective, convolutional neural networks and recurrent neural networks were applied and validated using conventional methods.
The conclusion of the thesis is that we can achieve good accuracy with limited sensors while maintaining privacy. The achieved accuracy is likely adequate for assisting healthcare personnel and caregivers in their work by indicating current activity status, measuring activity levels, and providing alerts about abnormal activities. The results can hopefully contribute to older people being able to live alone in their homes, with a greater chance that any unwanted events are quickly detected and reported to caregivers and providers.
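One of the simplest ways to combine classifiers trained on multiple representations of the same sensor data, as described above, is late fusion: average their class probabilities and pick the winner. The sketch below is a generic baseline with made-up logits and class counts, not the thesis's actual CNN/RNN architecture.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def late_fusion(*model_logits):
    """Average the class probabilities of models trained on different
    representations of the same sensor data (a simple fusion baseline)."""
    probs = [softmax(np.asarray(l, dtype=float)) for l in model_logits]
    return np.mean(probs, axis=0)

# Two hypothetical 'models' scoring three activities from different
# representations of the same radar frame.
spectrogram_logits = np.array([2.0, 0.5, -1.0])   # favours activity 0
waveform_logits = np.array([1.5, 1.4, -0.5])      # nearly tied 0 vs 1

fused = late_fusion(spectrogram_logits, waveform_logits)
predicted = int(np.argmax(fused))
```

Early fusion (concatenating the representations before a single network) is the main alternative; which works better depends on how correlated the representations are.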