Abstracts

Abstracts of the presentations programmed in the Special Event on Norwegian Folk Music & Computational Analysis

Overview of the MIRAGE project

by Olivier Lartillot

One main goal is to improve computers' capability to listen to and understand music, and to conceive technologies that facilitate music understanding and appreciation. One main application is to make music more accessible and engaging. The methodology consists of a close interaction between automated music transcription and detailed musicological analysis of the transcriptions. Significant effort is dedicated to the design of applications of value for musicology, music cognition and the general public, with a particular focus on Norwegian folk music, in collaboration with the National Library of Norway. MIRAGE is funded by The Research Council of Norway under the IKTPLUSS programme.

Presentation of the Norwegian Collection of Folk Music

by Hans-Hinrich Thedens

The Norwegian collection of folk music is a 70-year-old institution that has had at least three different affiliations, as well as purposes. It started as a pure music research institute analyzing folk music as sound; later, its holdings were utilized for the revitalization of local music traditions. The most recent phase combines these and other goals, and thus embraces the work of the MIRAGE project. In addition to disseminating old source materials, we also document how traditional music lives today.
 

Segmentation of the collection into tunes, the AudioSegmentor software

by OL, HHT and Rasmus Kjorstad

The collection initially took the form of a series of 600 tape recordings, each tape containing up to two hours of audio, together with metadata. We first needed to isolate the individual recording corresponding to each tune, by combining an automated pre-segmentation, based on sound classification and audio analysis, with a subsequent manual verification and fine-tuning of the temporal positions, using a purpose-built user interface.
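The energy-gating idea behind such a pre-segmentation can be sketched as follows. This is an illustrative simplification under assumed parameters (frame length, silence threshold, gap and minimum-tune durations), not the actual MIRAGE pipeline, which also relies on sound classification:

```python
import numpy as np

def presegment(signal, sr, frame_len=2048, hop=1024,
               silence_db=-40.0, min_gap_s=2.0, min_tune_s=10.0):
    """Return (start, end) sample indices of candidate tune segments,
    found by detecting sustained low-energy gaps between tunes."""
    # Frame-wise RMS energy in dB relative to the loudest frame.
    n_frames = max(1, 1 + (len(signal) - frame_len) // hop)
    rms = np.array([np.sqrt(np.mean(signal[i*hop:i*hop+frame_len]**2) + 1e-12)
                    for i in range(n_frames)])
    db = 20 * np.log10(rms / (rms.max() + 1e-12) + 1e-12)
    active = db > silence_db

    # Merge active frames into segments, bridging silent gaps shorter
    # than min_gap_s so that brief pauses do not split a tune in two.
    segments, start, silent_run = [], None, 0
    gap_frames = int(min_gap_s * sr / hop)
    for i, a in enumerate(active):
        if a:
            if start is None:
                start = i
            silent_run = 0
        elif start is not None:
            silent_run += 1
            if silent_run >= gap_frames:
                segments.append((start, i - silent_run + 1))
                start, silent_run = None, 0
    if start is not None:
        segments.append((start, n_frames))

    # Keep only segments long enough to plausibly be tunes.
    min_frames = int(min_tune_s * sr / hop)
    return [(s * hop, e * hop + frame_len) for s, e in segments
            if e - s >= min_frames]
```

In practice, the candidate boundaries produced by a detector like this are then reviewed and fine-tuned by hand, as described above.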

Introduction to Hardanger fiddle (with live performance)

by Olav Luksengård Mjelva

 

Manual note transcription (of Hardanger fiddle music in particular), the Annotemus software

by OL (and Anders Elowsson)

To facilitate the development of high-quality transcription algorithms, we collected high-quality annotations of recordings of music performances. Annotators were asked to indicate all the notes and to determine their temporal positions and pitches with high precision. Because it is easier to work on familiar material, the hired annotators are three skilled musicians (including Olav Luksengård Mjelva), who annotated their own recorded tunes, which they know deeply. To make the annotation work as efficient and valuable as possible, we designed our own software, called Annotemus, aimed at facilitating the manual annotation of note onsets, offsets and pitches. Annotemus is made freely available.
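The core record such an annotation tool collects, one note with a precise onset, offset and pitch, can be sketched as a small data structure. The field names and JSON export below are assumptions for illustration, not the actual Annotemus format:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class NoteAnnotation:
    onset: float    # seconds from the start of the recording
    offset: float   # seconds; must satisfy offset > onset
    pitch: float    # MIDI note number (fractional values allow microtonality)

    def __post_init__(self):
        # Reject ill-formed notes at creation time.
        if self.offset <= self.onset:
            raise ValueError("note offset must come after its onset")

def export_annotations(notes, path):
    """Serialise a list of NoteAnnotation to a JSON file."""
    with open(path, "w") as f:
        json.dump([asdict(n) for n in notes], f, indent=2)
```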

Automated note transcription (of Hardanger fiddle music)

by Lars Monstad and OL

The aim of this transcription project is to make the process completely automated, so that anyone can obtain a music score from a music recording (here, of Hardanger fiddle music). We briefly present the general principles of the technological solution we are developing. It is based on deep learning and benefits from the recent progress in that domain. We show how, by training the neural network on the collection of high-quality transcriptions made by expert annotators (cf. the previous presentation), we obtain a technology that surpasses the state of the art.
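One common way to turn expert note annotations into a frame-level training target for a transcription network is a "piano roll" of pitch activations. The sketch below illustrates that general idea under assumed parameters (100 frames per second, 128 MIDI pitches); the actual MIRAGE model and representations may differ:

```python
import numpy as np

def notes_to_pianoroll(notes, n_frames, fps=100, n_pitches=128):
    """Rasterise annotated notes into a binary (n_frames, n_pitches) target.

    notes: iterable of (onset_s, offset_s, midi_pitch) tuples.
    """
    roll = np.zeros((n_frames, n_pitches), dtype=np.float32)
    for onset, offset, pitch in notes:
        a = int(round(onset * fps))
        b = max(a + 1, int(round(offset * fps)))  # at least one frame
        roll[a:min(b, n_frames), int(pitch)] = 1.0  # mark pitch as sounding
    return roll
```

A network can then be trained to predict such a roll frame by frame from the audio, with the annotated recordings providing the ground truth.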

The current development of this work is partially funded by the Seed Grant program of the UiO Growth House.

Rhythm in Hardanger fiddle music

by Mats S. Johansson (with live performance by OLM)

Norwegian traditional fiddle music is characterized by great variety in rhythmic forms and expression. This presentation focuses on dance tunes in triple (springar) and duple (gangar/halling) meter, including the rhythmic intricacies of the associated performance styles. In particular, we will discuss how beats are formed and identified, and some of the challenges associated with manual and automated beat transcription. These include

  1. The placement of the first beat (the One or the downbeat), that is, the start of the metric cycle.
  2. Structural ambiguity related to rhythmic grouping, implying that it may be unclear which note belongs to which beat.
  3. Ambiguity as to whether a particular rhythmic articulation should be considered as the beat onset or as a syncopation against the beat onset (before or after).
  4. Microlevel ambiguities: intentionally unclear onsets or multiple competing onsets occurring more or less simultaneously, making the exact beat onset position ambiguous and/or experientially extended in time.

Automated rhythm transcription

by OL and MSJ

One core challenge, for a machine but also for any listener unfamiliar with this music, is to be able to "feel" the beats (in other words, to sense where the beats fall) in the styles of Hardanger fiddle music discussed in the previous talk. We show that existing beat-tracking technologies fail to follow this type of music, and we explain how we are trying to solve those issues.
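The kind of conventional baseline that struggles here can be sketched as a beat-period estimator based on the autocorrelation of an onset-strength envelope. Such an estimator implicitly assumes roughly equal beat durations, an assumption the "uneven" beats of springar tunes violate; the sketch is purely illustrative and is not the MIRAGE approach itself:

```python
import numpy as np

def estimate_beat_period(onset_env, fps, bpm_range=(60, 200)):
    """Return the dominant beat period (seconds) of an onset-strength
    envelope, via the peak of its autocorrelation within a tempo range."""
    env = onset_env - onset_env.mean()
    # Autocorrelation for non-negative lags only.
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    lo = int(fps * 60 / bpm_range[1])   # shortest plausible beat lag
    hi = int(fps * 60 / bpm_range[0])   # longest plausible beat lag
    lag = lo + int(np.argmax(ac[lo:hi]))
    return lag / fps
```

On strictly periodic onsets this works well; on non-isochronous meters the autocorrelation peak smears across competing lags, which is one reason this family of methods breaks down on the repertoire discussed above.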

Music analysis and visualisation; browsing the music catalogue

by OL

The next step consists in carrying out detailed music analyses of the transcriptions, in order to reveal, in particular, intertextuality within the corpus. A final direction of research aims at designing tools to visualise each tune and the whole catalogue, for both musicologists and the general public.

Published Apr. 16, 2023 11:04 PM - Last modified May 15, 2023 12:04 PM