Publications
-
Bentsen, Lars Ødegaard; Simionato, Riccardo; Wallace, Benedikte & Krzyzaniak, Michael Joseph
(2022).
Transformer and LSTM Models for Automatic Counterpoint Generation using Raw Audio.
Proceedings of the SMC Conferences.
doi:
10.5281/zenodo.6572847.
Full text in Research Archive
Summary:
This study investigates Transformer and LSTM models applied to raw audio for the automatic generation of counterpoint. In particular, the models learned to generate missing voices from an input melody, using a collection of raw audio waveforms of various pieces of Bach’s work, played on different instruments. The research demonstrates the efficacy and behaviour of the two deep learning (DL) architectures when applied to raw audio data, which are typically characterised by much longer sequences than symbolic music representations such as MIDI. To date, the LSTM has been the quintessential DL model for sequence-based tasks such as generative audio models, but this study shows that the Transformer can achieve competitive results on a fairly complex raw audio task. The research therefore aims to spark further investigation into how Transformer models can be used for applications typically dominated by recurrent neural networks (RNNs). In general, both models yielded excellent results and generated sequences with temporal patterns similar to the input targets, both for songs that were not present in the training data and for a sample taken from a completely different dataset.
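As a rough illustration of the model comparison described in this abstract, the sketch below (not the authors' code; the layer sizes, frame length, and training target are hypothetical) sets up an LSTM and a causal Transformer encoder in PyTorch that map melody frames of raw audio to predicted counterpoint frames.

    # Minimal sketch, assuming frame-wise raw-audio modelling; not the published implementation.
    import torch
    import torch.nn as nn

    class LSTMGenerator(nn.Module):
        def __init__(self, frame_size=64, hidden=256):
            super().__init__()
            self.rnn = nn.LSTM(frame_size, hidden, num_layers=2, batch_first=True)
            self.out = nn.Linear(hidden, frame_size)

        def forward(self, x):                      # x: (batch, time, frame_size)
            h, _ = self.rnn(x)
            return self.out(h)                     # predicted counterpoint frames

    class TransformerGenerator(nn.Module):
        def __init__(self, frame_size=64, d_model=256, n_heads=4, n_layers=4):
            super().__init__()
            self.proj = nn.Linear(frame_size, d_model)
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.enc = nn.TransformerEncoder(layer, n_layers)
            self.out = nn.Linear(d_model, frame_size)

        def forward(self, x):                      # x: (batch, time, frame_size)
            t = x.size(1)
            causal = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
            return self.out(self.enc(self.proj(x), mask=causal))

    # Toy usage: both models consume the same melody frames and emit frames of the missing voice.
    melody = torch.randn(8, 128, 64)               # 8 clips, 128 frames of 64 raw samples each
    for model in (LSTMGenerator(), TransformerGenerator()):
        print(model(melody).shape)                 # torch.Size([8, 128, 64])

The causal mask is what lets the Transformer stand in for the autoregressive role usually played by the recurrent model.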
-
Kwak, Dongho; Krzyzaniak, Michael Joseph; Danielsen, Anne & Jensenius, Alexander Refsum
(2022).
A mini acoustic chamber for small-scale sound experiments.
In Iber, Michael & Enge, Kajetan (Eds.),
Audio Mostly 2022: What you hear is what you see? Perspectives on modalities in sound and music interaction.
ACM Publications.
ISBN 9781450397018.
p. 143–146.
doi:
10.1145/3561212.3561223.
Full text in Research Archive
Summary:
This paper describes the design and construction of a mini acoustic chamber using low-cost materials. The primary purpose is to provide an acoustically treated environment for small-scale sound measurements and experiments using ≤ 10-inch speakers. Testing with different types of speakers showed frequency responses of < 10 dB peak-to-peak (except the "boxiness" range below 900 Hz), and the acoustic insulation (soundproofing) of the chamber is highly effective (approximately 20 dB SPL reduction). The chamber therefore provides a significant advantage for experiments that require a small room with a consistent frequency response, while preventing unwanted noise and hearing damage. Additionally, using a cost-effective and compact acoustic chamber gives flexibility when characterizing a small-scale setup and the sound stimuli used in experiments.
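For readers who want to reproduce the kind of measurement summarized above, the following is a minimal analysis sketch (hypothetical, not code from the paper): it estimates the peak-to-peak deviation in dB of a speaker's in-chamber frequency response from a recording of band-limited noise.

    # Minimal sketch, assuming a noise-based response measurement; not from the paper.
    import numpy as np
    from scipy.signal import welch

    def peak_to_peak_db(recording, sr, f_lo=900.0, f_hi=10000.0):
        """Peak-to-peak magnitude deviation (dB) of the response over [f_lo, f_hi] Hz."""
        freqs, psd = welch(recording, fs=sr, nperseg=4096)
        band = (freqs >= f_lo) & (freqs <= f_hi)
        mag_db = 10.0 * np.log10(psd[band])
        return mag_db.max() - mag_db.min()

    # Toy usage with synthetic data; in practice `recording` is the microphone capture of
    # noise played through the speaker inside the chamber.
    sr = 48000
    recording = np.random.randn(10 * sr)
    print(f"peak-to-peak response deviation: {peak_to_peak_db(recording, sr):.1f} dB")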
-
Krzyzaniak, Michael; Erdem, Cagri & Glette, Kyrre
(2022).
What Makes Interactive Art Engaging?
Frontiers in Computer Science.
4.
doi:
10.3389/fcomp.2022.859496.
Summary:
Interactive art requires people to engage with it, and some works of interactive art are more intrinsically engaging than others. This article asks what properties of a work of interactive art promote engagement. More specifically, it examines four properties: (1) the number of controllable parameters in the interaction, (2) the use of fantasy in the work, (3) the timescale on which the work responds, and (4) the amount of agency ascribed to the work. Each of these is hypothesized to promote engagement, and each hypothesis is tested with a controlled user study in an ecologically valid setting on the Internet. In these studies, we found that more controllable parameters increase engagement; the use of fantasy increases engagement for some users and not others; the timescale surprisingly has no significant effect on engagement but may relate to the style of interaction; and more ascribed agency is correlated with greater engagement, although the direction of causation is not known. This is not intended to be an exhaustive list of all properties that may promote engagement, but rather a starting point for more studies of this kind.
-
Karbasi, Seyed Mojtaba; Haug, Halvor Sogn; Kvalsund, Mia-Katrin; Krzyzaniak, Michael Joseph & Tørresen, Jim
(2021).
A Generative Model for Creating Musical Rhythms with Deep Reinforcement Learning.
In Gioti, Artemi-Maria (Ed.),
Proceedings of the 2nd Conference on AI Music Creativity.
AI Music Creativity (AIMC).
ISBN 9783200082724.
doi:
10.5281/zenodo.5137900.
Full text in Research Archive
Summary:
Musical rhythms can be modeled in different ways; usually the models rely on particular temporal divisions and time discretization. We propose a generative model based on Deep Reinforcement Learning (Deep RL) that can learn musical rhythmic patterns without defining temporal structures in advance. In this work we use the Dr. Squiggles platform, an interactive robotic system that generates musical rhythms via interaction, to train a Deep RL agent. The goal of the agent is to learn rhythmic behavior from an environment with high temporal resolution, without defining any basic rhythmic pattern for the agent. This means the agent must learn rhythmic behavior in an approximately continuous space purely through interaction with other rhythmic agents. The results show significant adaptability from the agent and great potential for RL-based models to be used as creative algorithms in musical and creative applications.
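As an illustration of the grid-free, interaction-driven setup described in this abstract, here is a deliberately simplified stand-in (tabular Q-learning rather than the Deep RL agent used in the paper; the tick rate, reward values, and history length are hypothetical): the agent decides tap / no-tap at every tick of a fine-grained clock and learns to coincide with a partner's onsets without any predefined rhythmic pattern.

    # Simplified stand-in: tabular Q-learning at tick resolution, no metrical grid given to the agent.
    import numpy as np

    rng = np.random.default_rng(0)
    TICKS, PERIOD = 20000, 25            # partner taps every 25 ticks (unknown to the agent)
    HISTORY = 32                         # agent observes the last 32 ticks of partner onsets
    eps, alpha, gamma = 0.1, 0.1, 0.9

    q = {}                               # state (recent onset tuple) -> [Q(no-tap), Q(tap)]
    history = tuple([0] * HISTORY)

    for t in range(TICKS):
        qs = q.setdefault(history, np.zeros(2))
        action = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(qs))
        partner_tap = 1 if t % PERIOD == 0 else 0
        # reward tapping together; lightly penalise taps that land on silence
        reward = 1.0 if (action == 1 and partner_tap == 1) else (-0.1 if action == 1 else 0.0)
        next_history = history[1:] + (partner_tap,)
        next_qs = q.setdefault(next_history, np.zeros(2))
        qs[action] += alpha * (reward + gamma * next_qs.max() - qs[action])
        history = next_history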
-
Krzyzaniak, Michael Joseph
(2021).
Musical robot swarms, timing, and equilibria.
Journal of New Music Research.
ISSN 0929-8215.
50(3),
p. 279–297.
doi:
10.1080/09298215.2021.1910313.
-
Erdem, Cagri; Jensenius, Alexander Refsum; Glette, Kyrre; Krzyzaniak, Michael Joseph & Veenstra, Frank
(2020).
Air-Guitar Control of Interactive Rhythmic Robots.
Proceedings of the International Conference on Live Interfaces (Proceedings of ICLI).
p. 208–210.
Summary:
This paper describes an interactive art installation shown at ICLI in Trondheim in March 2020. The installation comprised three musical robots (Dr. Squiggles) that play rhythms by tapping. Visitors were invited to wear muscle-sensor armbands, through which they could control the robots by performing ‘air-guitar’-like gestures.
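A hedged sketch of the kind of mapping such an installation needs (hypothetical, not the installation's code): rectify and low-pass filter the armband's EMG signal to obtain a slow envelope, then use it as a tap-density control for the robots.

    # Minimal sketch of an EMG-to-control mapping; not the installation's implementation.
    import numpy as np
    from scipy.signal import butter, lfilter

    def emg_envelope(emg, sr=200, cutoff_hz=3.0):
        """Rectify the EMG and low-pass filter it to get a slow control envelope."""
        b, a = butter(2, cutoff_hz / (sr / 2.0))   # 2nd-order low-pass
        return lfilter(b, a, np.abs(emg))

    # Toy usage: a burst of simulated muscle activity mapped to a tap-density value in [0, 1].
    sr = 200
    emg = np.random.randn(5 * sr) * np.concatenate([np.zeros(2 * sr), np.ones(3 * sr)])
    density = np.clip(emg_envelope(emg, sr) / 2.0, 0.0, 1.0)   # scaling is arbitrary here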
-
Krzyzaniak, Michael Joseph
(2020).
Words to Music Synthesis.
In Michon, Romain & Schroeder, Franziska (Eds.),
Proceedings of the International Conference on New Interfaces for Musical Expression.
Birmingham City University.
ISBN 9781949373998.
p. 29–34.
Full text in Research Archive
-
Krzyzaniak, Michael Joseph; Frohlich, David & Jackson, Philip JB
(2019).
Six types of audio that DEFY reality! A taxonomy of audio augmented reality with examples.
AM'19: Proceedings of the 14th International Audio Mostly Conference: A Journey in Sound.
Association for Computing Machinery (ACM).
ISBN 9781450372978.
doi:
10.1145/3356590.3356615.
Summary:
In this paper we examine how the term ‘Audio Augmented Reality’ (AAR) is used in the literature, and how the concept is used in practice. In particular, AAR seems to refer to a variety of closely related concepts. In order to gain a deeper understanding of the disparate work surrounding AAR, we present a taxonomy of these concepts and highlight both canonical examples in each category and edge cases that help define the category boundaries.
-
Krzyzaniak, Michael Joseph & Bishop, Laura
(2022).
Professor Plucky—Expressive body motion in human-robot musical ensembles.
-
Kwak, Dongho; Krzyzaniak, Michael Joseph; Danielsen, Anne & Jensenius, Alexander Refsum
(2022).
A mini acoustic chamber for small-scale sound experiments.
-
Krzyzaniak, Michael Joseph
(2021).
Dr. Squiggles AI Rhythm Robot.
In Senese, Mike (Ed.),
Make: Volume 76 (Behind New Eyes).
Make Community LLC.
ISBN 9781680457001.
p. 88–97.
-
Karbasi, Seyed Mojtaba; Haug, Halvor Sogn; Kvalsund, Mia-Katrin; Krzyzaniak, Michael Joseph & Tørresen, Jim
(2021).
A Generative Model for Creating Musical Rhythms with Deep Reinforcement Learning.
-
Krzyzaniak, Michael Joseph
(2020).
Interactive Rhythmic Robots.