For more information, see the English version of this page.
Publications
-
Wallace, Benedikte; Glette, Kyrre & Szorkovszky, Alexander
(2025).
How can we make robot dance expressive and responsive? A survey of methods and future directions.
Frontiers in Computer Science.
7.
doi:
10.3389/fcomp.2025.1575667.
The development of robots that can dance like humans presents a complex challenge due to the disparate abilities involved and various aesthetic qualities that need to be achieved. This article reviews recent advances in robotics, artificial intelligence, and human-robot interaction toward enabling various aspects of realistic dance, and examines potential paths toward a fully embodied dancing agent. We begin by outlining the essential abilities required for a robot to perform human-like dance movements and the resulting aesthetic qualities, summarized under the terms expressiveness and responsiveness. Subsequently, we present a review of the current state-of-the-art in dance-related robot technology, highlighting notable achievements, limitations and trade-offs in existing systems. Our analysis covers various approaches, including traditional control systems, machine learning algorithms, and hybrid systems that aim to imbue robots with the capacity for responsive, expressive movement. Finally, we identify and discuss the critical gaps in current research and technology that need to be addressed for the full realization of realistic dancing robots. These include challenges in real-time motion planning, adaptive learning from human dancers, and morphology independence. By mapping out current methods and challenges, we aim to provide insights that may guide future innovations in creating more engaging, responsive, and expressive robotic systems.
-
Bravo, Pedro Pablo Lucas; Fasciani, Stefano; Szorkovszky, Alexander & Glette, Kyrre
(2025).
An Interactive Self-Assembly Swarm Music System in Extended Reality.
In Seiça, Mariana & Wirfs-Brock, Jordan (Eds.),
AM '25: Proceedings of the 20th International Audio Mostly Conference.
Association for Computing Machinery (ACM).
ISBN 9798400708183.
This paper explores the music-making capabilities of a swarm intelligence type of algorithm known as self-assembly in an interactive context using Extended Reality (XR) technologies. We describe the modifications made to a fully autonomous version of this algorithm, which we proposed in a previous work, allowing us to adapt it for user-interactive music. Moreover, we present the design of an XR system that supports this adaptation, modelled as a human-swarm interactive music system, which is implemented in the Meta Quest 3 headset. An auto-ethnographic study was conducted to discover the affordances of the system in a music improvisation session. The study, supported by empirical measurements collected during the session, enables a comparison between the interactive version and the original autonomous offline version, providing valuable insights into how a user can influence the swarm's behaviour. The results are used to discuss the music performance possibilities and future directions for this type of interactive music system.
-
Bhandari, Shailendra; Silva, Pedro Rego Lencastre e; Mathema, Rujeena; Szorkovszky, Alexander; Yazidi, Anis & Lind, Pedro
(2025).
Modeling eye gaze velocity trajectories using GANs with spectral loss for enhanced fidelity.
Scientific Reports.
15(1).
doi:
10.1038/s41598-025-05286-5.
Accurate modeling of eye gaze dynamics is essential for advances in human-computer interaction, neurological diagnostics, and cognitive research. Traditional generative models like Markov models often fail to capture the complex temporal dependencies and distributional nuances inherent in eye gaze trajectory data. This study introduces a Generative Adversarial Network (GAN) framework employing Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) generators and discriminators to generate high-fidelity synthetic eye gaze velocity trajectories. We conducted a comprehensive evaluation of four GAN architectures: CNN-CNN, LSTM-CNN, CNN-LSTM, and LSTM-LSTM, trained under two conditions: using only adversarial loss (L_G) and using a weighted combination of adversarial and spectral losses. Our findings reveal that the LSTM-CNN architecture trained with this new loss function exhibits the closest alignment to the real data distribution, effectively capturing both the distribution tails and the intricate temporal dependencies. The inclusion of spectral regularization significantly enhances the GANs' ability to replicate the spectral characteristics of eye gaze movements, leading to a more stable learning process and improved data fidelity. Comparative analysis with a Hidden Markov Model (HMM) optimized to four hidden states further highlights the advantages of the LSTM-CNN GAN. Statistical metrics show that the HMM-generated data significantly diverges from the real data in terms of mean, standard deviation, skewness, and kurtosis. In contrast, the LSTM-CNN model closely matches the real data across these statistics, affirming its capacity to model the complexity of eye gaze dynamics effectively. These results position the spectrally regularized LSTM-CNN GAN as a robust tool for generating synthetic eye gaze velocity data with high fidelity. Its ability to accurately replicate both the distributional and temporal properties of real data holds significant potential for applications in simulation environments, training systems, and the development of advanced eye-tracking technologies, ultimately contributing to more naturalistic and responsive human-computer interactions.
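The weighted combination of adversarial and spectral losses described in the abstract can be illustrated with a minimal sketch. This is an assumed NumPy illustration, not the paper's implementation: the function names and the example weight are hypothetical, and the spectral term here is simply the mean squared error between amplitude spectra.

```python
import numpy as np

def spectral_loss(real, fake):
    """MSE between the amplitude spectra of a real and a generated signal."""
    real_spec = np.abs(np.fft.rfft(real))
    fake_spec = np.abs(np.fft.rfft(fake))
    return np.mean((real_spec - fake_spec) ** 2)

def combined_loss(adversarial_loss, real, fake, weight=0.5):
    """Weighted sum of an adversarial loss term and the spectral penalty."""
    return adversarial_loss + weight * spectral_loss(real, fake)

# Identical signals incur no spectral penalty, so only the
# adversarial term remains.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t)
print(combined_loss(1.0, signal, signal))  # → 1.0
```

In a training loop, the generator would minimize this combined objective so that generated trajectories match not only the discriminator's judgment but also the frequency content of real gaze velocity data.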
-
Bravo, Pedro Pablo Lucas; Fasciani, Stefano; Szorkovszky, Alexander & Glette, Kyrre
(2024).
Interactive Sonification of 3D Swarmalators.
Proceedings of the International Conference on New Interfaces for Musical Expression.
NIME.
pp. 252–260.
doi:
10.5281/zenodo.13904846.
Full text in research archive
-
Veenstra, Frank; Szorkovszky, Alexander & Glette, Kyrre
(2023).
Decentralized Control and Morphological Evolution of 2D Virtual Creatures.
In Iizuka, Hiroyuki; Suzuki, Keisuke; Uno, Ryoko; Damiano, Luisa; Spychala, Nadine; Aguilera, Miguel; Izquierdo, Eduardo; Suzuki, Reiji & Baltieri, Manuel (Eds.),
ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference.
MIT Press.
doi:
10.1162/isal_a_00656.
-
Szorkovszky, Alexander; Veenstra, Frank & Glette, Kyrre
(2023).
Toward cultures of rhythm in legged robots.
In Iizuka, Hiroyuki; Suzuki, Keisuke; Uno, Ryoko; Damiano, Luisa; Spychala, Nadine; Aguilera, Miguel; Izquierdo, Eduardo; Suzuki, Reiji & Baltieri, Manuel (Eds.),
ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference.
MIT Press.
doi:
10.1162/isal_a_00673.
Full text in research archive
-
Szorkovszky, Alexander; Veenstra, Frank & Glette, Kyrre
(2023).
Central pattern generators evolved for real-time adaptation to rhythmic stimuli.
Bioinspiration & Biomimetics.
ISSN 1748-3182.
18(4).
doi:
10.1088/1748-3190/ace017.
Full text in research archive
-
Gyllingberg, Linnéa; Szorkovszky, Alexander & Sumpter, David
(2023).
Using neuronal models to capture burst-and-glide motion and leadership in fish.
Journal of the Royal Society Interface.
ISSN 1742-5689.
20(204),
pp. 1–13.
doi:
10.1098/rsif.2023.0212.
-
Szorkovszky, Alexander; Veenstra, Frank; Lartillot, Olivier Serge Gabriel; Jensenius, Alexander Refsum & Glette, Kyrre
(2023).
Embodied Tempo Tracking with a Virtual Quadruped.
Proceedings of the Sound and Music Computing Conference 2023.
SMC Network.
ISBN 9789152773727.
doi:
10.5281/zenodo.10060970.
Full text in research archive
-
Levens, Watson; Szorkovszky, Alexander & Sumpter, David
(2022).
Friend of a friend models of network growth.
Royal Society Open Science.
9(10),
pp. 1–17.
doi:
10.1098/rsos.221200.
See all works in NVA
-
Szorkovszky, Alexander; Veenstra, Frank & Glette, Kyrre
(2022).
From real-time adaptation to social learning in robots.
See all works in NVA
Published 22 Sep. 2021 09:34 - Last modified 14 Oct. 2022 14:54