-
Lindblom, Diana Saplacan
(2025).
Presentation: ROBOts as Welfare Technologies and Actors for ELderLy Care: A Nordic Model for Integration of Advanced Assistive Technologies (ROBOWELL) pre-kickoff.
ROBOWELL - meeting with the project partners (UiO, KTH, SDU)
-
Lindblom, Diana Saplacan & Murashova, Natalia
(2025).
AI in Society: Virtual and Physical AI.
doi:
https://www.uio.no/om/skole/fagped-dag/program.html.
This talk explores the evolving landscape of artificial intelligence in various societal contexts, focusing on the integration and implications of virtual AI tools such as ChatGPT and Microsoft Copilot, as well as "physical AI" embodied in social robots. It showcases practical cases, including insights from our initial fieldwork applying the Ethical Risk Assessment of AI in practice (ENACT) in various private and public (learning) organizations, insights from our work on Human-Robot Interaction and social robots in the Vulnerability in Robot Societies (VIROS) and Predictive and Intuitive Robot Companion (PIRC) research projects, and our recently funded ROBOts as Welfare Technologies and Actors for ELderLy Care: A Nordic Model for Integration of Advanced Assistive Technologies (ROBOWELL) project.
-
Lindblom, Diana Saplacan
(2025).
A Cross-Cultural Study between Norway and Japan on Consent in HRI; The Use of Social Robots in Public and Private Spaces – Users’ Perspectives.
doi:
https://2025.roboethics.design/program.
-
Tørresen, Jim
(2025).
Keynote: Techno-Ethical Considerations when Applying Machine Learning in Real-world Systems.
doi:
https://www.icmlt.org/2025.html.
-
Tørresen, Jim
(2025).
Invited talk: Intelligent Robotics in Healthcare.
-
Tørresen, Jim
(2025).
Guest lecture: Intelligent Robotics in (Home) Healthcare.
-
-
Lindblom, Diana Saplacan
(2025).
Overview of my research: background, results, experiences, publications and future directions.
-
Lindblom, Diana Saplacan
(2025).
"The Wooden Gripper Was Warmer and Made the Robot Less Threatening" – a Study on Perceived Safety Based on Robot Gripper’s Visual and Tactile Properties.
-
Rolfsjord, Sigmund Johannes Ljosvoll; Arnim, Hugh Alexander von; Fatima, Safia & Baselizadeh, Adel
(2025).
Multimodal Transfer Learning for Privacy in Human Activity Recognition.
-
Nergård, Katrine Linnea; Ellefsen, Kai Olav & Tørresen, Jim
(2025).
Fast or Slow: Adaptive Decision Making in Reinforcement Learning with Pre-Trained LLMs.
-
Ellefsen, Kai Olav
(2025).
Fast and Slow Thinking AI.
-
Saplacan, Diana
(2025).
Overview of the Robotics and Intelligent Systems Research Group work within Human-Robot Interaction from the past years.
-
Saplacan, Diana
(2025).
Overview of the Work Within HRI Focusing on Elderly Care and Healthcare Professionals Views on the Use of Robots within Home- and Healthcare: Lessons Learned.
-
Tapus, Adriana; Zhegong, Shangguan; Tørresen, Jim & Søraa, Roger Andre
(2024).
IEEE RO-MAN 2024 Workshop on Ethics Challenges in Socially Assistive Robots and Agents: Legality, Value Orientation, and Future Design for Human-Robot Interaction (HRI).
doi:
https://perso.ensta-paris.fr/~shangguan/Ro-manWS/Home.html.
-
Tørresen, Jim
(2024).
Invited talk: Intelligent robots - the future of healthcare?
-
Lindblom, Diana Saplacan
(2024).
Healthcare Professionals’ Attitudes Towards Caregiving Through Teleoperation of Robots in Elderly Care. Seminar at RITMO Centre of Excellence for Time, Rhythm, and Motion.
This week's Food and Paper will be given by Diana Saplacan.
-
Ellefsen, Kai Olav
(2024).
For AI, two plus two is as time-consuming as solving the climate crisis.
Morgenbladet.
ISSN 0805-3847.
-
Saplacan, Diana
(2024).
User Studies in HRI: A Qualitative Research Perspective.
-
Saplacan, Diana
(2024).
Social Robots and Socially Assistive Robots (SARs) within Elderly Care: Lessons Learned So Far.
-
Saplacan, Diana
(2024).
Human Robot Interaction: Studies with Users.
-
Saplacan, Diana
(2024).
Qualitative Observational Video-Based Study on Perceived Privacy in Social Robots Based on Robots’ Appearances.
-
Tørresen, Jim
(2024).
Ethical, Legal and Technical Challenges and Considerations.
-
Tørresen, Jim
(2024).
Invited talk: Do we want robots to help us when we need help?
-
Tørresen, Jim
(2024).
Artificial intelligence and research ethics considerations.
-
Torheim, Kevin Tran & Ellefsen, Kai Olav
(2025).
Exploring fast and slow thinking in artificial intelligence.
Universitetet i Oslo.
Artificial intelligence systems have shown great success in several fields of study, but they still lack human abilities like adaptability and generalization. In 2011, psychologist Daniel Kahneman suggested in his book “Thinking, Fast and Slow” that the human mind is composed of two different thinking systems: System 1 (fast thinking) and System 2 (slow thinking). System 1 is fast and effortless, and is often used to solve simple tasks. System 2 is slow and effortful, and is used to solve complex tasks that require reasoning. Kahneman’s theory has inspired artificial intelligence researchers to implement agents composed of fast and slow thinking algorithms, which resemble System 1 and System 2 in humans. In reinforcement learning, such a hybrid agent is called a meta-agent. Meta-agents are worth exploring because they can be more adaptable to their environments. This thesis builds on previous research and explores the meta-agent in a new environment called Pushworld, a complex puzzle game that requires a lot of planning. Previous research has mostly focused on training a meta-agent to achieve the highest reward in different environments, without considering that the meta-agent can be prone to overusing one thinking system. The main objective of this thesis is therefore to implement and train a meta-agent that is more adaptable to its environment by switching between fast and slow thinking autonomously, without overusing either thinking system, in the Pushworld environment. To simulate that slow thinking is effortful and to prevent the meta-agent from overusing it, the meta-agent receives a penalty for using slow thinking. The thesis analyses how prone the meta-agent is to overusing fast or slow thinking under different amounts of slow-thinking penalty.
This thesis also explores to what extent slow thinking is advantageous for decision making, and to what degree the meta-agent can benefit from having fast and slow thinking in Pushworld. The results show that the penalty affects both the meta-agent’s performance in solving puzzles and how much fast and slow thinking it uses. As the penalty increases, the meta-agent uses more fast thinking and performs worse at solving Pushworld puzzles; as the penalty decreases, it prefers more slow thinking and solves puzzles better. The key message of this thesis is that the meta-agent benefits from having fast and slow thinking when the slow-thinking penalty is balanced. The findings are important because they contribute to implementing meta-agents that can balance the usage of fast and slow thinking.
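The slow-thinking penalty described in the abstract amounts to a simple form of reward shaping. A minimal sketch of the idea, assuming a fixed penalty value and hypothetical function names (this is an illustration of the mechanism, not the thesis's actual code):

```python
SLOW_PENALTY = 0.1  # assumed magnitude; the thesis compares several penalty levels


def shaped_reward(env_reward: float, used_slow_thinking: bool) -> float:
    """Apply the slow-thinking penalty: the meta-agent pays a fixed cost
    whenever it invokes the effortful System 2, so it learns to reserve
    slow thinking for states where deliberation improves the outcome."""
    if used_slow_thinking:
        return env_reward - SLOW_PENALTY
    return env_reward
```

Trained on such shaped rewards, a large penalty pushes the meta-agent toward fast thinking and a small one toward slow thinking, matching the trade-off the abstract reports.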
-
Hasle, Viktor Ringvold & Ellefsen, Kai Olav
(2025).
Fast and Slow Reasoning with Model-Based Reinforcement Learning.
Universitetet i Oslo.
After Daniel Kahneman authored the popular-psychology book Thinking, Fast and Slow, the theory of viewing human reasoning as a dual-process system has gained popularity. The dual-process theory states that human reasoning can be divided into two main modes of thinking: System 1, which is fast, intuitive, and automatic, and System 2, which is slow, deliberate, and analytical. This thesis investigates applying a dual-system approach to solving Sokoban puzzles. Sokoban is a logic puzzle game that involves pushing blocks onto targets. To model these systems computationally, a learned World Model is developed to predict the consequences of future actions within the Sokoban environment. A World Model is a generative model that learns the dynamics of the environment. This model is used in two distinct ways: (1) to train an Actor-Critic policy that acts reflexively within the environment, representing the intuitive behavior of System 1; and (2) as a component of a planning algorithm that embodies the deliberative reasoning of System 2. A higher-level Meta-Agent is trained to dynamically arbitrate between these two modes, selecting how much planning is optimal in any state. The Meta-Agent enables a nuanced allocation of cognitive resources by dynamically adjusting the extent of System 2 processing. This novel approach represents the System 1 and System 2 dichotomy as a spectrum rather than a binary. Sokoban puzzles are notoriously hard to solve, both computationally and for humans. Because of this difficulty, a high degree of exactness is required to solve them reliably. One would therefore expect the most effective reasoning strategy to be maximizing the amount of System 2 thinking. The thesis finds that this is suboptimal, and that strategically planning in critical states improves performance.
The high-level Meta-Agent performs better than any pure planning approach by minimizing the likelihood of the learned World Model generating unwanted errors during planning. This finding suggests that optimal problem-solving in complex environments benefits from a hybrid strategy that balances intuitive and analytical reasoning.
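The spectrum-style arbitration described above can be sketched as a Meta-Agent choosing a planning budget per state rather than a binary plan/no-plan decision. The budget values, value-table layout, and function name below are hypothetical illustrations of the idea, not the thesis's implementation:

```python
# The Meta-Agent picks not just *whether* to plan (System 2) but *how much*,
# treating the System 1 / System 2 dichotomy as a spectrum of planning budgets.
PLANNING_BUDGETS = (0, 5, 20)  # 0 = act reflexively with the System 1 policy


def select_planning_budget(meta_values: dict, state) -> int:
    """Choose the planning budget with the highest estimated value for this
    state, from a (state, budget) -> value table learned by the Meta-Agent.
    Unseen pairs default to 0.0, so ties fall back to the cheapest budget."""
    return max(PLANNING_BUDGETS, key=lambda b: meta_values.get((state, b), 0.0))
```

In a critical state the learned values would favor a larger budget (more World Model rollouts), while in routine states the reflexive System 1 policy acts alone, consistent with the finding that planning selectively outperforms always planning.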
-
Lindblom, Diana Saplacan; T?rresen, Jim & Hakimi, Nina
(2024).
Dynamic Dimensions of Safety – How Robot Height and Velocity Affect Human-Robot Interaction: An Explorative Study on the Concept of Perceived Safety.
University of Oslo.
-
Chen, BiHui; T?rresen, Jim & Sanfilippo, Filippo
(2024).
Simulation-to-real-world reinforcement learning with the PAL TIAGo robot.
Universitetet i Oslo.