-
Ellefsen, Kai Olav
(2024).
Hva er Kunstig intelligens?
-
Saplacan, Diana
(2024).
Human Robot Interaction: Studies with Users.
-
Saplacan, Diana
(2024).
Qualitative Observational Video-Based Study on Perceived Privacy in Social Robots Based on Robots' Appearances.
-
Tørresen, Jim
(2024).
Ethical and Regulatory Perspectives of Robotics and Automation.
-
Saplacan, Diana
(2024).
Social Robots and Socially Assistive Robots (SARs) within Elderly Care: Lessons Learned So Far.
-
Tørresen, Jim
(2024).
Kunstig intelligens og forskningsetiske vurderinger.
-
Tørresen, Jim
(2024).
Vil vi ha roboter til å hjelpe oss når vi trenger hjelp?
-
Tørresen, Jim
(2024).
From Adaptation of the Robot Body and Control Using Rapid-Prototyping to Human–Robot Interaction with TIAGo.
-
Tørresen, Jim
(2024).
Ethical, Legal and Technical Challenges and Considerations.
-
Tørresen, Jim
(2024).
Sensing and Understanding Humans by a Robot – and vice versa.
-
Ellefsen, Kai Olav
(2023).
More human robot brains with inspiration from biology, psychology and neuroscience.
-
Saplacan, Diana; Pajalic, Zada & Tørresen, Jim
(2023).
Should Social and Assistive Robots Integrated within Home- and Healthcare Services Be Universally Designed?
Cambridge Handbook on Law, Policy, and Regulations for Human-Robot Interaction.
Cambridge University Press.
ISBN 9781009386661.
-
Tørresen, Jim
(2023).
Future Intelligent and Adaptive Robots in Real-World Environments.
-
Tørresen, Jim
(2023).
Human Intuition and its Impact on Human–Robot Interaction Regarding Safety and Accountability.
-
Tørresen, Jim
(2023).
Artificial Intelligence – diverse in methods and applications.
-
Tørresen, Jim; Saplacan, Diana & Mahler, Tobias
(2023).
Ethical, Legal and User Perspectives on Social and Assistive Robots (ELUPSAR) workshop.
-
Saplacan, Diana
(2023).
Presentation of the paper "Health Professionals’ Views on the Use of Social Robots with Vulnerable Users: A Scenario-Based Qualitative Study Using Story Dialogue Method".
-
Saplacan, Diana
(2023).
Introduction on my background and on the University of Oslo, Robotics and Intelligent Systems Research Group, Norway for Human-Robot Interaction Lab, Department of Social Informatics, Kyoto University, Japan.
-
Bogen, Annika Celin & Ellefsen, Kai Olav
(2023).
Hvorfor mener noen at kunstig intelligens er farlig?
[Internett].
ung.forskning.no.
-
Strand, Ørjan; Reilstad, Didrik Spanne; Wu, Zhenying; Castro da Silva, Bruno; Torresen, Jim & Ellefsen, Kai Olav
(2023).
Learning When to Think Fast and When to Think Slow.
-
Ellefsen, Kai Olav
(2023).
Kunstig intelligens: Verden sett gjennom en maskins øyne.
-
Ellefsen, Kai Olav
(2023).
Hva er Kunstig Intelligens?
-
Tørresen, Jim
(2023).
What is Robotics?
-
Tørresen, Jim
(2023).
From Adapting Robot Body and Control to Human–Robot Interaction.
-
Tørresen, Jim
(2023).
What is AI?
-
Crockett, Keeley & Tørresen, Jim
(2023).
Workshop on Ethical Challenges within Artificial Intelligence - From Principles to Practice.
-
Tørresen, Jim
(2023).
Robots-assistants that Care about Privacy, Security and Safety.
-
Tørresen, Jim
(2023).
From Adapting Robot Body and Control Using Rapid-Prototyping to Human–Robot Interaction with TIAGo.
-
Ellefsen, Kai Olav
(2023).
Evolutionary Robotics.
-
Tørresen, Jim & Yao, Xin
(2023).
Tutorial: Ethical Risks and Challenges of Computational Intelligence.
-
Saplacan, Diana
(2022).
AI & Ethics Workshop.
Workshop on AI & Ethics organized in cooperation with Tekna Big Data professional network and The Norwegian Council for Digital Ethics. The theme of the workshop: Must AI be professionalized to ensure ethical development? Role: Organizer/Contact person.
-
Dahl, Heidi Elisabeth Iuell; Mehmandarov, Rustam Karim; Harkestad, Inge & Saplacan, Diana
(2022).
AI and Ethics workshop.
-
Saplacan, Diana
(2022).
What do we talk about when we talk about social robots vs. robot sociomorphism? Empirical examples from previous and ongoing research.
-
Ellefsen, Kai Olav
(2022).
Towards more Human Robot Brains.
Despite many large breakthroughs in Artificial Intelligence in the last decade, robots are still struggling to solve tasks that we as humans take for granted, like emptying a dishwasher or learning multiple skills in a sequence.
In this talk, Kai Olav Ellefsen will talk about work he and colleagues at the group of Robotics and Intelligent Systems (ROBIN), University of Oslo, have done with the goal of making robots more robust and better learners, by taking inspiration from how humans and animals learn.
-
van Otterdijk, Marieke; Song, Heqiu; Tsiakas, Konstantinos; van Zeijl, Ilka & Barakova, Emilia
(2022).
Nonverbal Cues Expressing Robot Personality - A Movement Analyst's Perspective.
-
van Otterdijk, Marieke; Saplacan, Diana; Laeng, Bruno & Torresen, Jim
(2022).
Explorative Study on Human Intuitive Responses to Observing Expressive Robot Behavior.
-
Saplacan, Diana
(2022).
Participatory Design and Human-Robot Interaction - An ethical and inclusive perspective on contemporary technologies.
-
Saplacan, Diana
(2022).
Healthcare Professionals’ Attitudes towards the Organization of Care Services and the Adoption of Welfare Robots in Norway.
-
Saplacan, Diana
(2022).
Guest Lecture: Ongoing research with Social and Assistive Robots: Research projects, empirical examples, and theory.
-
Saplacan, Diana; Baselizadeh, Adel & Tørresen, Jim
(2022).
Robot Demonstration: Meet TIAGo the robot! The robot showcases several tasks: brushing hair, putting the lipstick on, using a (plastic) knife, moving an object, carrying a bag.
-
Saplacan, Diana
(2022).
TIAGo wishes you welcome to a debate on the theme: Should a robot be involved in care tasks? (Norwegian title: TIAGo ønsker velkommen til en debatt med tema: Bør en robot involveres i omsorgsoppgaver?).
-
Reilstad, Didrik Spanne; Strand, Ørjan; Wu, Zhenying; Castro da Silva, Bruno; Torresen, Jim & Ellefsen, Kai Olav
(2022).
RADAR: Reactive and Deliberative Adaptive Reasoning – Learning When to Think Fast and When to Think Slow.
-
Tørresen, Jim
(2022).
Sensing, acting and adapting in the real world.
-
Tørresen, Jim
(2022).
Ethical Perspectives of Robotics and AI – How to develop preferable systems?
-
Aas, Bendik Møller & Ellefsen, Kai Olav
(2024).
Minding the Gaps: Examining the Relationship Between World Modeling and Generalization in Deep Reinforcement Learning.
Universitetet i Oslo.
-
Lindblom, Diana Saplacan; Tørresen, Jim & Hakimi, Nina
(2024).
Dynamic Dimensions of Safety - How robot height and velocity affect human-robot interaction: An explorative study on the concept of perceived safety.
University of Oslo.
-
Orten, Kristine & Ellefsen, Kai Olav
(2024).
Exploring Dual-System Reasoning through Model-Free and Model-Based RL.
Universitetet i Oslo.
-
Noori, Farzan Majeed; Tørresen, Jim; Uddin, Md Zia & Riegler, Michael A.
(2023).
Multimodal Deep Learning Approaches for Human Activity Recognition.
Universitetet i Oslo.
ISSN 1501-7710.
Smart homes may be beneficial for people of all ages, but this is especially true for those with care needs, such as the elderly. To assist, monitor for emergencies, and provide companionship for the elderly, a substantial amount of research on human activity recognition systems has been conducted. Several algorithms for activity recognition and prediction of future events have been reported in the scientific literature. However, the majority of published research does not address privacy concerns or employ a variety of ambient sensors.
The objective of this thesis is to contribute to the progress in research relevant to activity recognition systems that use sensors that collect less privacy-related information. The following tasks are included in the work: assessment of sensors while keeping privacy concerns in mind, selection of cutting-edge classification methods, and how to fuse the data from multiple sensors. This thesis contributes to making progress on systems for analyzing human activity and state—or vital signs—for application in a mobile robot.
This dissertation examines two topics. First, it examines the privacy concerns associated with having a robot in the home. On a robot, an ultra-wideband (UWB) radar-based sensor and an RGB camera (for ground truth) were installed. An actigraphy device was also worn by the users for heart rate monitoring. The UWB sensor was selected to maintain privacy while monitoring human activities. Considering different ways to represent data from a single sensor is the second topic under investigation. That is, how data from multiple representations can be combined. For this purpose, we investigate various data representations from a single sensor’s data and analysis using cutting-edge deep learning algorithms.
The contributions provide considerations for equipping a mobile home robot with activity recognition abilities while reducing the amount of privacy-sensitive sensor data. The work also concerns examining the potential privacy restrictions that must be established for the analyzing systems. The thesis contains new methods for combining data from multiple information sources. To achieve our objective, convolutional neural networks and recurrent neural networks were applied and validated using conventional methods.
The conclusion of the thesis is that we can achieve good accuracy with limited sensors while maintaining privacy. The achieved accuracy is likely adequate for assisting healthcare personnel and caregivers in their work by indicating current activity status, measuring activity levels, and providing alerts about abnormal activities. The results can hopefully contribute to older people being able to live alone in their homes with a larger chance of any unwanted event being quickly detected and reported to caregivers and providers.
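The multi-sensor combination described in the abstract can be sketched, at its simplest, as late fusion of per-sensor class probabilities. This is a minimal illustration of the general idea only; the sensor names, activity labels, and numbers below are illustrative assumptions, not taken from the thesis.

```python
# Minimal late-fusion sketch: each sensor-specific model outputs class
# probabilities over a set of activities, and fusion averages them so
# that no single modality dominates the final prediction.

ACTIVITIES = ["sitting", "walking", "lying"]

def late_fusion(per_sensor_probs):
    """Average class probabilities from several sensor-specific models."""
    n = len(per_sensor_probs)
    return [sum(p[i] for p in per_sensor_probs) / n
            for i in range(len(ACTIVITIES))]

def predict(per_sensor_probs):
    """Return the activity with the highest fused probability."""
    fused = late_fusion(per_sensor_probs)
    return ACTIVITIES[max(range(len(fused)), key=fused.__getitem__)]

# Hypothetical outputs from a radar-based model and a wearable-based model.
radar = [0.2, 0.7, 0.1]
actigraphy = [0.4, 0.5, 0.1]
print(predict([radar, actigraphy]))  # fused probabilities favour "walking"
```

In practice the thesis applies deep networks (CNNs and RNNs) per modality; the averaging step here only stands in for the fusion stage.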
-
Kocan, Danielius & Ellefsen, Kai Olav
(2023).
Attention-Guided Explainable Reinforcement Learning: Key State Memorization and Experience-Based Prediction.
Universitetet i Oslo.
-
Taye, Eyosiyas Bisrat & Ellefsen, Kai Olav
(2023).
Accountability Module: Increasing Trust in Reinforcement Learning Agents.
Universitetet i Oslo.
Artificial Intelligence requires trust to be fully utilised, and users need to feel safe while using it. Trust, and indirectly a sense of safety, has been overlooked in the pursuit of more accurate or better-performing black-box models. The field of Explainable Artificial Intelligence and the current recommendations and regulations around Artificial Intelligence require more transparency and accountability from governmental and private institutions. A self-explainable AI that solves a problem while explaining its own reasoning is challenging to develop, and it would still be unable to explain other AIs that lack self-explainable abilities; it would also likely not transfer to different problem domains and tasks without extensive knowledge about the model. The solution proposed in this thesis is the Accountability Module. It is meant to function as an external explanatory module that can work with different AI models in different problem domains. The prototype was inspired by accident investigations involving autonomous vehicles but was created and implemented for a simplified simulation of vehicles driving on a highway. The prototype's goal was to assist an investigator in understanding why a vehicle crashed. The Accountability Module found the main factors in the decision that resulted in an accident. It was also able to facilitate answering whether the outcome was avoidable and whether there were inconsistencies in the agent's logic by examining different cases against each other. The prototype managed to provide useful explanations and assist investigators in understanding and troubleshooting agents. The thesis and the Accountability Module indicate that a similar explanatory module is a promising direction to explore further. The chosen explainability methods and techniques were closely tied to the problem domain and limited by the scope of the thesis.
Therefore, a more extensive test of the prototype with different problems needs to be performed to check the system's rigidity and versatility as well as the significance of the results. Nevertheless, in a collaboration between an Accountability Module expert and a domain expert, I expect a modular explainability solution to create more insight into an AI model and its significant incidents.
-
Soma, Rebekka; Saplacan, Diana & Sikora, Magdalena Claudia
(2022).
Exploring personality and behavior with a robot table - a qualitative case study.
Universitetet i Oslo.
-
Roa Gran, Kristian & Ellefsen, Kai Olav
(2022).
Learning to drive by predicting the future: Direct Future Prediction.
Universitetet i Oslo.
The use of artificial intelligence in systems for autonomous vehicles is growing in popularity [1, 2]. Following the rapid development of deep learning techniques over the past years, reinforcement learning has enabled automating the learning of prediction abilities. Controlling an autonomous vehicle with reinforcement learning is typically done either by learning a direct mapping from observations to actions, or by learning a model of the environment and using the model to make decisions. Model-free approaches have previously seen the most success, as errors can easily propagate in an inaccurate model.
The predictive reinforcement learning algorithm "Direct Future Prediction" (DFP) won the Visual Doom AI competition in 2016 with results 50% better than the second-best submission [3]. By learning a simpler model of the environment that only focuses on a few measurable quantities, this approach can efficiently solve challenging control tasks. Prior to this thesis, the utility of the method had not yet been tested on relevant real-world tasks, such as sensorimotor control of an autonomous vehicle.
DFP is tested on a variety of traffic scenarios with the aim of investigating the potential for predictive deep learning algorithms to learn to control an autonomous vehicle. The more classical reinforcement learning algorithm "Deep Q-Networks" (DQN) is also trained and tested on the same scenarios, and is used as a reference for determining the performance of DFP. Experiments are conducted in a more difficult version of the CarRacing simulator from OpenAI Gym, where DQN has previously performed well [4, 5].
DFP is able to solve all of the traffic scenarios from the conducted experiments, and is able to drive around a challenging track while avoiding cleverly placed obstacles. The performance of DFP is equal to or better than that of DQN in every experiment. The driving style of the DFP agents is calm and controlled, which is further highlighted by the sporadic driving of the DQN agents. Performance is also strong in previously unseen environments, indicating that the method has good generalization abilities, as is further illustrated by visualizing the future DFP predictions.
-
Bordvik, David Andreas; Ellefsen, Kai Olav & Riemer-Sørensen, Signe
(2022).
Forecasting regulation market balancing volumes from market data and weather data using Deep Learning and Transfer Learning.
Universitetet i Oslo.
The energy and power sector is a major value contributor to our society and our high living standards. In recent times the power sector has gained increased complexity while undergoing significant changes, with the increased share of renewable production being one of the contributors. An increased portion of renewable contributors in the power mix from, e.g., wind power results in more volatile power production, increasing the need for grid balancing and making the regulating power market more challenging for power producers to participate in. The purpose of the regulating power market is to compensate for the gap between the planned production settled in the day-ahead market and the actual production and demand. The ability to forecast the regulating power volumes and prices some hours in advance of the hour when they are actually traded would enable power producers to balance their positions in the market more optimally.
This project exploits historical regulation data together with different market data and weather data to train deep learning models to forecast future regulation volumes. A thorough time-series analysis of regulating power volumes revealed some predictive potential. Furthermore, a bidirectional LSTM showed satisfying results when forecasting up to four hours into the future using data from 2016 to 2021. No previous research was found that uses more than two years of data or recent data, and no previous work has utilized deep learning to forecast the Norwegian regulation market volumes.
Additionally, this project performed a deep analysis of topographical weather images and transfer learning to evaluate the potential of predicting regulating power volumes from weather images. Different weather forecasts, actual weather, and weather uncertainties were all utilized. The weather data was generally not found to have a considerable direct influence on regulation volumes. However, the weather is considered likely to have an increasing influence in the future, as more volatile renewable power production is expected in the power markets. No previous research has been found that investigates weather images in the context of the regulation market.
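Forecasting a series several hours ahead, as described above, typically starts by turning the hourly series into supervised (history window, future target) pairs before training a sequence model such as a bidirectional LSTM. This is a minimal sketch of that windowing step; the window length, horizon, and values are illustrative assumptions, not the thesis's configuration.

```python
# Build supervised pairs from an hourly series: each input is a
# `history`-hour window, each target the value `horizon` hours after
# the window ends (e.g. a 4-hour-ahead regulation volume).

def make_windows(series, history=6, horizon=4):
    """Return (input_window, target) pairs for multi-step-ahead forecasting."""
    pairs = []
    for start in range(len(series) - history - horizon + 1):
        window = series[start:start + history]
        target = series[start + history + horizon - 1]
        pairs.append((window, target))
    return pairs

hourly_volumes = list(range(12))  # stand-in for regulation-volume data
pairs = make_windows(hourly_volumes, history=6, horizon=4)
print(len(pairs), pairs[0])  # 3 pairs; first window [0..5] targets hour 9
```

Each `window` would then be fed to the recurrent model and `target` used as the regression label; exogenous market and weather features could be appended per timestep in the same layout.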