-
Ellefsen, Kai Olav
(2024).
What is Artificial Intelligence?
-
Saplacan, Diana
(2024).
Human-Robot Interaction: Studies with Users.
-
Saplacan, Diana
(2024).
Qualitative Observational Video-Based Study on Perceived Privacy in Social Robots Based on Robots' Appearances.
-
Tørresen, Jim
(2024).
Ethical and Regulatory Perspectives of Robotics and Automation.
-
Saplacan, Diana
(2024).
Social Robots and Socially Assistive Robots (SARs) within Elderly Care: Lessons Learned So Far.
-
Tørresen, Jim
(2024).
Artificial Intelligence and Research Ethics Considerations.
-
Tørresen, Jim
(2024).
Do We Want Robots to Help Us When We Need Help?
-
Tørresen, Jim
(2024).
From Adaptation of the Robot Body and Control Using Rapid-Prototyping to Human–Robot Interaction with TIAGo.
-
Tørresen, Jim
(2024).
Ethical, Legal and Technical Challenges and Considerations.
-
Tørresen, Jim
(2024).
Sensing and Understanding Humans by a Robot – and vice versa.
-
Ellefsen, Kai Olav
(2023).
Artificial Intelligence: The World Seen Through a Machine's Eyes.
-
Ellefsen, Kai Olav
(2023).
What is Artificial Intelligence?
-
Tørresen, Jim
(2023).
What is Robotics?
-
Tørresen, Jim
(2023).
From Adapting Robot Body and Control to Human–Robot Interaction.
-
Tørresen, Jim
(2023).
What is AI?
-
Crockett, Keeley & Tørresen, Jim
(2023).
Workshop on Ethical Challenges within Artificial Intelligence - From Principles to Practice.
-
Tørresen, Jim
(2023).
Robot Assistants that Care about Privacy, Security and Safety.
-
Tørresen, Jim
(2023).
From Adapting Robot Body and Control Using Rapid-Prototyping to Human–Robot Interaction with TIAGo.
-
Ellefsen, Kai Olav
(2023).
Evolutionary Robotics.
-
Tørresen, Jim & Yao, Xin
(2023).
Tutorial: Ethical Risks and Challenges of Computational Intelligence.
-
Saplacan, Diana
(2023).
Presentation of the paper "Health Professionals’ Views on the Use of Social Robots with Vulnerable Users: A Scenario-Based Qualitative Study Using Story Dialogue Method".
-
Saplacan, Diana
(2023).
Introduction to my background and the Robotics and Intelligent Systems Research Group, University of Oslo, Norway, for the Human-Robot Interaction Lab, Department of Social Informatics, Kyoto University, Japan.
-
Aas, Bendik Møller & Ellefsen, Kai Olav
(2024).
Minding the Gaps: Examining the Relationship Between World Modeling and Generalization in Deep Reinforcement Learning.
University of Oslo.
-
Lindblom, Diana Saplacan; Tørresen, Jim & Hakimi, Nina
(2024).
Dynamic Dimensions of Safety - How robot height and velocity affect human-robot interaction: An explorative study on the concept of perceived safety.
University of Oslo.
-
Orten, Kristine & Ellefsen, Kai Olav
(2024).
Exploring Dual-System Reasoning through Model-Free and Model-Based RL.
University of Oslo.
-
Kocan, Danielius & Ellefsen, Kai Olav
(2023).
Attention-Guided Explainable Reinforcement Learning: Key State Memorization and Experience-Based Prediction.
University of Oslo.
-
Taye, Eyosiyas Bisrat & Ellefsen, Kai Olav
(2023).
Accountability Module: Increasing Trust in Reinforcement Learning Agents.
University of Oslo.
Artificial intelligence requires trust to be fully utilised, and users need to feel safe while using it. Trust, and indirectly a sense of safety, has been overlooked in the pursuit of more accurate or better-performing black-box models. The field of explainable artificial intelligence, together with current recommendations and regulations around AI, demands more transparency and accountability from governmental and private institutions. A self-explainable AI that solves a problem while explaining its own reasoning is challenging to develop; even then, it could not explain other AIs that lack self-explanatory abilities, and it would likely not transfer to other problem domains and tasks without extensive knowledge of the model. The solution proposed in this thesis is the Accountability Module: an external explanatory module intended to work with different AI models across different problem domains. The prototype was inspired by accident investigations involving autonomous vehicles and was implemented for a simplified simulation of vehicles driving on a highway. Its goal was to assist an investigator in understanding why a vehicle crashed. The Accountability Module identified the main factors in the decision that resulted in an accident, and by examining different cases against each other it could help answer whether the outcome was avoidable and whether the agent's logic contained inconsistencies. The prototype provided useful explanations and assisted investigators in understanding and troubleshooting agents. The thesis and the Accountability Module indicate that a similar explanatory module is a promising direction to explore further. The chosen explainability methods and techniques were closely tied to the problem domain and limited by the scope of the thesis, so a more extensive test of the prototype on different problems is needed to assess the system's rigidity and versatility, as well as the significance of the results. Nevertheless, in a collaboration between an Accountability Module expert and a domain expert, I expect a modular explainability solution to provide more insight into an AI model and its significant incidents.
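The summary describes the Accountability Module only at a high level: an external module that logs a black-box agent's decisions and, after an incident, ranks the main factors behind a decision. The Python sketch below illustrates that general idea only; it is not the thesis's actual design. The class and method names (AccountabilityModule, explain_incident), the perturbation-based sensitivity score used to rank factors, and the toy highway policy are all illustrative assumptions.

# Hypothetical sketch of an external accountability wrapper; all names and
# the scoring method are assumptions made for illustration.
from dataclasses import dataclass, field
from typing import Callable, Dict, List
import numpy as np

@dataclass
class DecisionRecord:
    step: int
    state: np.ndarray  # observation the agent acted on
    action: int        # action the agent chose

@dataclass
class AccountabilityModule:
    # Wraps a black-box policy and records every decision externally,
    # without modifying the agent itself.
    policy: Callable[[np.ndarray], int]
    log: List[DecisionRecord] = field(default_factory=list)

    def act(self, step: int, state: np.ndarray) -> int:
        action = self.policy(state)
        self.log.append(DecisionRecord(step, state.copy(), action))
        return action

    def explain_incident(self, step: int, eps: float = 1.0) -> Dict[str, float]:
        # Rank state features by how often a small perturbation flips the
        # recorded action: a crude sensitivity proxy for the "main factors"
        # behind the decision taken just before an incident.
        rec = next(r for r in self.log if r.step == step)
        scores = {}
        for i in range(rec.state.size):
            flips = 0
            for delta in (-eps, eps):
                perturbed = rec.state.copy()
                perturbed[i] += delta
                if self.policy(perturbed) != rec.action:
                    flips += 1
            scores[f"feature_{i}"] = flips / 2.0
        return dict(sorted(scores.items(), key=lambda kv: -kv[1]))

# Toy highway-style policy: brake (1) when the gap to the lead car is shorter
# than two seconds of travel at the current speed, otherwise keep speed (0).
def toy_policy(state: np.ndarray) -> int:
    gap, speed = state
    return 1 if gap < 2.0 * speed else 0

module = AccountabilityModule(policy=toy_policy)
module.act(step=0, state=np.array([40.0, 10.0]))  # clearly safe: keeps speed
module.act(step=1, state=np.array([19.0, 10.0]))  # borderline case: brakes
print(module.explain_incident(step=1))            # per-feature sensitivity scores

Keeping the module strictly outside the policy reflects the design motivation given in the summary: the same logging-and-perturbation loop can wrap any state-to-action function, which is what would let one explanatory module serve different models and problem domains.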