Advancing Explainable and Fair AI
We propose three sub-themes within this area, with the common goal of elevating AI to new standards of transparency, fairness, and sustainability. Application areas include healthcare and industry, with a research focus ranging from foundations through implementation to work practices.
NB! Applicants are asked to apply for one of these sub-themes. Please indicate clearly which sub-theme you have chosen for your proposal by using one of the codes ASR1, ASR2, or ASR3.
Mentoring and an internship will be offered by a relevant external partner.
Theme ASR1. Explainable AI for Digital and Green Transition
- Contact person: Ingrid Chieh Yu
- Keywords: Explainable AI, Transparency and Trust in AI, Circular Economy, Sustainable Product Lifecycle
- Research group: Analytical Solutions and Reasoning (ASR)
This research theme explores the intersection of explainable AI (XAI) with sustainable digital and green industry practices. Focusing on circular product lifecycle management, this project aims to enhance transparency in AI-driven systems, allowing stakeholders to understand, trust, and engage with these technologies. Emphasis will be placed on assessing and evolving XAI methods to meet diverse stakeholder needs, fostering innovation that aligns with environmental and social goals. By advancing models for explainability and lifecycle adaptability, the project aspires to promote both sustainable production and informed, transparent AI usage.
Relevant topics from methodological research:
- Circular Product Lifecycle Management with XAI and Model Evolution (application of XAI for industry): Integrate XAI into circular product lifecycle management, focusing on adaptive models that evolve alongside product stages, from design and production to recycling. Explore AI methodologies that offer transparent decision-making insights, driving sustainable practices within the lifecycle framework.
- Framework of XAI Methods with Industrial Domain Emphasis (method and prototype development): Investigate how external knowledge from various sources (text, datasheets, time series, graphs, etc.) can contribute to unified explanations; assess applicability to diverse industrial scenarios; and identify the methods and prototypes that best support the transparency and accountability demands of the digital and green transition.
- Human-Centric Explainability for Stakeholders (evaluation framework): Define metrics and criteria for explainability from the perspective of various stakeholders, including designers, manufacturers, consumers, and regulators. Develop evaluation frameworks that gauge how well XAI methods meet each group's needs, ensuring that insights align with the digital and green transition's technical, ethical, and environmental objectives.
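To make the idea of explanation-quality metrics concrete, the minimal sketch below computes two widely used measures, fidelity and sparsity, for a global surrogate of a black-box model. It is an illustrative assumption built on scikit-learn and synthetic data, not a prescribed evaluation framework; a proposal would define and weight such metrics per stakeholder group.

```python
# Minimal sketch of two explanation-quality metrics (fidelity and sparsity).
# The data, models, and thresholds are illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data and an opaque "black-box" model.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Global surrogate explanation: a sparse linear model mimicking the black box.
surrogate = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box it explains.
fidelity = np.mean(surrogate.predict(X) == black_box.predict(X))

# Sparsity: how many features the explanation actually relies on.
n_features_used = int(np.sum(np.abs(surrogate.coef_[0]) > 1e-6))

print(f"fidelity = {fidelity:.2f}, features used = {n_features_used}/{X.shape[1]}")
```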
Theme ASR2. Actionable Recourse and Fairness Over Time
- Contact person: Ingrid Chieh Yu
- Keywords: Explainability, Fairness, Feedback Loops, Actionable Recourse, Counterfactual Explanation
- Research group: Integreat/Analytical Solutions and Reasoning (ASR)
Ensuring responsible AI use requires analyzing potential harms, which may arise not from malicious intent but from design choices. Addressing these subtle harms can be more challenging than combating deliberate misuse. AI systems can produce mixed outcomes, delivering benefits while causing unaddressed harm, such as stress, bias, or marginalization, particularly for specific subgroups. For example, an admissions AI may improve efficiency but lead to unequal resource distribution. Decision support tools, while often accurate, might lack the values needed to handle outlier cases compassionately. Actionable recourse enables individuals to adjust their inputs to obtain a favorable outcome from an AI model. Ensuring that recourse is feasible is crucial for trust in sectors such as lending, insurance, and hiring. However, implementing recourse can provoke feedback loops that affect fairness over time. This postdoctoral project will explore counterfactual actionable recourse and its effects on fairness over time, contributing to the study of responsible AI.
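To illustrate the core idea, the minimal sketch below searches for a change to a rejected individual's actionable features that flips a toy classifier's decision. The data, feature semantics, greedy gradient search, and choice of actionable features are illustrative assumptions only; realistic recourse methods must additionally model cost, plausibility, and causal constraints.

```python
# Minimal sketch of counterfactual actionable recourse for a linear model.
# Feature names, costs, and the greedy search are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy "lending" data with features [income, debt, years_employed] (standardized).
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=500) > 0).astype(int)
clf = LogisticRegression().fit(X, y)

def recourse(x, actionable=(0, 2), step=0.05, max_iter=500):
    """Greedily nudge only the actionable features until the decision flips."""
    mask = np.zeros_like(x)
    mask[list(actionable)] = 1.0
    x_cf = x.copy()
    for _ in range(max_iter):
        if clf.predict(x_cf.reshape(1, -1))[0] == 1:   # favourable outcome reached
            return x_cf
        x_cf = x_cf + step * clf.coef_[0] * mask       # move along the model gradient
    return None  # no feasible recourse found within the budget

x_rejected = X[clf.predict(X) == 0][0]
print("original :", np.round(x_rejected, 2))
print("recourse :", np.round(recourse(x_rejected), 2))
```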
Relevant topics from methodological research:
- Decision support framework: A simulation framework that allows stakeholders to experiment with and analyze the long-term impact of recourse decisions. This includes investigating recourse and fairness mechanisms in dynamic, temporal settings (e.g., varying resources and population groups) and benchmarking model performance over time; a toy version of such a feedback-loop simulation is sketched after this list.
- User-Centric Recourse: Approaches to integrate user preferences directly into recourse options to ensure that recommendations align with individual needs and promote fair, meaningful choices. Develop decision-support tools that empower users, foster trust, and enable informed decision-making through transparent, actionable, and user-friendly counterfactual recourse.
- Fairness and Recourse Evaluation: Metrics that evaluate the fairness, actionability, validity, and long-term impact of recourse. Validate these metrics on various dynamic decision scenarios.
- Applicability: Explore the applicability of explainability and fairness techniques in real-world contexts, such as industry and the public sector, to understand domain-specific challenges and optimize solutions for broader societal impact. Assess whether XAI methods meet legal and ethical intentions over time.
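The following toy simulation, referenced in the first topic above, illustrates how group-dependent capacity to act on recourse can shift a fairness metric (here, the acceptance-rate gap) over repeated decision rounds. Group sizes, effort levels, and the fixed threshold are illustrative assumptions, not a proposed model.

```python
# Toy simulation of recourse feedback over time: two groups, a fixed score
# threshold, and group-dependent ability to act on recourse recommendations.
import numpy as np

rng = np.random.default_rng(1)
n, rounds, threshold = 1000, 5, 0.0
group = rng.integers(0, 2, size=n)                      # two demographic groups
score = rng.normal(loc=np.where(group == 0, 0.2, -0.2))  # group 0 starts advantaged
effort = np.where(group == 0, 0.4, 0.1)                  # unequal capacity to act

for t in range(rounds):
    accepted = score >= threshold
    gap = abs(accepted[group == 0].mean() - accepted[group == 1].mean())
    print(f"round {t}: acceptance-rate gap = {gap:.3f}")
    # Rejected individuals follow the recourse recommendation as far as
    # their (group-dependent) effort budget allows.
    score = np.where(accepted, score, score + effort)
```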
Theme ASR3. Explainable Generative AI for Time-Series Data
- Contact person: Thomas Plagemann
- Keywords: Foundation Models, Explainability, Medical Time-Series Data
- Research group: Analytical Solutions and Reasoning (ASR)
The development of machine learning (ML) shows that newer approaches demand more training data and yield higher performance in classification and prediction. Foundation models and single-shot learning are especially promising in data-scarce domains like healthcare, where high data collection costs and privacy concerns limit the available data. However, the complexity and lack of explainability of these models pose challenges, particularly for building trust and meeting legal standards in medical applications, where model training is rarely controlled by the application developers. This project focuses on time-series sensor data, which is critical for clinical diagnosis yet receives less attention in AI research than image data. Post-hoc, model-agnostic methods such as LIME and SHAP, commonly used for images, often fail to provide insightful explanations for sensor-based data, underscoring the need for tailored interpretability techniques in clinical contexts.
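As a simple illustration of what a tailored technique might look like, the sketch below attributes a prediction to temporal segments rather than individual time steps by occluding one window at a time and measuring the drop in model confidence. The stand-in model, window size, and masking value are illustrative assumptions; the sketch only indicates the kind of time-series-specific explanation this theme targets.

```python
# Minimal sketch of a segment-level occlusion explanation for a time-series
# classifier: mask one window at a time and measure the confidence drop.
import numpy as np

def occlusion_importance(predict_proba, x, window=100, fill=0.0):
    """Importance of each window = confidence drop when that window is masked."""
    base = predict_proba(x[None, :])[0, 1]
    scores = []
    for start in range(0, len(x), window):
        x_masked = x.copy()
        x_masked[start:start + window] = fill
        scores.append(base - predict_proba(x_masked[None, :])[0, 1])
    return np.array(scores)

# Stand-in for a trained sensor-data model: it "detects" high variance late
# in the recording (purely illustrative, not a real clinical model).
def toy_predict_proba(batch):
    p = 1 / (1 + np.exp(-(batch[:, 500:].std(axis=1) - 1.0)))
    return np.stack([1 - p, p], axis=1)

x = np.random.default_rng(2).normal(scale=[1.0] * 500 + [2.0] * 500)
print(np.round(occlusion_importance(toy_predict_proba, x), 3))
```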
Relevant topics from methodological research:
- Combining sub-symbolic and symbolic knowledge representations (e.g., knowledge graphs) to avoid hallucinations and enable domain-specific explanations; a toy illustration is sketched after this list
- Personalized and context-aware explanations
- Multi-modal explanations
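As a toy illustration of the first topic above, the sketch below cross-checks a sub-symbolic prediction against a small symbolic knowledge base before emitting an explanation, so that explanations remain grounded in domain knowledge rather than hallucinated. The graph content, the clinical terms, and the abstention rule are made-up placeholders, not real medical knowledge.

```python
# Toy sketch of combining a sub-symbolic prediction with symbolic knowledge:
# the classifier's label is checked against a small knowledge graph before an
# explanation is emitted. All facts below are illustrative placeholders.
knowledge_graph = {
    ("sleep_apnea", "indicated_by"): {"oxygen_desaturation", "breathing_pauses"},
    ("sleep_apnea", "contraindicated_by"): {"normal_airflow"},
}

def explain(label, confidence, observed_findings):
    support = knowledge_graph.get((label, "indicated_by"), set()) & observed_findings
    conflict = knowledge_graph.get((label, "contraindicated_by"), set()) & observed_findings
    if conflict and not support:
        return f"Prediction '{label}' conflicts with {sorted(conflict)}; flag for review."
    return (f"Predicted '{label}' ({confidence:.0%}), supported by domain knowledge: "
            f"{sorted(support) or 'no matching findings'}.")

print(explain("sleep_apnea", 0.87, {"oxygen_desaturation", "breathing_pauses"}))
print(explain("sleep_apnea", 0.62, {"normal_airflow"}))
```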