Upcoming talk (tomorrow, Sep. 17th)

Hi everyone,

We are delighted to invite you to a causality talk by Dr. Sara Magliacane.
Sara is an Assistant Professor at the Amsterdam Machine Learning Lab at the University of Amsterdam and a Research Scientist at the MIT-IBM Watson AI Lab, and she will be with us on September 17.

Talk Title: Causal Representation Learning in Temporal Settings

Speaker: Prof. Sara Magliacane, Amsterdam Machine Learning Lab, University of Amsterdam; Research Scientist, MIT-IBM Watson AI Lab

Date and Time: Sep. 17th, 10:00

Location: OJD, Kristen Nygaards sal (5370)


In this talk, Sara will discuss learning causal representations in temporal sequences and present her recent work (CITRIS, iCITRIS, and BISCUIT). Below, you will find the abstract of the talk and a brief bio of Prof. Magliacane. We believe this talk will be of great interest to anyone engaged in machine learning, AI, and causal inference research.

We look forward to your participation in what promises to be an enlightening and informative session.


Best regards,

-- adin



Abstract: Causal inference reasons about the effect of unseen interventions or external manipulations on a system. Similar to classic approaches to AI, it typically assumes that the causal variables of interest are given from the outset. However, real-world data often comprises high-dimensional, low-level observations (e.g., pixels in a video) and is thus usually not structured into such meaningful causal units. Causal representation learning aims at addressing this gap by learning high-level causal variables along with their causal relations directly from raw, unstructured data, e.g., images or videos. In this talk, I will focus on learning causal representations in temporal sequences, e.g., sequences of images. In particular, I will present some of our recent work on causal representation learning in environments in which we can perform interventions or actions. I will start by presenting CITRIS (https://arxiv.org/abs/2202.03169), where we leveraged the knowledge of which variables are intervened on in each timestep to learn a provably disentangled representation of the potentially multidimensional ground-truth causal variables, as well as a dynamic Bayesian network representing the causal relations between these variables. I will then show iCITRIS (https://arxiv.org/abs/2206.06169), an extension that allows for instantaneous effects between variables. Finally, I will focus on our most recent method, BISCUIT (https://arxiv.org/abs/2306.09643), which overcomes one of the biggest limitations of our previous methods: the requirement to know which variables are intervened on. In BISCUIT we instead leverage actions with unknown effects on an environment. Assuming that each causal variable has exactly two distinct causal mechanisms, we prove that we can recover each ground-truth variable from a sequence of images and actions up to permutation and element-wise transformations. This allows us to apply BISCUIT to realistic simulated environments for embodied AI, where we can learn a latent representation that allows us to identify and manipulate each causal variable, as well as a mapping between each high-level action and its effects on the latent causal variables.
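
For readers new to this setting, below is a minimal toy sketch (in Python, using NumPy) of the kind of data-generating process the abstract describes: latent causal variables evolve over time under fixed causal mechanisms, interventions with known targets occasionally override those mechanisms, and only a low-level rendering of the latents is observed. All names, dimensions, and the linear/tanh mechanisms are assumptions made for illustration; this is not code from the talk or the papers.

    # Hypothetical toy data-generating process (illustration only, not CITRIS code).
    import numpy as np

    rng = np.random.default_rng(0)
    T, K, D = 100, 3, 16                  # timesteps, latent causal variables, observation dims

    # Lower-triangular weights: a simple acyclic causal/temporal structure among the latents.
    A = np.tril(rng.normal(size=(K, K)) * 0.4, k=-1) + 0.8 * np.eye(K)
    W = rng.normal(size=(K, D))           # fixed "renderer" from latents to observations

    z = np.zeros((T, K))                  # ground-truth causal variables (hidden from the learner)
    I = rng.random((T, K)) < 0.2          # known intervention targets per timestep

    for t in range(1, T):
        mechanism = A @ z[t - 1] + 0.1 * rng.normal(size=K)  # observational dynamics
        intervened = rng.normal(size=K)                      # interventions replace the mechanism
        z[t] = np.where(I[t], intervened, mechanism)

    x = np.tanh(z @ W)                    # high-dimensional observations (stand-in for video frames)

    # A learner in the CITRIS-style setting sees only (x, I) and must recover z
    # up to permutation and element-wise transformations.
    print(x.shape, I.shape)               # (100, 16) (100, 3)

In the BISCUIT setting described above, the known intervention targets I would instead be replaced by actions with unknown effects on the environment.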

Bio: Sara Magliacane is an assistant professor in the Amsterdam Machine Learning Lab at the University of Amsterdam and a Research Scientist at the MIT-IBM Watson AI Lab. Her research focuses on three directions: causal representation learning, causality-inspired machine learning, and using ideas from causality to help RL methods adapt to new domains and nonstationarity faster. The goal is to leverage ideas from causality to make ML methods robust to distribution shift and generalizable across domains and tasks. She also continues working on causal discovery, i.e., learning causal relations from data. Previously, she was a postdoctoral researcher at IBM Research NY, working on methods to design experiments that allow one to learn causal relations in a sample-efficient and intervention-efficient way. She received her PhD from VU Amsterdam on learning causal relations jointly from different experimental settings, even with latent confounders and small samples.
