Speaker
Description
Artificial intelligence is increasingly relied on to assist with complex tasks by leveraging vast amounts of data. Building useful representations is a core ingredient of the performance of such systems, and arguably goes beyond the mere extraction of statistical information from observed data. One way to express desiderata for such representations is through modifications to the data generation process: we "understand" and trust a system when we comprehend its behavior in response to plausible and meaningful changes in the environment it is exposed to. Causality offers a comprehensive framework for modeling such changes through the concepts of interventions and counterfactuals. Focusing mainly on generative models, I will illustrate how causal desiderata can be used to guide representation learning, such that important aspects of the ground-truth data generation process can be recovered. I will further elaborate on how these principles can also be applied in contexts that leverage domain knowledge in the form of scientific simulations instead of real data, and highlight some open questions raised by the use of AI in Science.
Contribution length: Long