27 November 2023 to 1 December 2023
Timezone: Europe/Paris

An unfolding method based on conditional Invertible Neural Networks (cINN) using iterative training

30 Nov 2023, 09:45
25m

Speaker

Mathias Josef Backes (Kirchhoff Institut für Physik)

Description

The unfolding of detector effects is crucial for the comparison of data to theory predictions. While traditional methods are limited to representing the data in a low number of dimensions, machine learning has enabled new unfolding techniques that retain the full dimensionality. Generative networks such as invertible neural networks (INN) enable a probabilistic unfolding, which maps individual data events to their corresponding unfolded probability distributions. The accuracy of such methods is, however, limited by how well the simulated training samples model the experimental data.
We introduce the iterative conditional INN (IcINN) for unfolding, which adjusts for deviations between simulated training samples and data. The IcINN unfolding is first validated on toy data and then applied to pseudo-data for the $pp \rightarrow Z \gamma \gamma$ process. Additionally, we validate the probabilistic unfolding against traditional transfer-matrix-based methods using a novel approach.
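To illustrate the general idea on a toy example, the sketch below implements a probabilistic unfolding with a conditional invertible map and an iterative reweighting step. It is a minimal sketch, not the published IcINN implementation: the 1D Gaussian toy data, the single conditional affine block (standing in for a full coupling-block cINN), the hyperparameters, and the histogram-ratio reweighting are all illustrative assumptions.

```python
# Minimal 1D sketch of cINN-style probabilistic unfolding with an iterative
# reweighting step. Toy Gaussian data and a single conditional affine block
# are illustrative simplifications of the full IcINN approach.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
rng = np.random.default_rng(0)

# Toy "simulation" and "pseudo-data": truth x, detector level y = x + smearing
n = 20000
x_sim = rng.normal(0.0, 1.0, n)            # simulated truth
y_sim = x_sim + rng.normal(0.0, 0.5, n)    # simulated detector level
x_dat = rng.normal(0.3, 1.2, n)            # pseudo-data truth (unknown in practice)
y_dat = x_dat + rng.normal(0.0, 0.5, n)    # observed pseudo-data

to_t = lambda a: torch.tensor(a, dtype=torch.float32).unsqueeze(1)

class ConditionalAffineFlow(nn.Module):
    """Single conditional affine block: z = (x - t(y)) * exp(-s(y))."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2))

    def forward(self, x, y):
        s, t = self.net(y).chunk(2, dim=1)
        z = (x - t) * torch.exp(-s)
        log_det = -s                        # log |dz/dx|
        return z, log_det

    def sample(self, y):
        s, t = self.net(y).chunk(2, dim=1)
        z = torch.randn_like(t)
        return z * torch.exp(s) + t         # inverse map: unfolded sample

def train(flow, x, y, w, epochs=300):
    opt = torch.optim.Adam(flow.parameters(), lr=1e-2)
    for _ in range(epochs):
        z, log_det = flow(x, y)
        # weighted negative log-likelihood under a standard-normal latent
        nll = (w * (0.5 * z**2 - log_det)).mean()
        opt.zero_grad(); nll.backward(); opt.step()

flow = ConditionalAffineFlow()
weights = torch.ones(n, 1)

for it in range(3):                         # iterative (re)training
    train(flow, to_t(x_sim), to_t(y_sim), weights)
    with torch.no_grad():
        x_unf = flow.sample(to_t(y_dat)).squeeze().numpy()   # unfold pseudo-data
    # Reweight the simulated truth towards the unfolded spectrum (histogram
    # ratio), mimicking the iterative adjustment for simulation/data differences.
    bins = np.linspace(-4, 4, 30)
    h_unf, _ = np.histogram(x_unf, bins=bins, density=True)
    h_sim, _ = np.histogram(x_sim, bins=bins, density=True)
    ratio = np.divide(h_unf, h_sim, out=np.ones_like(h_unf), where=h_sim > 0)
    idx = np.clip(np.digitize(x_sim, bins) - 1, 0, len(ratio) - 1)
    weights = to_t(ratio[idx])

print("unfolded mean/std :", x_unf.mean(), x_unf.std())
print("pseudo-data truth :", x_dat.mean(), x_dat.std())
```

In a realistic setting the single affine block would be replaced by a stack of conditional coupling blocks, and the histogram-ratio step by a higher-dimensional reweighting (e.g. a classifier), but the structure of the loop is the same: train, unfold, reweight the simulation, retrain.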

The main results of this project have been published in a paper (arXiv:2212.08674, https://arxiv.org/abs/2212.08674). A second paper with a stronger focus on the probabilistic unfolding will be published prior to the conference.

Primary authors

Anja Butter (LPNHE); Bogdan MALAESCU (LPNHE, Paris, FRANCE); Mathias Josef Backes (Kirchhoff Institut für Physik); Prof. Monica Dunford (Kirchhoff Institut für Physik)

Presentation materials