27 November 2023 to 1 December 2023
Timezone: Europe/Paris

Using an adversary trained on a control sample to control systematic errors

30 Nov 2023, 17:00
25m
Architectures (Adversarial, Bayesian, ...) / Controlling uncertainties in generative models

Speaker

Gordon Watts (University of Washington)

Description

Machine Learning improved the sensitivity in searches for massive long-lived neutral particles decaying in the calorimeter by over 30%, but only after suppressing a large increase in the systematic errors caused by the method. The largest contribution to this improvement in sensitivity is a Recurrent Neural Network (RNN) that separates signal from standard QCD multijet background and Beam Induced Background (BIB). The classifier uses low-level data such as relative calorimeter cluster locations, tracks, and muon segments. We exploit the calorimeter cell energy deposit time as a powerful handle to reject beam induced background, which is poorly simulated by the ATLAS experiment's Monte Carlo simulation package. In addition, the beam induced background training dataset can only be drawn from data, so the RNN training set contains a mix of poorly simulated Monte Carlo data and LHC collision data. A control dataset was used to train an adversary simultaneously with the signal and background samples used for the RNN. The adversary is trained to distinguish collision data from simulated data, and its success is part of the main network's loss function. This dramatically reduced the systematic errors due to Monte Carlo mis-modeling. This presentation will discuss the network design, how it was modified when the problem(s) were discovered, and its performance.
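
As a rough illustration of the adversarial setup described above (not the ATLAS implementation), the sketch below shows one common way to fold the adversary's success into the main network's loss: a gradient-reversal layer in PyTorch, in which the adversary learns to distinguish collision data from simulation while the shared features are penalised whenever it succeeds. The abstract does not state which formulation was actually used; the module names, layer sizes, and the choice of a GRU here are illustrative assumptions.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity on the forward pass; flips the gradient sign on backward,
        so the shared feature network is pushed to *confuse* the adversary."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    # Hypothetical sizes; the real network is an RNN over low-level inputs
    # (cluster positions, tracks, muon segments).
    features = nn.GRU(input_size=8, hidden_size=32, batch_first=True)
    classifier = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 3))  # signal / QCD / BIB
    adversary = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))   # data vs. simulation

    def step(x, y_class, y_is_data, lam=1.0):
        """One combined step: classification loss plus a reversed adversary loss,
        so good adversary performance penalises the shared representation.
        (In a real analysis the classification term would only be evaluated on
        labelled samples and the adversary term on the control dataset.)"""
        _, h = features(x)          # final hidden state as the shared representation
        h = h.squeeze(0)
        loss_cls = nn.functional.cross_entropy(classifier(h), y_class)
        adv_logit = adversary(GradReverse.apply(h, lam)).squeeze(1)
        loss_adv = nn.functional.binary_cross_entropy_with_logits(adv_logit, y_is_data)
        return loss_cls + loss_adv  # gradient reversal makes this adversarial for `features`

The coefficient lam trades off raw classification power against data/simulation agreement: larger values drive the learned features toward quantities that look the same in collision data and Monte Carlo, which is what reduces the mis-modeling systematic.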

Primary author

Gordon Watts (University of Washington)

Presentation materials