November 27, 2023 to December 1, 2023
Europe/Paris timezone

Session

Architectures

Dec 1, 2023, 9:00 AM

Conveners


  • Gregor Kasieczka (Hamburg University)
  • David Rousseau (IJCLab, CNRS/IN2P3, Université Paris-Saclay)


  1. Tommaso Dorigo (INFN Sezione di Padova)
    12/1/23, 9:00 AM
    Architectures (Adversarial, Bayesian, ... )

    In this presentation I will discuss recent trends in the handling of systematic uncertainties in HEP analysis tasks, and techniques proposed to mitigate or remove their effect in the search for optimal selection criteria and variable transformations.
    The approaches discussed include nuisance-parametrized models, modified adversary losses, semi-supervised learning approaches, inference-aware...

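The "modified adversary losses" mentioned above can be illustrated with a toy sketch (hypothetical functions and numbers, not the speaker's implementation): the classifier is trained on a combined objective that subtracts the adversary's loss, so a classifier output that lets the adversary infer the nuisance parameter is penalized, decorrelating the selection from the systematic.

```python
import math

def bce(p, y):
    """Binary cross-entropy for one prediction p in (0, 1) against label y."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def combined_loss(clf_losses, adv_losses, lam=1.0):
    """Adversarial objective: mean classifier loss minus lambda times the
    mean adversary loss. Minimizing this rewards classifier accuracy while
    punishing outputs from which the adversary recovers the nuisance."""
    l_clf = sum(clf_losses) / len(clf_losses)
    l_adv = sum(adv_losses) / len(adv_losses)
    return l_clf - lam * l_adv
```

In practice the classifier and adversary are alternately updated networks; `lam` trades classification power against robustness to the systematic.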
  2. Eiji Kawasaki (CEA)
    12/1/23, 9:45 AM
    Architectures (Adversarial, Bayesian, ... )

    The development of an effective Uncertainty Quantification method that computes the predictive distribution by marginalizing over Deep Neural Network parameter sets remains an important, challenging task. In this context, Markov Chain Monte Carlo algorithms do not scale well to large datasets, which makes Neural Network posterior sampling difficult. During this talk, we'll show that a...

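The marginalization the abstract refers to is typically approximated by Monte Carlo: average the network's prediction over draws from the parameter posterior. A minimal sketch, assuming a hypothetical one-parameter "network" (a logistic unit) standing in for a deep model:

```python
import math

def predict(x, theta):
    """Hypothetical one-parameter model: logistic of theta * x."""
    return 1.0 / (1.0 + math.exp(-theta * x))

def predictive_mean(x, theta_samples):
    """Approximate p(y|x) = integral p(y|x, theta) p(theta|D) dtheta
    by averaging predictions over posterior parameter samples."""
    return sum(predict(x, t) for t in theta_samples) / len(theta_samples)
```

The hard part, and the subject of the talk, is obtaining the posterior samples `theta_samples` at deep-network scale; the averaging step itself is cheap.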
  3. Sebastian Bieringer (Hamburg University, Institute for experimental physics)
    12/1/23, 10:15 AM
    Architectures (Adversarial, Bayesian, ... )

    Bayesian neural networks are a key technique for including uncertainty predictions in neural network analysis, be it in classification, regression or generation. Although they are an essential building block of classical Bayesian techniques, Markov Chain Monte Carlo methods are seldom used to sample Bayesian neural network weight posteriors due to slow convergence rates in high dimensional...

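For intuition on what "sampling the weight posterior" means, here is a toy Metropolis-Hastings sampler on a single weight with a Gaussian likelihood and a standard-normal prior (an illustrative one-dimensional stand-in, not the method of the talk; the convergence problems it describes appear precisely when the weight space has millions of dimensions):

```python
import math
import random

def log_post(w, data):
    """Toy log-posterior: Gaussian likelihood around w plus a N(0, 1) prior."""
    log_lik = -0.5 * sum((x - w) ** 2 for x in data)
    log_prior = -0.5 * w * w
    return log_lik + log_prior

def metropolis(data, n_steps=5000, step=0.5, seed=0):
    """Random-walk Metropolis: propose w' ~ N(w, step), accept with
    probability min(1, post(w') / post(w))."""
    rng = random.Random(seed)
    w = 0.0
    lp = log_post(w, data)
    samples = []
    for _ in range(n_steps):
        prop = w + rng.gauss(0.0, step)
        lp_prop = log_post(prop, data)
        if math.log(rng.random()) < lp_prop - lp:
            w, lp = prop, lp_prop
        samples.append(w)
    return samples
```

For data `[1, 1, 1]` the posterior is Gaussian with mean 3/4, so the chain's long-run average should settle near 0.75; in high dimensions the same random walk mixes far too slowly, which motivates the alternatives discussed in the talk.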
  4. Gael Varoquaux (INRIA-Saclay)
    12/1/23, 11:15 AM
    Architectures (Adversarial, Bayesian, ... )

    Predictions from empirical evidence come with many sources of potential uncertainty and error. First, the specific choices of models and concepts that we attach to the observations strongly shape the resulting conclusion. Uncertainty about which functional form to use in a model naturally results in uncertainty about the conclusions. Outside of mature (post-paradigmatic) quantitative sciences...
