IN2P3/IRFU Machine Learning workshop 2024

Europe/Paris
Amphi Grünewald (IPHC, Strasbourg)

Amphi Grünewald

IPHC, Strasbourg

Bâtiment 27, BP28, 67037 Cedex 2, 23 Rue du Loess, 67200 Strasbourg
Description

The Zoom link is available at the bottom of this page (only after registration)

Registration is closed

The agenda is public

General Information

This workshop deals with all AI activities at IN2P3 and CEA/IRFU.
The workshop will take place in person at IPHC in Strasbourg, from Wednesday 20 November at 1:00 pm until Friday 22 November at 1:00 pm.
We will have a free social gathering on Thursday night.

Introduction

Machine Learning now impacts, or has the potential to impact, many aspects of particle physics, nuclear physics and astroparticle physics.

This workshop covers current developments with Machine Learning at IN2P3 and CEA-IRFU.

The call for contributions is not limited to finished work; work in progress and even Expressions of Interest are very welcome.

Note: results from a collaboration might need approval from the relevant collaboration board or national contact. Since this is a national workshop, contributions by students on ongoing work within a collaboration should be possible; please refer to your national contact for approval.

Tracks

  1. Object detection, object identification and reconstruction 

  2. Analysis: event classification, statistical analysis and inference, anomaly detection

  3. Simulations and surrogate models: replacing an existing complex physical model

  4. Fast ML: DAQ/Trigger/Real Time Analysis

  5. Infrastructure: hardware and software for Machine Learning

  6. Accelerator control

  7. Large / multi-modal language models for Physics

  8. Theory and phenomenology

  9. Training, courses, tutorials, open datasets and challenges

  10. Others

Mailing list

Please make sure you are subscribed to MACHINE-LEARNING-L@in2p3.fr on the IN2P3 listserv, and visit our website machine-learning.in2p3.fr to keep up to date with ML at IN2P3.

Organisation

Organisers

  • Alexandre Boucaud (APC/IN2P3)
  • Valérie Gautard (CEA/IRFU)
  • David Rousseau (IJCLab/IN2P3)
  • Thomas Vuillaume (LAPP/IN2P3)

Local organising committee

  • Eric Chabert (IPHC, IN2P3)
  • Giulio Dujany (IPHC, IN2P3)
  • Jérôme Pansanel (IPHC, IN2P3)

 

Registration
Participants
  • Abdelaziz Guelfane
  • Anne-Catherine Le Bihan
  • Boris Hippolyte
  • Brigitte PERTILLE RITTER
  • Charly Lassalle
  • Christian Bonnin
  • Damien Minenna
  • David Rousseau
  • Elio Sacchetti
  • Emmanuel Gangler
  • Eric CHABERT
  • Francis Osswald
  • Francoise BOUVET
  • Georges AAD
  • Giulio Dujany
  • Gourab Saha
  • Hayg Guler
  • Ismail Cherkaoui
  • Jean-Michel Gallone
  • Jonathan COLLIN
  • Mojahed Abushawish
  • Nicolas CHEVILLON
  • Olivier DORVAUX
  • Pauline Lafoux
  • Quentin Bonnefoy
  • RAGANSU CHAKKAPPAI
  • Sebastien Geiger
  • Thomas Vuillaume
  • Valérie Gautard
  • Vera MAIBORODA
  • and 27 more participants
    • 13:30 14:00
      Badge distribution 30m Amphi Grünewald

    • 14:00 18:00
      Wednesday afternoon Amphi Grünewald

      Session chair: Valérie Gautard (CEA-Irfu)
      • 14:00
        Welcome 15m
      • 14:15
        The development of innovative methods for fission trigger construction 25m

        The development of innovative methods for fission trigger construction is part of the FRØZEN project, which aims at a better understanding of angular momentum generation and of the energy partition between fragments in the fission process. Reconstructing the very first moments after the scission point is essential and requires correlated neutron and gamma detection as well as measuring the kinematic properties of the fission fragments. Such a measurement can be achieved thanks to the latest generation of hybrid $\gamma$-spectrometer, $\nu$-Ball2, coupled to a double Frisch-Grid Ionization Chamber (dFGIC). In this experiment, a spontaneous 252Cf fission source was used. However, for other fissioning systems that require a primary beam, fission can become a minor nuclear reaction compared to other processes. Additionally, with the increasing size of nuclear physics experimental setups and the need to recognize rarer reaction mechanisms, one of the main challenges in nuclear physics today is to develop ever more selective data analysis methods for increasingly contaminated datasets.

        AI models are being developed to replace the usual data analysis techniques for reconstructing fission events, exploring the limits of AI implementation in such a context. The first implementation is motivated by the computationally expensive and time-consuming character of the usual trace (sampled signal) analysis approaches currently used to analyze the dFGIC response and tag fission for the $\nu$-Ball2 setup. Promising regression and convolutional neural network models have been tested to obtain a precise fission tag time, the deposited energy, and the electron drift time from the dFGIC traces. The second implementation tackles the challenge of recognizing fission events in a polluted dataset by developing an AI-based algorithm that recognizes fission solely from the $\nu$-Ball2 response function. The fission fragment de-excitation process is reconstructed from the correlations between individual fission fragment pairs and observables such as gamma and neutron energies and multiplicities. AI models can be used to evaluate the impact of each observable on identifying fission. Once the algorithm is trained, it could be applied to various fissioning systems without the need for an ancillary fission tag detector such as the dFGIC.

        Speaker: Brigitte PERTILLE RITTER (IJCLab - Université Paris-Saclay)
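        A minimal sketch of the kind of 1D convolutional regression on sampled traces described above; the trace length, layer sizes and random data are illustrative assumptions, not the FRØZEN analysis code.

        # Illustrative only: a small 1D-CNN regressor mapping sampled detector traces
        # to three targets (tag time, deposited energy, drift time). Shapes, layer
        # sizes and the random "traces" are assumptions.
        import torch
        import torch.nn as nn

        class TraceRegressor(nn.Module):
            def __init__(self, n_samples=512, n_targets=3):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
                    nn.MaxPool1d(4),
                    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool1d(4),
                )
                self.head = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(32 * (n_samples // 16), 64), nn.ReLU(),
                    nn.Linear(64, n_targets),
                )

            def forward(self, x):          # x: (batch, 1, n_samples)
                return self.head(self.features(x))

        model = TraceRegressor()
        traces = torch.randn(8, 1, 512)    # fake batch of traces
        targets = torch.randn(8, 3)        # fake (time, energy, drift time)
        loss = nn.functional.mse_loss(model(traces), targets)
        loss.backward()                    # one illustrative training step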
      • 14:40
        Deep learning methods with uncertainty estimation for gamma photon interaction reconstruction in fast scintillators 25m

        This talk presents a physics-informed deep learning method for the quantitative estimation of the spatial coordinates of gamma interactions within a monolithic scintillator, with a focus on Positron Emission Tomography (PET) imaging. A Density Neural Network approach is designed to estimate the 2-dimensional gamma photon interaction coordinates in a fast lead tungstate (PbWO4) monolithic scintillator detector. We introduce a custom loss function to estimate the inherent uncertainties associated with the reconstruction process and to incorporate the physical constraints of the detector.
        This combination allows for more robust and reliable position estimates; the results demonstrate the effectiveness of the proposed approach and highlight the significant benefits of the uncertainty estimation. We discuss its potential impact on improving PET imaging quality and show how the results can be used to better exploit the model, to bring benefits to the application, and to evaluate the validity of a given prediction and its associated uncertainties. Importantly, the proposed methodology extends beyond this specific use case and can be generalized to other applications beyond PET imaging.

        Speaker: Dominique Yvon (CEA Saclay - IRFU/SPP)
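        A generic sketch of a density-network output head that predicts a 2D position together with per-coordinate uncertainties through a Gaussian negative log-likelihood; the input size is an assumption, and the physics-informed loss terms of the talk are not reproduced.

        # Illustrative only: a density-network head that outputs a 2D position (x, y)
        # and per-coordinate log-variances, trained with a Gaussian negative
        # log-likelihood so the network also reports its own uncertainty.
        import torch
        import torch.nn as nn

        class DensityHead(nn.Module):
            def __init__(self, n_inputs=64):
                super().__init__()
                self.body = nn.Sequential(nn.Linear(n_inputs, 128), nn.ReLU(),
                                          nn.Linear(128, 4))   # (mu_x, mu_y, logvar_x, logvar_y)

            def forward(self, x):
                out = self.body(x)
                return out[:, :2], out[:, 2:]

        def gaussian_nll(mu, logvar, target):
            # 0.5 * [ log(var) + (target - mu)^2 / var ], summed over x and y
            return 0.5 * (logvar + (target - mu) ** 2 / logvar.exp()).sum(dim=1).mean()

        model = DensityHead()
        signals = torch.randn(16, 64)      # fake detector-channel amplitudes
        positions = torch.rand(16, 2)      # fake (x, y) interaction positions
        mu, logvar = model(signals)
        loss = gaussian_nll(mu, logvar, positions)
        loss.backward()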
      • 15:05
        Embedded Neural Networks on FPGAs for Real-Time Computation of the Energy Deposited in the ATLAS Liquid Argon Calorimeter 25m

        The Phase-II upgrade of the LHC will increase its instantaneous luminosity by a factor of 5-7 leading to the HL-LHC. The ATLAS Liquid Argon (LAr) calorimeter measures the energy of particles produced in LHC collisions. In order to enhance the ATLAS physics discovery potential in the blurred environment created by the pileup, it is crucial to have an excellent energy resolution and an accurate detection of the energy-deposit time.

        The energy computation is currently performed with optimal filtering algorithms that assume a nominal pulse shape of the electronic signal. Up to 200 simultaneous proton-proton collisions are expected at the HL-LHC, which leads to a high rate of overlapping signals in a given calorimeter channel. This results in a significant energy degradation, especially for small time gaps between two consecutive pulses. We developed several neural network (NN) architectures showing significant performance improvements with respect to the filtering algorithms. These NNs are capable of recovering the degraded performance in the small time-gap region by using information from past events.

        The energy computation is performed in real time using dedicated electronic boards based on FPGAs. FPGAs are chosen for their capacity to process large amounts of data (O(1 Tb/s) per FPGA) with low latency (O(1000 ns)). The back-end electronic boards for the Phase-II upgrade of the LAr calorimeter will use the next high-end generation of INTEL FPGAs with increased processing power. This is a unique opportunity to develop more complex algorithms on these boards. Several hundred channels must be processed by each FPGA, and thus several hundred NNs must run on one FPGA. The energy computation must be done at a fixed latency of the order of 100 ns. The main challenge is to meet these stringent requirements in the firmware implementation.

        Special effort was dedicated to minimizing the number of computational operations while optimizing the NN architectures. Each internal operation of the NNs is optimized during the firmware implementation. This includes the implementation of complex mathematical functions in lookup tables (LUTs), the quantization of arithmetic operations using fixed-point representations and rounding, and the optimisation of the usage of FPGA logic elements. The firmware implementation results are compared to software, and the resolution due to firmware approximations was found to be better than 1%.

        RNN and dense NN architectures applied to a single cell in the calorimeter will be presented. The improvement in energy resolution compared to the legacy filter algorithms will be discussed. The results of the firmware implementation in VHDL and Quartus HLS will be presented. The implementation results on Stratix 10 and Agilex INTEL FPGAs, including resource usage, latency, and operating frequency, will be reported. Optimized implementations in VHDL are shown to fit the stringent requirements of the LASP firmware specifications. The steps taken towards implementing the NNs for the full LAr detector and quantifying the impact on the reconstruction of electrons and photons will also be discussed.

        Speaker: Georges AAD (CPPM)
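        A small sketch of the fixed-point quantization study mentioned above, applied to a toy dense network; it only illustrates the precision/rounding idea and is not the LAr VHDL/HLS firmware flow.

        # Illustrative only: quantizing the weights of a toy dense network to a
        # fixed-point representation (here 2 integer bits, 6 fractional bits) and
        # comparing outputs, mimicking the kind of precision study done before a
        # firmware implementation.
        import torch
        import torch.nn as nn

        def to_fixed_point(t, frac_bits=6, int_bits=2):
            scale = 2 ** frac_bits
            lo, hi = -2 ** int_bits, 2 ** int_bits - 1 / scale
            return torch.clamp(torch.round(t * scale) / scale, lo, hi)

        net = nn.Sequential(nn.Linear(5, 8), nn.ReLU(), nn.Linear(8, 1))
        x = torch.randn(100, 5)                    # fake calorimeter samples
        with torch.no_grad():
            y_float = net(x)
            for p in net.parameters():             # round every weight/bias in place
                p.copy_(to_fixed_point(p))
            y_fixed = net(x)
        rel_err = ((y_fixed - y_float).abs() / y_float.abs().clamp(min=1e-6)).median()
        print(f"median relative error after quantization: {rel_err.item():.2e}")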
      • 15:30
        Coffee break 30m
      • 16:00
        RAG-LAB: LLMs in the laboratories 10m

        RAG-LAB is a working group on the development and use of large language models (e.g. chatbots) in the laboratories for specific use cases.

        Speaker: Dr Thomas Vuillaume (LAPP, Univ. Savoie Mont-Blanc, CNRS)
      • 16:10
        IN2P3 Machine Learning project 15m
        Speakers: David Rousseau (IJCLab, Université Paris-Saclay), Dr Thomas Vuillaume (LAPP, Univ. Savoie Mont-Blanc, CNRS)
      • 16:25
        What future actions for the IN2P3 Machine Learning project? 1h
    • 09:00 13:00
      Thursday morning Amphi Grünewald

      Session chairs: Emmanuel Gangler (LPC), Julien DONINI (UBP/LPC/IN2P3)
      • 09:00
        GammaLearn: deep learning applied to CTAO event reconstruction 25m

        GammaLearn is a project to develop deep learning solutions for Imaging Atmospheric Cherenkov Telescope data analysis, and in particular for the Cherenkov Telescope Array Observatory (CTAO), currently under construction. Its first application is event reconstruction based on the images or videos recorded by Cherenkov telescopes.
        In this talk, I will present recent results obtained by applying domain adaptation to compensate for some of the issues arising from data vs simulation discrepancies. I will also present the associated DIRECTA project, which aims at applying DL in real time.

        Speaker: Dr Thomas Vuillaume (LAPP, Univ. Savoie Mont-Blanc, CNRS)
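        As an illustration of one standard domain-adaptation building block, a DANN-style gradient reversal layer, a minimal sketch follows; whether GammaLearn uses this exact variant is not stated in the abstract, so treat it as a generic example.

        # Illustrative only: the gradient-reversal layer used in adversarial domain
        # adaptation. A task head learns the physics regression while a domain head,
        # fed through the reversal, pushes the features to be simulation/data agnostic.
        import torch
        import torch.nn as nn

        class GradReverse(torch.autograd.Function):
            @staticmethod
            def forward(ctx, x, lam):
                ctx.lam = lam
                return x.view_as(x)

            @staticmethod
            def backward(ctx, grad_output):
                return -ctx.lam * grad_output, None

        features = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
        task_head = nn.Linear(32, 1)       # e.g. energy regression
        domain_head = nn.Linear(32, 2)     # simulation vs real data classifier

        x = torch.randn(32, 10)
        domain_labels = torch.randint(0, 2, (32,))
        f = features(x)
        task_loss = task_head(f).mean()                    # placeholder task loss
        domain_logits = domain_head(GradReverse.apply(f, 1.0))
        domain_loss = nn.functional.cross_entropy(domain_logits, domain_labels)
        (task_loss + domain_loss).backward()               # reversal encourages domain-invariant features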
      • 09:25
        Stereograph: stereoscopic event reconstruction using graph neural networks applied to CTAO 25m

        The CTAO (Cherenkov Telescope Array Observatory) is an international observatory currently under construction. With more than sixty telescopes, it will eventually be the largest and most sensitive ground-based gamma-ray observatory.

        CTAO studies the high-energy universe by observing gamma rays emitted by violent phenomena (supernovae, black hole environments, etc.). These gamma rays produce an atmospheric shower upon entering the atmosphere, which emits faint blue light, observed by CTAO’s highly sensitive cameras. The event reconstruction consists of analyzing the images produced by the telescopes to retrieve the physical properties of the incident particle (mainly direction, energy, and type).

        A standard method for performing this reconstruction consists of combining traditional image parameter calculations with machine learning algorithms, such as random forests, to estimate the particle's energy and class for each telescope. A second step, called stereoscopy, combines these monoscopic reconstructions into a global one using engineered weighted averages.

        In this work, we explore the possibility of using Graph Neural Networks (GNNs) as a suitable solution for combining information from each telescope. The "graph" approach aims to link observations from different telescopes, allowing analysis of the shower from multiple angles and producing a stereoscopic reconstruction of the events. We apply GNNs to CTAO-simulated data from the Northern hemisphere and show that they are a very promising approach to improving event reconstruction, providing a more performant stereoscopic reconstruction. In particular, we observe better energy and angular resolutions and enhanced separation between gamma photons and protons compared to the Random Forest method.

        Speaker: Ms Hana Ali Messaoud (LAPP, Univ. Savoie Mont-Blanc)
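        A minimal sketch of the message-passing idea behind a telescope-level GNN: each triggered telescope is a node, nodes exchange an aggregated message, and a global readout regresses event-level quantities. Feature sizes and the single message-passing round are assumptions, not the Stereograph model.

        # Illustrative only: combining per-telescope feature vectors with one tiny
        # message-passing step (mean over the other telescopes) before a global readout.
        import torch
        import torch.nn as nn

        class TinyTelescopeGNN(nn.Module):
            def __init__(self, n_feat=16, hidden=32):
                super().__init__()
                self.encode = nn.Sequential(nn.Linear(n_feat, hidden), nn.ReLU())
                self.message = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
                self.readout = nn.Linear(hidden, 3)   # e.g. (energy, alt, az)

            def forward(self, tel_feats):              # (n_telescopes, n_feat) for one event
                h = self.encode(tel_feats)
                mean_others = (h.sum(0, keepdim=True) - h) / max(h.shape[0] - 1, 1)
                h = self.message(torch.cat([h, mean_others], dim=1))
                return self.readout(h.mean(0))         # event-level prediction

        model = TinyTelescopeGNN()
        event = torch.randn(4, 16)                     # 4 triggered telescopes, 16 features each
        print(model(event))                            # 3 regressed quantities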
      • 09:50
        Interpretability of anomalies in featurized data with signatures 25m

        Machine learning is often viewed as a black box when it comes to understanding its output, be it a decision or a score. Automatic anomaly detection is no exception to this rule, and quite often the data analyst is left to independently analyze the data in order to understand why a given event is tagged as an anomaly. Worse, the expert may end up scrutinizing over and over the same kind of rare phenomena, which all share a high anomaly score (quite often due to noisy or bad-quality data), while missing anomalies of physical interest. In this presentation, I will introduce the idea of anomaly signatures, whose aim is to help the interpretability of anomalies by highlighting which features contributed to the decision. I will present concrete applications to the search for anomalies in time-domain astrophysics within the framework of the SNAD team.

        Speaker: Emmanuel Gangler (LPC)
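        A crude sketch of the anomaly-signature idea: attribute an anomaly score to individual features by "healing" one feature at a time and measuring how much the score recovers. The actual SNAD signature construction may differ; this only conveys the interpretability principle.

        # Illustrative only: per-feature contribution to an Isolation Forest anomaly score.
        import numpy as np
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 5))
        X[0, 2] = 8.0                       # plant one anomalous value in feature 2

        forest = IsolationForest(random_state=0).fit(X)
        medians = np.median(X, axis=0)

        def signature(x):
            base = forest.score_samples([x])[0]          # lower = more anomalous
            contrib = []
            for j in range(len(x)):
                x_mod = x.copy()
                x_mod[j] = medians[j]                    # "heal" feature j
                contrib.append(forest.score_samples([x_mod])[0] - base)
            return np.array(contrib)                     # large value = feature drove the anomaly

        print(signature(X[0]).round(3))                  # feature 2 should dominate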
      • 10:15
        Coffee break 30m
      • 10:45
        Gamma-ray spectrometry of fission fragments: ML analysis of multi-dimensional spectra 25m

        The analysis of gamma radiation emitted by fission fragments has become an essential tool for studying the nuclear fission process. It allows probing the intrinsic properties of the fragments or exploring effects that are little studied experimentally, such as the sharing of excitation energy between fragments during nuclear fission.

        However, the analysis of experimental fission gamma-ray data using traditional techniques is time-consuming and complex. The main task is to find and extract peak intensities on 2D or 3D distributions (gamma-ray energies measured in coincidence), which are filled with thousands of peaks of variable amplitude, often overlapping with significant background noise. Classical methods rely on large models that can be difficult to fit. To overcome this, we implemented a Convolutional Neural Network (UNET-like architecture) and trained it using synthetic data that closely imitate experimental data. To account for uncertainties in the input histograms and provide uncertainty estimates for the predicted intensities, we use an approach based on resampling and ensemble methods.

        Preliminary results of applying the neural network to synthetic data indicate promising accuracy in identifying peak intensities, but further investigation is required to determine if this approach outperforms classical fit methods. The final goal is to apply the trained model to real data obtained with the FIPPS instrument (a high-resolution HPGe spectrometer) at the nuclear facility of the Laue-Langevin Institute (ILL) to provide experimental verification of fission-delayed gamma-ray modelling.

        Speaker: Mattéo Ballu
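        A minimal sketch of the resampling-plus-ensemble recipe for propagating counting uncertainties through a trained model; the network itself is replaced by a dummy function here, and the histogram is synthetic.

        # Illustrative only: Poisson-resample a 2D histogram many times, run the
        # (placeholder) model on each replica, and take the spread as the uncertainty.
        import numpy as np

        rng = np.random.default_rng(1)
        histogram = rng.poisson(lam=20.0, size=(64, 64)).astype(float)   # fake gamma-gamma matrix

        def predict_intensities(h):
            # placeholder for the trained UNET-like network: returns some per-peak intensities
            return np.array([h[10:14, 10:14].sum(), h[30:34, 40:44].sum()])

        n_replicas = 200
        preds = np.stack([
            predict_intensities(rng.poisson(histogram))   # resample each bin
            for _ in range(n_replicas)
        ])
        print("intensities:", preds.mean(axis=0))
        print("uncertainties:", preds.std(axis=0))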
      • 11:10
        Utilizing machine learning for the data analysis of AGATA's PSA database 25m

        In-beam gamma-ray spectroscopy, particularly with high-velocity recoil nuclei, necessitates precise Doppler correction. The Advanced GAmma Tracking Array (AGATA) represents a groundbreaking development in gamma-ray spectrometers, boasting the ability to track gamma-rays within the detector. This capability leads to exceptional position resolution which ensures optimal Doppler corrections.

        AGATA's design features high-purity germanium crystals, with each crystal divided electrically into 36 segments for enhanced detection accuracy. The core of AGATA's position resolution lies in the Pulse Shape Analysis (PSA) algorithm, responsible for pinpointing gamma-ray interaction locations. This algorithm functions by matching observed signals with a pre-established database of signals. However, the current model of relying solely on simulated signals for the PSA database presents limitations. In contrast, utilizing experimental data for building the PSA database promises significant improvements in accuracy and efficiency.

        The experimental data are acquired by scanning the crystal with collimated gamma-ray sources. Using the so-called Strasbourg Scanning Table, the crystal is scanned both horizontally and vertically; the gathered signals are then matched using the Pulse Shape Coincidence Scan (PSCS) algorithm to be assigned to a unique 3D position. The PSCS is notably time-intensive, requiring several days to analyse an entire dataset.

        In this work, we propose a new algorithm to replace the PSCS, based on machine learning techniques. Specifically, we employ Long Short-Term Memory (LSTM) networks, renowned for their robustness and their ability to decipher time series. The loss function has been adapted to incorporate the specificities of the Strasbourg scanning table. With this model, the processing time of the signals was brought down to about an hour. Different metrics were used to compare our new results to the PSCS reference, indicating greater consistency and accuracy.

        Speaker: Mojahed Abushawish (Lyon-IP2I)
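        A minimal sketch of an LSTM mapping a sampled pulse shape to a 3D interaction position, the generic shape of the approach described above; trace length, hidden size and the random data are assumptions, not the AGATA scanning-table pipeline.

        # Illustrative only: an LSTM regressor from a segment's sampled pulse to (x, y, z).
        import torch
        import torch.nn as nn

        class PulseToPosition(nn.Module):
            def __init__(self, hidden=64):
                super().__init__()
                self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
                self.head = nn.Linear(hidden, 3)          # (x, y, z) in the crystal

            def forward(self, traces):                    # traces: (batch, time, 1)
                _, (h_last, _) = self.lstm(traces)
                return self.head(h_last[-1])

        model = PulseToPosition()
        traces = torch.randn(32, 120, 1)                  # fake batch of sampled pulses
        positions = torch.rand(32, 3)
        loss = nn.functional.mse_loss(model(traces), positions)
        loss.backward()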
      • 11:35
        Grants for AI projects 50m
        Speakers: David Rousseau (IJCLab, Université Paris-Saclay), Dr Thomas Vuillaume (LAPP, Univ. Savoie Mont-Blanc, CNRS)
    • 14:00 18:20
      ML for accelerators Amphi Grünewald

      Session chairs: Francis Osswald (IPHC), Hayg Guler (IJCLAB)
      • 14:00
        Anomaly detection and noise reduction in Turn by Turn BPMs signals of SuperKEKB main rings 25m

        SuperKEKB and future circular colliders aim at luminosities as high as $10^{35}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}$. This requires very high beam currents and very small beam sizes (nano-beams). In order to reach such beam sizes, accelerator physicists need to control the beam quality and the accelerator optics. In particular, controlling even small linear and nonlinear effects that can perturb the optics is crucial. This is possible thanks to turn-by-turn Beam Position Monitor (BPM) signals, which allow us to reconstruct the optics parameters and to identify the presence of imperfections. Therefore, precise BPM signal processing becomes increasingly important for present and future colliders. Recent advancements in artificial intelligence (AI) and machine learning (ML) offer new methods to enhance the quality of BPM data, leading to better diagnostics and more accurate results. Here we present an exploratory study of advanced AI techniques applied to SuperKEKB turn-by-turn BPM data, aiming to detect faulty BPMs and reduce noise levels in their FFT spectra.

        Speakers: Mr Abdelaziz Guelfane (CentraleSupélec), Ismail Cherkaoui (CentraleSupélec)
      • 14:25
        Data exploration and machine learning for anomaly detection on the Arronax accelerator 25m

        ARRONAX (Accélérateur pour la Recherche en Radiochimie et Oncologie à Nantes Atlantique) is a multi-particle cyclotron capable of producing high-intensity (2 × 375 μA) and high-energy (70 MeV) protons. It ensures the precise delivery of ion beams to the target by guaranteeing their required energy and properties. However, anomalies can occur, compromising the reliability of the system and leading to costly interruptions. In this context, a comparative study of anomaly detection methods, including statistical approaches (the interquartile (IQ) method and Grubbs' test), machine learning (OCSVM) and deep learning (autoencoder), is carried out on time series of the beam intensity on target. First results show that the two statistical methods studied have significant limitations in detecting anomalies, notably in terms of recall and F1 score, whereas the machine learning methods, whether classical or modern, are more effective at identifying abnormal intensity variations.

        Speaker: Fatima Basbous (Arronax)
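        A minimal sketch comparing the interquartile-range rule with a One-Class SVM on a synthetic beam-intensity time series with injected drops, in the spirit of the comparison described above; thresholds and the synthetic signal are assumptions.

        # Illustrative only: IQR cut vs One-Class SVM on a fake beam-current series.
        import numpy as np
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(2)
        intensity = 370 + rng.normal(0, 2, size=2000)     # fake beam current (uA)
        anomaly_idx = rng.choice(2000, size=20, replace=False)
        intensity[anomaly_idx] -= rng.uniform(20, 100, size=20)   # injected drops

        # IQR rule
        q1, q3 = np.percentile(intensity, [25, 75])
        iqr_flags = (intensity < q1 - 1.5 * (q3 - q1)) | (intensity > q3 + 1.5 * (q3 - q1))

        # One-Class SVM on short sliding windows
        win = 10
        windows = np.lib.stride_tricks.sliding_window_view(intensity, win)
        svm_flags = OneClassSVM(nu=0.01, gamma="scale").fit_predict(windows) == -1

        print("points flagged by IQR:", iqr_flags.sum(),
              "windows flagged by OCSVM:", svm_flags.sum())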
      • 14:50
        High-resolution image reconstruction with unsupervised learning and noisy data applied to ion-beam dynamics 25m

        We want to develop a new numerical analysis tool using artificial intelligence techniques to improve the denoising, segmentation, and reconstruction of images that characterize beams of accelerated particles. These improvements aim to increase the accuracy of measurements, in particular to better characterize the halo of the beams, and to reduce beam losses, ultimately making processes more sustainable. This is a disruptive innovation because the number of works on an international scale in this field is very limited. The exploratory project is at the feasibility-study stage, and the next step is to conduct a comparative study with existing AI/ML tools.

        The variability of the experimental conditions, and in particular of the background noise on our installations, requires a specific approach with systematic training and retraining. Furthermore, the absence of noise-free images and of a standardized noise model, as well as the limited amount of training data, orient the study towards recent solutions requiring reduced, unsupervised learning, such as those proposed by deep generative networks.

        The applications of these developments are numerous and go far beyond the scope of the project in its current version. Indeed, the restoration of corrupted and noisy images using reduced data sets and unsupervised methods is useful in astronomy (celestial object detection), in health and medical imaging (in vivo functional imaging to visualize the development of a pathology, drug progression in organs, microscopy), in life sciences (classification of animal species), and in high-energy physics (trajectory reconstruction for charged-particle detectors).

        Speaker: Francis Osswald (IPHC)
      • 15:15
        Heat load observer studies on SPIRAL2 data 25m

        We present work on heat load neural observers for the SPIRAL2 superconducting linear accelerator at GANIL. This virtual diagnostic focuses on the superconducting (SC) radiofrequency (RF) cavities which accelerate the particle beam. The cavities are housed in cryomodules, structures that ensure their cryogenic and radiofrequency operation in the superconducting state. Actuators control the pressure and the level of the liquid helium baths. Along with additional process measurements such as temperatures, liquid helium levels and pressures, these provide valuable information that is normally not accessible during beam operation: the heat load dissipated by the RF cavities. In addition to the RF data from the low-level radio frequency system, dynamic heat loads would enable a continuous indirect estimation of the cavities' quality factor $Q_0$, in order to monitor the SC state and anticipate potential efficiency degradation of the accelerator. To achieve this, we apply neural networks to multivariate time series. Several architectures have been studied (multilayer perceptron, convolutional, recurrent), as well as a stacked generalization method. Work is in progress to improve and test the generalization of these models to different dynamics and/or cryomodules by adding additional information such as the cryomodule identifier. Subsequently, and with the aim of making the installation more reliable for experimenters, work will be carried out on anomaly detection using RF and high-sampling-rate pressure data.

        Speaker: Charly Lassalle (Université de Caen Normandie / GANIL)
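        A minimal sketch of the stacked-generalization idea mentioned above, combining two base regressors through a meta-model on lagged process measurements; the synthetic signals and the feature construction are assumptions, not the SPIRAL2 data pipeline.

        # Illustrative only: a stacked regressor as a toy "heat load" observer.
        import numpy as np
        from sklearn.ensemble import StackingRegressor, RandomForestRegressor
        from sklearn.linear_model import Ridge
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(6)
        t = np.arange(3000)
        pressure = 1.2 + 0.01 * np.sin(0.01 * t) + 0.001 * rng.normal(size=t.size)
        level = 60 + 2 * np.cos(0.005 * t) + 0.05 * rng.normal(size=t.size)
        heat_load = 10 + 50 * (pressure - 1.2) + 0.1 * (level - 60) + 0.05 * rng.normal(size=t.size)

        lags = 5                                    # stack a few past samples as features
        X = np.column_stack([np.roll(pressure, k) for k in range(lags)] +
                            [np.roll(level, k) for k in range(lags)])[lags:]
        y = heat_load[lags:]

        model = StackingRegressor(
            estimators=[("mlp", MLPRegressor(hidden_layer_sizes=(32,), max_iter=500)),
                        ("forest", RandomForestRegressor(n_estimators=50))],
            final_estimator=Ridge())
        model.fit(X[:2000], y[:2000])
        print("held-out R^2:", round(model.score(X[2000:], y[2000:]), 3))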
      • 15:40
        Coffee break 30m
      • 16:10
        Echo State Network for Dynamic Aperture prediction 25m

        The technological advances of today's storage rings and colliders have elevated nonlinear beam dynamics to the forefront of accelerator design and operation. In the field of single-particle beam dynamics, the concept of dynamic aperture (DA), that is, the extent of the phase-space region where bounded motion occurs, is a key observable to guide the design of present machines, e.g. the CERN Large Hadron Collider (LHC) [2], and future ones (see e.g. [8–15]). Determining how to describe and efficiently predict the value of the DA might solve some fundamental problems in accelerator physics linked to the performance optimization of storage rings and colliders. The high computational cost of direct numerical simulations would be significantly reduced if a reliable model for the time evolution of the DA were available.
        In [1], we investigated the ability of an ensemble reservoir computing approach based on Echo State Networks (ESN) to predict the long-term evolution of the DA in hadron storage rings. We present here further studies aiming to automate as much as possible the search for optimal data splitting and ESN hyper-parameters for the DA application.
        References:
        [1] M. Casanova, B. Dalena, L. Bonaventura, and M. Giovannozzi, Ensemble reservoir computing for dynamical systems: prediction of phase-space stable region for hadron storage rings, The European Physical Journal Plus 138 (2023). https://doi.org/10.1140/epjp/s13360-023-04167-y
        [2] O.S. Brüning, P. Collier, P. Lebrun, S. Myers, R. Ostojic, J. Poole, P. Proudlock, LHC design report, CERN Yellow Rep. Monogr., CERN, Geneva (2004). https://doi.org/10.5170/CERN-2004-003-V-1
        [3] R. Appleby et al., Dynamic aperture studies of the nuSTORM FFAG ring, in Proceedings of IPAC'14 (JACoW Publishing, Geneva), pp. 1574-1577. https://doi.org/10.18429/JACoW-IPAC2014-TUPRI013.pdf ; https://jacow.org/IPAC2014/papers/TUPRI013.pdf
        [4] Y.C. Jing, V. Litvinenko, D. Trbojevic, Optimization of dynamic aperture for hadron lattices in eRHIC, in Proceedings of IPAC'15 (JACoW Publishing, Geneva), pp. 757-759. https://doi.org/10.18429/JACoW-IPAC2015-MOPMN027.pdf ; https://jacow.org/IPAC2015/papers/MOPMN027.pdf
        [5] B. Dalena et al., First evaluation of dynamic aperture at injection for FCC-hh, in Proceedings of IPAC'16 (JACoW Publishing, Geneva), pp. 1466-1469. https://doi.org/10.18429/JACoW-IPAC2016-TUPMW019.pdf ; https://jacow.org/ipac2016/papers/TUPMW019.pdf

        Speaker: Valérie Gautard (CEA-Irfu)
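        A minimal sketch of a single echo state network for one-step-ahead prediction of a scalar series, the basic ingredient of the ensemble-ESN study cited above [1]; reservoir size, spectral radius and the toy series are assumptions.

        # Illustrative only: a tiny echo state network with a ridge-regressed readout.
        import numpy as np

        rng = np.random.default_rng(3)
        n_res, rho, washout = 200, 0.9, 50
        series = np.sin(0.1 * np.arange(1500)) + 0.01 * rng.normal(size=1500)  # toy "DA vs turns"

        W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
        W = rng.normal(size=(n_res, n_res))
        W *= rho / np.max(np.abs(np.linalg.eigvals(W)))          # fix spectral radius

        states = np.zeros((len(series) - 1, n_res))
        x = np.zeros(n_res)
        for t in range(len(series) - 1):
            x = np.tanh(W_in[:, 0] * series[t] + W @ x)           # reservoir update
            states[t] = x

        # Ridge-regress the readout on post-washout states, then check 1-step predictions
        X, y = states[washout:], series[washout + 1:]
        W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
        pred = X @ W_out
        print("one-step RMSE:", float(np.sqrt(np.mean((pred - y) ** 2))))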
      • 16:35
        Modelling dynamical systems: Learning ODEs with no internal ODE resolution 25m

        Accurately modeling and simulating dynamic systems remains a central challenge in computational physics and numerical engineering. Traditional approaches, such as time series prediction and ordinary differential equation (ODE) modeling, have been widely explored in the literature. However, these methods fall short when applied to the complex and potentially discontinuous behavior of particle accelerator beams.
        To address this challenge, we introduce a novel method called Inode, which leverages integral operators to handle discontinuities in beam dynamics. Inode reformulates the problem as a classical regression task, where pre-processed data defines the system behavior. The overall model is then cast as the solution to an ODE, while the regression component allows the computationally intensive ODE resolution to be removed from the training process.
        We provide a formal analysis demonstrating Inode's consistency and convergence under reasonable assumptions. Experimental results further validate the method's robustness across both standard dynamic systems and particle accelerator data, showing significant advantages in computational efficiency and modeling flexibility.
        This approach opens new avenues for accurate, scalable modeling of particle accelerator beams, addressing key limitations in existing techniques.

        Speaker: Hayg Guler (IJCLAB)
      • 17:00
        ALESIA: Superconducting magnet design through multi-physics optimisation 25m

        Designing superconducting magnets presents a challenge due to their multi-physics complexity, diverse analytical tools, and often imprecise specifications. To streamline this process, we introduce ALESIA, a novel optimisation and data management toolbox developed at CEA-IRFU.

        ALESIA leverages advanced algorithms, including nonlinear programming techniques, evolutionary algorithms, active learning strategies, and surrogate modelling, to accelerate the design process. By intelligently exploring the parameter space, ALESIA enables rapid convergence towards optimal solutions while minimizing computational cost.

        ALESIA's flexible architecture allows integration with any physics simulation software, encompassing magnetic field calculations (OPERA) and mechanical analysis (CAST3M), but its applicability can be broadened beyond magnet design. Crucially, ALESIA's automated optimisation loop simultaneously considers all stages (magnetism, conductor properties, mechanics, and quench behaviour), ensuring holistic and robust design solutions.

        Speaker: Damien Minenna (CEA/Irfu/DACM)
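        A minimal sketch of the surrogate-modelling-plus-optimisation pattern that such toolboxes rely on: fit a Gaussian-process surrogate of an expensive simulation, optimise the surrogate, evaluate the true model at the proposed point, and iterate. The objective function here is a toy stand-in, not a coupled OPERA/CAST3M run.

        # Illustrative only: a basic surrogate-assisted optimisation loop.
        import numpy as np
        from scipy.optimize import minimize
        from sklearn.gaussian_process import GaussianProcessRegressor

        def expensive_simulation(x):                 # toy stand-in for a magnet simulation
            return (x[0] - 0.3) ** 2 + (x[1] + 0.1) ** 2 + 0.05 * np.sin(10 * x[0])

        rng = np.random.default_rng(4)
        X = rng.uniform(-1, 1, size=(15, 2))         # small initial design of experiments
        y = np.array([expensive_simulation(x) for x in X])

        for _ in range(10):                          # fit surrogate, optimise it, evaluate truth
            gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
            res = minimize(lambda x: gp.predict(x.reshape(1, -1))[0],
                           x0=X[np.argmin(y)], bounds=[(-1, 1), (-1, 1)])
            X = np.vstack([X, res.x])
            y = np.append(y, expensive_simulation(res.x))

        print("best point found:", X[np.argmin(y)], "value:", y.min())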
    • 19:30 22:30
      Dinner at Brasserie Meteor 3h
    • 09:00 13:00
      Friday morning Amphi Grünewald

      Session chair: Dr Thomas Vuillaume (LAPP, Univ. Savoie Mont-Blanc, CNRS)
      • 09:00
        Parameter Estimation with Neural Simulation-Based Inference in ATLAS 25m

        Neural Simulation-Based Inference (NSBI) is a powerful class of machine learning (ML)-based methods for statistical inference that naturally handle high dimensional parameter estimation without the need to bin data into low-dimensional summary histograms. Such methods are promising for a range of measurements at the Large Hadron Collider, where no single observable may be optimal to scan over the entire theoretical phase space under consideration, or where binning data into histograms could result in a loss of sensitivity. This work develops an NSBI framework that, for the first time, allows NSBI to be applied to a full-scale LHC analysis, by successfully incorporating a large number of systematic uncertainties, quantifying the uncertainty coming from finite training statistics, developing a method to construct confidence intervals, and demonstrating a series of intermediate diagnostic checks that can be performed to validate the robustness of the method. As an example, the power and feasibility of the method are demonstrated for an off-shell Higgs boson couplings measurement in the four lepton decay channel, using ATLAS experiment simulated samples. The proposed method is a generalisation of the standard statistical framework at the LHC, and can benefit a large number of physics analyses. This work serves as a blueprint for measurements at the LHC using NSBI.

        Speaker: David Rousseau (IJCLab, Université Paris-Saclay)
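        A minimal sketch of the core NSBI ingredient, estimating a likelihood ratio from a classifier trained to separate samples generated under two hypotheses; the full ATLAS framework adds systematic uncertainties, ensembling and diagnostics on top of this basic trick.

        # Illustrative only: classifier-based likelihood-ratio estimation on two 1D Gaussians.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(5)
        x0 = rng.normal(0.0, 1.0, size=(20000, 1))       # samples under hypothesis theta_0
        x1 = rng.normal(0.5, 1.0, size=(20000, 1))       # samples under hypothesis theta_1
        X = np.vstack([x0, x1])
        y = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])

        clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=300).fit(X, y)

        def estimated_log_ratio(x):
            s = clf.predict_proba(x.reshape(-1, 1))[:, 1]
            return np.log(s / (1 - s))                   # log p(x|theta_1)/p(x|theta_0)

        x_test = np.array([0.0, 1.0])
        exact = (-(x_test - 0.5) ** 2 + x_test ** 2) / 2  # analytic log-ratio for the two Gaussians
        print("estimated:", estimated_log_ratio(x_test).round(2), "exact:", exact.round(2))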
      • 09:25
        Fair Universe HiggsML Uncertainty Challenge 25m

        The Fair Universe project organised the HiggsML Uncertainty Challenge, taking place from September 2024 to 15 March 2025. It is a NeurIPS 2024 competition (https://blog.neurips.cc/2024/06/04/neurips-2024-competitions-announced/).

        This groundbreaking competition in high-energy physics (HEP) and machine learning was the first to place a strong emphasis on uncertainties, focusing on mastering both the uncertainties in the input training data and providing credible confidence intervals in the results.

        The challenge revolved around measuring the Higgs to tau+ tau- cross section, similar to the HiggsML challenge held on Kaggle in 2014, using a dataset representing the 4-momentum signal state. Participants were tasked with developing advanced analysis techniques capable of not only measuring the signal strength but also generating confidence intervals that included both statistical and systematic uncertainties, such as those related to detector calibration and background levels. The accuracy of these intervals was automatically evaluated using pseudo-experiments to assess correct coverage.
        Techniques that effectively managed the impact of systematic uncertainties were expected to perform best, contributing to the development of uncertainty-aware AI techniques for HEP and potentially other fields. The competition was hosted on Codabench, an evolution of the Codalab platform, and leveraged significant resources from the NERSC infrastructure to handle the thousands of required pseudo-experiments.

        Link to the competition
        Link to white paper

        Speaker: RAGANSU CHAKKAPPAI (IJCLab-Orsay)
      • 09:50
        The application of modular neural networks in map reconstruction 25m

        In this presentation, we will explore the application of machine learning techniques in cosmology, focusing on the analysis of Cosmic Microwave Background (CMB) maps. Accurately calculating the tensor-to-scalar ratio from CMB data is a crucial yet challenging task, as it holds the key to understanding primordial gravitational waves and the early universe's inflationary period. I will discuss the use of informed learning with the goal of precise reconstruction, which can be readily reapplied in other areas of cosmology and astrophysics. These methods offer robust tools for dealing with the complexity and high-dimensional nature of the data. By leveraging machine learning, we can enhance our ability to simulate, analyze, and interpret CMB observations, providing deeper insights into the universe's fundamental properties. The versatility and potential of machine learning in advancing our understanding of the cosmos will be highlighted by showing data analysis techniques applicable across scientific disciplines. A special focus is given to the application of physics-guided networks, their advantages, and their integration into the work of scientists.

        Speaker: Leonora Kardum
      • 10:15
        Deep Learning for Non-Invasive Identification, Social Network Analysis and Behavioral Recognition of Japanese Macaques: Toward a Comprehensive AI-Driven Primate Society Study 25m

        The use of deep learning in ecology and ethology offers transformative possibilities, enabling non-invasive and more efficient methodologies for individual identification and behavioral analysis on video. A first study focused on the development of tools with deep learning to automatically detect and identify individual Japanese macaques (Macaca fuscata) with the goal of generating a reliable social network based on co-occurrences of individuals across video data. Utilizing YOLOv8n models, we have achieved face detection with 98.3% accuracy and individual recognition at 87.9% accuracy within the Kōjima Island population. These advances pave the way for automated, large-scale analysis of primate social structures across several populations and over different seasons (Paulet et al., 2024).
        Building on these identification tools, initial steps have been taken to extend this approach toward automated behavioral recognition, targeting complex behaviors such as grooming and stone handling. Early trials using the recently released software LabGym show promise (Ardon & Sueur, 2024). This ongoing research forms the foundation of a broader thesis project aimed at improving individual recognition and network analysis tools, with the goal of investigating organisational and behavioral diversity within and across groups of Japanese macaques. By integrating cutting-edge AI technologies, we aim to significantly enhance the study of social and cultural dynamics in primate populations, offering new, scalable insights into their social complexity.

        https://doi.org/10.1007/s10329-024-01137-5
        https://doi.org/10.1007/s10329-024-01123-x

        Speaker: Julien Paulet
      • 10:40
        Coffee break 30m
      • 11:10
        AISSAI center: AI for Science, Science for AI 25m

        The CNRS AI Center for Science and Science for AI (AISSAI, https://aissai.cnrs.fr/) aims to structure and organize cross-disciplinary actions involving all CNRS institutes at the interfaces with AI. In January 2024, AISSAI became a CNRS support and research unit (UAR2036). The center fosters dialogue between scientific disciplines interacting with AI, addresses domain-specific strategic issues and, more generally, aims to accelerate scientific discovery in all CNRS scientific fields (physics, chemistry, materials, biology, ecology, human sciences, etc.). After a brief presentation of the AISSAI center, I will discuss its main scientific actions, focusing on the successful scientific program of the past IN2P3 semester.

        Speaker: Julien Donini (UBP/LPC/IN2P3)
      • 11:35
        IntheArt 25m

        Currently, artificial intelligence (AI) is a rapidly growing field. These methods can assist in solving very challenging problems, thus providing significant time savings in finding solutions. The methods are diverse and evolve quickly, making it very beneficial to come together, share knowledge, train, and capitalize on our expertise. This is the purpose of the IntheArt group focused on AI methods. It consists of over a hundred researchers, ranging from experts to beginner users, who meet regularly to:
        • Exchange ideas about AI: case studies and associated issues
        • Train and learn from each other, for example, through specific training sessions
        • Work on research topics
        • Organize and participate in workshops
        • Learn together about situations where machine learning can be useful, as well as where it has limitations, particularly regarding trust, interpretability, and explainability of responses or decisions.
        This group and the associated research are what I will present to you.

        Speaker: Valérie Gautard (CEA-Irfu)
      • 12:00
        Farewell 5m
        Speakers: David Rousseau (IJCLab, Université Paris-Saclay), Dr Thomas Vuillaume (LAPP, Univ. Savoie Mont-Blanc, CNRS)