IN2P3/IRFU Machine Learning workshop

Remote only



The workshop will take place remotely, on mornings only, from 9 AM to 1 PM.

Machine Learning is now potentially impacting many aspects of physics.

This workshop covers current developments in Machine Learning at IN2P3 and CEA-IRFU, following up on the January 2020 workshop.

Submission of contributions is now closed. 

The following non-exclusive tracks have been defined (a contribution can be relevant to two tracks, preferably not more):

  1. ML for data reduction: application of Machine Learning to data reduction, reconstruction, and building/tagging of intermediate objects

  2. ML for analysis: application of Machine Learning to analysis, event classification and inference of fundamental parameters

  3. ML for simulation and surrogate models: application of Machine Learning to simulation, or to other cases where it is intended to replace an existing complex model

  4. Fast ML: application of Machine Learning to DAQ/Trigger/Real-Time Analysis

  5. ML algorithms: Machine Learning development across applications

  6. ML infrastructure: hardware and software for Machine Learning

  7. ML training, courses and tutorials

  8. ML open datasets and challenges

  9. ML for astroparticle

  10. ML for experimental particle physics

  11. ML for nuclear physics

  12. ML for phenomenology and theory

  13. ML for particle accelerators

  14. Special contribution

The workshop will take place on Zoom. Connection details are sent by email to registrants.

Please make sure you are subscribed to the IN2P3 listserv to keep up to date with ML.

Organisation: Valérie Gautard (CEA/IRFU), David Rousseau (IJCLab)


  • Adnan GHRIBI
  • Adrien Hourlier
  • Aldo Deandrea
  • Alexandre Boucaud
  • Alexis VALLIER
  • Alizée Brouillard
  • Amine AFIRI
  • Amine Boussejra
  • Ana Elena Dumitriu
  • Anne-Catherine Le Bihan
  • Arnaud Lucotte
  • Arnaud MAURY
  • Arthur THALLER
  • Artur Trofymov
  • Arturo Sanchez Pineda
  • Barbara Dalena
  • Bastien Arcelin
  • Benjamin Remy
  • Bernadette Maria REBEIRO
  • Bertrand Laforge
  • Bertrand Rigaud
  • Catherine Biscarat
  • Charles Boreux
  • Charline Rougier
  • Christophe Deroulers
  • Christophe Haquin
  • Corentin Hanser
  • Cristina Carloganu
  • Cécile Renault
  • Céline LE BOHEC
  • Damien Berriaud
  • Damien TURPIN
  • David Etasse
  • David Rousseau
  • Denis PUGNERE
  • Denise Lanzieri
  • Dominique Fouchez
  • Dominique Yvon
  • Ece Asilar
  • Emille Ishida
  • Emmanuel Gangler
  • Emmanuel GOUTIERRE
  • Emmanuel Le Guirriec
  • Emmanuel Monnier
  • Eric Armengaud
  • Eric Aubourg
  • Eric Cogneras
  • Etienne FORTIN
  • Fabrice Guilloux
  • Fadi Nammour
  • Fares DJAMA
  • Fatih Bellachia
  • Feifei Huang
  • Francesco Stacchi
  • François LANUSSE
  • Françoise Bouvet
  • Frederic DERUE
  • Frederic Druillole
  • Frédéric Déliot
  • Gabriele MAINETTI
  • Geoffrey Daniel
  • Georges AAD
  • Gilles GRASSEAU
  • Giulio Dujany
  • Guillaume BAULIEU
  • Guillaume Bourgatte
  • Guillaume Dilasser
  • Guillaume MENTION
  • Gustavo Conesa Balbastre
  • Hayg Guler
  • Hossein AFSHARNIA
  • Hubert Bretonnière
  • Hubert Hansen
  • Jad Zahreddine
  • Jan Stark
  • Jay Sandesara
  • Jean-Baptiste Charraud
  • Jean-Baptiste de Vivie
  • Jean-Bernard Maillet
  • Jean-Christophe WEILL
  • Jean-Michel Alimi
  • Jean-Michel Gallone
  • Jean-Pierre Cachemiche
  • Jerome Pansanel
  • Jessica Leveque
  • Joao Coelho
  • Johan Bregeon
  • Jona Motta
  • Julien Donini
  • Julien Zoubian
  • Jérémie Dudouet
  • Jérôme ALLARD
  • Kenza Makhlouf
  • Koryo Okumura
  • Lara Mason
  • Lauri Laatu
  • Louis PORTALES
  • Louis Vaslin
  • Maitha Alshamsi
  • Marc ERNOULT
  • Marco Leoni
  • Marie Paturel
  • Mario Sessini
  • Mehdi Ben Ghali
  • Meriem Krouma
  • Michel MUR
  • Mouad Ramil
  • Mykola Khandoga
  • Nemer Chiedde
  • Olivier Stezowski
  • Patrice Verdier
  • Pierrick HAMEL
  • Qiufan Lin
  • Rachid GUERNANE
  • Rémi BARBIER
  • Rémy KOSKAS
  • Sabine Crépé-Renaudin
  • Samuel Calvet
  • Sassia HEDIA
  • Sebastien Geiger
  • Stéphane Schanne
  • Sylvain Caillou
  • Sébastien Dubos
  • Sébastien Gadrat
  • Taylor Faucett
  • Thibault CHARPENTIER
  • Thomas CALVET
  • Thomas Cartier-Michaud
  • Thomas Vuillaume
  • Tibor KURCA
  • Valeria Pettorino
  • Valerio Calvelli
  • Valérie Gautard
  • Viacheslav Kubytskyi
  • Viatcheslav Sharyy
  • Victor Planas-Bielsa
  • Virginie Grandgirard
  • Tuesday, 16 March
    • 09:00 – 13:05
      Tuesday morning
      Convener: David Rousseau (IJCLab, CNRS/IN2P3, Université Paris-Saclay)
      • 09:00
        Introduction 15m


        Speakers: Valérie Gautard (CEA-Irfu) , David Rousseau (IJCLab, CNRS/IN2P3, Université Paris-Saclay)
      • 09:15
        Towards a realistic track reconstruction algorithm based on graph neural networks for the HL-LHC 15m

        The physics reach of the HL-LHC will be limited by how efficiently the experiments can use the available computing resources, i.e. affordable software and computing are essential. The development of novel methods for charged particle reconstruction at the HL-LHC incorporating machine learning techniques or based entirely on machine learning is a vibrant area of research. In the past two years, algorithms for track pattern recognition based on graph neural networks (GNNs) have emerged as a particularly promising approach. Previous work mainly aimed at establishing proof of principle. We present new algorithms, implemented in the ACTS framework, that can handle complex realistic detectors. This work aims at implementing a realistic GNN-based algorithm that can be deployed in an HL-LHC experiment.
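As a toy illustration of the GNN idea (not the ACTS-based implementation described above; the hits, edge list and weights below are invented), one round of message passing over a hit graph and a simple edge score can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "hits": 2-D points on three detector layers; edges are candidate
# track segments connecting hits on adjacent layers.
hits = np.array([[0.0, 0.0], [1.0, 0.1], [1.0, 0.9], [2.0, 0.2], [2.0, 1.8]])
edges = [(0, 1), (0, 2), (1, 3), (2, 4)]

def message_passing(x, edges, W):
    """One round: each node aggregates transformed neighbour features."""
    out = x.copy()
    for i, j in edges:
        out[j] += np.tanh(x[i] @ W)
        out[i] += np.tanh(x[j] @ W)
    return out

W = rng.normal(size=(2, 2)) * 0.1  # invented, untrained message weights
h = message_passing(hits, edges, W)

def edge_score(h, i, j):
    """Toy edge score: cosine similarity of updated node embeddings."""
    return float(h[i] @ h[j] / (np.linalg.norm(h[i]) * np.linalg.norm(h[j]) + 1e-9))

scores = {e: edge_score(h, *e) for e in edges}
```

In the real pipeline such edge scores are produced by a trained network and then thresholded to build track candidates.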

        Speaker: Sylvain Caillou (L2I Toulouse, CNRS/IN2P3)
      • 09:30
        Mass and energy calibration of hadronic jets using DNN in ATLAS 15m

        Because of the nature of QCD interactions with matter, the measured energies and masses of hadronic jets have to be calibrated before they are used in physics analyses. The correction depends on many characteristics of the jets, including the energy and mass themselves. Obtaining the correction is thus a multidimensional regression problem for which DNNs are a well-suited approach.

        In practice, several difficulties have to be solved, leading us to envisage doubled NNs, dedicated loss functions, or the introduction of input-feature annotations. We describe these difficulties, present the solutions we tested, and compare the overall performance to the standard approach of the ATLAS jet calibration.
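A minimal sketch of the underlying regression idea, with a linear least-squares model standing in for the DNN and invented toy jet features (pT, |eta|, mass):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy jets: reconstructed (pT, |eta|, mass); the "true" correction factor
# depends on all three features (coefficients are invented).
X = rng.uniform([20, 0, 5], [500, 2.5, 50], size=(200, 3))
true_corr = 1.1 + 0.0002 * X[:, 0] - 0.02 * X[:, 1] + 0.001 * X[:, 2]
y = true_corr + rng.normal(0, 0.005, size=200)  # smeared targets

# Least-squares fit of a linear response model; a DNN generalises this
# to non-linear, higher-dimensional dependencies.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
residual = float(np.sqrt(np.mean((pred - y) ** 2)))  # RMS of the fit residuals
```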

        Speaker: Pierre-Antoine Delsart (LPSC)
      • 09:45
        Reconstruction of di-tau mass using deep neural networks 15m

        Reconstructing the di-$\tau$ mass faster and more accurately than existing methods is crucial for testing any theory involving Higgs or Z bosons decaying to $\tau^+ \tau^-$. However, it is an arduous task due to the presence of neutrinos among the decay products of each $\tau$ lepton, which are invisible to the detectors at the LHC.

        This ongoing work aims at obtaining a di-$\tau$ mass estimator using ML techniques. Its use in the CMS MSSM $H\to\tau\tau$ analysis of the full Run II dataset will be discussed.

        Speaker: Lucas TORTEROTOT (Université Claude Bernard, UMR5822)
      • 10:00
        break 30m
      • 10:30
        Mapping Machine-Learned Physics into a Human-Readable Space 15m

        Machine Learning methods are extremely powerful but often function as black-box problem solvers, providing improved performance at the expense of clarity. Our work describes a new machine learning approach which translates the strategy of a deep neural network into simple functions that are meaningful and intelligible to the physicist, without sacrificing performance improvements. We apply this approach to benchmark high-energy problems of fat-jet classification and find simple new jet substructure observables which provide improved classification power and novel insights into the nature of the problem.

        Speaker: Taylor Faucett (Université Clermont Auvergne)
      • 10:45
        Reconstruction of generic decay trees using a Graph Neural Network 15m

        The decays of a B-meson with neutrinos or other undetected particles in the final state cannot be fully reconstructed without the information coming from the rest of the event. The Belle II experiment benefits from the clean environment of electron-positron collisions, where B mesons are produced in pairs without other particles in the event. A complete reconstruction of the other B meson of the event then allows one to constrain the undetected particles. The challenge lies in the thousands of possible generic decay modes and the complex combinatorial nature of the problem. In the current algorithm used at Belle II, the Full Event Interpretation, the possible decay channels are explicitly hard-coded, which limits its scope of action. In this talk, we present an alternative method to reconstruct a generic decay tree from its final-state particles using a graph neural network.

        Speaker: Arthur Thaller (IPHC)
      • 11:00
        Fitting a spectrum using ML 15m

        Many searches for new physics consist of a bump hunt in an invariant-mass spectrum. In cases where the turn-on region may contain signal, the usual fit methods do not apply.
        This talk presents the first ingredients of a DNN-based fitting method that would allow fitting the entire spectrum, from the turn-on to the tail.
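The classical starting point can be sketched on toy data: fit a smooth background outside a candidate signal window and inspect the residuals. The DNN-based method described above aims to supersede this kind of ad-hoc fit; all numbers here are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy invariant-mass spectrum: falling exponential background plus a
# Gaussian bump at m = 91 (1 GeV bins).
m = np.linspace(60, 160, 101)
background = 1000.0 * np.exp(-0.03 * (m - 60))
signal = 80.0 * np.exp(-0.5 * ((m - 91) / 3.0) ** 2)
counts = rng.poisson(background + signal)

# Fit the background with a log-linear (exponential) model, masking the
# candidate signal window, then look at the residual excess.
mask = (m < 80) | (m > 102)
p = np.polyfit(m[mask], np.log(counts[mask] + 1), 1)
fitted_bkg = np.exp(np.polyval(p, m))
residual = counts - fitted_bkg
bump_window = (m > 85) & (m < 97)
excess = float(residual[bump_window].sum())  # events above the background fit
```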

        Speaker: Samuel Calvet (LPC)
      • 11:15
        Auto-Encoder based algorithms for anomaly detection 15m

        Among all the applications of Machine Learning in HEP, anomaly detection methods have received growing interest over the last years. Their use is especially promising in the development of model-independent search techniques. Following this trend, we propose new algorithms based on the artificial-neural-network concept of the Auto-Encoder, augmented with adversarial training schemes, flow-based approaches, and variable decorrelation techniques. The performance of our methods is evaluated on the data designed for the LHC Olympics 2020 challenge [1]. We will present results for both the RnD dataset and the Black Box datasets proposed for this anomaly detection challenge.
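The auto-encoder principle, reconstruction error as anomaly score, can be sketched with a linear auto-encoder solved in closed form by SVD (a stand-in for the deep, adversarially trained models described above; the toy data are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# "Background" events live near a 2-D plane in 5-D feature space.
basis = rng.normal(size=(2, 5))
bkg = rng.normal(size=(500, 2)) @ basis + 0.05 * rng.normal(size=(500, 5))

# Linear auto-encoder trained in closed form via SVD; a deep auto-encoder
# generalises this compression / reconstruction idea.
mean = bkg.mean(axis=0)
_, _, Vt = np.linalg.svd(bkg - mean, full_matrices=False)
encode = Vt[:2]  # bottleneck of dimension 2

def anomaly_score(x):
    z = (x - mean) @ encode.T      # encode
    recon = z @ encode + mean      # decode
    return float(np.sum((x - recon) ** 2))

typical = anomaly_score(bkg[0])            # background-like event
outlier = anomaly_score(np.full(5, 4.0))   # event off the background manifold
```

Events with a large reconstruction error are the anomaly candidates.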


        Speakers: Louis Vaslin (LPC Clermont) , Ioan Dinu (INFIN-HH / LPC)
      • 11:30
        Break 30m
      • 12:00
        Artificial Intelligences for measuring energy deposits in the ATLAS LAr calorimeter in real time 15m

        Within the Phase-II upgrade of the LHC, the readout electronics of the ATLAS Liquid Argon (LAr) Calorimeters is being prepared for high-luminosity operation, with an expected pile-up of up to 200 simultaneous pp interactions.

        The Liquid Argon (LAr) calorimeters measure the energy of particles produced by LHC collisions, especially electrons and photons. The digitized signals from the 182,468 LAr channels are analysed in real time, at 40 MHz, by high-end Field Programmable Gate Arrays (FPGAs) to provide a detailed map of energy deposits from up to 200 simultaneous collisions (pile-up). In order to maintain high-precision event reconstruction at the HL-LHC in these challenging conditions, the LAr readout electronics and its embedded algorithms are to be improved.

        The growing processing power of FPGAs and many advances in Artificial Intelligence systems provide cutting-edge opportunities. We have developed new algorithms for real time energy deposit measurements based on Recurrent Neural Networks (RNN). These algorithms are compared to the conventional algorithms based on Optimal Filtering using realistic simulation of single LAr channels.

        We demonstrate that RNNs, especially those based on Long Short-Term Memory (LSTM), outperform current algorithms, even with limited parameter counts. Furthermore, RNNs offer possibilities to improve the resilience of the fundamental LAr measurements against pile-up and proton beam conditions. The latest results of these studies are also presented.
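The conventional baseline mentioned above, optimal filtering, reduces in the simplest case (a single pulse, white noise) to a weighted sum of the digitized samples. A toy sketch with an invented pulse shape:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy LAr-style pulse shape at regular sampling intervals (peak normalised to 1).
g = np.array([0.0, 0.4, 1.0, 0.7, 0.3, 0.1])

true_energy = 25.0
samples = true_energy * g + rng.normal(0, 0.2, size=g.size)  # digitized samples

# Optimal filtering without pile-up and with white noise: the least-squares
# amplitude estimate, i.e. weights proportional to the pulse shape itself.
weights = g / (g @ g)
estimate = float(weights @ samples)
```

The RNN approaches discussed above replace these fixed weights with a learned, non-linear mapping that can cope with overlapping pulses.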

        Speaker: Lauri Laatu
      • 12:15
        RNNs on Intel FPGAs for real time signal processing in the ATLAS LAr calorimeter 15m

        Within the Phase-II upgrade of the LHC, the readout electronics of the ATLAS Liquid Argon (LAr) Calorimeters is being prepared for high-luminosity operation, with an expected pile-up of up to 200 simultaneous pp interactions. The Liquid Argon calorimeters measure the energy of particles produced by LHC collisions, especially electrons and photons. The digitized signals from the 182,468 LAr channels are analysed in real time, at 40 MHz, to provide a detailed map of energy deposits from up to 200 simultaneous collisions (pile-up). These measurements are performed by high-end Field Programmable Gate Arrays, each processing O(1 TB/s) of data. In order to maintain high-precision event reconstruction at the HL-LHC in these challenging conditions, the LAr readout electronics and its embedded algorithms are to be improved.
        The growing processing power of FPGAs and many advances in Artificial Intelligence systems provide cutting-edge opportunities to combine real-time data processing with high bandwidth, low latency and advanced algorithms. To cope with the signal pile-up, new machine learning approaches are explored: recurrent neural networks outperform the optimal signal filter currently used.
        In this talk we present the first implementation of RNNs, especially those based on Long Short-Term Memory (LSTM), in the hls4ml software for generating firmware for Intel Stratix 10 FPGAs. Very good agreement between neural network implementations in FPGA and software based calculations is observed. The FPGA resource usage, the latency and the operation frequency are analysed. Latest performance results and experience with prototype implementations will be reported.
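The agreement between the software model and its fixed-point firmware counterpart can be checked with a toy quantisation experiment (this is not hls4ml itself; the layer shape and bit width below are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

def to_fixed(x, frac_bits=10):
    """Round values to a fixed-point grid, as an FPGA datapath would."""
    scale = 2 ** frac_bits
    return np.round(x * scale) / scale

# Toy dense layer: compare float ("software") with fixed-point ("firmware").
W = rng.normal(size=(8, 4))
x = rng.normal(size=8)

y_float = np.tanh(x @ W)
y_fixed = np.tanh(to_fixed(x) @ to_fixed(W))

max_dev = float(np.max(np.abs(y_float - y_fixed)))  # worst-case disagreement
```

Sweeping the bit width in such a comparison is how the precision/resource trade-off is typically chosen.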

        Speaker: Etienne FORTIN (Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France)
      • 12:30
        Fink broker, enabling time-domain astronomy with ML 15m

        Next generation experiments such as the Vera Rubin Observatory Legacy Survey of Space and Time (LSST) will provide an unprecedented volume of time-domain data opening a new era of big data in astronomy. To fully harness the power of these surveys, we require analysis methods capable of dealing with large data volumes that can identify promising transients within minutes for follow-up coordination. In this talk I will present Fink, a broker developed to face these challenges. Fink is based on high-end technology and designed for fast and efficient analysis of big data streams. I will highlight the state-of-the-art machine learning techniques used to generate early classification scores for a variety of time-domain phenomena including supernovae and microlensing events. Such methods include Deep Learning advances and Active Learning approaches to coherently incorporate available information, delivering increasingly more accurate added values throughout the duration of the survey.

        Speaker: Anais Moller (CNRS / LPC Clermont)
      • 12:45
        GPUs @CC: usage and evolution 15m

        We review the GPUs available at CC-IN2P3 and their current usage, and present some upcoming evolutions.

        Speaker: Bertrand Rigaud (CC-IN2P3)
  • Wednesday, 17 March
    • 09:00 – 13:00
      Wednesday morning
      Convener: Valérie Gautard (CEA-Irfu)
      • 09:00
        IN2P3 School of Statistics 2021 15m

        The 2021 edition of the School of Statistics, SOS2021, was held online for the first time (postponed from May 2020 in Carry-le-Rouet), from 18 to 29 January 2021. The school targets PhD students, post-docs and senior scientists wishing to strengthen their knowledge or discover new methods in statistical analysis applied to particle and astroparticle physics and cosmology.

        The programme ranges from fundamental concepts to advanced topics. A special focus was put on machine learning techniques and tools. A significant amount of time was dedicated to hands-on sessions introducing advanced tools (scikit-learn, Keras, TensorFlow, Jupyter notebooks, ...) and their applications to our domain.

        All slides, notebooks and video recordings of the Zoom sessions are available at:

        Speaker: Yann Coadou (CPPM, Aix-Marseille Université, CNRS/IN2P3)
      • 09:15
        Rainfrog: Automatic detector recognition on the Atlas/NSW fabrication facility at Saclay 20m

        The Saclay site is one of the 4 production sites for the New Small Wheels, a new Micromegas detector system intended to be installed end ’21 in the Atlas experiment at CERN. The detector modules are made of a sandwich assembly of 5 composite panels. These panels are built on instrumented granite tables in a dedicated clean room, and are scanned in place for planarity with a mobile gantry carrying an optical contactless sensor, at various stages of fabrication and for the final quality control.
        With the aim of improving the process and making it more reliable, an exploratory automatic recognition system was developed, installed and tested in 2020. This system relies on the analysis of a series of image patches of the element to be measured, each submitted independently to a common neural network for classification. The classification scores obtained for each image patch are then fused together to automatically define the machine settings suitable for the observed configuration.
        The presentation first covers the architectural decisions for the multi-category classification problem, the image collection and data augmentation campaigns, the transfer-learning training strategy and the analysis of the fusion results. It then describes the integration of the trained network for inference in the actual control program of the machine, and finally summarises the results obtained.
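The score-fusion step can be sketched as follows, with invented per-patch softmax outputs. Averaging log-probabilities is one simple fusion rule; the rule actually used in Rainfrog may differ.

```python
import numpy as np

# Per-patch classification scores (softmax outputs) over 3 categories,
# from 4 image patches of the same element (values invented).
patch_scores = np.array([
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.5, 0.3, 0.2],
    [0.8, 0.1, 0.1],
])

# Fuse by averaging log-probabilities (product-of-experts style),
# then renormalise; argmax gives the fused decision.
log_mean = np.log(patch_scores).mean(axis=0)
fused = np.exp(log_mean) / np.exp(log_mean).sum()
decision = int(np.argmax(fused))
```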

        Speaker: Michel MUR (CEA Irfu)
      • 09:35
        Comparison of GraphCore IPUs and Nvidia GPUs for cosmology applications 15m

        I will present a first investigation of the suitability and performance of IPUs in deep learning applications in cosmology.
        As upcoming photometric galaxy surveys will produce an unprecedented amount of observational data, more and more people are turning to deep learning for fast and accurate data processing. In this work I tested typical examples of the tasks that will be required to process and prepare data for future photometric galaxy surveys.
        I will present a benchmark between an Nvidia V100 GPU and a Graphcore IPU on three cosmological use cases: a deterministic deep neural network and a Bayesian neural network (BNN) for galaxy shape estimation, and a generative network for galaxy image production. Results suggest that IPUs perform better than GPUs at training neural networks but that, for inference, the choice depends on the task at hand.
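A minimal timing harness of the kind such comparisons rely on; the workload below is an invented stand-in, whereas a real benchmark would time an actual training or inference step on each accelerator:

```python
import time
import numpy as np

def benchmark(fn, n_warmup=2, n_runs=5):
    """Median wall-clock time of fn(), after warm-up runs
    (to exclude compilation and cache effects)."""
    for _ in range(n_warmup):
        fn()
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return sorted(times)[len(times) // 2]

# Stand-in workload for a training / inference step.
a = np.random.default_rng(6).normal(size=(200, 200))
t = benchmark(lambda: a @ a)
```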

        Speakers: Bastien Arcelin (APC) , Alexandre Boucaud (APC / IN2P3)
      • 09:50
        Classification of KM3NeT online events with ONNX C++ API 10m

        The KM3NeT neutrino telescopes search for cosmic neutrinos from distant astrophysical sources such as supernovae, gamma-ray bursts, colliding stars or flaring blazars. Once the events are received, they are rapidly reconstructed online. The online events must be classified to separate signal neutrinos from atmospheric-muon background events. Dedicated applications then analyse the neutrino sample to look for correlations with astrophysical sources and to send neutrino alerts to the astronomy community.
        The initial pipeline ran the reconstruction in C++ and classified the events in Python. The classification model was trained with LightGBM, a gradient-boosting framework. To simplify the pipeline, I integrated ONNX Runtime into the reconstruction code. The LightGBM model was converted to the ONNX format. I first compared the results of LightGBM with ONNX Runtime in Python; the C++ implementation was then carried out, and the new pipeline is now running in production.

        Speaker: Emmanuel Le Guirriec (CPPM)
      • 10:00
        pause 30m
      • 10:30
        Extended sources reconstructions by means of Coded mask aperture systems and Deep learning algorithm 15m

        The localization of radioactive sources provides information that is mandatory for the monitoring and diagnosis of radiological scenes, and it still constitutes a critical challenge. Gamma-ray imaging is performed with coded-mask aperture systems when the energy of the photons is sufficiently low to ensure photoelectric interactions in the mask. Classically, a deconvolution algorithm is then applied to reconstruct the position of the source. However, this deconvolution problem is non-injective, and classical methods do not provide any relevant information when the source cannot be associated with a point, with respect to the angular resolution of the imaging system. In this presentation, we introduce a new method based on Deep Learning algorithms and Convolutional Neural Networks. We evaluate its performance on extended sources with real measurements acquired with Caliste, a CdTe pixelated detector, and compare it to MLEM, a classical iterative algorithm.
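The classical coded-mask pipeline can be sketched in 1-D: the detector image is the sky convolved with the mask pattern, and correlation with the mask recovers a point source. This is the baseline the CNN method is compared to; the mask and source below are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# 1-D toy: random open/closed coded mask, point source at position 31.
mask = rng.integers(0, 2, size=31).astype(float)
sky = np.zeros(64)
sky[31] = 100.0

# Forward model: the detector image is the sky convolved with the mask.
detector = np.convolve(sky, mask, mode="same")

# Classical decoding: correlate the detector image with the mask pattern;
# the autocorrelation peak recovers the source position. This is exactly
# what fails for extended sources, motivating the CNN approach above.
decoded = np.correlate(detector, mask, mode="same")
recovered = int(np.argmax(decoded))
```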

        Speaker: Dr Geoffrey Daniel (CEA/DES/ISAS/DM2S/STMF/LGLS)
      • 10:45
        A machine learning technique for dynamic aperture computation 15m

        Currently, dynamic aperture calculations for high-energy hadron colliders are generated through computer simulation, which is both a resource-heavy and time-costly process. The aim of this research is to use a reservoir-computing machine learning model to achieve a faster extrapolation of dynamic aperture values. A recurrent echo-state network (ESN) architecture is used as the basis for this work: recurrent networks are better fitted to extrapolation tasks, while the reservoir echo-state structure is computationally effective. Model training and validation are conducted on a set of "seeds" corresponding to the simulation results of different machine configurations. Adjustments to the model architecture, manual metric and data selection, hyper-parameter tuning (using a grid-search method and manual tuning) and the introduction of new parameters enabled the model to reliably achieve target performance on the examined testing sets. Alternative readout layers in the model architecture are also explored.
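A minimal echo-state network, a fixed random reservoir with a trained linear readout, can be sketched on a toy one-step-ahead prediction task (the sizes and hyper-parameters below are invented, not those of the study):

```python
import numpy as np

rng = np.random.default_rng(8)

# Echo-state network: fixed random reservoir; only the readout is trained.
n_res = 50
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state)

def run_reservoir(u):
    states, x = [], np.zeros(n_res)
    for v in u:
        x = np.tanh(W_in[:, 0] * v + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.arange(400)
u = np.sin(0.1 * t)
S = run_reservoir(u[:-1])
target = u[1:]

# Ridge-regression readout, trained on steps 50-300 (warm-up discarded).
reg = 1e-6
A = S[50:300]
b = target[50:300]
w_out = np.linalg.solve(A.T @ A + reg * np.eye(n_res), A.T @ b)

pred = S[300:] @ w_out
rmse = float(np.sqrt(np.mean((pred - target[300:]) ** 2)))
```

Only `w_out` is fitted, which is what makes reservoir computing cheap to train compared with fully trained recurrent networks.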

        Speaker: Mehdi Ben Ghali (IRFU - CEA)
      • 11:00
        SNAD: Machine learning assisted discovery in astronomy 15m

        The next generation of astronomical surveys will completely change the discovery process in astronomy. Faced with millions of possible new sources per night, serendipitous discoveries will not occur. At the same time, given the significant improvement in detection efficiency, it is also reasonable to expect that unforeseen astrophysical sources will be detected. However, if we do not have tools to identify them, we may lose the opportunity without realizing it. The SNAD team is an international collaboration that has been working for the past three years to prepare for the arrival of such data and ensure the maximum exploitation of astronomical surveys. In this talk, I will describe how SNAD is using state-of-the-art traditional and adaptive anomaly detection techniques to identify unusual objects in simulations, catalog data and the data stream from the Zwicky Transient Facility (ZTF). Finally, I will describe the efforts currently in place to prepare these tools to deal with the alert stream coming from the Vera Rubin Observatory Legacy Survey of Space and Time, through the connection between SNAD and the Fink broker.

        Speaker: Dr Emille Ishida (LPC-UCA)
      • 11:15
        Estimating Photometric Redshifts with Convolutional Neural Networks and Galaxy Images: A Case Study of Resolving Biases in Deep Learning Classifiers 15m

        Deep Learning neural networks are powerful tools for extracting information from input data, and have been increasingly applied in astrophysical studies. However, without proper treatment, data-driven algorithms such as neural networks usually cannot fully capture the salient information relevant to certain tasks, and thus produce biased outputs harmful to subsequent analyses. It is therefore essential to resolve biases due to such imperfections. Using galaxy photometric redshift estimation as an example, we demonstrate the approaches we exploit to tackle the two major forms of bias in existing Deep Learning methods of photometric redshift estimation, namely redshift-dependent residuals and mode collapse. Experiments with galaxy images from the SDSS and CFHT surveys show that these approaches are effective and potentially useful in real astrophysical analyses. They are also meaningful in helping us understand the training of neural networks for general classification and regression problems in computer science applications.

        Speaker: Qiufan Lin (CPPM)
      • 11:30
        pause 30m
      • 12:00
        Updates on the GammaLearn project: application to real data 15m

        The Cherenkov Telescope Array (CTA) is the future of ground-based gamma-ray astronomy and will be composed of tens of telescopes split into two arrays, one in each hemisphere.
        GammaLearn is a project started in 2017 to develop innovative, deep-learning-based analyses for CTA event reconstruction.
        Here we present a status report on the project, the network architecture developed for event reconstruction from a single telescope, and its performance on simulated and real data.

        Speaker: Dr Thomas Vuillaume (LAPP, CNRS)
      • 12:15
        Probabilistic Mapping of Dark Matter by Neural Score Matching 15m

        We present a novel methodology to address ill-posed inverse problems, by providing a description of the posterior distribution instead of a point estimate solution. Our approach combines Neural Score Matching for learning a prior distribution from physical simulations, and an Annealed Hamiltonian Monte-Carlo technique to sample the full high-dimensional posterior of our problem.
        In the astrophysical problem we address, measuring the lensing effect on a large number of galaxies makes it possible to reconstruct maps of the Dark Matter distribution on the sky. However, the presence of missing data and noise-dominated measurements makes the inverse problem non-invertible.
        We propose to reformulate the problem in a Bayesian framework, where the target becomes the posterior distribution of mass given the galaxy shape observations. The likelihood factor, describing how light rays are bent by gravity, how measurements are affected by noise, and accounting for missing observational data, is fully described by a physical model. The prior factor is learned over simulations using a recent class of Deep Generative Models based on Neural Score Matching, and takes theoretical knowledge into account. We are thus able to obtain samples from the full Bayesian posterior of the problem and can provide Dark Matter map reconstructions alongside uncertainty quantification.
        We present an application of this methodology on the first deep-learning-assisted Dark Matter map reconstruction of the Hubble Space Telescope COSMOS field.
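The sampling idea can be illustrated in one dimension, with an analytic score and unadjusted Langevin dynamics standing in for annealed HMC. In the actual method the score is learned by Neural Score Matching; all numbers here are invented.

```python
import numpy as np

rng = np.random.default_rng(9)

# Target posterior: a 1-D Gaussian with mean 2 and std 0.5. Its score
# (gradient of the log-density) is analytic here; Neural Score Matching
# would learn this function from simulations instead.
mu, sigma = 2.0, 0.5
score = lambda x: -(x - mu) / sigma**2

# Unadjusted Langevin dynamics: drift along the score plus Gaussian noise.
eps = 1e-3
x = np.zeros(2000)  # 2000 parallel chains, all starting at 0
for _ in range(3000):
    x = x + eps * score(x) + np.sqrt(2 * eps) * rng.normal(size=x.size)

sample_mean = float(x.mean())  # should approach mu
sample_std = float(x.std())    # should approach sigma
```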

        Speaker: Benjamin Remy (CEA Paris-Saclay)
      • 12:30
        Automatically Differentiable Physics for Maximizing the Information Gain of Cosmological Surveys 15m

        Weak gravitational lensing is one of the most promising tools of cosmology to constrain models and probe the evolution of dark-matter structures. Yet, the current analysis techniques are only able to exploit the 2-pt statistics of the lensing signal, ignoring a large fraction of the cosmological information contained in the non-Gaussian part of the signal. Exactly how much information is lost, and how it could be exploited is an open question.
        In this work, we propose to measure the information gain from using higher-order (i.e. non-Gaussian) statistics in the analysis of weak gravitational lensing maps. To achieve this goal, we implement fast and accurate lensing N-body simulations based on the TensorFlow framework for automatic differentiation. By implementing gravitational lensing ray-tracing in this framework, we are able to simulate lensing lightcones to mimic surveys like the Euclid space mission or the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST). These simulations being based on differentiable physics, we can take derivatives of the resulting gravitational lensing maps with respect to cosmological parameters, or any systematics included in the simulations. Using these derivatives, we can measure the Fisher information content of various lensing summary statistics on cosmological parameters, and thus help maximize the scientific return of upcoming surveys.
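The Fisher-forecast step can be sketched with finite differences standing in for TensorFlow's automatic differentiation, and an invented analytic "simulator" in place of the lensing lightcones:

```python
import numpy as np

# Toy differentiable "simulator": a summary statistic s(theta) of a
# lensing-like observable; an analytic stand-in, purely illustrative.
def simulator(omega_m, sigma8):
    return np.array([sigma8**2 * omega_m**0.5, sigma8 * omega_m])

def fisher(theta, noise_cov, h=1e-5):
    """Fisher matrix F_ij = (ds/dtheta_i) C^-1 (ds/dtheta_j), with
    derivatives by central finite differences (autodiff in the real code)."""
    theta = np.asarray(theta, dtype=float)
    grads = []
    for i in range(theta.size):
        dp, dm = theta.copy(), theta.copy()
        dp[i] += h
        dm[i] -= h
        grads.append((simulator(*dp) - simulator(*dm)) / (2 * h))
    J = np.array(grads)              # shape (n_params, n_stats)
    Cinv = np.linalg.inv(noise_cov)
    return J @ Cinv @ J.T

F = fisher([0.3, 0.8], noise_cov=0.01 * np.eye(2))
# Cramér-Rao forecast of the parameter uncertainties.
forecast_errors = np.sqrt(np.diag(np.linalg.inv(F)))
```

Comparing `forecast_errors` across candidate summary statistics is how one measures which statistic carries the most cosmological information.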

        Speaker: Denise Lanzieri
      • 12:45
        Deep Learning for Galaxy Image Reconstruction with Problem Specific Constraint 15m

        Telescope images are corrupted by blur and noise. Generally, blur is represented by a convolution with a Point Spread Function, and noise is modelled as additive Gaussian noise. Restoring galaxy images from the observations is an inverse problem that is ill-posed and, specifically, ill-conditioned. The majority of standard reconstruction methods minimise the Mean Square Error to reconstruct images, without any guarantee that the shapes of the objects contained in the data (e.g. galaxies) are preserved. Here we introduce a shape constraint, exhibit its properties, and show how it preserves galaxy shapes when combined with Machine Learning reconstruction algorithms.
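The shape information at stake can be made concrete with second-order moments: an isotropic PSF dilutes the measured ellipticity, which is what a shape-constrained loss is designed to protect. The toy galaxy and kernel below are invented.

```python
import numpy as np

def ellipticity(img):
    """Complex ellipticity from second-order image moments."""
    y, x = np.indices(img.shape).astype(float)
    w = img / img.sum()
    xc, yc = (w * x).sum(), (w * y).sum()
    qxx = (w * (x - xc) ** 2).sum()
    qyy = (w * (y - yc) ** 2).sum()
    qxy = (w * (x - xc) * (y - yc)).sum()
    return complex(qxx - qyy, 2 * qxy) / (qxx + qyy)

# Toy elliptical "galaxy" on a 65x65 grid.
y, x = np.indices((65, 65)).astype(float)
galaxy = np.exp(-(((x - 32) / 6) ** 2 + ((y - 32) / 3) ** 2))

# Isotropic PSF blur (separable Gaussian kernel).
g = np.exp(-0.5 * (np.arange(-10, 11) / 4.0) ** 2)
g /= g.sum()
blurred = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 0, galaxy)
blurred = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, blurred)

e_true = abs(ellipticity(galaxy))
e_blur = abs(ellipticity(blurred))  # diluted by the PSF
```

A shape-penalised reconstruction adds a term comparing such moments of the restored and target images to the usual MSE loss.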

        Speaker: Fadi Nammour (CosmoStat, CEA Paris-Saclay)