Journées de Rencontre des Jeunes Chercheurs 2021

Europe/Paris
Village La Fayette - La Rochelle

Avenue de Bourgogne, 17041 La Rochelle, France
http://www.seminaire-conference-la-rochelle.org
https://goo.gl/maps/c2X8hqd9maRShkCm8

The centre is located about 5 km from the La Rochelle train station (Gare de La Rochelle) and about 5 km from the La Rochelle airport (Aéroport de La Rochelle-Ile de Ré). The organisers will provide shuttle transport from both the train station and the airport to the site on the evening of the first day, and from the site back to the train station and the airport on the morning of the last day.
Description

(English version, for French see below)

Organised by the sections "Fields and Particles" and "Nuclear Physics" of the Société Française de Physique (SFP), the "Journées de Rencontre des Jeunes Chercheurs 2021" welcomes all PhD students (from the first to the last year) and young postdocs.

This year it will be held from October 17 to October 23, 2021, at the Village La Fayette in La Rochelle (17), France.

The JRJC are an opportunity for each participant to present their work in a convivial atmosphere and to get from their colleagues an overview of the research currently carried out in France in these fields.

This year the following subjects are proposed:
  • Nuclear Energy
  • Nuclear Structure and Dynamics
  • Nuclear Astrophysics
  • Medical Physics
  • Hadronic Physics
  • Heavy-Ion Collisions
  • Cosmology
  • Instrumentation
  • Standard Model (electroweak)
  • Beyond the Standard Model
  • Theoretical Physics
  • Neutrinos
  • Astroparticles
  • Heavy Flavour

Presentations can be given either in English or in French. The conference social programme includes a half-day trip in the nearby area, as well as one or two public seminars. For any other information, please feel free to contact the secretariat or any member of the organising committee (see below). The deadline for registration is September 3, 2021.


(Français)

Organisées par les divisions "Champs et Particules" et "Physique Nucléaire" de la Société Française de Physique (SFP), les Journées de Rencontre des Jeunes Chercheurs 2021 s'adressent à tous les étudiants en thèse (de la première à la dernière année) et aux jeunes post-doctorants.

Elles auront lieu du 17 au 23 octobre 2021 et se tiendront au village La Fayette, à La Rochelle (17), France.

Les JRJC sont l'occasion pour chaque participant de présenter ses travaux de recherche dans une ambiance conviviale et de partager avec ses collègues une vue d'ensemble des différentes recherches menées à l'heure actuelle dans sa spécialité et dans des domaines proches.

Les thèmes proposés cette année sont les suivants :
  • Énergie nucléaire
  • Structure et dynamique nucléaire
  • Astrophysique nucléaire
  • Physique médicale
  • Physique hadronique
  • Collisions d'ions lourds
  • Cosmologie
  • Instrumentation
  • Modèle standard électrofaible
  • Au-delà du modèle standard
  • Physique théorique
  • Neutrinos
  • Astroparticules
  • Saveurs lourdes

La langue de travail des JRJC est le français, mais les non-francophones peuvent donner leur exposé en anglais. Le programme social comprend, outre une excursion dans la région, une ou deux conférences en soirée pouvant être ouvertes au public. La date limite d'inscription est fixée au 3 septembre 2021. Pour tout renseignement complémentaire, n'hésitez pas à contacter notre secrétariat ou un membre du comité d'organisation (voir ci-dessous).


Pauline Ascher (CENBG) ascher@cenbg.in2p3.fr
Francois Brun (CEA Saclay) francois.brun@cea.fr
Emmanuel Chauveau (CENBG) chauveau@cenbg.in2p3.fr
Rachel Delorme (LPSC) rachel.delorme@lpsc.in2p3.fr
Romain Gaior (LPNHE) romain.gaior@lpnhe.in2p3.fr
Julien Masbou (SUBATECH)  masbou@subatech.in2p3.fr
Laure Massacrier (IJCLab) massacrier@ijclab.in2p3.fr
Antonio Uras (IP2I) antonio.uras@cern.ch
Dimitris Varouchas (IJCLab) dimitris.varouchas@cern.ch
Laura Zambelli (LAPP) laura.zambelli@lapp.in2p3.fr

Participants
  • Alexandre Bigot
  • Alexandre PORTIER
  • Alexis Boudon
  • Amine Boussejra
  • Ang Li
  • Antonio Uras
  • Arnaud MAURY
  • Arthur Beloeuvre
  • Benjamin Quilain
  • Bianca DE MARTINO
  • Claudia De Domincis
  • Deby Treasa KATTIKAT MELCOM
  • Denis Comte
  • Diego Gruyer
  • Elisa Nitoglia
  • Emanuelle Pinsard
  • Fatima HOJEIJ
  • Florian Mercier
  • Florian Ruppin
  • Halime SAZAK
  • Hoa Dinh Thi
  • isabelle COSSIN
  • Jean-Baptiste FILIPPINI
  • Johannès Jahan
  • Jonathan Kriewald
  • Juan Salvador TAFOYA VARGAS
  • Julien Masbou
  • Keerthana KAMALAKANNAN
  • Kinson Vernet
  • Laura Zambelli
  • Laure MASSACRIER
  • Lauri Laatu
  • Leo Lavy
  • Linghua Guo
  • Louis Vaslin
  • Luc Darmé
  • Lucas Martel
  • Lucile Mellet
  • Lucrezia Camilla Migliorin
  • Luka Selem
  • Léonard Imbert
  • Mahbobeh JAFARPOUR
  • Majdouline Borji
  • Malak Hoballah
  • Marco Palmiotto
  • Mario Sessini
  • Mathieu de Bony de Lavergne
  • Matteo Pracchia
  • Maxime Guilbaud
  • Maxime Jacquet
  • Maxime PIERRE
  • Melissa Amenouche
  • Michael Winn
  • Mohamad Kanafani
  • Mykola Khandoga
  • Nemer CHIEDDE
  • océane Perrin
  • Pablo KUNZE
  • Pauline Chambery
  • Philippe Da costa
  • Pu-Kai Wang
  • Romain Bouquet
  • Sabrina Sacerdoti
  • Sacha Daumas
  • Sami Caroff
  • Sara Maleubre Molinero
  • Sihem Sayah
  • Simon Chiche
  • Sullivan Marafico
  • Theraa Tork
  • Thomas CZUBA
  • Thomas Strebler
  • Victor Lebrin
  • Vincent Cecchini
  • Vincent Juste
  • Vlad-George Dedu
  • Xalbat Aguerre
  • Yajun He
  • Yanchun Ding
  • Yasmine DEMANE
  • Yizheng WANG
  • Zechuan Zheng
  • Zhen Li
Secrétariat
    • 09:00 09:15
      Introduction 15m
      Orateurs: Julien Masbou (SUBATECH), Laura Zambelli (LAPP), Laure MASSACRIER (Institut de Physique Nucléaire d'Orsay)
    • 09:15 10:38
      Astroparticle
      Président de session: Sami Caroff (LLR)
      • 09:20
        Session overview 30m
        Orateur: Sami Caroff (LLR)
      • 09:50
        3D Volcano Imaging Using Transmission Muography 23m

        Muography is a recent technique in particle physics in which atmospheric muons are used to study the interior of large targets such as volcanoes. In the case of transmission muography, a detector is used to count and track the muons that survive propagation through the target. To a first approximation, the number of muons that survive propagation through the target depends directly on the amount of matter integrated along their path. The 2D map of the number of muons
        then needs to be converted into a 2D map of density. To do this, the number of muons measured with the detector in each direction is compared to the expected number of muons for different target models obtained by varying the density. For each direction, the simulated density that best reproduces the data is chosen. To estimate the muon survival probability, many experiments use an analytical approximation called the CSDA (Continuous Slowing Down Approximation), which gives the range of matter a particle can cross for a given energy. In the MIM (Muon IMaging) experiment, we use a Monte-Carlo treatment. Using the CSDA approximation, and thus neglecting the stochastic character of the high-energy interactions of the particles with matter, underestimates their survival probability and thus induces a systematic uncertainty on the reconstructed density. For about a kilometre of standard rock, the effect is of the order of 3%.

        Orateur: M. Kinson VERNET
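
        As an editorial illustration of the density-scan idea described in the abstract above, here is a minimal Python sketch; the exponential attenuation law, the attenuation length and all numerical values are illustrative assumptions, not the MIM simulation chain.

        import numpy as np

        # Toy transmission model (illustrative assumption): the expected number of
        # surviving muons falls exponentially with the opacity rho * L along the
        # line of sight, with a hypothetical effective attenuation length `lam`.
        def expected_muons(rho, L_m, n_open_sky=1e4, lam=4e5):  # lam in kg/m^2
            return n_open_sky * np.exp(-rho * L_m / lam)

        def best_density(n_measured, L_m, rho_grid):
            """Pick the density whose prediction best matches the measured count
            (simple Poisson-like chi2, one viewing direction)."""
            n_exp = expected_muons(rho_grid, L_m)
            chi2 = (n_measured - n_exp) ** 2 / np.maximum(n_exp, 1e-9)
            return rho_grid[np.argmin(chi2)]

        rho_grid = np.linspace(500.0, 3000.0, 251)   # candidate densities, kg/m^3
        print(best_density(n_measured=120.0, L_m=800.0, rho_grid=rho_grid))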
      • 10:15
        Radio Morphing: Towards a fast computation of air-shower radio signals 23m

        Upcoming large-scale radio experiments for cosmic-ray detection require massive air-shower simulations to evaluate the radio signal at any antenna position. The modelling of the radio emission can be performed with either microscopic or macroscopic approaches. The former rely on Monte-Carlo simulations that are usually accurate but computationally demanding, while the latter are fast but rely on many free parameters, which limits their accuracy.

        We present here Radio Morphing, a semi-analytical tool designed for a fast and accurate computation of the radio signal of any air shower at any location from the simulation data of a few template ZHAireS showers at given positions. The method provides mean relative differences < 20% on the peak amplitude and mean differences < 5 ns on the peak time compared with the usual Monte-Carlo simulations, while the computation time is reduced by several orders of magnitude. We will discuss here the methodology and performance of this innovative tool.

        Orateur: Simon Chiche (Institut d'Astrophysique de Paris)
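
        A short, hypothetical sketch of how the two figures of merit quoted above (mean relative peak-amplitude difference and mean peak-time difference) can be computed from pairs of simulated traces; the array shapes and sampling step are made up for illustration.

        import numpy as np

        def peak_metrics(traces_morphed, traces_mc, dt_ns=0.5):
            """Compare morphed and reference Monte-Carlo traces antenna by antenna.
            traces_*: arrays of shape (n_antennas, n_samples); dt_ns: sampling step."""
            amp_m = np.max(np.abs(traces_morphed), axis=1)
            amp_r = np.max(np.abs(traces_mc), axis=1)
            t_m = np.argmax(np.abs(traces_morphed), axis=1) * dt_ns
            t_r = np.argmax(np.abs(traces_mc), axis=1) * dt_ns
            rel_amp_diff = np.mean(np.abs(amp_m - amp_r) / amp_r)   # target: < 20 %
            peak_time_diff = np.mean(np.abs(t_m - t_r))             # target: < 5 ns
            return rel_amp_diff, peak_time_diff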
    • 10:38 11:08
      Pause café 30m
    • 11:08 12:43
      Astroparticle
      Président de session: Sami Caroff (LLR)
      • 11:08
        Study of the origins of ultra high energy cosmic rays 23m

        The Pierre Auger Observatory is the largest cosmic-ray observatory to date. It has been built in order to study the most energetic particles in the universe, commonly known as Ultra High Energy Cosmic Rays (UHECRs). With a surface of 3,000$\,{\rm km^{2}}$ (about 30 times the area of Paris), the observatory detects cosmic rays from $10^{17.5}$ to $10^{20.5}\,{\rm eV}$. The energy, the shower depth $X_{\rm max}$ (which is linked to the mass), and the arrival direction are reconstructed. In 2017, the observatory observed a large-scale anisotropy at $E\geq8\times10^{18}\,{\rm eV}$, described as a dipole with a 5.2$\sigma$ significance, pointing to right ascension $\alpha_{\rm d}=100\pm10$ and declination $\delta_{\rm d}=-24_{-13}^{+12}$.
        This direction provides strong evidence for an extragalactic origin of UHECRs. Moreover, in 2018, the collaboration published an indication of an intermediate-scale anisotropy at $E\geq39\times10^{18}\,{\rm eV}$ with a 4.0$\sigma$ significance. The intermediate-scale anisotropy is found by comparing the UHECR sky map with the flux pattern of extragalactic gamma-ray sources (especially starburst galaxies and active galactic nuclei).
        To interpret the data, an astrophysical model has been compared to the UHECR spectrum and shower-depth data through a method called the Combined Fit. Nuclei are injected according to a production rate and following a distribution of sources. The nuclei propagate through space, interacting with the cosmic microwave and infrared backgrounds.
        The Combined Fit then makes it possible to determine the relative importance of propagation and acceleration in shaping the UHECR composition and spectrum.
        Starting from the Combined Fit and from the arrival directions, I will present how we can include the anisotropies in the Combined Fit to obtain a model that describes the three main observables: $X_{\rm max}$, the spectrum and the arrival directions. Such an astrophysical model could constrain the sources in an unprecedented way and could be a key to understanding them.

        Orateur: Sullivan Marafico
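
        For reference, the large-scale anisotropy quoted above is usually modelled as a dipolar modulation of the flux; a minimal form (not necessarily the exact parametrisation used by the collaboration) is

        $$ \Phi(\hat{n}) = \frac{\Phi_0}{4\pi}\left(1 + \vec{d}\cdot\hat{n}\right), $$

        where $\vec{d}$ is the dipole vector pointing towards $(\alpha_{\rm d},\delta_{\rm d})$ and $|\vec{d}|$ is the dipole amplitude.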
      • 11:31
        First observations of gamma-ray bursts with the Large-Sized Telescope 23m

        The Large-Sized Telescope (LST) prototype is currently under commissioning at La Palma. It is the first on-site telescope of the Cherenkov Telescope Array (CTA). CTA is the new generation of Imaging Atmospheric Cherenkov Telescopes (IACTs) for the ground-based detection of very-high-energy (VHE) gamma rays.

        Gamma-ray bursts (GRBs) are short explosions and are among the most energetic phenomena in the universe. Their detection at very high energies is recent, and so far only four events have been detected. Detecting more bursts would help to better understand the emission mechanisms at play. One of the key science drivers of the LST is the detection of these short explosions at VHE. In this perspective, the LST has already started to follow up GRBs after alerts sent by satellites. In this presentation, I will show the first results of these observations.

        Orateur: M. Mathieu de Bony de Lavergne (LAPP)
      • 11:54
        A joint GW–GRB Bayesian study for low-luminosity short GRB population 23m

        We perform a joint gravitational-wave/gamma-ray-burst (GW/GRB) Bayesian analysis in order to put constraints on the low-luminosity end of the short gamma-ray burst (sGRB) population. For this purpose we exploit the results of the modelled search for GW transients associated with short and ambiguous GRBs detected during the O1, O2, O3a and O3b runs of the LIGO/Virgo network, and a broken power law to describe the luminosity function of our sGRB population. We then use the results obtained to estimate the rate distribution of the low-luminosity sGRB population and the joint sGRB/GW detection rate for the future O4 run.

        Orateur: Matteo Pracchia (Virgo group at LAPP)
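
        For concreteness, a broken power law for the sGRB luminosity function, as mentioned above, typically takes the form (generic parametrisation, not necessarily the exact one used in this analysis)

        $$ \phi(L) \propto \begin{cases} (L/L_*)^{-\alpha_{\rm low}}, & L < L_*,\\ (L/L_*)^{-\alpha_{\rm high}}, & L \ge L_*, \end{cases} $$

        where $L_*$ is the break luminosity and $\alpha_{\rm low}$, $\alpha_{\rm high}$ are the two slopes; the low-luminosity end constrained here corresponds to the $L < L_*$ regime.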
      • 12:18
        In search of TeV halos, new astrophysical objects to reveal our gamma sky map 23m

        TeV halos are astrophysical objects recently discovered by the HAWC observatory which extend around pulsars. These sources are electron and positron accelerators that interact with the surrounding magnetic field. Their recent detection is due to the fact that they are only visible in the gamma-ray region, their size spans several degrees in the sky, and they are very faint. To study them, it is therefore necessary to have instruments that are both very sensitive and have a wide field of view, which is technically difficult to achieve. However, their study is important because they dominate TeV emission in the Galaxy and compete with dark matter in explaining the observed excess of positrons arriving at Earth.
        Today some of these objects are revealed by the H.E.S.S. array of imaging atmospheric Cherenkov telescopes, but the construction of the new CTA array, ten times more sensitive, and the implementation of an associated analysis system could reveal hundreds of them and thus allow us to better understand our gamma-ray sky map.
        This presentation aims to explain what the nature of TeV halos is, how they evolve, with which instruments we detect them and how we analyse them. This will be done by taking a broader look at gamma-ray astronomy and particle accelerators in astrophysics.

        Orateur: Pauline Chambery (CENBG)
    • 14:00 16:10
      Instrumentation
      Président de session: Sabrina Sacerdoti (APC-Paris,France)
      • 14:00
        Session overview 30m
        Orateur: Sabrina Sacerdoti (APC-Paris,France)
      • 14:30
        Characterization of light scattering point defects in high-performance mirrors for gravitational wave detectors 23m

        The highly reflective mirrors of the gravitational-wave detectors LIGO and Virgo present in their coatings many micrometer-size defects that scatter light inside the interferometer. This scattered light induces a loss of laser power of the order of a few tens of parts per million (ppm) and a phase noise due to recombination with the main beam after reflection on the tube walls. This phenomenon limits the sensitivity of the detector and impacts the ability to detect astrophysical events. A reduction of the scattered light is thus required in order to improve the optical performance of the coatings for the new mirrors of the Advanced LIGO and Advanced Virgo Plus upgrades. For this purpose we studied the point defects for each material and analysed the impact of different parameters in order to compare the density and the size distribution of the defects.

        Orateur: Sihem Sayah (LMA-IP2I)
      • 14:53
        Design of a PG detector for online monitoring in hadrontherapy 25m

        Proton therapy is a tumor treatment that takes advantage of the Bragg peak, a very sharp peak that enables a highly localized energy deposition at the end of the particle range. However, the determination of the Bragg peak position is subject to uncertainties that require safety margins during the irradiation of the patient, therefore decreasing the targeting efficiency in favor of a safer treatment. Online monitoring of proton therapy would allow real-time localization of the Bragg peak position, thus maximizing treatment accuracy. Proton range measurement can be provided by the detection of prompt gammas (PG), secondary particles generated almost instantaneously following a proton-matter nuclear collision.

        We propose a new system for real-time imaging of the Bragg peak, based on the time-of-flight measurement of the PG with Cherenkov-based detectors: Prompt Gamma Time Imaging. The precision on the Bragg peak location is directly related to the time resolution of our detection system, and simulations have shown that a 100 ps rms time resolution would make it possible to reach a millimetric monitoring precision. Through experimental tests, the time resolution of a detection-system prototype has already been estimated at 135 ps rms, which resulted in a sensitivity of 4 mm to a shift of the Bragg peak location with only 600 detected PGs.

        Orateur: Maxime Jacquet (LPSC)
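
        A purely illustrative back-of-the-envelope estimate, assuming (simplistically) that the precision on the mean PG arrival time improves as $\sigma_t/\sqrt{N}$ and converting it to a length with the speed of light; the real relation to the Bragg-peak position also involves the proton transit time and the detector geometry.

        import math

        sigma_t = 135e-12   # single-PG time resolution of the prototype, in s
        n_pg = 600          # number of detected prompt gammas
        c = 3.0e8           # speed of light, m/s

        sigma_t_mean = sigma_t / math.sqrt(n_pg)     # ~5.5 ps on the mean time
        print(f"{c * sigma_t_mean * 1e3:.1f} mm")    # ~1.7 mm, same order as the quoted 4 mm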
      • 15:18
        Development of a monolithic diamond ΔE-E telescope for particle identification and characterization of diamond detector using the ToF-eBIC technique 23m

        Diamond is a promising material for particle detection due to its very high resistivity, excellent charge-transport properties (high mobility and long lifetime) and high radiation hardness. Therefore, diamond is a particularly interesting material for studying charged particles such as alpha particles or fission fragments. For this purpose, a monolithic diamond ΔE-E telescope is under development. In this kind of detector, an incident particle first deposits a part of its energy (proportional to $Q^2/v^2$) in the ΔE stage before stopping in the E stage, where it deposits its remaining energy (proportional to $A v^2$). The correlation between the two energy deposits leads to the identification of the incident particle. Thus, the ΔE-E telescope can have various applications in nuclear physics, such as fission-fragment detection or the identification of particles generated in radiation environments.
        In addition, a time-resolved eBIC (electron Beam Induced Current) setup has been developed in order to study the signals resulting from the interaction of short-range charged particles in diamond detectors, as well as the charge-transport properties of diamond. The strength of this setup is its ability to easily monitor the energy deposited in the detector, the number of interactions per second and the spatial position of the area of interest. Charge-collection mappings of diamond detectors have thus been performed and the homogeneity of the response has been studied. Moreover, the charge properties of diamond have been investigated over a large range of temperatures (down to 4 K), and some very interesting effects due to the anisotropy of the conduction band of diamond have been highlighted.

        Orateur: Alexandre Portier (LPSC et Institut Néel)
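
        The identification principle quoted in the abstract can be summarised in one line: with $\Delta E \propto Q^2/v^2$ deposited in the thin stage and a residual energy $E \propto A v^2$ in the thick stage, the product

        $$ \Delta E \times E \;\propto\; \frac{Q^2}{v^2}\, A v^2 \;=\; A\,Q^2 $$

        is, to first order, independent of the velocity, so events from different species line up on distinct branches in the $(\Delta E, E)$ plane.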
      • 15:41
        Simulation and instrumentation for the future Electron-Ion Collider 25m

        In order to study the internal structure of nucleons and nuclei and address some important outstanding questions in nuclear physics, a new Electron-Ion Collider (EIC) is planned to be built at Brookhaven National Lab (NY, USA). The EIC will collide a high energy proton/ion beam with a high energy electron beam. High performance detectors will be used to detect the particles created in the collisions. Detailed simulations and instrumentation developments are still required to better define the detectors that will soon start to be constructed.

        Orateur: pu-kai WANG
    • 16:10 16:45
      Pause café 35m
    • 16:45 19:32
      Instrumentation
      Président de session: Sabrina Sacerdoti (APC-Paris,France)
      • 16:45
        Machine Learning for Real-Time Processing of ATLAS Liquid Argon Calorimeter Signals with FPGAs 23m

        The ATLAS experiment at the Large Hadron Collider (LHC) is operated at CERN and measures proton-proton collisions at multi-TeV energies with a repetition frequency of 40 MHz. Within the Phase-II upgrade of the LHC, the readout electronics of the liquid-argon (LAr) calorimeters of ATLAS are being prepared for high-luminosity operation, expecting a pileup of up to 200 simultaneous proton-proton interactions. Moreover, the calorimeter signals of up to 25 subsequent collisions are overlapping, which increases the difficulty of energy reconstruction by the calorimeter detector. Real-time processing of digitized pulses sampled at 40 MHz is performed using field-programmable gate arrays (FPGAs).

        To cope with the signal pileup, new machine learning approaches are explored: convolutional and recurrent neural networks outperform the optimal signal filter currently used, both in assignment of the reconstructed energy to the correct proton bunch crossing and in energy resolution. The improvements concern in particular energies derived from overlapping pulses.

        Since the implementation of the neural networks targets an FPGA, the number of parameters and the mathematical operations need to be well controlled. The trained neural network structures are converted into FPGA firmware using automated implementations in hardware description language and high-level synthesis tools.

        Very good agreement between neural network implementations in FPGA and software-based calculations is observed. The prototype implementations on an Intel Stratix 10 FPGA reach maximum operation frequencies of 344–640 MHz. Applying time-division multiplexing allows the processing of 390–576 calorimeter channels by one FPGA for the most resource-efficient networks. Moreover, the latency achieved is about 200 ns. These performance parameters show that a neural-network based energy reconstruction can be considered for the processing of the ATLAS LAr calorimeter signals during the high-luminosity phase of the LHC.

        Orateur: Lauri Laatu
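
        A generic, minimal sketch (in Keras, with made-up toy data) of the kind of small convolutional network discussed above, regressing a deposited energy from a short window of 40 MHz ADC samples; it is not the collaboration's architecture, and the pulse shape, window length and layer sizes are illustrative assumptions.

        import numpy as np
        import tensorflow as tf

        # Toy data: each "event" is a window of ADC samples containing one triangular
        # pulse of random amplitude plus noise; the target is the pulse amplitude.
        rng = np.random.default_rng(0)
        n_events, n_samples = 5000, 20
        pulse = np.concatenate([np.linspace(0, 1, 4), np.linspace(1, 0, 8)[1:]])  # hypothetical shape
        x = 0.05 * rng.standard_normal((n_events, n_samples))
        amps = rng.uniform(0.0, 5.0, n_events)
        for i, a in enumerate(amps):
            x[i, 3:3 + len(pulse)] += a * pulse
        x = x[..., np.newaxis]                     # shape (events, samples, 1)

        # Small 1D CNN, deliberately tiny so that an FPGA port stays plausible.
        model = tf.keras.Sequential([
            tf.keras.layers.Conv1D(4, 3, activation="relu", input_shape=(n_samples, 1)),
            tf.keras.layers.Conv1D(4, 3, activation="relu"),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(1),              # regressed energy (here: pulse amplitude)
        ])
        model.compile(optimizer="adam", loss="mse")
        model.fit(x, amps, epochs=5, batch_size=128, verbose=0)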
      • 17:10
        MACHINE LEARNING FOR REAL-TIME PROCESSING OF ATLAS LIQUID ARGON CALORIMETER SIGNALS WITH FPGAS 23m

        The Phase-II upgrade of the Large Hadron Collider (LHC) will increase its instantaneous luminosity by a factor of around 10, leading to the High-Luminosity LHC (HL-LHC). At the HL-LHC, the number of proton-proton collisions in one bunch crossing, also known as pileup, increases significantly, putting more stringent requirements on the LHC detectors' electronics and real-time data-processing capabilities. The ATLAS Liquid Argon (LAr) calorimeter measures the energy of particles produced in LHC collisions and helps identify interesting events. The computation of the deposited energy is performed in real time using dedicated data-acquisition electronic boards based on FPGAs. FPGAs are chosen for their capacity to treat large amounts of data with very low latency. The computation of the deposited energy is currently done using optimal filtering algorithms that assume a nominal pulse shape of the electronic signal. These filter algorithms are adapted to the ideal situation with very limited pileup and no timing overlap of the electronic pulses in the detector.

        However, with the increased luminosity and pileup, the performance of the filter algorithms decreases significantly, and no further extension or tuning of these algorithms could recover the lost performance. The back-end electronic boards for the Phase-II upgrade of the LAr calorimeter will use the next high-end generation of Intel FPGAs, with increased processing power and memory. This is a unique opportunity to develop the necessary tools enabling the use of more complex algorithms on these boards, making an artificial-intelligence implementation possible in this phase.

        We developed neural-network (NN) algorithms based on CNNs (Conv-3 and Conv-4) and RNNs (Vanilla-RNN and LSTM) with significant performance improvements. Especially for overlapping pulses, NNs outperform the optimal filtering algorithms.

        The implementation was done in VHDL and Quartus HLS code. The implementation results on Intel Stratix 10 FPGAs, including the resource usage, the latency and the operation frequency, can be seen in the table presented in the poster. We also carried out studies of time multiplexing to reduce the resource usage, since multiplexing enables one network instance to handle multiple calorimeter cells.

        Further optimisation of resource usage and execution frequency is ongoing, as well as hardware tests on Stratix 10 FPGAs.

        Orateur: Nemer Chiedde
      • 17:33
        Improvement of the vertex detector resolution in the Belle II experiment 23m

        The Belle II Silicon Vertex Detector (SVD) is part of the Super B factory composed of the asymmetric-energy $e^+e^-$ collider SuperKEKB and the Belle II experiment; it is used to identify decay vertices as well as to reconstruct tracks and provide particle-identification information.
        In order to correctly reconstruct tracks, the position of the hits created by charged particles passing through the detector needs to be known with precision. It is also important to estimate the resolution of the hit-position measurement, in order to correctly propagate the error on the hit positions to the track fitting, as well as to develop methods to optimise this resolution.
        Since the start of data taking in 2019, the SVD has demonstrated reliable and highly efficient operation, even running in an environment with harsh beam backgrounds induced by the world's highest instantaneous luminosity. The cluster-position resolution has been estimated in simulation, then on data using a dataset of approximately 16 fb$^{-1}$ of integrated luminosity collected by Belle II. While the SVD performance is already very good, there is still room for improvement in the estimation of the cluster-position resolution.
        This talk will present the latest studies to improve the hit-position estimation in the vertex detector by correcting charge couplings between silicon strips, a refined estimation of the cluster-position errors, as well as the work done on simulation to better describe the detector, in order to improve the data/simulation agreement.

        Orateur: Lucas Martel (CNRS - IPHC)
      • 17:56
        Silicon trackers for neutrino tagging at long baseline experiments 23m

        With the neutrino-tagging project, we propose a new way to study the as yet unknown parameters of neutrino physics. The neutrino-tagging technique consists in instrumenting the beamline of a long-baseline experiment with silicon trackers that can precisely measure the properties of the charged particles participating in a two-body decay producing neutrinos. The properties of a state-of-the-art silicon tracker, especially its time resolution, allow a one-to-one match with the neutrinos detected after oscillation at the far detector, granting unprecedented precision on the measurement of the neutrino energy and of the parameters of neutrino physics, such as the CP-violating phase. The aim of the project is to demonstrate the feasibility of this technique, first by studying the factors that affect the time resolution of silicon trackers, and then by applying the neutrino-tagging technique to data from the NA62 experiment, exploiting its high-performance tracker and its calorimeters.

        Orateur: Bianca DE MARTINO (CNRS, UMR7346)
      • 18:19
        Study and development of the cryogenic readout electronics for the very-low-threshold detectors of the Ricochet experiment, searching for new physics via the measurement of coherent elastic neutrino-nucleus scattering (CENNS) 23m

        The Ricochet experiment aims to measure the CENNS process (coherent elastic neutrino-nucleus scattering) at low energy with a precision of the order of 1%, in order to confront it with the Standard Model and to search for possible signs of new physics. It will be installed close to the nuclear reactor of the Institut Laue-Langevin in Grenoble by the end of 2022. The experiment will be composed of two sets of detectors: CryoCube (Ge) and Q-Array (Zn). The CryoCube consists of 27 detectors of 38 g, each equipped with an NTD thermal sensor and electrodes for a combined ionisation-heat measurement. The detector performance has to be improved in order to measure the CENNS process precisely. To this end, low-noise electronics based on HEMT transistors, developed by CNRS/C2N, are being developed. The work presented will cover the noise models used, the characterisation of the HEMTs, as well as the first measurements on detectors.

        Orateur: Jean-baptiste Filippini (IP2I/UCBL)
    • 14:00 16:02
      Beyond Standard Model
      Président de session: Thomas Strebler (CPPM, Aix-Marseille Université, CNRS/IN2P3 (FR))
      • 14:00
        Session overview 30m
        Orateur: Thomas Strebler (CPPM, Aix-Marseille Université, CNRS/IN2P3 (FR))
      • 14:30
        Search for exotic tensor couplings in the nuclear beta decay of 6He: b-STILED project 23m

        The Standard Model of particle physics is the set of quantum theories that governs the behaviour of elementary particles and fundamental forces. It describes the strong, weak and electromagnetic interactions. However, despite the huge successes of the Standard Model, there are many strong observational and theoretical reasons to believe that it is not the ultimate model to describe nature; it can rather be considered as the low-energy limit of a wider theory. This motivates physicists to search for a new theoretical framework involving new physics beyond the Standard Model. This search is being pursued on three frontiers: the cosmological frontier, the high-energy frontier and the high-precision frontier.
        This work belongs to the last category, which consists in performing high-precision measurements at very low energy. The aim of these measurements is to detect any small deviation from the Standard Model predictions which, if observed, would be proof of the presence of new physics.
        The b-STILED (Search for Tensor Interactions in nucLear bEta Decay) project aims to extract the Fierz interference term in the pure Gamow-Teller transition ($b_{\rm GT}$) of $^6$He with a total uncertainty of $\Delta b_{\rm GT}=10^{-3}$ at 1$\sigma$ [1].

        References:
        [1] X. Fléchard, E. Liénard, X. Mougeot, O. Naviliat-Cuncic, G. Quemener, J.C. Thomas. Improved Search for Tensor Interactions in Nuclear Beta Decay. Proposal to the AAPG2020, CE31.

        Orateur: Mohamad Kanafani
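
        For context, the Fierz interference term $b_{\rm GT}$ targeted by b-STILED enters the allowed beta-decay spectrum as a $1/E_e$ distortion; schematically (standard allowed-decay form),

        $$ \frac{d\Gamma}{dE_e} \;\propto\; F(Z,E_e)\, p_e E_e\,(E_0-E_e)^2 \left(1 + b_{\rm GT}\,\frac{m_e}{E_e}\right), $$

        where $F(Z,E_e)$ is the Fermi function and $E_0$ the endpoint energy; a non-zero $b_{\rm GT}$ would signal exotic tensor couplings.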
      • 14:53
        Search for CP violation in nuclear beta decay: The MORA project 23m

        P. Delahaye$^{1} $, E. Liénard$^{2}$ , I. Moore$^{3}$ , G. Ban$^{2}$ , M.L. Bissell$^{4}$ , S. Daumas-Tschopp$^{2}$, R.P. De Groote$^{3}$ ,
        F. De Oliveira$^{1}$ , A. De Roubin$^{5}$ , T. Eronen$^{3}$ , A. Falkowski$^{6}$ , C. Fougères$^{1}$, X. Fléchard$^{2}$ , S. Geldhof$^{7}$ ,
        W. Gins$^{3}$ , N. Goyal$^{1}$ , M. Gonzalez – Alonso$^{8}$ , A. Jaries$^{3}$ , A. Jokinen$^{3}$, A. Kankainen$^{3}$ , M. Kowalska$^{9}$, A. Koszorus$^{3}$ , N. Lecesne$^{1}$ , R. Leroy$^{1}$ , Y. Merrer$^{2}$ , G. Neyens$^{7,9}$ , G. Quéméner$^{2}$ , M. Reponen$^{3}$, S.Rinta-Antila$^{3}$ , A. Rodriguez–Sanchez$^{6}$ , N. Severijns$^{7}$ , A. Singh$^{1}$ , J. C. Thomas$^{1}$, V. Virtanen$^{3}$
        ${}^1 \! $GANIL, Bd H. Becquerel, 14000 Caen, France
        ${}^2 \! $ Normandie Univ, ENSICAEN, UNICAEN, CNRS/IN2P3, LPC Caen, 14000 Caen, France
        ${}^3 \! $ Department of Physics, University of Jyväskylä, Survontie 9, 40014 Jyväskylä, Finland
        ${}^4 \! $ School of Physics and Astronomy, University of Manchester, Manchester, UK
        ${}^5 \! $ CENBG, 19 chemin du Solarium, 33175 Gradignan, France
        ${}^6 \! $ IJCLab, 15 Rue Georges Clemenceau, 91400 Orsay, France
        ${}^7 \! $ Instituut voor Kern- en Stralingsfysica, KU Leuven, B-3001 Leuven, Belgium
        ${}^8 \! $ IFIC - University of Valencia, 46980 Paterna, Spain
        ${}^9 \! $ CERN, Esplanade des Particules,
        CH-1211 Geneva, Switzerland

        Why are we living in a world of matter? What is the reason for the strong matter-antimatter asymmetry we observe in the Universe?
        To account for this large matter-antimatter asymmetry, CP violation has to be discovered at a level beyond the CP violation predicted to occur in the Standard Model via the quark-mixing mechanism.
        The MORA (Matter's Origin from RadioActivity of trapped and oriented ions) project aims to search for new sources of CP violation in beta decay through the measurement of the triple D correlation, with an unprecedented precision of the order of $10^{-5}$ [1]. The MORA experiment uses an elegant polarisation technique, which combines the high efficiency of ion trapping [2] with laser orientation. A ring of detectors allows the measurement of the coincidences between the beta particles and the recoil ions coming from the trapped radioactive ions. The D parameter can be determined from the asymmetry in the counting rate when inverting the polarisation. This measurement should potentially enable, for the first time, a probe of the Final-State Interaction effect, which mimics a non-zero D correlation at or below $10^{-4}$.
        The MORA apparatus is currently being tested both at LPC Caen and at GANIL before moving to JYFL, where suitable lasers are available for the polarisation of $^{23}$Mg$^+$ ions. MORA will later be installed in the DESIR hall at GANIL, where better production rates are expected, offering the opportunity to reach unprecedented sensitivities to New Physics.

        References
        [1] P. Delahaye et al., Hyperfine Interact 240, 63 (2019).
        [2] M. Benali et al., Eur. Phys. J. A 56, 163 (2020).
        This presentation aims to showcase the current status of the MORA experiment (commissioning, tests on detectors, ...) and the perspectives beyond it (the move to JYFL, first measurements, ...).

        Orateur: Sacha Daumas-Tschopp (CNRS(LPC Caen))
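
        For reference, in the standard (Jackson-Treiman-Wyld) parametrisation of polarised nuclear beta decay, the D correlation measured by MORA appears as the triple-product term

        $$ d\Gamma \;\propto\; 1 + \cdots + D\,\frac{\langle \vec{J}\rangle}{J}\cdot\frac{\vec{p}_e\times\vec{p}_\nu}{E_e E_\nu}, $$

        where $\langle \vec{J}\rangle/J$ is the nuclear polarisation and $\vec{p}_e$, $\vec{p}_\nu$ are the electron and neutrino momenta; D is odd under time reversal, so a non-zero value beyond the final-state-interaction contribution would signal CP violation via the CPT theorem.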
      • 15:16
        Search for Light Dark Matter with DAMIC-M 23m

        DAMIC-M (Dark Matter in CCDs at Modane) is a near-future experiment that aims at searching for low-mass dark-matter particles through their interactions with silicon atoms in the bulk of charge-coupled devices (CCDs). The pioneer of this technique was the DAMIC experiment at SNOLAB. Its successor, DAMIC-M, will have a detector mass 25 times larger and will employ a novel CCD technology (skipper amplifiers) to achieve sub-electron readout noise. Strengthened by these characteristics, DAMIC-M will reach unmatched sensitivity to the dark-matter candidates of the so-called hidden sector. A challenging requirement is to control the radiogenic background down to the level of a fraction of an event per keV per kg-day of target exposure. To meet this condition, Geant4 simulations are being exploited to optimise the detector design, drive the material selection and handling, and test background-rejection techniques. Furthermore, precise measurements are being carried out with skipper CCDs to characterise the spectrum of Compton-scattered electrons, which represent a dominant source of environmental background at low energy.
        This talk gives an overview of the project, including the estimated background and the strategies implemented for its mitigation and characterisation.

        Orateur: Claudia De Domincis
      • 15:39
        Searches for effects Beyond the Standard Model in semileptonic decays of B mesons at LHCb 23m

        This work is a search for charge-parity violation (CPV) in $B\to D^* \ell \nu$ transitions. In the Standard Model (SM) there is no CP asymmetry in this type of decay; however, two possible New Physics (NP) ways to obtain CPV are investigated in this thesis. One possibility is given by triple-product asymmetries in four-body decays of B mesons, while the other is the interference of two (or more) decay amplitudes with overlapping $D^{**}$ resonances. A Monte-Carlo (MC) study of $B\to D^* \mu \nu$ was conducted to investigate the sensitivity to CPV in different NP scenarios. For this purpose, HAMMER (a tool that reweights SM MC distributions to NP scenarios) was used. The analysis includes studies of stripping lines, trigger lines, the offline selection, data-simulation comparisons of the quantities of interest and studies of the systematic uncertainties. A kinematic reconstruction of events with missing neutrinos using a full refit of the decay tree was implemented. This gives a 10-20% improvement in angular resolution with respect to the techniques previously used for semileptonic decays.

        Orateur: Vlad Dedu (Aix-Marseille-University, CNRS/IN2P3, CPPM, Marseille, France)
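
        For reference, the triple-product asymmetries mentioned above are commonly built from a T-odd observable $C_T$ (e.g. a triple product of final-state momenta) as

        $$ A_T = \frac{\Gamma(C_T>0)-\Gamma(C_T<0)}{\Gamma(C_T>0)+\Gamma(C_T<0)},\qquad a_T^{CP} = \tfrac{1}{2}\left(A_T - \bar{A}_T\right), $$

        where $\bar{A}_T$ is the corresponding asymmetry for the CP-conjugate decay; a non-zero $a_T^{CP}$ is a genuine CP-violation signal, free of the strong-phase effects that can fake a non-zero $A_T$ alone.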
    • 16:02 16:30
      Pause café 28m
    • 16:30 17:16
      Beyond Standard Model
      Président de session: Thomas Strebler (CPPM, Aix-Marseille Université, CNRS/IN2P3 (FR))
      • 16:30
        Higgs pair production in the $b\bar{b}\gamma\gamma$ final state with ATLAS Run 2 data at the LHC 23m

        The Higgs boson self-coupling provides information about the structure of the Higgs potential.
        A direct probe of the self-coupling of the Higgs boson is possible by studying Higgs boson pair production.
        Furthermore, an enhancement of the Higgs boson pair production rate with respect to the Standard Model (SM) prediction would point to new beyond-the-Standard-Model (BSM) physics and may be within the sensitivity reach of the proton-proton collision data collected at $\sqrt{s} = 13$ TeV during the Run 2 of the Large Hadron Collider (LHC).
        On the other hand, many BSM theories predict the existence of new heavy scalar particles; an analysis has been performed searching for a new scalar with masses in the range between 251 GeV and 1 TeV.

        Orateur: Linghua Guo
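
        For context, the self-coupling probed by Higgs pair production appears in the Higgs potential expanded around the vacuum expectation value $v$,

        $$ V(h) = \tfrac{1}{2} m_H^2 h^2 + \lambda_{HHH}\, v\, h^3 + \tfrac{1}{4}\lambda_{HHHH}\, h^4, \qquad \lambda_{HHH}^{\rm SM} = \frac{m_H^2}{2 v^2}, $$

        so a measured deviation of $\lambda_{HHH}$ from its SM value would directly modify the $HH$ production rate.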
      • 16:53
        Search for New Physics with unsupervised Machine Learning 23m

        The Standard Model of particle physics is the model that best describes our current knowledge of elementary particles and their interactions. However, it cannot explain everything. For this reason, experiments like ATLAS try to find the constituents of New Physics beyond the Standard Model.

        In order to analyse the data produced by these experiments, Machine Learning is a very popular tool. This talk will present a new way to search for New Physics, combining an anomaly-detection algorithm based on unsupervised Machine Learning with a model-independent bump-hunting tool. A concrete example of application will be given using the data from the LHC Olympics 2020 challenge [1].

        [1] https://lhco2020.github.io/homepage/

        Orateur: Louis Vaslin (LPC Clermont)
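
        A minimal, generic sketch of the anomaly-detection-plus-bump-hunt idea described above (not the specific algorithm of the talk): an autoencoder is trained on the bulk of the data, its reconstruction error is used as an anomaly score, and the invariant mass of the most anomalous events is then histogrammed for a bump search. Feature counts and thresholds are made up.

        import numpy as np
        import tensorflow as tf

        rng = np.random.default_rng(1)
        x = rng.normal(size=(20000, 8)).astype("float32")   # toy event features (hypothetical)
        mass = rng.exponential(500.0, size=20000)           # toy invariant mass [GeV]

        # Small autoencoder trained on the (mostly background) data itself.
        ae = tf.keras.Sequential([
            tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
            tf.keras.layers.Dense(2, activation="relu"),    # bottleneck
            tf.keras.layers.Dense(4, activation="relu"),
            tf.keras.layers.Dense(8),
        ])
        ae.compile(optimizer="adam", loss="mse")
        ae.fit(x, x, epochs=5, batch_size=256, verbose=0)

        # Anomaly score = per-event reconstruction error; keep the 1% most anomalous.
        score = np.mean((ae.predict(x, verbose=0) - x) ** 2, axis=1)
        selected = mass[score > np.quantile(score, 0.99)]

        # The selected-mass histogram is what a model-independent bump-hunting tool
        # would then scan for a localised excess over a smooth background.
        counts, edges = np.histogram(selected, bins=40, range=(0.0, 3000.0))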
    • 17:20 18:40
      Standard Model
      Président de session: Mykola Khandoga (LPNHE)
      • 17:20
        Session overview 30m
        Orateur: Mykola Khandoga (LPNHE)
      • 17:50
        Measurement of the polarisation of the vector bosons in the WZ channel with the ATLAS detector at the LHC 23m

        The vector bosons of the electroweak theory, $W^{\pm}$ and $Z$, are the only ones to possess a longitudinal polarisation degree of freedom, owing to their non-zero mass. This longitudinal polarisation is of theoretical interest because it is linked to the mechanism of spontaneous electroweak symmetry breaking. Moreover, the study of their polarisations is a way to test the Standard Model in a sector that is still little explored: the triple gauge couplings of the $W^{\pm}$ and $Z$ bosons. We will show here how the polarisation of the vector bosons is measured in the leptonic $WZ$ channel at the LHC with the ATLAS experiment. In particular, we will present the simultaneous measurement of the polarisation of both bosons, highlighting the existence of correlations.

        Orateur: Luka Selem
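
        For reference, the polarisation fractions of a $W$ boson are usually extracted from the decay-lepton angular distribution in the boson rest frame (standard helicity decomposition; sign conventions depend on the definition of $\theta^*$, and the $Z$ case is analogous with different lepton couplings):

        $$ \frac{1}{\sigma}\frac{d\sigma}{d\cos\theta^*} = \frac{3}{8} f_{\rm L} \left(1 \mp \cos\theta^*\right)^2 + \frac{3}{8} f_{\rm R} \left(1 \pm \cos\theta^*\right)^2 + \frac{3}{4} f_0 \sin^2\theta^*, $$

        with $f_{\rm L}+f_{\rm R}+f_0=1$, where $f_0$ is the longitudinal fraction of interest.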
      • 18:13
        Measurement of the CP properties of the Higgs boson 23m

        A new boson whose properties are consistent with those of the Standard Model Higgs boson was discovered in 2012 by the CMS and ATLAS collaborations at CERN. Since then, efforts have focused on the precision measurement of some of its characteristics. The goal of this presentation is to present the measurement of the CP properties of the Higgs boson through its various couplings to the other particles of the Standard Model of particle physics.

        Some studies have already excluded a purely pseudoscalar CP state of the Higgs boson through its couplings to vector bosons. On the other hand, the observation of a pseudoscalar fraction in the CP structure of its Yukawa couplings to fermions is not excluded. Since each fermion has its own Yukawa coupling, a separate analysis in each accessible channel is necessary. In particular, the results of two studies will be presented here: one in the $t\overline{t}H$ production channel and one in the $H\rightarrow\tau\tau$ decay channel, to which the CMS team in Strasbourg contributes.

        In the context of my thesis, I will also present the plans to improve the sensitivity to the CP state of the Higgs boson in view of Run 3 of the LHC, whose start is scheduled for March 2022. Following the team's previous results, this thesis focuses on the $H\rightarrow\tau\tau$ decay channel, on the various hadronic channels from the decay of the $\tau$-lepton pair, as well as on the channel where one of the taus decays into a muon.

        Orateur: Mario Sessini
    • 09:00 10:32
      Standard Model
      Président de session: Mykola Khandoga (LPNHE)
      • 09:00
        Electron energy resolution corrections for calibration of the ATLAS Liquid Argon Calorimeter 23m

        The calibration of the Liquid Argon electromagnetic calorimeter of the ATLAS experiment is done with $\mathrm{Z}\rightarrow ee$ data and MC. While the continuous efforts of the collaboration have improved the agreement between both samples, a non-negligible discrepancy remains between the data and MC dilepton invariant-mass lineshape that has not been accounted for by existing corrections. As the measurements coming from the tracker (and their simulation) are highly precise, the energy measurement in the calorimeter seems to be the most likely culprit.

        This study aims to better understand the mass-lineshape discrepancy by performing energy-resolution corrections on MC. These are performed on an event-by-event basis with scalings of $\Delta = E_\mathrm{reco} - E_\mathrm{truth}$ via some parametrisation $\Delta' = f_\eta(\Delta,E^\mathrm{T}_\mathrm{truth})$, where the explicit dependence on $E^\mathrm{T}_\mathrm{truth}$ seeks to account for the changing kinematics of the electron pair across different regions of the calorimeter. As the $\Delta'$ correction translates into a shape deformation of the energy-resolution distribution, it makes it possible to account for specific effects, such as tails and negative smearing corrections, which have an important effect on the lineshape agreement.

        Orateur: Juan Salvador TAFOYA VARGAS (IJCLab / Université Paris-Saclay)
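
        A minimal toy illustration of the event-by-event rescaling described above: the resolution variable $\Delta$ is transformed and the reconstructed energy recomputed. The linear form of the transformation and its coefficients are placeholders; the actual parametrisation $f_\eta(\Delta, E^{\rm T}_{\rm truth})$ is specific to the analysis.

        import numpy as np

        def correct_resolution(e_reco, e_truth, alpha=1.05, beta=0.0):
            """Toy version of Delta' = f_eta(Delta, E^T_truth): here just a linear
            rescaling of Delta = E_reco - E_truth (alpha, beta are placeholders)."""
            delta = e_reco - e_truth
            delta_prime = alpha * delta + beta
            return e_truth + delta_prime       # corrected reconstructed energy

        rng = np.random.default_rng(2)
        e_truth = rng.uniform(20.0, 100.0, 10000)                  # GeV, toy MC electrons
        e_reco = e_truth * (1.0 + 0.02 * rng.standard_normal(10000))
        e_corr = correct_resolution(e_reco, e_truth)
        print(np.std(e_reco - e_truth), np.std(e_corr - e_truth))  # resolution before/after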
      • 09:23
        Boosted H->bb tagging in ATLAS 23m

        The Standard Model (SM) of particle physics summarises the fundamental interactions of the strong, weak and electromagnetic forces and has been tested successfully for decades. However, there are many phenomena which cannot be explained within the SM. It is therefore necessary to search for new physics at the high-energy frontier, probed by the Large Hadron Collider (LHC) now and in the future. The Higgs boson, which gives mass to the elementary particles in the SM, is one of the keys to Standard Model measurements as well as to New Physics searches. Boosted Higgs bosons decaying via the dominant $H \rightarrow b \bar{b}$ mode are an essential ingredient of a number of LHC physics signatures. This talk is dedicated to the identification of boosted $H \rightarrow b \bar{b}$ decays in current and future ATLAS analyses.
        In current physics analyses, large-R jets with two b-tagged associated Variable-Radius (VR) track jets are taken as $H \rightarrow b \bar{b}$ candidates. The performance, calibrations and applications have been studied based on individual b-jets, and many interesting results have been produced. However, at very high energy the two b-jets from the Higgs boson are highly collimated, with the result that the separation of the two b-jets becomes less efficient. We are therefore motivated to develop more efficient tagging techniques for future studies. The boosted $X \rightarrow b \bar{b}$ tagger, recently developed in ATLAS, focuses on tagging one large-radius jet which contains two b-hadrons. The background-rejection efficiency is significantly improved. The calibrations of the tagger were published recently, so that the tagger can be used in physics analyses. We are looking forward to the performance of the tagger in physics analyses and to new ideas in boosted $H \rightarrow b \bar{b}$ tagging.

        Orateur: Yajun HE
      • 09:46
        Off-shell Higgs into 4 leptons & electron tracking in ATLAS 23m

        The Higgs decay into two Z bosons, each Z decaying into two charged leptons (hence the Higgs-to-4-lepton decay), is called the golden channel, as it is one of the Higgs decay channels with the cleanest signal. The study of the off-shell Higgs offers new possibilities of analysis beyond the on-shell data. The off-shell region is defined by a centre-of-mass energy of more than 220 GeV.
        We thus aim to analyse ATLAS' Run 2 data in this channel and in the off-shell region using the framework of EFTs (Effective Field Theories), and in particular the SMEFT (Standard Model EFT), which aims to better understand the deviations of the data with respect to the SM. The big-picture goal is to generate trustworthy Monte-Carlo samples for the relevant EFT operators in order to fit the data to the SMEFT and measure the Wilson coefficients of those operators. In my work so far, I have focussed on the Monte-Carlo generation process in order to compare and validate several software versions.
        As part of my ATLAS Qualification Task (QT), I am also working on the software aspect of ITk: for the HL-LHC, ATLAS will replace its Inner Detector with an all-silicon Inner Tracker (ITk). Alongside the instrumental upgrade, ATLAS will also (re)introduce the ACTS software, designed for better efficiency in tracking charged particles. ACTS (A Common Tracking Software) is an experiment-independent software package currently under development, and its integration into ATLAS-ITk is ongoing.
        Electron tracking is particularly challenging because of bremsstrahlung, as the particle loses energy as it progresses through the tracker. A new tracking algorithm is currently being developed in the team in order to better address this issue. Before working to implement and integrate this algorithm in a release build of ACTS, the performance (physical correctness on the one hand and computational performance on the other) of this new tracking algorithm must be compared with that of the reference one. To do this, pull plots are a useful tool to gauge the physical correctness of the algorithm.

        Orateur: Arnaud MAURY (Université Paris-Saclay, UMR9012)
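
        For reference, the pull used in the pull plots mentioned above is the standard quantity

        $$ \mathrm{pull}(q) = \frac{q_{\rm fit} - q_{\rm true}}{\sigma_{q,\,{\rm fit}}}, $$

        computed for each fitted track parameter $q$; if the algorithm is unbiased and its uncertainties are well estimated, the pull distribution is a unit Gaussian.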
      • 10:09
        Measurement of CKM angle gamma in Open Charm B Decays at LHCb 23m
        Orateur: Halime Sazak (PhD student)
    • 10:32 11:02
      Pause café 30m
    • 11:02 12:11
      Standard Model
      Président de session: Mykola Khandoga (LPNHE)
      • 11:02
        b-jet energy scale calibration with ATLAS Run 2 data using ttbar lepton+jets events 23m

        For many Standard Model (SM) measurements and searches for new phenomena, jet calibration is crucial, and b-jets are involved in some important final states: for example, the main decay of the Higgs boson is into pairs of b-quarks (H→bb).

        To properly reconstruct the kinematics of those processes, an accurate calibration of the b-jet energy is therefore required.

        Until now there was no dedicated b-jet energy scale (b-JES) calibration, and this energy correction was assumed to be equal to the light-jet calibration. Moreover, the related b-JES uncertainties were only estimated from Monte Carlo simulations.

        For the first time in ATLAS, the b-jet energy correction and its uncertainties have been measured in data. For this first-of-its-kind study, the energy correction was obtained from a template method that compares top invariant-mass distributions in data and MC using hadronic top decays (t→Wb→qqb) in ttbar lepton+jets events.

        Orateur: Romain Bouquet (LPNHE)
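
        A highly simplified toy of the template idea described above: scan a global b-jet energy scale factor, rebuild the hadronic-top invariant-mass histogram for each value, and keep the scale that best matches the data histogram. The event model, binning and chi-square treatment are all placeholders for illustration.

        import numpy as np

        rng = np.random.default_rng(3)
        bins = np.linspace(100.0, 250.0, 31)                 # m_top histogram binning [GeV]

        def mtop_template(mc_mtop, scale):
            """Toy: approximate the effect of a b-JES scale by rescaling the top mass
            linearly (a real analysis would rescale the b-jet four-momentum)."""
            counts, _ = np.histogram(mc_mtop * scale, bins=bins)
            return counts.astype(float)

        mc_mtop = rng.normal(172.5, 12.0, 200000)            # toy MC m_top values
        data_counts, _ = np.histogram(rng.normal(175.0, 12.0, 20000), bins=bins)

        scales = np.linspace(0.95, 1.05, 101)
        chi2 = []
        for s in scales:
            t = mtop_template(mc_mtop, s)
            t *= data_counts.sum() / t.sum()                 # normalise template to data
            chi2.append(np.sum((data_counts - t) ** 2 / np.maximum(t, 1.0)))
        print("best b-JES scale:", scales[int(np.argmin(chi2))])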
      • 11:25
        Measurement of the Higgs self-coupling through the 2$\ell$SS channel analysis 23m

        A bosonic particle with a mass of 125 GeV was observed in 2012 by the ATLAS and CMS collaborations at the Large Hadron Collider (LHC). This particle was associated with the Higgs boson, or BEH boson, predicted fifty years before its discovery by François Englert, Robert Brout and Peter Higgs. This particle validates the BEH mechanism, explaining the origin of the mass of the known particles and electroweak symmetry breaking.
        Since this discovery, physicists have been trying to probe the various properties of the Higgs boson, such as the Higgs self-coupling. Success in probing the Higgs self-coupling will bring another test of the Standard Model and will give a direct measurement of the Higgs field potential in the vacuum. This measurement is performed through a global analysis of di-Higgs (HH) production at the LHC, decaying into various channels. The aim of this work is to build a discriminant variable that maximises the separation between the signal (HH) and the background (all processes ending in the same signature).
        In this study, the analysis has been done in the 2$\ell$SS signature, which represents around 10% of the leptonic decays of the Higgs pair. Given this signature, the main backgrounds are di-boson production, ttbar production and single-boson production. The separation can be quantified through the significance. In order to maximise the latter, multivariate techniques targeting the main background processes have been used. The study has been carried out using Monte Carlo simulations and data from Run 2. Finally, the main instrumental backgrounds, especially the contribution from misidentified leptons, are derived from data.

        Orateur: océane Perrin (LPC Clermont)
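
        The significance mentioned above is commonly estimated with the Asimov approximation for a counting experiment with expected signal $s$ and background $b$ (one standard choice among several):

        $$ Z_A = \sqrt{2\left[(s+b)\ln\!\left(1+\frac{s}{b}\right) - s\right]} \;\xrightarrow{\;s\ll b\;}\; \frac{s}{\sqrt{b}}. $$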
      • 11:48
        Perspectives for Higgs measurements at Future Circular Collider 23m

        abstract will be provided as a separate file

        Orateur: Ang LI (APC Paris)
    • 12:15 12:45
      Cosmology
      Président de session: Florian Ruppin (IP2I Lyon)
    • 14:00 16:02
      Cosmology
      Président de session: Florian Ruppin (IP2I Lyon)
      • 14:00
        Accuracy of Power Spectrum measurements using Scale-Free cosmologies 23m

        We exploit a suite of large \emph{N}-body simulations (up to N=$4096^3$), performed with Abacus, of scale-free models with a range of spectral indices $n$, to better understand and quantify convergence of the matter power spectrum in dark-matter-only cosmological \emph{N}-body simulations. Using self-similarity to identify converged regions,
        we show that the maximal wavenumber resolved at a given level of accuracy increases monotonically as a function of
        time. At the 1\% level it starts at early times from a fraction of $k_\Lambda$, the Nyquist wavenumber of the initial grid, and
        reaches at most, if the force softening is sufficiently small,
        $\sim 2 k_\Lambda$ at the very latest times we evolve to. At
        the $5\%$ level accuracy extends up to slightly larger wavenumbers, of order $5k_\Lambda$
        at late times. Expressed as a suitable function of the scale-factor, accuracy shows a very simple $n$-dependence, allowing a straightforward extrapolation to place conservative bounds on the accuracy of \emph{N}-body simulations of non-scale free models like LCDM. Quantitatively our findings are broadly in line with the conservative assumptions about resolution adopted by recent studies using large cosmological simulations (e.g. Euclid Flagship) aiming to constrain the mildly non-linear regime. On the other hand, we note that studies of the matter power spectrum in the literature have often used data at larger wavenumbers,
        where convergence to the physical result is poor.
        Even qualitative conclusions about clustering at small scales, e.g. concerning the validity of the stable clustering approximation, may need revision in light of our results.

        Orateur: Sara Maleubre Molinero (Laboratoire de Physique Nucléaire et de Hautes Energies (LPNHE))
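
        For context, the self-similarity exploited above follows because a scale-free initial spectrum $P(k)\propto k^{n}$ in an Einstein-de Sitter background has a single characteristic scale, the non-linear wavenumber $k_{\rm NL}(a)$; schematically,

        $$ \frac{k_{\rm NL}^3\,P_{\rm lin}(k_{\rm NL},a)}{2\pi^2} \equiv 1 \;\;\Rightarrow\;\; k_{\rm NL}(a) \propto a^{-2/(n+3)}, \qquad \Delta^2(k,a) = \Delta^2\!\big(k/k_{\rm NL}(a)\big), $$

        so any departure of the measured $\Delta^2$ from a pure function of $k/k_{\rm NL}$ flags a resolution (non-physical) effect.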
      • 14:23
        Dynamique gravitationnelle des champs scalaires pour la matière noire 23m

        FR:
        La nature de la matière noire est l'un des problèmes les plus importants de la cosmologie et de la physique théorique. Cette composante, qui constitue plus de 80 % de la matière de l'Univers, n'a jusqu'à présent été détectée que par ses effets gravitationnels. Le scénario habituel est celui des "Weakly Interacting Massive Particles" (WIMP). Cependant, de telles particules n'ont toujours pas été détectées et ce modèle semble rencontrer quelques difficultés pour rendre compte des données à l'échelle galactique. Cela a ravivé l'intérêt pour des scénarios alternatifs, parmi lesquels la matière noire en tant que champ scalaire. Nous nous concentrerons sur l'étude de la friction dynamique, qui est définie comme la perte d'impulsion de corps en mouvement par des interactions purement gravitationnelles. Nous présenterons ici le cas de la friction dynamique entre un halo de matière noire et un trou noir en mouvement à l'intérieur de ce halo en considérant, ou non, des auto-interactions entre les particules de matière noire.

        EN:
        The nature of dark matter is one of the most important problems in cosmology and theoretical physics. This component, which constitutes more than 80% of the matter in the Universe, has so far been detected only through its gravitational effects. The usual scenario is that of weakly interacting massive particles (WIMPs). However, such particles have still not been detected, and this model seems to encounter some difficulties in accounting for data at galactic scales. This has revived interest in alternative scenarios, among which is scalar-field dark matter. We will focus on the study of dynamical friction, which is defined as the loss of momentum of moving bodies through purely gravitational interactions. Here we will present the case of dynamical friction between a dark-matter halo and a black hole moving inside this halo, considering, or not, self-interactions between the dark-matter particles.

        Orateur: Alexis Boudon (Institut de Physique Théorique - CEA, Saclay)
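
        For comparison with the scalar-field case studied here, the classical collisionless reference result is Chandrasekhar's dynamical-friction formula for a perturber of mass $M$ moving with velocity $v_M$ through a background of density $\rho$ with a Maxwellian velocity dispersion $\sigma$:

        $$ \frac{d\vec{v}_M}{dt} = -\,\frac{4\pi G^2 M \rho \ln\Lambda}{v_M^3}\left[\operatorname{erf}(X) - \frac{2X}{\sqrt{\pi}}\,e^{-X^2}\right]\vec{v}_M, \qquad X = \frac{v_M}{\sqrt{2}\,\sigma}, $$

        where $\ln\Lambda$ is the Coulomb logarithm; for scalar-field dark matter, with or without self-interactions, this expression is modified, which is what the work above investigates.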
      • 14:46
        Detection of Compact Binary Coalescences and the Multi-Band Template Analysis 23m

        The Multi-Band Template Analysis (MBTA) is a pipeline for searching for gravitational waves (GWs) emitted by coalescing compact binary systems (CBCs) in LIGO-Virgo data. It has been used in its online configuration ever since the first generation of interferometric GW detectors, and over the past years it has been improved to contribute to GW transient catalogues through the development of an offline configuration. MBTA performs a template-based search by splitting the analysis into two frequency bands to reduce computational costs. It has been used in both its offline and online configurations to analyse data from the third observing run (O3) in the standard search, looking for signals emitted by coalescing binary black holes (BBHs), binary neutron stars (BNSs) and neutron-star-black-hole binaries (NSBHs). At the moment, MBTA is contributing to the sub-solar-mass (SSM) search, looking for signals emitted by compact binaries with at least one component lighter than the Sun.

        Orateur: Elisa Nitoglia
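
        For reference, a template-based search of this kind relies on the standard matched-filter signal-to-noise ratio between the data $s$ and a template $h$, built from the noise-weighted inner product with the detector noise power spectral density $S_n(f)$:

        $$ \langle a\,|\,b\rangle = 4\,\mathrm{Re}\!\int_0^{\infty}\frac{\tilde a(f)\,\tilde b^*(f)}{S_n(f)}\,df, \qquad \rho = \frac{\langle s\,|\,h\rangle}{\sqrt{\langle h\,|\,h\rangle}}, $$

        maximised over the template's arrival time and phase; MBTA evaluates this in two frequency bands and then combines the results, which is where the computational saving comes from.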
      • 15:09
        False Alarm Rate computation for MBTA single detector triggers 23m

        The LIGO-Virgo collaboration uses three interferometers and several analysis pipelines in order to observe the sky in search of gravitational waves of different origins. Detected gravitational-wave events were usually required to be found in coincidence in at least two detectors in order to be selected as candidates. The increase in sensitivity now enables the search for candidates among single-detector triggers, to increase the statistics. The work presented here aims to identify astrophysical events within the MBTA pipeline single-detector triggers by assigning them a false-alarm rate.

        Orateur: Vincent Juste (CNRS - IPHC (DRS))
      • 15:32
        Probing local anisotropies using Type Ia Supernovae data 23m

        A large variety of cosmological observations has established the ΛCDM model as the leading description of the dynamics of the Universe. This model relies on several assumptions, first among them the Cosmological Principle (homogeneity and isotropy at large scales). Despite numerous successes, the standard model faces some challenges, such as the detection of large-scale velocity flows.
        Type Ia supernovae (SNe Ia) are cosmological probes that allow us to map the Universe at different scales and measure its dynamics. The new data set from the Zwicky Transient Facility (ZTF) at z < 0.1 constitutes a unique sample to investigate potential anisotropies in the nearby Universe. I will present my current work on the detection of bulk flows and the associated systematics using ZTF data and simulations.
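
        A bulk flow leaves a dipolar imprint on the Hubble-diagram residuals; a schematic dipole fit of such residuals could look like the sketch below (illustrative only: the variable names are placeholders and the conversion of the fitted dipole into a velocity involves a redshift-dependent prefactor not shown here):

        import numpy as np

        def fit_dipole(residuals, ra_rad, dec_rad):
            """Least-squares fit of Hubble-diagram residuals to a monopole + dipole pattern,
            res_i ~ a0 + b . n_i, with n_i the unit vector towards supernova i.
            The fitted vector b traces the preferred direction of a coherent bulk flow."""
            n = np.column_stack([np.cos(dec_rad) * np.cos(ra_rad),
                                 np.cos(dec_rad) * np.sin(ra_rad),
                                 np.sin(dec_rad)])
            design = np.column_stack([np.ones_like(residuals), n])   # columns: [1, nx, ny, nz]
            coeffs, *_ = np.linalg.lstsq(design, residuals, rcond=None)
            return coeffs[0], coeffs[1:]                              # monopole, dipole vector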

        Orateur: Melissa Amenouche (Laboratoire de Physique de Clermont)
    • 16:02 16:30
      Pause café 28m
    • 16:30 18:32
      Neutrinos
      Président de session: Benjamin Quilain (CNRS In2p3, Ecole Polytechnique, Laboratoire Leprince-Ringuet)
      • 16:30
        Session overview 30m
        Orateur: Benjamin Quilain (CNRS In2p3, Ecole Polytechnique, Laboratoire Leprince-Ringuet)
      • 17:00
        Préparation par l’analyse et la R&D de l’expérience Hyper-Kamiokande pour des mesures précises des paramètres d’oscillations des neutrinos. 23m

        La compréhension du phénomène d'oscillation des neutrinos est un sujet de recherche très actif depuis une trentaine d'années et est au cœur de nombreuses questions ouvertes telles que leur mécanisme d'acquisition de masse au-delà du modèle standard, l'origine de l'asymétrie matière-antimatière, ou le mécanisme d'explosion des supernovae. Les expériences d'oscillation de neutrinos de faisceau sur longue distance permettent aujourd'hui les mesures les plus précises des paramètres d'oscillation grâce au contrôle des autres facteurs tels que l'énergie et la composition initiale du faisceau. Tokaï to Kamioka (T2K) en est une et a publié en 2020 une première contrainte sur la violation de la symétrie CP (un des paramètres de l'oscillation) dans le secteur leptonique, qui joue un rôle dans l'asymétrie matière/antimatière.
        Mais pour aller plus loin dans ces mesures, il faut d'une part réduire les différents effets systématiques dans l'analyse, par des études et une mise à niveau du détecteur proche avec une prise de données courant 2023 (T2K-II). En particulier, je m'attarderai sur la prise en compte de l'incertitude sur l'énergie de liaison du nucléon interagissant avec le neutrino lors de l'interaction permettant la détection, selon les modèles nucléaires considérés. Par ailleurs, la construction d'un détecteur lointain Hyper-Kamiokande (HK), avec un volume fiduciel 10 fois plus grand et un système de détection plus performant, est prévue pour une mise en service en 2027. Je participe à la R&D et aux études de sensibilité associées pour le développement du système de synchronisation d'horloges de HK, qui sera basé sur des horloges atomiques et la réception de signaux GPS. En effet, la précision en temps est cruciale pour la reconstruction des évènements mais aussi pour d'autres applications telles que la participation à un réseau de veille des explosions de supernovae.

        Orateur: Lucile Mellet (LPNHE,Sorbonne Université)
      • 17:23
        Analyse des données cosmiques du prototype double phase ProtoDUNE pour l'expérience de physique des neutrinos DUNE 23m

        Les neutrinos n'ont pas fini de nous livrer leurs secrets. Quelle est la hiérarchie des masses ? Quelles sont les valeurs précises des paramètres d'oscillation ? Ou encore, y a-t-il une asymétrie matière/anti-matière dans le domaine des neutrinos ? La future expérience DUNE cherchera à répondre à ces questions. Les technologies employées et les dimensions de cette expérience requièrent une phase de prototypage à plus petite échelle, dont ProtoDUNE Double Phase.

        Le détecteur ProtoDUNE Double Phase est une chambre à projection temporelle à argon liquide constituée d'une phase liquide, dans laquelle dérivent les électrons d'ionisation, et d'une phase gazeuse servant à l'amplification du signal. Les données prises par ce détecteur au CERN en 2019 et 2020 avec des rayons cosmiques permettent de caractériser et d'évaluer les performances de cette technologie. Il est notamment possible d'évaluer la pureté de l'argon liquide, de mesurer le gain des amplificateurs de charges, de caractériser la dépendance de ce gain aux différents paramètres de fonctionnement du détecteur ainsi que les effets dus aux non-uniformités du champ de dérive. Je présenterai les études en cours et les premiers résultats que j'ai obtenus sur ces différents sujets.
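
        As an illustration of the purity measurement mentioned above (a toy sketch with made-up numbers, not the ProtoDUNE analysis code): the charge collected from a track attenuates exponentially with drift time, and the fitted free-electron lifetime is a direct measure of the liquid-argon purity.

        import numpy as np
        from scipy.optimize import curve_fit

        def attenuation(t_drift_ms, q0, tau_ms):
            """Collected charge vs drift time for a free-electron lifetime tau: Q(t) = Q0 exp(-t/tau)."""
            return q0 * np.exp(-t_drift_ms / tau_ms)

        # In a real analysis, t_drift and dq_dx come from reconstructed cosmic-ray tracks.
        t_drift = np.linspace(0.0, 6.0, 30)                                                  # ms
        dq_dx = attenuation(t_drift, 60.0, 5.0) + np.random.default_rng(1).normal(0.0, 1.0, t_drift.size)

        popt, pcov = curve_fit(attenuation, t_drift, dq_dx, p0=(50.0, 3.0))
        print(f"fitted electron lifetime: {popt[1]:.2f} ms")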

        Orateur: Pablo KUNZE
      • 17:46
        Core-Collapse Supernova neutrino detection with the 3" PMT system of the JUNO experiment 23m

        Core-collapse supernovae (CCSN) are gigantic, luminous explosions which occur when a massive star (M ≥ 8 M$_{\odot}$) reaches the end of its life. Many questions remain unanswered about the mechanisms which lead to such a violent explosion. Thirty-four years ago, for the first time, a few dozen neutrinos from a CCSN (SN1987A) were detected, marking the beginning of a new era in the study of supernovae. The Jiangmen Underground Neutrino Observatory (JUNO) is a 20-kton liquid scintillator detector under construction in China. Two photomultiplier tube (PMT) systems, the first one made of 18000 20" PMTs and the second one made of 26000 3" PMTs, will collect the light produced by neutrino interactions. JUNO is dedicated to the determination of the neutrino mass ordering and to precise oscillation parameter measurements; however, thanks to its large detection volume, it will be able to detect a burst of ∼10$^{4}$ neutrinos for a typical galactic CCSN at 10 kpc. Such high statistics will allow us to constrain supernova explosion models and, more generally, to improve our knowledge of neutrino physics and nuclear physics. This presentation will focus on the detection of CCSN neutrinos with the 3" PMT system of the JUNO detector.

        Orateur: Victor LEBRIN (CNRS - IN2P3 - Subatech)
      • 18:09
        Spatial reconstruction of neutrino interactions based on a neural network in the JUNO experiment using the 3 inch PMT system 23m

        abstract will be provided as a separate file

        Orateur: Léonard Imbert (Subatech - CNRS IN2P3)
    • 09:00 10:32
      Neutrinos
      Président de session: Benjamin Quilain (CNRS In2p3, Ecole Polytechnique, Laboratoire Leprince-Ringuet)
      • 09:00
        SuperNEMO Demonstrator Current Status and Time Characterization of the Calorimeter 23m

        Abstract:

        SuperNEMO is an experiment aiming to search for the hypothetical neutrinoless double beta decay using a tracker-calorimeter technique. A first module, called the Demonstrator, is under construction and testing at the Laboratoire Souterrain de Modane (LSM) at 4800 m.w.e. depth. The Demonstrator aims to reach a sensitivity on the neutrinoless double beta decay half-life of T$_{1/2}$ > 6.5 × 10$^{24}$ y, corresponding to <m$_{\nu}$> < (260 – 500) meV, with a 17.5 kg·y exposure of $^{82}$Se; another goal is to demonstrate that a SuperNEMO module can reach its ultra-low background specifications. The Demonstrator tracker recently started taking data and is being commissioned; the magnetic coils are installed and remain to be commissioned. The anti-radon tent and the gamma and neutron shields are yet to be installed. The tracker gas is expected to have a high radiopurity in $^{222}$Rn, with an activity of 0.15 mBq/m$^{3}$. The Demonstrator calorimeter, already commissioned with 712 optical modules of which 440 have an energy resolution of 8% (FWHM) at 1 MeV, is designed to detect the energies of individual particles and measure their time-of-flight. Time alignment and calibration of the optical modules was performed using a $^{60}$Co source whose two emitted gammas were detected in coincidence, resulting in a precise alignment used to reject backgrounds via time-of-flight measurements, and in a preliminary time resolution of ~600 ps for γs at 1 MeV. A more precise characterization of the calorimeter timing for electrons is expected with the full Demonstrator installed and commissioned and with an electron source. An overview of the current status of the SuperNEMO Demonstrator is presented, along with the methods used to perform the time calibrations and obtain the time resolution.
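
        For reference, the internal-hypothesis test behind this time-of-flight rejection compares the measured arrival times of two particles after correcting for their track lengths and velocities; schematically (a simplified picture of the standard NEMO-style criterion, not the exact analysis formula):

        $$\Delta t_{\rm int} = \left| \left(t_1 - \frac{l_1}{\beta_1 c}\right) - \left(t_2 - \frac{l_2}{\beta_2 c}\right) \right|,$$

        which should be compatible with zero, within the timing resolution, for two particles emitted simultaneously from the source foil, while external (crossing) background events fail this test.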

        Orateur: Malak Hoballah (CNRS UMR9012)
      • 09:23
        Étude et étalonnage du calorimètre SuperNEMO 23m

        Le neutrino est la particule de matière la plus abondante de l'Univers, mais aussi la plus mystérieuse, car on ne connaît toujours pas des propriétés aussi fondamentales que sa nature (Dirac ou Majorana ?) ou sa masse. Le projet SuperNEMO cherche à apporter des éléments de réponse à ces interrogations avec la recherche de la décroissance double bêta sans émission de neutrino. Cette réaction, interdite par le Modèle Standard, n'est possible qu'avec un neutrino de Majorana et le signal correspond à l'émission de 2 électrons emportant la totalité de l'énergie de réaction. Pour détecter cette réaction, le projet SuperNEMO utilise une technologie unique combinant trajectographe et calorimètre, qui lui permet d'identifier sans ambiguïté les 2 électrons et de mesurer leur énergie avec le calorimètre. Mon travail de thèse consiste à étalonner en énergie ce calorimètre
        et à suivre son évolution dans le temps à l'aide de différentes techniques : analyse et simulations des spectres de bruits de fond, utilisation d'un système d'injection de lumière LED pour une calibration relative du calorimètre et une calibration absolue à l'aide de sources de $^{207}$Bi.

        Orateur: Xalbat Aguerre (CENBG)
      • 09:46
        Neutrinoless double beta decay search in Xenon dual-phase Time Projection Chamber 23m

        abstract will be provided as a separate file

        Orateur: Maxime PIERRE (SUBATECH)
      • 10:09
        R2D2 R&D: development of a Spherical Proportional Counter for the neutrinoless double beta decay search 23m

        To come later

        Orateur: Vincent Cecchini (CENBG - IN2P3)
    • 10:32 11:10
      Pause café 38m
    • 11:10 12:30
      Nuclear Physics & Interdisciplinaire
      Président de session: Diego Gruyer (LPC Caen)
      • 11:10
        Session overview 30m
        Orateur: Diego Gruyer (LPC Caen)
      • 11:40
        Description microscopique relativiste des systèmes nucléaires et application à la radioactivité 23m

        Les systèmes nucléaires présentent une grande diversité de propriétés prouvant la complexité de leur structure. Cette complexité est héritée de plusieurs phénomènes, le premier étant lié à la structure interne des protons et neutrons en termes de quarks et gluons. Ainsi, la théorie sous-jacente de la chromodynamique quantique (QCD) joue un rôle primordial dans la description des noyaux.
        Cependant, aux énergies impliquées dans les systèmes nucléaires ($\sim$1 à 10 MeV), la QCD est connue pour être non-perturbative, ce qui rend l'obtention d'une description fiable de l'interaction très complexe.

        Un autre aspect important des systèmes nucléaires réside dans le nombre important de particules en interaction. Le problème quantique à N corps qui en résulte est extrêmement difficile à résoudre et ne peut généralement pas être décrit sans approximation. En conséquence, comme souvent en physique, les interactions de nombreuses particules dans un système cohérent donnent lieu à l'émergence de propriétés collectives. Les noyaux n'échappent pas à cette règle et de nombreuses propriétés émergent du caractère collectif du système (rotation, vibration, superfluidité, agrégation, déformation, ...).

        Parmi les nombreuses propriétés étudiées en structure nucléaire, la description des phénomènes de radioactivité représente un enjeu particulier. La description microscopique de différentes radioactivités cluster a d'ores et déjà fait l'objet de plusieurs études [1] ces dernières années. En revanche, la désintégration $\alpha$ restait, jusqu'à récemment, le seul type de radioactivité qui échappait à une description microscopique. Ce pas a été franchi en utilisant le cadre des théories de la fonctionnelle énergie-densité covariante, formalisme connu pour décrire aussi bien les propriétés de masse [2] que la formation de clusters [3]. L'étude a tout d'abord été effectuée dans des noyaux de masse moyenne, à savoir dans la chaîne de désintégration du $^{108}$Xe [4]. Les résultats ont ensuite été étendus à la description de la radioactivité $\alpha$ dans les noyaux lourds. Un nouveau mode de désintégration, correspondant à l'émission de deux particules $\alpha$, a aussi été prédit de cette façon [5].
        [1] G. A. Lalazissis, T. Nikšić, D. Vretenar, and P. Ring, Phys. Rev. C 71, 024312 (2005).
        [2] J.-P. Ebran, E. Khan, T. Nikšić and D. Vretenar, Nature 487, 341 (2012).
        [3] A. Staszczak, A. Baran, and W. Nazarewicz, Phys. Rev. C 87, 024320 (2013) ; M. Warda, A. Zdeb, and L. M. Robledo, Phys. Rev. C 98, 041602(R) (2018).
        [4] F. Mercier, J. Zhao, R.-D. Lasseri, J.-P. Ebran, E. Khan, T. Nikšić, and D. Vretenar, Phys. Rev. C 102, 011301(R) (2020).
        [5] F. Mercier, J. Zhao, J.-P. Ebran, E. Khan, T. Nikšić, and D. Vretenar, Phys. Rev. Lett. 127, 012501 (2021).

        Orateur: Florian Mercier (IJCLab)
      • 12:03
        Fission studies of neutron deficient N = 100 isotones (176Os*, 177Ir* and 179Au* ) 23m

        Exploring the fission properties of nuclei in the actinide region has revealed that the symmetric division of the nucleus into two equal masses is not the only mode of fission. It was observed that the fission of actinides preferentially produces Fission Fragments (FFs) of unequal mass numbers, which was thought to originate from the strong spherical shell effects present in FFs close to the doubly magic $^{132}$Sn. An extensive experimental study of nuclei in the region 205 $\leq$ A $\leq$ 234 showed a transition from asymmetric fission in actinides towards symmetric fission, and it was believed that all systems lighter than A$\sim$226 would fission symmetrically.

        Surprisingly, fission of neutron-deficient $^{178-190}$Hg isotopes has revealed a prominent asymmetric mass division, despite the presence of the closed shells in $^{90}$Zr (Z=40, N=50). Similar studies made with $^{179}$Au and $^{178}$Pt have further proved that the asymmetric mass split is a typical property of the neutron-deficient nuclei in this region, thus confirming the theoretical expectations. All these findings therefore establish the existence of a new region of asymmetric fission in addition to the actinide region, whose extension in the nuclear chart is currently unknown and needs to be determined experimentally.

        This work deals with the multi-parameter study of the fission modes of highly neutron-deficient nuclei with $N=100$ below $^{180}$Hg, namely $^{176}$Os, $^{177}$Ir and $^{179}$Au. The experiment was conducted at the Advanced Science Research Center (ASRC) of the Japan Atomic Energy Agency (JAEA). It is the very first study of the neutron and gamma multiplicities at low excitation energies using a large array (33 modules) of liquid scintillators. The simultaneous measurement of the FFs, neutrons and gamma-rays was performed at different beam energies, leading to different excitation energies of the Compound Nucleus (CN). Some preliminary results will be presented, from the point of view of the coexistence of fission modes as well as their evolution with the excitation energy of the CN.

        Orateur: Mlle Deby Treasa Kattikat Melcom (Centre d'Etudes Nucléaires de Bordeaux-Gradignan)
    • 14:00 16:00
      Nuclear Physics & Interdisciplinaire
      Président de session: Diego Gruyer (LPC Caen)
      • 14:00
        First-forbidden $\beta$-decay study in the pnQRPA approach 23m

        First-forbidden beta decays play an important role in several domains of physics. First, in astrophysics, where nuclear data such as half-lives govern stellar evolution and nucleosynthesis [1]. Second, they are of interest for nuclear reactor physics, as first highlighted in 2014 [2]. In first-forbidden $\beta$-decays, the form factors of the leptonic spectra are not equal to one, as they are for allowed decays. It has been shown that this could have a non-negligible impact on the shape of the antineutrino energy spectra. Among the models developed since then, which do not all agree [3, 4, 5, 6], some even state that it could solve the reactor antineutrino shape anomaly.
        New theoretical calculations of the first-forbidden form factors, associated with summation calculations [7], and dedicated experimental measurements would be useful to corroborate or refute existing predictions.

        Charge-exchange excitations corresponding to first-forbidden beta-decay transitions in nuclei have been studied in the self-consistent proton-neutron quasiparticle random-phase approximation (pnQRPA) using the finite-range Gogny interaction [8]. No parameters beyond those of the effective nuclear force are introduced. Axial deformations are taken into account for both the ground state and the charge-exchange excitations.
        Within this formalism, nuclear matrix elements have been computed for operators derived from the multipole expansion of the weak current [9]: spin-dipole, anti-analog dipole, pseudoscalar-axial vector and tensor-polar vector operators. These operators complement the Fermi and Gamow-Teller operators already considered in Ref. [8], in order to obtain a simultaneous description of the allowed and first-forbidden $\beta$-decays.

        At this conference, first results for these charge-exchange operators will be presented for both spherical and axially deformed nuclei, with a comparison to other theoretical models.

        References:

        [1] M. Arnould, S. Goriely, and K. Takahashi. The r-process of stellar nucleosynthesis: Astrophysics and nuclear physics achievements and mysteries. Phys. Rept., 450:97–213, 2007.

        [2] A. C. Hayes, J. L. Friar, G. T. Garvey, Gerard Jungman, and Guy Jonkmans. Systematic Uncertainties in the Analysis of the Reactor Neutrino Anomaly. Phys. Rev. Lett., 112:202501, 2014.

        [3] Dong-Liang Fang and B. Alex Brown. Effect of first forbidden decays on the shape of neutrino spectra. Phys. Rev. C, 91(2):025503, 2015. [Erratum: Phys.Rev.C 93, 049903 (2016)].

        [4] X. B. Wang and A. C. Hayes. Weak magnetism correction to allowed β decay for reactor antineutrino spectra. Phys. Rev. C, 95(6):064313, 2017.

        [5] X. B. Wang, J. L. Friar, and A. C. Hayes. Nuclear Zemach moments and finite-size corrections to allowed β decay. Phys. Rev. C, 94(3):034314, 2016.

        [6] J. Petkovic, T. Marketin, G. Martinez-Pinedo, and N. Paar. Self-consistent calculation of the reactor antineutrino spectra including forbidden transitions. J. Phys. G, 46(8):085103, 2019.

        [7] M. Estienne et al. Updated Summation Model: An Improved Agreement with the Daya Bay Antineutrino Fluxes. Phys. Rev. Lett., 123(2):022502, 2019.

        [8] M. Martini, S. Peru, and S. Goriely. Gamow-Teller strength in deformed nuclei within the self- consistent charge-exchange quasiparticle random-phase approximation with the Gogny force. Phys. Rev. C, 89(4):044306, 2014.

        [9] Aage Bohr and Ben R Mottelson. Nuclear Structure. World Scientific Publishing Company, 1998.

        Orateur: Arthur Beloeuvre (CNRS-IN2P3)
      • 14:23
        Measurement of 72Ge(p,γ)73As cross section for the astrophysical p-process 23m

        Most of the heavy nuclei in the Universe (Z > 26) are formed by neutron captures during the so-called s- or r-processes. However, 35 proton-rich nuclei imply the existence of another nucleosynthesis process, the p-process, which takes place in explosive stellar events. The modeling of this process relies on theoretical calculations of nuclear reaction rates. One of the main uncertainties for light nuclei comes from the (γ,p) photodisintegration reactions occurring in this process.
        To improve the reliability of the calculations, it is necessary to increase the amount of relevant nuclear data at energies as close as possible to the astrophysically relevant ones. Our collaboration has performed cross-section measurements of proton-induced reactions on several germanium isotopes, using the activation method. The main purpose was to measure the $^{72}$Ge(p,γ)$^{73}$As cross section in the astrophysical energy range, since this reaction has been identified as particularly important for the abundance of the light p-nuclei.
        In this talk, I will present the experiment that was performed as well as some preliminary results from the data analysis of the $^{72}$Ge(p,γ)$^{73}$As reaction.
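
        As a reminder of the principle of the activation method (the standard relation, quoted here schematically rather than as the exact analysis formula), the cross section follows from the activity induced in the target:

        $$\sigma(E) \simeq \frac{A_{\rm EOB}}{\varphi\, N_{t}\, \left(1 - e^{-\lambda t_{\rm irr}}\right)},$$

        where $A_{\rm EOB}$ is the $^{73}$As activity at the end of bombardment (deduced from its γ lines after correcting for detection efficiency and branching ratio), $\varphi$ the beam intensity in particles per second, $N_{t}$ the areal density of $^{72}$Ge target atoms, $\lambda$ the decay constant and $t_{\rm irr}$ the irradiation time.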

        Orateur: Yasmine Demane (IP2I of Lyon)
      • 14:46
        The nuclear matter density functional under the nucleonic hypothesis 23m

        A Bayesian analysis of the possible behaviors of the dense-matter equation of state, informed by recent LIGO-Virgo as well as NICER measurements, reveals that all present observations are compatible with a fully nucleonic hypothesis for the composition of dense matter, even in the core of the most massive pulsar, PSR J0740+6620. Under the hypothesis of a nucleonic composition, we extract the most general behavior of the energy per particle of symmetric matter and of the density dependence of the symmetry energy, compatible with the astrophysical observations as well as with our present knowledge of low-energy nuclear physics from effective field theory predictions and experimental nuclear mass data. These results can be used as a null hypothesis to be confronted with future constraints on dense matter to search for possible exotic degrees of freedom.
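
        For reference, the two quantities referred to above enter the usual quadratic decomposition of the nucleonic energy per particle in terms of the density $n$ and the isospin asymmetry $\delta = (n_n - n_p)/n$:

        $$e(n,\delta) \simeq e_{\rm SM}(n) + e_{\rm sym}(n)\,\delta^{2}, \qquad e_{\rm sym}(n) \simeq E_{\rm sym} + L\,x + \frac{K_{\rm sym}}{2}\,x^{2} + \dots, \quad x = \frac{n - n_{\rm sat}}{3\,n_{\rm sat}},$$

        where the empirical parameters ($E_{\rm sym}$, $L$, $K_{\rm sym}$, ...) are the ones constrained jointly by nuclear physics and astrophysical data.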

        Orateur: Hoa Dinh Thi (LPC Caen)
      • 15:09
        Impact of an impurity on the thermalization of water nanodroplets 23m

        Many molecules are released into the Earth's atmosphere and have been observed as components of aerosols since the industrial revolution. Some organic molecules such as pyridine show a significantly increased atmospheric concentration but are not observed as components of the atmospheric aerosols. Pyridine (C5H5N) is a hydrophobic molecule and pyridinium-water clusters are of interest since water plays a key role in aerosol nucleation. The Molecular-Cluster Irradiation Device (DIAM) at the Institut de Physique des 2 Infinis de Lyon is dedicated to the exploration of out-of-equilibrium, mass- and energy-selected small molecular clusters. The evaporation of water molecules from out-of-equilibrium pyridinium-water cluster ions is studied using the correlated ion and neutral time-of-flight mass spectrometry technique (COINTOF) in combination with a velocity-map imaging (VMI) method. The role of the pyridinium versus the hydronium ion in such water nanodroplets is investigated. The results highlight the importance of ion-molecule interactions in the thermalization process, a question that underpins the vast majority of atmospheric and biological phenomena, especially when water is involved.

        Orateur: Leo Lavy
      • 15:32
        Interstellar methanol: the challenge of reactivity in astrophysical conditions 23m

        The presence of clouds of methanol in the interstellar medium (ISM) has recently been evidenced by the ALMA (Atacama Large Millimeter Array) radiotelescope. The high abundance of such an organic molecule shows its remarkable persistence despite being exposed to the energetic radiation of interstellar space. Indeed, radiation impact can lead to the dissociation of the molecule but can also open opportunities for the formation of more complex organic molecules (COMs). The very high abundance of protons in the ISM facilitates the formation of small protonated methanol clusters $H^{+}(CH_{3}OH)_{n}$ via weak bonding of the protonated form $H^{+}(CH_{3}OH)$ with other neutral molecules. The Molecular-Cluster Irradiation Device (DIAM) set-up at the Institut de Physique des 2 Infinis de Lyon is devoted to performing experiments under conditions that reproduce some aspects of interstellar, circumstellar or planetary atmospheric environments. We performed single-collision experiments of 8-keV mass-selected protonated methanol clusters on argon atoms in order to investigate the competition between the various fragmentation processes: evaporation, dissociation or formation of other COMs. The protonated dimethyl ether observed in interstellar clouds of methanol is shown to be formed in our laboratory experiment via a water-loss reaction in a protonated methanol cluster.

        Orateur: Denis Comte (IP2I Lyon)
    • 16:00 16:30
      Pause café 30m
    • 16:30 17:20
      Nuclear Physics & Interdisciplinaire
      Président de session: Diego Gruyer (LPC Caen)
      • 16:30
        Development of laser ionization technique coupled with mass separation for environmental and medical applications: A case study of Copper 23m

        A variety of laser-based applications has been developed since the invention of the laser in the 1960s, among them mass spectrometry. Wavelength-tunable laser radiation can selectively excite quantum transitions in atoms and molecules. A majority of laser spectroscopy methods are based on this resonant laser-matter interaction, where resonant excitation and subsequent ionization of atoms is performed with a suitable laser and followed by conventional spectrometry. This is often referred to as Resonance Ionization Spectroscopy (RIS). Laser resonance ionization is selective in atomic number Z, while the application of an electromagnetic field ensures the separation of the isotopes according to their mass number A. This combination allows an isotope to be isolated with high precision, avoiding its isobars. The SMILES project (Séparation en Masse couplée à l'Ionisation Laser pour des applications Environnementales et en Santé), initiated at the SUBATECH laboratory, aims at the development of a laser ionization device coupled with mass separation to quantify, purify and separate isotopes for both environmental and medical purposes. The SMILES project is currently focused on copper, as it is present in most anthropogenic sources of metals and assessing its isotopic composition can help determine contamination levels in the environment; in addition, 64Cu and 67Cu are rapidly emerging as potential diagnostic and therapeutic tools in nuclear medicine.
        The main components involved in the SMILES project are the ionization system, the beam focusing system and the mass separator. The ionization can be achieved in a two-step process, with one laser for desorption and another laser for ionization of the excited atoms. The ionized beam can then be focused using a set of beam focusing lenses and deflectors. Finally, the mass separation can be achieved either with an electromagnet or with a time-of-flight mass separator (TOF-MS); both methods have proven to be efficient in the literature. To study and optimize these parameters, the SIMION software is often used [1,2,3]. It is a helpful tool for understanding ion trajectories in an electromagnetic field. The performance of SIMION was studied and validated through several tests before proceeding with the simulation. The RISIKO mass separator (University of Mainz, Germany) and a time-of-flight mass separator (TOF-MS) were simulated, which will be helpful in configuring the SMILES set-up.
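
        For reference, the separation exploited by a TOF-MS follows from ions of mass $m$ and charge $q$ accelerated through a potential $U$ and drifting over a length $L$:

        $$t = L\,\sqrt{\frac{m}{2\,qU}}, \qquad \frac{\Delta t}{t} \simeq \frac{1}{2}\,\frac{\Delta m}{m},$$

        so that, for example, $^{63}$Cu and $^{65}$Cu arrive separated by roughly 1.6% of the total flight time.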

        Key words: RIMS, laser desorption, laser ionization, mass separator, SIMION

        References:
        [1] Z. Yin et al; Journal of Mass Spectrometry (2018) 53:435–443
        [2] J.L. Henares et al; Nuclear Instruments and Methods in Physics Research A 830 (2016) 520–525
        [3] K. Blaum et al; Nuclear Instruments and Methods in Physics Research B 204 (2003) 331–335

        Orateur: Mme Keerthana KAMALAKANNAN (SUBATECH, GIP ARRONAX)
      • 16:53
        Développement et optimisation d’une cible de gadolinium enrichi pour la mesure de sections efficaces de production de terbium radioactif à visée médicale 23m

        L’approche théranostique est un nouveau paradigme de la médecine nucléaire qui consiste à utiliser, quand c’est possible, un même radioélément pour réaliser le diagnostic et la thérapie et ainsi personnaliser les traitements de chaque patient. Un quadruplet de terbium répond à cette attente : Tb-149 (α-thérapie), Tb-160 (β-thérapie), Tb-152 (tomographie par émission de positons) et Tb-155 (tomographie par émission monophotonique). Excepté le Tb-160, la production de ces radionucléides est limitée et coûteuse, car ils sont actuellement produits par des réactions de spallation à haute énergie couplées à une séparation isotopique. L'emploi de cibles de gadolinium enrichi auprès d'un accélérateur biomédical est l'une des méthodes possibles pour augmenter la disponibilité du terbium radioactif grâce aux réactions nucléaires suivantes : 154Gd(p,6n)149Tb, 152Gd(p,n)152Tb et 155Gd(d,2n)155Tb.

        L’objectif de ce travail est de développer des cibles contenant de l’oxyde de gadolinium, Gd2O3, et de mesurer les sections efficaces de production du Tb à partir de ces cibles. En utilisant la méthode de co-électrodéposition, nous avons piégé les particules de Gd2O3 dans une matrice de nickel et réalisé des dépôts fins de Ni/Gd2O3 de 13 µm d’épaisseur contenant 3 % de Gd atomique. Ces dépôts ont ensuite été irradiés au cyclotron du GIP ARRONAX avec des faisceaux de deutons dont l’énergie varie de 10 MeV à 30 MeV. La technique des stacked-foils est utilisée pour mesurer les sections efficaces des réactions nucléaires. Ces valeurs sont ensuite comparées aux valeurs disponibles dans la littérature. Ces mesures fourniront les premières données expérimentales disponibles pour ce type de cibles et permettront d’évaluer les rendements de la future production.

        En raison du coût élevé du gadolinium enrichi, nous avons utilisé le gadolinium naturel pour réaliser la preuve de concept de notre stratégie. La grande cohérence entre les résultats de mesures et les valeurs de référence confirme la possibilité d'utiliser du gadolinium enrichi dans l'étape suivante. Plus de détails sur ces expériences seront présentés lors de ma présentation orale.

        Mots clés :
        Terbium ; Gadolinium ; mesure de section efficace ; stacked-foils ; cible composée

        Orateur: Yizheng WANG (Laboratoire Subatech)
    • 17:25 18:18
      Theory
      • 17:25
        Session overview 30m
        Orateur: Luc Darmé (IP2I - CNRS)
      • 17:55
        Perturbative renormalization of the semi-infinite massive $\phi_4^4$ theory 23m

        We give a rigorous proof of the renormalizability of the massive semi-infinite $\phi_4^4$ model using the renormalization group flow equations. We present the family of all admissible boundary conditions and the propagators associated to each boundary condition. Then we study the regularity properties of the support of the Gaussian measure associated to the regularized propagator. We also present the considered action and set up the system of perturbative flow equations satisfied by the connected amputated Schwinger functions (CAS). Since the CAS are distributions, they first have to be folded with test functions in order to establish bounds on them. A suitable class of test functions is introduced, together with tree structures that are used in the bounds derived on the CAS. We state and prove inductive bounds on the Schwinger functions which, being uniform in the cutoff, directly lead to renormalizability.

        Orateur: Majdouline Borji (CPT École Polytechnique)
    • 18:20 18:40
      Présentation SFP 20m
      Orateur: Julien Masbou (SUBATECH)
    • 09:00 10:32
      Theory
      Président de session: Luc Darmé (IP2I - CNRS)
      • 09:00
        On the $B$-meson decay anomalies 23m

        In the Standard Model electroweak interactions are strictly lepton flavour universal.
        In view of the emerging hints for the violation of lepton flavour universality in several $B$-meson decays, we conduct a model-independent study (effective field theory approach) of several well-motivated new physics scenarios.
        Taking into account the most recent LHCb data, we provide updates to New Physics fits for numerous popular hypotheses.
        We also consider a promising model of vector leptoquarks,
        which, in addition to explaining the $B$-meson decay anomalies ($R_{K^{(*)}}$ and $R_{D^{(*)}}$), would have an extensive impact on numerous flavour observables.
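
        For reference, the lepton-flavour-universality ratios mentioned above are defined as

        $$R_{K^{(*)}} = \frac{\mathcal{B}(B \to K^{(*)}\mu^{+}\mu^{-})}{\mathcal{B}(B \to K^{(*)}e^{+}e^{-})}, \qquad R_{D^{(*)}} = \frac{\mathcal{B}(B \to D^{(*)}\tau\bar{\nu})}{\mathcal{B}(B \to D^{(*)}\ell\bar{\nu})}\ (\ell = e,\mu),$$

        both of which are predicted to be close to unity in the Standard Model, up to phase-space effects, so that significant deviations would signal new physics.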

        Orateur: Jonathan Kriewald (LPC Clermont)
      • 09:23
        New physics scenarios in the Non Minimal Flavour Violating MSSM 23m
        Orateur: Amine Boussejra (IP2I Lyon)
      • 09:46
        Computation of relic densities within freeze-out mechanism 23m

        For decades, our knowledge of fundamental physics has been challenged by astrophysical and cosmological observations, leading to the hypothesis of the existence of so-called dark matter. Nowadays, different approaches are being explored in order to describe the nature of such matter. One of them assumes that dark matter is made of a stable particle not yet detected by particle physics experiments. The goal of my work is to add features to the software "SuperIso Relic", in order to study the evolution of the density of particles in new physics models.
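
        For illustration, the freeze-out mechanism amounts to solving the Boltzmann equation for the comoving yield $Y = n/s$; the following schematic integration (Kolb-and-Turner style, with a constant $\langle\sigma v\rangle$ and constant degrees of freedom, and not related to the SuperIso Relic code itself) reproduces the standard order of magnitude:

        import numpy as np
        from scipy.integrate import solve_ivp

        M_PL   = 1.22e19   # Planck mass [GeV]
        G_STAR = 90.0      # effective relativistic degrees of freedom (assumed constant, placeholder)
        G_CHI  = 2.0       # internal degrees of freedom of the dark matter particle (placeholder)

        def relic_density(m_gev, sigma_v_gev2):
            """Solve dY/dx = -(lambda/x^2)(Y^2 - Yeq^2), x = m/T, and convert the final yield to Omega h^2."""
            s_m = (2.0 * np.pi**2 / 45.0) * G_STAR * m_gev**3      # entropy density at T = m
            h_m = 1.66 * np.sqrt(G_STAR) * m_gev**2 / M_PL         # Hubble rate at T = m
            lam = s_m * sigma_v_gev2 / h_m

            def y_eq(x):
                return 0.145 * (G_CHI / G_STAR) * x**1.5 * np.exp(-x)

            def rhs(x, y):
                return [-(lam / x**2) * (y[0]**2 - y_eq(x)**2)]

            sol = solve_ivp(rhs, (10.0, 1000.0), [y_eq(10.0)], method="Radau", rtol=1e-6, atol=1e-16)
            return 2.755e8 * m_gev * sol.y[0, -1]                  # Omega h^2

        # Example: a 100 GeV particle with <sigma v> ~ 2e-9 GeV^-2 (a few 1e-26 cm^3/s)
        print(f"Omega h^2 ~ {relic_density(100.0, 2.0e-9):.2f}")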

        Orateur: Marco Palmiotto (Université Claude Bernard Lyon 1)
      • 10:09
        Solving (g-2) with a new light gauge boson 23m

        "Final" abstract will be provided as a separate file

        Even if the SM describes fundamental interactions and particles extremely well, there are still some theoretical caveats and discrepancies between theory and observation.
        Starting from the anomalous magnetic moment of charged leptons (muon and electron), we minimally extend the SM via a new light gauge boson (Z’) and work under the hypothesis of strictly flavour violating couplings to leptons.
        Taking into account several flavour observables, we can constrain our model and make predictions for several observables.

        Orateur: Emanuelle Pinsard
    • 10:32 11:02
      Pause café 30m
    • 11:02 12:34
      Theory
      Président de session: Luc Darmé (IP2I - CNRS)
      • 11:02
        Analytic and Numerical Bootstrap for One-Matrix Model and "Unsolvable" Two-Matrix Model 23m

        We propose the relaxation bootstrap method for the numerical solution of multi-matrix models in the large N limit, developing and improving the recent proposal of H. Lin. It gives rigorous inequalities on the single-trace moments of the matrices up to a given "cutoff" order (length) of the moments. The method combines the usual loop equations on the moments with the positivity constraint on the correlation matrix of the moments. We have a rigorous proof of the applicability of this method in the case of the one-matrix model, where the positivity condition on the saddle-point solution turns out to be equivalent to the eigenvalue distribution being supported only on the real axis and only with positive weight. We demonstrate the numerical efficiency of our method by solving the analytically "unsolvable" two-matrix model with tr[A,B]$^2$ interaction and quartic potentials, even for solutions with spontaneously broken discrete symmetry. The region of values for the computed moments allowed by the inequalities shrinks quickly as the cutoff increases, allowing a precision of about 6 digits for generic values of the couplings in the case of Z$_2$-symmetric solutions. Our numerical data are checked against the known analytic results for particular values of the parameters.
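
        The positivity ingredient can be illustrated on the one-matrix model: the single-trace moments $m_k = \langle \frac{1}{N}\,{\rm tr}\, M^k \rangle$ must form a positive semi-definite Hankel matrix. A minimal numerical check (illustrative only, not the relaxation code developed in this work):

        import numpy as np
        from math import comb

        def hankel_moment_matrix(moments):
            """Build H[i, j] = m_{i+j} from a list of single-trace moments m_0 ... m_{2K}."""
            k = (len(moments) - 1) // 2
            return np.array([[moments[i + j] for j in range(k + 1)] for i in range(k + 1)], dtype=float)

        def is_positive_semidefinite(mat, tol=1e-12):
            return np.min(np.linalg.eigvalsh(mat)) >= -tol

        # Toy check with the moments of the Gaussian one-matrix model (semicircle law of radius 2):
        # m_{2n} = Catalan(n), odd moments vanish.
        catalan = lambda n: comb(2 * n, n) // (n + 1)
        moments = [catalan(n // 2) if n % 2 == 0 else 0 for n in range(9)]   # m_0 ... m_8

        print(is_positive_semidefinite(hankel_moment_matrix(moments)))      # expected: True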

        Orateur: Zechuan Zheng (ENS ULM)
      • 11:25
        Quantum dynamics beyond the independent particle picture 23m

        abstract will be provided as a separate file

        Orateur: Thomas CZUBA (UNIV PARIS-SACLAY, UMR9012)
      • 11:48
        Microscopic interactions for the nuclear shell model 23m

        The interacting shell model is a modern many-body method used in nuclear structure calculations. The basic idea of the model is that the eigenproblem for a microscopic Hamiltonian is solved by diagonalization of the Hamiltonian matrix in a spherically symmetric many-body basis (for example, a harmonic oscillator basis). The basis dimension grows very rapidly with increasing mass number A. For nuclei with A > 18, only a few valence nucleons can be treated as active particles interacting with each other in a truncated Hilbert space, consisting of one or two oscillator shells outside a closed-shell core. The interaction between valence nucleons in such a model space is an effective interaction, and no longer the bare nucleon-nucleon interaction acting between free nucleons. When phenomenological effective interactions are used, the shell model is known to provide an excellent description of excitation spectra and transitions at low energies, while deriving accurate microscopic effective interactions from the bare nucleon-nucleon potential remains a challenge. In this work we discuss the formalism and numerical implementation of many-body perturbation theory, and compare the properties of the derived microscopic interactions with empirical interactions.

        Orateur: Zhen Li (Centre d'Etudes Nucléaires de Bordeaux-Gradignan)
      • 12:11
        Shapes of heavy and super-heavy atomic nuclei with Skyrme Energy Density Functionals 23m

        The mean-field, or Energy Density Functional (EDF), methods allow for the study of the energies and shapes of all but the lightest nuclei throughout the mass table. These approaches and their extensions, such as the Random Phase Approximation (RPA) and the Generator Coordinate Method (GCM), give access to observables of the ground state, of excited states and of the large-amplitude collective motion of nuclei. Furthermore, the mean field gives a natural interpretation of nuclear configurations through the shapes of the system in its intrinsic frame.

        It is well established that a correct description of the ground states of deformed heavy nuclei, rotational bands, isomeric-state energies and fission barriers is strongly correlated with the value of the surface energy coefficient a_surf and also the surface symmetry energy coefficient a_ssym.

        A first step towards a better description of the shapes of heavy nuclei was recently achieved with the construction of the SLy5sX series of Skyrme EDFs, and more specifically with the SLy5s1 parameterisation. The systematically improved agreement for deformation properties of heavy nuclei achieved with SLy5s1 compared to widely used parameterisations such as SLy5, however, comes at the expense of a significant increase of the mass residuals.

        In this presentation, I will show that a slight modification of the fit protocol, together with the inclusion of the often-neglected two-body contribution to the center-of-mass correction in the functional, greatly improves the results for shapes, barrier heights and binding energies. I will present the details of the fit protocol and show a set of selected results. It turns out that completely omitting the center-of-mass correction, as sometimes done for parameterisations aiming at nuclear dynamics, is as problematic as using the standard recipe where only the one-body part is kept. I will also discuss how the statistical error bars on the parameters of the functional propagate to calculated quantities such as fission barriers.

        Orateur: Philippe Da costa
    • 14:00 16:02
      Hadronic Physics
      Président de session: Maxime Guilbaud (SUBATECH)
      • 14:00
        Session overview 30m
        Orateur: Maxime Guilbaud (SUBATECH)
      • 14:30
        How to study the location of the critical point in the phase diagram of nuclear matter with the event generator EPOS 4 ? 23m

        Within the framework of the exploration of the phase diagram of nuclear matter, susceptibilities are useful tools to probe the existence of a 1st-order phase transition and of a possible critical endpoint. In this context, the STAR collaboration recently published results on variances and 2nd-order susceptibility ratios for the electric charge (Q), protons and kaons (the last two being used as proxies for the baryon number B and the strangeness S).
        Hence, we plan to simulate Au+Au collisions with the event generator EPOS, in order to reproduce the STAR analyses and, especially, to study the impact of the hadronisation process and of the hadronic cascades on those observables.
        We show here our first results for some BES program reactions, obtained with a preliminary version of EPOS 4.
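
        For reference, the susceptibilities in question are derivatives of the pressure with respect to the chemical potentials, and their ratios map onto measurable cumulants of the event-by-event conserved-charge distributions:

        $$\chi_{n}^{X} = \frac{\partial^{n}\,(P/T^{4})}{\partial\,(\mu_{X}/T)^{n}}\ (X = B, Q, S), \qquad \frac{\chi_{2}^{X}}{\chi_{1}^{X}} = \frac{\sigma^{2}}{M}, \qquad \frac{\chi_{3}^{X}}{\chi_{2}^{X}} = S\sigma,$$

        in terms of the mean $M$, variance $\sigma^{2}$ and skewness $S$ of the corresponding net-charge distribution.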

        Orateur: Johannès Jahan (Subatech)
      • 14:53
        Dynamical Thermalization in Heavy-Ion Collisions 23m

        The development of the merged EPOS+PHSD approach is one way to study the influence of the initial non-equilibrium stage of heavy-ion reactions on the final observables. The microscopic understanding of the initial phase of heavy-ion collisions is an intricate problem, and in this respect the EPOS and PHSD approaches provide a unique possibility to address it. We employ EPOS to describe the initial stage of heavy-ion collisions and to produce the particles, based on multiple Pomeron exchange in the Gribov Reggeon Field Theory formalism. EPOS is a particularly successful event generator and a universal model for all collision systems. After injecting the particles produced by EPOS into PHSD, we follow the evolution of the medium within PHSD. PHSD is a microscopic covariant dynamical approach for strongly interacting systems, formulated on the basis of the Kadanoff-Baym equations. I am going to present our results for various observables such as the charged-particle multiplicity, elliptic flow, $p_T$ spectra and $m_T$ spectra in EPOS+PHSD, and compare them with EPOS+hydro and pure PHSD simulations for Au-Au collisions at 200 GeV.

        Orateur: Mahbobeh JAFARPOUR (Subatech-Nantes university)
      • 15:16
        PHQMD: Equation-of-state influence on cluster formation and flow in heavy-ion collisions 23m

        Parton-Hadron-Quantum-Molecular-Dynamics (PHQMD) is a microscopic n-body transport model based on the QMD propagation of the baryonic degrees of freedom with density-dependent 2-body potential interactions. All other ingredients of PHQMD, including the collision integral and the treatment of the quark-gluon plasma (QGP) phase, are adopted from the Parton-Hadron-String Dynamics (PHSD) approach. In PHQMD, cluster formation occurs dynamically, caused by the interactions.
        Here we will present results for the global cluster formation at the end of the collision, as well as the directed and elliptic flow of protons and deuterons in 0.6-1.5 AGeV collisions, observables that prove to be sensitive to the choice of the nuclear equation of state.

        Orateur: Michael Winn (CNRS UMR6457)
      • 15:39
        Experimental study of baryon resonances in nuclei 23m

        Baryon resonances (3-quark states) occurred in the microsecond-old universe during the transition between the Quark Gluon Plasma and the confinement of quarks and gluons in nucleons. Their properties (mass, lifetime, branching ratios, ...) can be determined through nucleon excitations using electron, photon or hadron beams, providing a unique source of information on Quantum ChromoDynamics (QCD), the fundamental theory of the strong interaction. Baryon resonances also play a major role in nuclear matter studies at center-of-mass energies of a few GeV per nucleon.
        Pion-nucleus reactions allow for a study of the behavior of baryon resonances in nuclei. An experiment has been performed by the HADES (High Acceptance Dielectron Spectrometer) collaboration at the GSI accelerator facility in the second resonance region (masses around 1.5 GeV) using polyethylene (C2H4) and carbon targets. The measurements on the carbon target have been used to extract, using a subtraction, data for pion-nucleon interactions. In the energy domain covered by our experiment, baryon resonances (N(1440), N(1520), N(1535)) are excited and their behavior in nuclear matter is completely unknown. Although pion-nucleon reactions are a crucial tool to study baryon resonances, the available data are very scarce. The combination of the HADES set-up and of the GSI pion beam is unique in the world for providing the missing data for baryon spectroscopy.
        In this talk, I will present some preliminary results from the data analysis of pion and proton spectra measured in pion-carbon reactions at center-of-mass energies around 1.5 GeV.

        Orateur: Fatima Hojeij (IJCLab (CNRS/IN2P3))
    • 16:02 16:35
      Pause café 33m
    • 16:35 18:36
      Hadronic Physics
      Président de session: Maxime Guilbaud (SUBATECH)
      • 16:37
        Cluster shape analysis and strangeness tracking for the ALICE upgrade 23m

        ALICE is one of the experiments at the LHC (Large Hadron Collider) at CERN (European Organization for Nuclear Research). The purpose of ALICE (A Large Ion Collider Experiment) is to study the properties of strongly interacting matter by performing different kinds of measurements in proton-proton, proton-nucleus and nucleus-nucleus collisions. The first detector encountered by the collision products is the ITS (Inner Tracking System).
        In preparation for Run 3 of the LHC, which will start in 2022, many ALICE detectors were upgraded, the ITS being one of them. During the commissioning of this tracking system in 2020, cosmic-ray data were taken. The ITS is built of ALPIDE (ALICE Pixel Detector) silicon sensors, which detect particles through pixels that fire when a particle crosses the sensor. When several neighbouring pixels fire during the same particle crossing, they form a cluster. This talk will present two ongoing studies.

        First, a study of the shape of these clusters with real data from the commissioning, but also with data generated by the official ALICE Monte Carlo simulation code. We investigate the impact of various parameters on the cluster shape, such as the dimensions of the pixels and especially the inclination of particle tracks with respect to the surface of the sensors.

        With the recent upgrades of the ITS, the first detection layers are closer to the primary vertex (the collision point). This is a huge improvement for secondary vertex reconstruction, which is important for short-lived heavy-flavour hadrons like the $\Xi_b^-$ for instance, which is at the heart of this second study via a specific decay channel: $\Xi_b^- \to (\Xi_c^0 \to \Xi^- \pi^+) \pi^-$. The aim is to create an analysis prototype using a state-of-the-art detector and a brand-new technique called strangeness tracking. The purpose of strangeness tracking is to improve the efficiency and precision of the reconstruction of weakly decaying particles (such as $\Xi^-$ or hypertritons). This can be achieved using silicon detectors with a few layers very close to the primary vertex. These layers provide the tracking algorithm with information about the decaying particle before it actually decays.

        Orateur: Alexandre Bigot (IPHC, Université de Strasbourg)
      • 17:02
        J/ψ and ψ(2S) production as a function of charged-particle multiplicity at the LHC with the ALICE experiment 23m

        Lattice QCD predicts the formation of the quark-gluon plasma (QGP) at extreme conditions of temperature and energy density, a state of matter where quarks and gluons are no longer confined inside hadrons. The study of the QGP is carried out by different experiments, e.g. the ALICE experiment at the Large Hadron Collider (LHC), where two ultra-relativistic beams of protons or heavy ions are collided to recreate the conditions in which the QGP is formed. Originally, the QGP was expected to be produced only in A--A collisions. However, recent measurements in small collision systems (pp and p--Pb collisions) at high multiplicity have revealed observations that are usually interpreted as signs of QGP formation in A--A collisions. The formation of small QGP droplets or the influence of multiple parton interactions are among the possible explanations for these features. The correlation of particle production, quarkonia for example, with the charged-particle multiplicity helps to understand such behaviour.
        Quarkonium, a bound state of a heavy quark-antiquark pair, is an important tool to understand the QGP, as it is produced during the initial stages of the collision and experiences its full evolution. The suppression of quarkonium production in A--A collisions with respect to pp collisions has been observed at RHIC and LHC energies. Quarkonium production is also studied in p--A collisions in order to determine whether the origin of this suppression is the QGP or the influence of cold nuclear matter (CNM).
        This presentation will show the measurements of J/ψ and ψ(2S) production as a function of charged-particle multiplicity in pp and p--Pb collisions with the ALICE experiment at the LHC.

        Orateur: Theraa Tork (IJCLab)
      • 17:27
        Quarkonia excited state suppression in pp and p—Pb collisions with ALICE 23m

        Quarkonium production in small systems has been the subject of many theoretical and experimental studies. In proton--nucleus (p--A) collisions, it is sensitive to cold nuclear matter effects such as the nuclear modification of parton densities, parton energy loss via initial-state radiation and transverse momentum broadening due to multiple soft collisions. Furthermore, high-multiplicity proton-proton (pp) and p--A collisions have shown features reminiscent of those observed in heavy-ion collisions. Thus, quarkonium production as a function of event multiplicity can bring new insights into processes at the parton level and into the interplay between the hard and soft mechanisms in particle production. In particular, the role of Multiple Parton Interactions (MPI), which are expected to be relevant for the production of heavy quarks at LHC energies, can be investigated.

        In this contribution the multiplicity dependence of self-normalized yields measured in pp collisions at $\sqrt{s} = 13$ TeV and $\sqrt{s} = 5.02$ TeV will be presented for several quarkonium states, namely inclusive J/$\Psi$ at midrapidity as well as the corresponding results for J/$\Psi$, $\Psi$(2S), $\Upsilon$(1S) and $\Upsilon$(2S) at forward rapidity. The nuclear modification factor ($R_{\mathrm{pPb}}$) for J/$\Psi$, $\Psi$(2S), $\Upsilon$(1S), $\Upsilon$(2S) and $\Upsilon$(3S), measured in p--Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 8.16$ TeV in forward and backward rapidities will also be presented, including the centrality dependence of J/$\Psi$ and $\Psi$(2S) and the new results of the excited to ground state ratio for both charmonium and bottomonium. Furthermore, the $R_{\mathrm{pPb}}$ results at midrapidity will be shown at $\sqrt{s_{\mathrm{NN}}} = 5.02$ TeV for both prompt and non-prompt J/$\Psi$, the latter originating from the decay of beauty hadrons. The results are compared with several model calculations and the possible interpretation of the results will be discussed.
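
        For reference, the nuclear modification factor quoted above compares the p--Pb yield with the binary-scaled pp reference, schematically

        $$R_{\rm pPb} = \frac{{\rm d}^{2}\sigma_{\rm pPb}/{\rm d}y\,{\rm d}p_{\rm T}}{A_{\rm Pb}\;{\rm d}^{2}\sigma_{\rm pp}/{\rm d}y\,{\rm d}p_{\rm T}}, \qquad A_{\rm Pb} = 208,$$

        so that $R_{\rm pPb} = 1$ in the absence of nuclear effects.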

        Orateur: Yanchun Ding (Institut de Physique des 2 Infinis de Lyon (FR))
      • 17:52
        Commissioning du détecteur MFT d’ALICE et mesure de la polarisation des J/ѱ produits en collisions Pb-Pb ultrapériphériques à 5.02 TeV 23m

        Ultra-relativistic heavy-ion collisions are an important tool to investigate the Quark-Gluon Plasma predicted by the theory of Quantum Chromodynamics. It is also possible to use these collisions to study the poorly known gluon shadowing effects at low Bjorken-$x$ values. Indeed, Ultra-Peripheral Collisions (UPCs) between two Pb nuclei, in which the impact parameter is larger than the sum of their radii, provide a useful way to study photonuclear reactions. Thanks to the data collected during Run 2 by the ALICE Collaboration, a study based on the angular modulation of the muons originating from decays of photoproduced J/$\psi$ mesons is being performed in Lyon at forward rapidity. The implementation of the Muon Forward Tracker detector in ALICE for Run 3 and Run 4 at forward rapidity will improve the resolution and thus allow more precise studies of photoproduction in UPCs.
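
        The angular modulation mentioned above is usually parametrised by the dimuon decay angular distribution

        $$\frac{{\rm d}^{2}N}{{\rm d}\cos\theta\,{\rm d}\varphi} \propto 1 + \lambda_{\theta}\cos^{2}\theta + \lambda_{\varphi}\sin^{2}\theta\cos 2\varphi + \lambda_{\theta\varphi}\sin 2\theta\cos\varphi,$$

        where $\theta$ and $\varphi$ are the polar and azimuthal angles of the $\mu^{+}$ in a chosen polarization frame (for example helicity or Collins-Soper) and the $\lambda$ coefficients encode the J/$\psi$ polarization.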

        Orateur: Lucrezia Camilla Migliorin (Institut de Physique des 2 Infinis de Lyon)
    • 21:30 22:00
      Conclusions 30m
      Orateurs: Julien Masbou (SUBATECH), Laura Zambelli (LAPP), Laure MASSACRIER (Institut de Physique Nucléaire d'Orsay)