(English version, for French see below)
Organised by the "Fields and Particles" and "Nuclear Physics" divisions of the Société Française de Physique (SFP), the Journées de Rencontre des Jeunes Chercheurs 2023 (JRJC 2023) welcome all PhD students (from the first to the last year) and young postdocs.
This year it will be held from October 22 to October 28, 2023, at the holiday resort La Rivière in Saint-Jean-de-Monts (85) – France.
The JRJCs give each participant the opportunity to present their work in a convivial atmosphere and to get from their colleagues an overview of the research currently carried out in France in the field.
This year the following subjects are proposed: Instrumentation and accelerators - Hadronic physics - Standard Model - Physics beyond the Standard Model - Nuclear structure - Nuclear energy - Medical physics and interdisciplinarity - Gravitational waves - Flavor physics - Astroparticles - Cosmology - Neutrinos - Theory
Presentations can be given either in English or in French. The conference social program includes a half-day excursion in the surrounding area, as well as one or two public seminars. The deadline for registration is September 8th, 2023. For any other information, please feel free to contact the secretariat or any member of the organising committee (see below).
(Français)
Organisées par les divisions "Champs et Particules" et "Physique Nucléaire" de la Société Française de Physique (SFP), les Journées de Rencontre des Jeunes Chercheurs 2023 s'adressent à tous les étudiants en thèse (de la première à la dernière année) et aux jeunes post-doctorants.
Elles auront lieu du 22 au 28 octobre 2023 et se tiendront au village vacances La Rivière à Saint-Jean-de-Monts (85) – France.
Les JRJC sont l'occasion pour chaque participant de présenter ses travaux de recherche dans une ambiance conviviale et de partager avec ses collègues une vue d'ensemble des différentes recherches menées à l'heure actuelle dans sa spécialité et dans des domaines proches.
Les thèmes proposés cette année sont les suivants : Instrumentation et accélérateurs - Physique hadronique - Modèle Standard - Physique au-delà du Modèle Standard - Structure nucléaire - Énergie nucléaire - Physique médicale et interdisciplinaire - Ondes gravitationnelles - Physique des saveurs - Astroparticules - Cosmologie - Neutrinos - Théorie
La langue de travail des JRJC est le français, mais les non-francophones peuvent donner leur exposé en anglais. Le programme social comprend, outre une excursion dans la région, une ou deux conférences en soirée pouvant être ouvertes au public. La date limite d'inscription est fixée au 8 septembre 2023. Pour tout renseignement complémentaire, n'hésitez pas à contacter notre secrétariat ou bien un membre du comité d'organisation (voir ci-dessous).
Francois Brun (CEA Saclay) | francois.brun@cea.fr |
Luca Cadamuro (IJCLab) | luca.cadamuro@cern.ch |
Rachel Delorme (LPSC) | rachel.delorme@lpsc.in2p3.fr |
Romain Gaior (LPNHE) | romain.gaior@lpnhe.in2p3.fr |
Andreas Goudelis (LPC Clermont) | andreas.goudelis@clermont.in2p3.fr |
Maxime Guilbaud (SUBATECH) | guilbaud@subatech.in2p3.fr |
Julien Masbou (SUBATECH) | masbou@subatech.in2p3.fr |
Laure Massacrier (IJCLab) | massacrier@ijclab.in2p3.fr |
Sabrina Sacerdoti (APC) | sacerdoti@apc.in2p3.fr |
Thomas Strebler (CPPM) | strebler@cppm.in2p3.fr |
Antonio Uras (IP2I) | antonio.uras@cern.ch |
Laura Zambelli (LAPP) | laura.zambelli@lapp.in2p3.fr |
A new method is presented for calibrating hadronic tau leptons ($\tau_h$) using $Z \rightarrow \mu \tau_h$ events recorded by the CMS experiment during 2018 at $\sqrt{s}$ = 13 TeV. The calibration is performed on data samples in which the decay of a Z boson into $\tau_{\mu}\tau_h$ is reconstructed. The invariant mass built from the visible decay products yields a distribution characteristic of events with a true Z decay into tau leptons. This mass distribution enables the separation of the signal from the background contributions, which mostly consist of particles misidentified as $\tau_h$. The comparison of this distribution in data and simulation allows the identification efficiency and the energy scale to be corrected. The combined fit of the two correction factors avoids the double-counting of correction-factor uncertainties that currently enters the energy-scale calibration, and takes into account possible correlations between the two correction factors. The fit is performed separately for each decay-mode (DM) region and in bins of the transverse momentum $p_T(\tau_h)$. This splitting allows a better modelling of the physics, taking into account the different kinematics of the regions. The correction factors were found to have a negligible correlation across DMs but not across $p_T(\tau_h)$ bins, showing the importance of the combined fit. This study is relevant for analyses using calibrated hadronic tau leptons, such as the search for CP violation in $H \rightarrow \tau\tau$ decays.
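As an illustration of such a simultaneous extraction, the minimal sketch below fits an identification scale factor and an energy scale to a toy binned visible-mass distribution with a Poisson likelihood; the templates, yields and peak position are invented for the example and do not correspond to the CMS analysis.

```python
# Toy simultaneous fit of an identification scale factor (sf_id) and an
# energy scale (es) on a binned visible-mass distribution.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

edges = np.linspace(40, 120, 41)              # visible-mass bins [GeV]
centers = 0.5 * (edges[:-1] + edges[1:])
widths = np.diff(edges)

def signal_template(es):
    """Z->tau tau visible-mass shape, peak position scaled by the energy scale."""
    return norm.pdf(centers, loc=70.0 * es, scale=10.0) * widths

background = np.full_like(centers, 1.0 / len(centers))   # flat fake-tau background

# pseudo-data generated with sf_id = 0.95 and es = 1.02 (the "truth" of the toy)
rng = np.random.default_rng(0)
expected_true = 5000 * 0.95 * signal_template(1.02) + 2000 * background
data = rng.poisson(expected_true)

def nll(params):
    sf_id, es = params
    mu = 5000 * sf_id * signal_template(es) + 2000 * background
    return np.sum(mu - data * np.log(mu))     # Poisson negative log-likelihood

res = minimize(nll, x0=[1.0, 1.0], method="Nelder-Mead")
print("fitted sf_id, es:", res.x)
```

In a full analysis the correlation between the two parameters would be extracted from the same likelihood (e.g. its Hessian), which is what allows the combined fit to avoid double counting of uncertainties.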
The first part of the presentation focuses on a current LHCb analysis measuring the branching fractions of the $B^0_{d(s)}\to K^0_S hh'$ modes, where $h$ and $h'$ denote a pion or a kaon, using the LHCb Run 1 and Run 2 data. The first goal of the analysis is to measure the $B^0_d\to K^0_S K^+K^-$ decay mode, unobserved to date. The second goal is to update the measurement of the known branching fractions with the full Run 1 + Run 2 dataset and improved analysis techniques.
The second part of the presentation focuses on the upcoming amplitude analysis within LHCb that follows the branching fraction measurement of the $B^0_{d(s)}\to K^0_S hh'$ modes. The first time-integrated Dalitz-plot analysis of the decay $B^0_s\to K^0_S\pi^+\pi^-$ will be performed in order to reveal direct CP asymmetries.
The third part of the presentation focuses on the capabilities of the $ee$ phase of the Future Circular Collider (FCC-ee) to study the electroweak penguin transitions $b\to s\tau^+\tau^-$, unobserved to date. At the meson level, the decay $B^0_d\to K^{*0}\tau^+\tau^-$ with $\tau\to\pi\pi\pi\nu_{\tau}$ is studied with a method that explicitly reconstructs the two undetected neutrinos. The detector requirements to study this decay are evaluated; in particular, the vertexing resolution is emulated and compared to that of the IDEA detector concept used in FCC-ee simulations. Simulated signal events together with the dominant simulated backgrounds are analysed in order to determine the precision of the $B^0_d\to K^{*0}\tau^+\tau^-$ branching fraction measurement as a function of the vertexing resolution and to evaluate the feasibility of this measurement at FCC-ee.
The observable Universe is mainly made up of matter, while almost all the antimatter disappeared in the very early Universe. One explanation is that the Universe obeys the Sakharov conditions, which require in particular the violation of the C (charge) and CP (charge-parity) symmetries. An area of particle physics deals with the understanding and measurement of this symmetry breaking and the potential discovery of physics beyond the Standard Model (SM).
In particular, the $\gamma$ angle of the Cabibbo-Kobayashi-Maskawa (CKM) matrix sets a benchmark for CP violation, to be compared with the SM predictions. The statistics accumulated by the LHCb detector allow an even more precise measurement of the $\gamma$ angle. Its accuracy is currently around $4^\circ$; however, a precision of around $1^\circ$ is desirable to test the SM up to scales of tens of TeV.
The golden mode for measuring $\gamma$ is the $B^-\rightarrow DK^-$ decay, where $D$ can be either a $D^0$ or a $\bar{D}^0$, through the interference between the $b\rightarrow c\bar{u}s$ and $b\rightarrow u\bar{c}s$ amplitudes. The purpose of this study is to perform a model-independent measurement of this angle through the $B^\pm\rightarrow D^0(\rightarrow K_s^0\pi^+\pi^-\pi^0)K^\pm$ decays, using LHCb data from Runs 1 and 2 (2011-2018), with a generalised GGSZ method. This measurement, based on a tree-level decay, can be used for the direct $\gamma$ determination, setting a “standard candle” for the SM.
Jets are collimated sprays of particles that stem from the emission of a quark or a gluon. They are crucial for many Standard Model measurements and searches for new physics, and they are ubiquitous in high-energy proton-proton collision environments such as the LHC. The goal of jet calibration is to recover, from the detector outputs, the true 4-momentum of the jet, which is that of the particle initiating it. This talk is an overview of the jet calibration process in the ATLAS collaboration, with an emphasis on the in situ eta-intercalibration step, on which I am working.
In the current ATLAS jet calibration procedure, the GSC/GNNC methods are applied to improve the Jet Energy Resolution (JER), followed by various methods that measure the JER in situ and correct it.
This project focuses on merging several aspects of the jet calibration chain, combining a NN-based calibration procedure with a loss function that minimises the JER directly in data. As a first step, the JER is approximated by the standard deviation of the dijet asymmetry. By merging several sequential calibration steps, the goal is to save the time and human resources required for deriving the final jet calibration and potentially improve the JER performance.
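A minimal sketch of such a loss, assuming a toy network and toy dijet events (not the ATLAS GSC/GNNC setup): the per-jet correction is produced by the network and the loss is the standard deviation of the dijet asymmetry, which directly tracks the JER.

```python
# Toy "JER loss": standard deviation of the dijet asymmetry after applying
# a learned per-jet correction factor. Architecture and inputs are placeholders.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1), nn.Softplus())

def jer_loss(feat1, pt1, feat2, pt2):
    """Dijet asymmetry spread after the per-jet NN correction."""
    pt1c = pt1 * net(feat1).squeeze(-1)
    pt2c = pt2 * net(feat2).squeeze(-1)
    asym = (pt1c - pt2c) / (pt1c + pt2c)
    return asym.std()

# toy batch of back-to-back jet pairs
feat1, feat2 = torch.randn(256, 4), torch.randn(256, 4)
pt1, pt2 = 100 + 10 * torch.randn(256), 100 + 10 * torch.randn(256)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
opt.zero_grad()
loss = jer_loss(feat1, pt1, feat2, pt2)
loss.backward()
opt.step()
```

In practice additional terms (e.g. constraining the average response) would be needed so that the resolution is not minimised by degenerate scalings; the sketch only illustrates the differentiable-asymmetry idea.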
Several intriguing hints of deviations from the SM predictions have appeared in studies of decays of B hadrons (hadrons containing the beauty (b) quark) involving leptons. The Standard Model contains three lepton families: electrons, muons, and tau leptons, each accompanied by the corresponding type of neutrino. The most striking deviation is the hint of a violation of lepton flavour universality (LFU), the principle that the interactions of the electroweak bosons with the leptons are independent of the lepton flavour. One example of such a deviation, between semileptonic B-hadron decays with an electron and with a muon in the final state, was recently reported by the Belle collaboration through measurements of the forward-backward lepton asymmetry and the D* polarization. We plan to cross-check the Belle results at the LHC, obtain these angular coefficients from a model-independent fit of angular distributions, and compare the values with the Standard Model predictions. We conduct MC studies to measure the resolution of the neutrino reconstruction procedure, estimate the expected statistical uncertainty, and test the sensitivity of the model-independent template fit approach to different NP scenarios. All current results were obtained on simulation; the next step is to check the data-MC agreement and perform the fit on data.
Over the next decade, increases in instantaneous luminosity and detector granularity will increase the amount of data that has to be analyzed by high-energy physics experiments, whether in real time or offline, by an order of magnitude. The reconstruction of charged particles, which has always been a crucial element of offline data processing pipelines, must increasingly be deployed from the very first stages of the real-time processing to enable experiments to achieve their physics goals. Graph Neural Networks have received a great deal of attention in the community because their computational complexity scales linearly with the number of hits in the detector, unlike conventional algorithms, which often scale quadratically or worse. We present a first implementation of the vertex detector reconstruction for the LHCb experiment using GNNs, and benchmark its computational performance in the context of LHCb's fully GPU-based first-level trigger system, Allen. As Allen performs charged-particle reconstruction at the full LHC collision rate, over 20 MHz in the ongoing Run 3, each GPU card must process around one hundred thousand collisions per second. Our work is the first attempt to operate GNN charged-particle reconstruction in such a high-throughput environment using GPUs, and we discuss the pros and cons of the GNN and classical algorithms in a detailed like-for-like comparison.
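For illustration only (not the Allen implementation), the sketch below builds the kind of k-nearest-neighbour hit graph that an edge-classifying GNN would take as input; the hit coordinates and the value of k are placeholders.

```python
# Toy construction of a hit graph: connect each hit to its k nearest
# neighbours; an edge-classifying GNN would then label edges as track segments.
import numpy as np
from sklearn.neighbors import NearestNeighbors

hits = np.random.rand(1000, 3)                     # toy (x, y, z) hit positions
nbrs = NearestNeighbors(n_neighbors=5).fit(hits)
_, idx = nbrs.kneighbors(hits)

# edge list: each hit connected to its 4 nearest neighbours (excluding itself)
src = np.repeat(np.arange(len(hits)), 4)
dst = idx[:, 1:].ravel()
edges = np.stack([src, dst])                       # shape (2, n_edges)
print(edges.shape)
```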
In anticipation of the High-Luminosity phase of the Large Hadron Collider at CERN (HL-LHC), the ATLAS experiment is upgrading its innermost detector to the new Inner Tracker (ITk), characterized by its wider coverage and increased granularity. While this new detector promises enhanced spatial resolution for track measurements, its combination with the increased luminosity of the HL-LHC poses a computational challenge. The augmented number of space points and the combinatorial nature of track reconstruction would result in an unsustainable CPU usage for the existing tracking algorithms, therefore requiring significant improvements. This work focuses on improving the initial stages of the tracking chain, more specifically the seeding process.
Within the ACTS framework, the existing method filters the seeds based on manually defined scores, selecting the best candidates for subsequent tracking. This study introduces a novel approach to constructing the seeds by bucketing the space points using machine-learning algorithms, which makes it possible to explore physics-inspired metrics and anticipates a future shift towards metric learning.
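A hedged toy of the bucketing idea: space points are grouped with a simple k-means clustering on their (phi, z) coordinates so that seed triplets are only formed within a bucket; the actual study may use different algorithms and metrics.

```python
# Toy bucketing of space points before seeding (illustrative, not the ACTS code).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
phi = rng.uniform(-np.pi, np.pi, 5000)             # toy cylindrical coordinates
z = rng.uniform(-1000, 1000, 5000)                 # [mm]

features = np.column_stack([np.cos(phi), np.sin(phi), z / 1000.0])
buckets = KMeans(n_clusters=50, n_init=10).fit_predict(features)

# seeds are then formed from space-point triplets drawn within each bucket
for b in range(3):
    print(f"bucket {b}: {np.sum(buckets == b)} space points")
```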
In 2029, the High-Luminosity phase of the LHC (HL-LHC) is planned to begin taking data, bringing an unprecedented level of radiation and of average particle interactions per bunch crossing (pileup/PU). In order to preserve the physics performance that the CMS detector achieves now, the collaboration has decided to replace the endcaps of the current electromagnetic and hadronic calorimeters (ECAL and HCAL, respectively) with the High Granularity Calorimeter (HGCAL). My work has focused primarily on optimizing the performance of the HGCAL trigger primitive generator (TPG) system. The TPG system constructs clusters of energy deposits that likely belong to the same primary particle. These clusters form one type of trigger primitive sent to the central Level-1 Trigger, which uses them to select interesting events. The energy response and resolution of these trigger primitives were measured for simulated photon, electron, and pion particle-gun samples, both in the absence of pileup interactions and with 200 PU (the ultimate case for the HL-LHC). An energy-weighting procedure is performed on all samples to account for the energy losses of the clusters due to thresholds and leakage; an additional correction is applied to the pileup samples to account for the (approximately) linear increase of the pileup contamination with the pseudorapidity $|\eta|$. The TPG performance is well optimized for constant cluster radii; it can, however, be further improved by varying the radii independently in each layer of the HGCAL, with the aim of better mimicking the true transverse development of the particle showers throughout the detector. For this we plan to build and train a machine-learning model to determine these radii, optimizing for the energy response and resolution.
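As a toy illustration of the pileup correction described above, the sketch below fits the average pileup contamination as a linear function of $|\eta|$ and subtracts it from the cluster energy; all numbers are invented and do not correspond to the HGCAL TPG values.

```python
# Toy linear-in-|eta| pileup correction.
import numpy as np

abs_eta = np.random.uniform(1.6, 2.9, 10000)        # toy cluster pseudorapidity
pu_energy = 2.0 + 3.0 * abs_eta + np.random.normal(0, 0.5, abs_eta.size)  # GeV

slope, offset = np.polyfit(abs_eta, pu_energy, 1)    # linear model of <E_PU>(|eta|)

def corrected_energy(e_cluster, eta):
    """Subtract the average pileup contamination expected at this |eta|."""
    return e_cluster - (offset + slope * abs(eta))

print(corrected_energy(50.0, 2.2))
```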
Nuclear activation is the production of radionuclides by irradiation. This phenomenon concerns all operating or soon-to-be-dismantled particle accelerators used in various fields, from medical applications, with the production of radioisotopes or radiotherapy cancer treatments, to industrial applications, with the sterilization of materials and food preservation [1]. For more than three decades, the possibility of using cyclotrons for nuclear power generation and nuclear waste disposal has also been discussed, based on the Accelerator-Driven System (ADS) technology. This technology looks promising and will give an essential impetus to the development of high-power cyclotrons.
The IAEA advocates the definition of a decommissioning plan for any particle accelerator facility. Such plans mandate in particular a radionuclide inventory. Monte Carlo software is an essential component in achieving such an estimation, with a critical choice of nuclear database.
JANIS is a database regrouping the main evaluated nuclear data libraries (such as ENDF, TENDL or JEFF). This work focuses on the (p,n) and (n,$\gamma$) reactions over the ADS energy range, i.e. 1-1000 MeV, and gives a systematic analysis of them. This discussion will be complemented by the extraction of cross sections and particle yields from the four main Monte Carlo codes: Fluka, PHITS, Geant4 and MCNP6.
This discussion will be applied to the study of the radioactivity induced in various materials (Sc, Tb, Ta, W, Au) of known composition, irradiated by protons of 13.5 and 16.5 MeV at the CYRCé cyclotron facility, focusing on the lower energy range of ADS. Through Monte Carlo simulations, we have estimated the neutron fields and their associated induced activities [2]. We compared the simulation results with experimental activation measurements performed by high-resolution gamma-ray spectrometry (HPGe, LabSOCS).
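For reference, the induced activity of a given radionuclide is typically estimated from the computed flux and the reaction cross section through the standard (thin-target) activation formula:

$$A(t_{\mathrm{cool}}) = N_{\mathrm{t}}\,\sigma(E)\,\phi\,\left(1-e^{-\lambda t_{\mathrm{irr}}}\right)e^{-\lambda t_{\mathrm{cool}}},$$

where $N_{\mathrm{t}}$ is the number of target nuclei, $\sigma(E)$ the reaction cross section, $\phi$ the particle flux, $\lambda$ the decay constant of the produced radionuclide, and $t_{\mathrm{irr}}$, $t_{\mathrm{cool}}$ the irradiation and cooling times; the Monte Carlo codes effectively fold this expression over the simulated energy-differential fluence.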
References
[1] A. Nourreddine, N. Arbor, J. Riffaud et al., "Assessment of Activation in Food Products Irradiated with High Energy X Rays", IAEA-TECDOC-2008, 978-92-0-137022-Vienna (2022), pp. 193-214.
[2] J. Collin, J.M. Horodynski, N. Arbor, M. Barbagallo, L. Tagliapietra, F. Carminati, G. Galli Carminati, A. Nourreddine, "Validation of Monte Carlo simulations by experimental measurements of proton- and photon-induced activation in cyclotrons and LINAC", 8th International Conference on Advancements in Nuclear Instrumentation Measurement Methods and their Applications, Lucca, 12-16 June 2023.
I will present the construction of a Hubble diagram, and in particular the standardisation of the Type Ia Supernova luminosity. I will then talk about the ongoing ZTF survey, the current state-of-the-art low-redshift survey. Finally, I will discuss one of the challenges faced by supernova cosmology today: the astrophysical dependencies of Type Ia Supernovae.
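For reference, the standardisation usually relies on a Tripp-like relation linking the distance modulus to the light-curve stretch and colour (the exact parametrisation used in the talk may differ):

$$\mu = m_B - M_B + \alpha\, x_1 - \beta\, c,$$

where $m_B$ is the observed peak magnitude, $x_1$ the stretch, $c$ the colour, and $\alpha$, $\beta$, $M_B$ are fitted standardisation parameters.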
As of today, the Hubble diagram, which maps the luminosity distance-redshift relation for Type Ia supernovae (SNe Ia), allows us to infer cosmological parameters such as the Dark Energy equation of state (w) with an accuracy reaching a few per cent. Upcoming SN Ia samples with O(30,000) SNe (30 times the current worldwide statistics) will allow us to reach the per cent level and start probing potential evolutions of w with redshift. To reach this goal, an effort has to be made to push the level of the systematic uncertainties affecting the distance measurements down to ~0.1%. In particular, luminosity distances are affected by a selection bias called the 'Malmquist bias'. Being able to see only the most luminous supernovae at high distances decreases the apparent mean magnitude of the population and therefore negatively biases the estimation of distances at high redshifts.
In current analyses, the value of this bias is determined by time-consuming simulations based on either a Bayesian framework or a multiple-time fitting approach. As a faster alternative, we propose a maximum-likelihood method relying on a fast computation of the truncated likelihood function and its first- and second-order derivatives. This new method allows us, for a given survey, to simultaneously estimate the luminosity distances of the supernovae and the selection function of the survey. This prevents the distances from being biased and eases the propagation of uncertainties, as all the parameters of the model are fitted at the same time. Eventually, we expect the inference of luminosity distances to be faster by a few orders of magnitude compared to a classic Bayesian framework. This is essential to be able to deal with the 30-fold increase in statistics expected within the next decade.
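A minimal sketch of the truncated-likelihood idea, assuming a sharp magnitude limit and Gaussian magnitudes (both simplifications): each event's likelihood is renormalised by the selection probability, which removes the Malmquist bias that a naive mean would carry.

```python
# Toy truncated-likelihood fit for a magnitude-limited sample.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

m_lim = 24.0                      # toy magnitude limit of the survey
rng = np.random.default_rng(2)
m_true, sigma = 23.5, 0.3
sample = rng.normal(m_true, sigma, 20000)
observed = sample[sample < m_lim] # only objects brighter than the limit survive

def nll(params):
    mu, sig = params
    sig = abs(sig)                               # guard against negative widths
    log_pdf = norm.logpdf(observed, mu, sig)
    log_sel = norm.logcdf(m_lim, mu, sig)        # selection probability P(m < m_lim)
    return -np.sum(log_pdf - log_sel)

naive = observed.mean()                                    # biased (Malmquist)
fit = minimize(nll, x0=[naive, 0.2], method="Nelder-Mead")
print("naive mean:", naive, " truncated-likelihood fit:", fit.x)
```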
The 21cm hydrogen line is present during all the eras following the cosmological recombination, and it contains information about both the cosmology and the astrophysical processes at work in the Universe.
During this presentation, I will discuss the 21cm emission from collapsing gas clouds. I will in particular take into account the gas cooling due to thermal molecular processes. To do so, one has to compute the full chemical network of reactions between atomic and molecular species. The resulting cooling channel can enable, under certain conditions, the growth of thermal instabilities, which are a crucial ingredient for triggering the fragmentation of haloes.
A search for the resonant production of a heavy scalar X decaying into a Higgs boson and a new lighter scalar S, through the process X $\rightarrow$ S($b\bar{b}$) H($\gamma \gamma$), where the two photons are consistent with the Higgs boson decay, is performed. The search is conducted using 140 fb$^{-1}$ of LHC Run 2 data recorded by ATLAS. The mass space investigated in the analysis is $170 \leq m_X \leq 1000$ GeV and $15 \leq m_S \leq 500$ GeV. Parameterised Neural Networks (PNN) are used to enhance the signal purity and to achieve continuous sensitivity in a domain of the ($m_X$, $m_S$) mass plane. A log-likelihood fit is performed on the PNN score distribution to look for an excess over the expected background compatible with the X $\rightarrow$ S($b\bar{b}$) H($\gamma \gamma$) signal. If no excess is found, model-independent upper limits will be set on the cross section times branching ratio.
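As a rough illustration of this statistical treatment (toy templates and yields, not the analysis ones), a binned Poisson likelihood in the PNN score can be scanned in the signal strength $\mu$:

```python
# Toy binned Poisson likelihood scan in the signal strength mu.
import numpy as np
from scipy.stats import poisson

score_bins = 10
bkg = np.linspace(100, 5, score_bins)          # falling background template
sig = np.linspace(0.5, 8, score_bins)          # signal peaks at high PNN score
data = np.random.default_rng(3).poisson(bkg)   # background-only pseudo-data

def neg2_log_l(mu):
    exp = bkg + mu * sig
    return -2 * np.sum(poisson.logpmf(data, exp))

scan = np.linspace(0, 5, 101)
q = np.array([neg2_log_l(m) for m in scan]) - neg2_log_l(0.0)
print("best-fit mu:", scan[np.argmin(q)])
```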
Deviations from the Standard Model have long been observed in semileptonic B-meson decays, notably in $b \to s\ell\ell$ transitions, triggering speculations on potential New Physics effects in this sector. After the recent update of $R_{K^{(*)}}$ and BR($B_{(s)} \to \mu\mu$) by the LHCb collaboration, the remaining significant deviations from the SM in FCNC B decays are found in the branching ratios of mesonic decays involving $b \to s\mu\mu$ and in the angular observable $P_5'$.
Unlike $R_{K^{(*)}}$ and BR($B_{(s)} \to \mu\mu$), the observables BR($B_{(s)} \to M\mu\mu$) ($M = K^{(*)}, \phi, \dots$) are theoretically challenging to predict accurately because of their high sensitivity to non-perturbative QCD contributions, both local and non-local. These contributions yield a theoretical error of order 30%, which can be as large as (sometimes larger than) the experimental uncertainty, and clearly hamper the potential of these observables for discovery.
I will discuss the current state of the B-meson anomalies and the impact of transition form factors when considering the tension of the experimental data with the Standard Model.
The pursuit of deviations from the Standard Model (SM) is prompted by the recognition of the model's known limitations. Some findings suggest possible violations of lepton universality. The identification of such deviations could potentially lead to an SM extension, adding a Z' boson that interacts differently with the different lepton flavours.
The SM is also firmly grounded in the principle of Lorentz invariance. Nevertheless, some theories predict a non-conservation of this symmetry, a possibility considered within the framework of the Standard Model Extension (SME).
In this presentation, I will share my research on testing these symmetries. Firstly, I will discuss my work on the search for lepton-universality violation in $t\bar{t}Z \to \ell^+\ell^-$ decays using CMS data. Next, I will present my results on the test of Lorentz invariance by analysing photon decays in vacuum at the Large Hadron Collider (LHC).
Vector boson scattering (VBS) processes probe the fundamental structure of electroweak interactions and provide a high sensitivity to new physics (NP) phenomena affecting gauge and Higgs couplings. These processes are among the rarest in the Standard Model (SM) and were observed in recent years at the Large Hadron Collider (LHC). The semileptonic final state, where one of the scattered gauge bosons decays hadronically into a quark/antiquark pair and the other decays leptonically into electrons, muons or neutrinos, offers good statistics and gives access to many different couplings. VBS is sensitive to trilinear and quartic gauge couplings, which can be studied in the framework of Effective Field Theory (EFT) to set model-independent constraints on NP. The various and complementary VBS analyses performed in the ATLAS experiment make it possible to start a combination of the different results in order to set limits on dimension-8 EFT operators.
Since the existence of the Higgs boson has been experimentally confirmed by the ATLAS and CMS Collaborations, many of its properties have been measured with high precision. However, one of its main properties, the shape of the Higgs potential, remains unknown. Higgs pair production represents the most direct way of probing this potential, and any observed excess could be a sign of new physics. According to various BSM theories, an excess of HH signal may result from a new resonance X decaying into two Higgs bosons. The X $\to$ HH searches in CMS are presented.
A search for dark matter particles produced in association with a new neutral vector boson is performed using proton-proton collisions at $\sqrt{s}$ = 13 TeV, corresponding to an integrated luminosity of 140 fb$^{-1}$ recorded by the ATLAS detector at the Large Hadron Collider. Decays of the Z' boson into same-flavour light leptons (e$^+$e$^-$/$\mu^+\mu^-$) are studied for Z' masses above 200 GeV. No significant excess over the Standard Model prediction is observed. The results of this search are interpreted in several benchmark scenarios, including dark-Higgs and light vector boson models. Cross-section limits are set for each benchmark scenario, as well as limits on the coupling of the Z' to leptons.
My thesis aims to search for long-lived decays of new massive particles in the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC). CMS is one of the four main experiments at the LHC, where high-energy proton-proton collisions are produced. New long-lived particles are predicted in several extensions of the Standard Model (SM). In the model considered, the R-parity-violating Minimal Supersymmetric Standard Model (RPV-MSSM), the lightest supersymmetric particle (LSP) is long-lived and decays into SM particles. Therefore, the present goal of my thesis is to reconstruct the displaced vertex emerging from the decay of the LSP and to define selections that reduce the main backgrounds.
The Standard Model (SM) cannot explain the composition of Dark Matter in the Universe. Some Beyond the Standard Model theories predict the existence of a dark hidden sector containing new hypothetical particles: the stable particles of this sector are Dark Matter candidates. These new particles could interact weakly with Standard Model ones through a new interaction, and thus could be produced in proton-proton collisions at the LHC.
In the emerging-jets search, we look for invisible particles from this sector decaying to SM particles with a certain lifetime, producing displaced signals in the detector called emerging jets. The challenge of this analysis is to detect these very rare interactions in the data, by understanding the signature of such particles and by selecting a maximum of events that may contain emerging jets while rejecting uninteresting events.
Moreover, there is a need to test and study many other theoretical models with the data taken at the LHC, but a given LHC analysis cannot test all possible related models. This is why reinterpretation frameworks are useful: they make it possible to study constraints on new models that have not been considered by existing analyses. Several such frameworks exist, in particular MadAnalysis, which will be discussed here.
The expected increase of the particle flux at the high-luminosity phase of the LHC (HL-LHC), with instantaneous luminosities up to L $\simeq$ 7.5$\times$10$^{34}$ cm$^{-2}$ s$^{-1}$, will have a severe impact on the ATLAS detector performance. The pile-up is expected to increase on average to 200 interactions per bunch crossing. The reconstruction and trigger performance for electrons, photons, jets and missing transverse energy will be severely degraded in the end-cap and forward regions, where the liquid-argon electromagnetic calorimeter has coarser granularity and the inner tracker has poorer momentum resolution compared to the central region.
The High Granularity Timing Detector (HGTD), a new timing detector for ATLAS, will be installed in front of the liquid-argon end-cap calorimeters for pile-up mitigation and for bunch-by-bunch luminosity measurements. This detector will cover the pseudo-rapidity range from 2.4 to about 4.0. Two double-sided layers of silicon sensors will provide precision timing information for minimum ionizing particles, with a time resolution better than 50-70 ps per hit (i.e. 30-50 ps per track), in order to assign each particle to the correct vertex. Each readout cell has a transverse size of 1.3$\times$1.3 mm$^2$, leading to a highly granular detector with about 3 million readout electronics channels. The Low Gain Avalanche Detector (LGAD) technology was chosen as it provides an internal gain large enough to reach the signal-over-noise ratio needed for excellent time resolution. A dedicated ASIC for the HGTD, ALTIROC, is being developed in several phases, producing prototype versions with 2$\times$2, 5$\times$5 and 15$\times$15 channels. HGTD modules are hybrids of the LGAD and ALTIROC, connected through a flip-chip bump-bonding process.
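The per-track figure follows from averaging the (assumed uncorrelated) hit time measurements along a track:

$$\sigma_{t}^{\mathrm{track}} \simeq \frac{\sigma_{t}^{\mathrm{hit}}}{\sqrt{N_{\mathrm{hits}}}},$$

so that with $\sigma_{t}^{\mathrm{hit}} \approx 50\text{-}70$ ps and typically two or more hits per track from the two double-sided layers, one recovers the quoted 30-50 ps per track.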
Several test-beam campaigns were conducted at DESY and at the CERN SPS H6 beamline in 2022. The performance of irradiated carbon-enriched LGAD sensors has been studied. First module prototypes of 15$\times$15 arrays with a pad size of 1.3$\times$1.3 mm$^2$ for the HGTD project, from different manufacturers, have been tested. Their performance with charged-particle beams before irradiation is evaluated. A summary of the results from LGAD-only sensors and hybrid modules will be presented.
Ion beam analysis has been developed at the ARRONAX cyclotron in Nantes [1]. Light ions, including $H^{+}$ and $He^{2+}$, are accelerated to the energy required for the analyses before being extracted into air. We use a fixed 68 MeV α beam and several proton beams from 17 MeV to 70 MeV. We will explore their specific interest for material analysis by beam irradiation. Interactions between the ions and the target atoms produce X-rays and γ-rays, through electronic collisions and nuclear activation, which are characteristic of the nature of the target material. This is the basis of two analytical techniques: HE-PIXE (high-energy particle-induced X-ray emission) and PAA (proton activation analysis). High energies are used to probe the material in depth, from a few μm to several mm, depending on the nature of the material and of the beam. The K X-ray production cross section is high for heavy elements; it increases the probing depth compared to the less energetic L and M lines, which are much more attenuated near the target surface. Even so, HE-PIXE analysis remains dependent on the X-ray attenuation in the target. For thicker targets (>1 mm), we use PAA to detect the energetic γ-rays emitted through radioactive decay by the target after irradiation.
The non-destructive analysis of art and archaeological objects is central to heritage studies, enabling us to trace societies' cultural evolution and technical history. The presentation will focus on two applications of ion beam analysis looking at raw material supply in silver coins and the manufacturing technique used on a medieval lead pipe.
The first study was carried out on silver coins minted in Nantes in the late 16th century. The South American mine at Potosí, in present-day Bolivia, has been in production since 1548. Nantes' trade with Spain favoured the use of Potosí silver to mint French coins from 1575 onwards. Previous studies using the thermal neutron activation technique have highlighted indium as a trace element specific to Potosí silver [2]. This study provides a new look into the activity of the Nantes mint during the Wars of Religion. We seek to identify coins containing a high concentration of indium in order to trace the exchanges of silver between Nantes and Potosí. For this purpose, we use HE-PIXE with a 68 MeV α beam.
The second study focused on a set of lead pipes discovered during the excavation of the water supply system of a medieval castle (13th-14th centuries). PAA is performed to characterize the composition of the metals used. Regarding the pipe solder joints, we investigate whether the tin-lead concentration profile is homogeneous in all the solder joints, in order to characterize the quality of the soldering technique [3]. The analysis is based on the differential gamma-ray attenuation of the Sn and Pb radioisotopes, probed at several beam energies (17, 34, 45 and 68 MeV).
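For context, the method rests on the exponential attenuation of γ-rays in the target: lines of different energies from the same radioisotope are attenuated differently, so their measured intensity ratio carries depth information. In a minimal form (homogeneous target assumed):

$$I(E) = I_0(E)\, e^{-\mu(E)\,x},\qquad \frac{I(E_1)}{I(E_2)} = \frac{I_0(E_1)}{I_0(E_2)}\, e^{-\left[\mu(E_1)-\mu(E_2)\right]x},$$

where $\mu(E)$ is the linear attenuation coefficient and $x$ the effective depth of the emitting material; the emission-probability ratio $I_0(E_1)/I_0(E_2)$ is known from nuclear data.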
[1] C. Koumeir, F. Haddad, V. Metivier, N. Servagent and N. Michel, "A new facility for High energy PIXE at the ARRONAX Facility", p. 9.
[2] M. F. Guerra (Ed.), "The mines of Potosi: A silver Eldorado for the European economy", presented at Ion beam study of art and archaeological objects: a contribution by members of the COST G1 Action, Luxembourg: Office for the Official Publications of the European Communities, 2000, p. 88.
[3] E. Paparazzo, "Surface and interface analysis of a Roman lead pipe 'fistula': microchemistry of the soldering at the join, as seen by scanning Auger microscopy and X-ray photoelectron spectroscopy", Appl. Surf. Sci., vol. 74, no. 1, pp. 61-72, Jan. 1994, doi: 10.1016/0169-4332(94)90100-7.
Reducing the dose delivered by CT scanners is a major public-health issue [1]. The objective is to obtain CT images that allow a medical interpretation with the lowest possible dose.
The GATE Monte Carlo simulation platform (version 10 beta) is used to model the Go Open Pro scanner from Siemens Healthineers.
Version 10 beta allows the whole scanner acquisition chain to be modelled from Python scripts: generation of the primary photons, interactions of photons and electrons in matter, and detection electronics. The simulation results provide dose maps in different test objects and the reconstruction of 3D CT images.
This study compares the GATE modelling with multicentric physical measurements within Unicancer: the dose for different acquisition protocols and subjects (phantom and patient), as well as the quality of the produced images, are studied, compared and optimised.
A multicentric measurement protocol is being built to test the different dose-reduction features put forward by Siemens Healthineers (automatic intensity modulation, for example).
[1] IRSN report, « Analyse des données relatives à la mise à jour des NRD en radiologie et en médecine nucléaire pour les années 2019 à 2021 ».
Dark matter is one of the major puzzles in fundamental physics. Axions are among the best-motivated dark matter candidates. The MADMAX experiment will search for axions in the mass range around 100 $\mu$eV, which is favoured by theory. Traditional axion cavity experiments are unable to access this mass range; therefore, a novel detector concept called a dielectric haloscope will be used for this experiment.
The MADMAX experiment is in an R&D phase to validate the experimental approaches to be used for the final detector. Several prototypes are used to validate different aspects such as the mechanics, the piezo motors and the RF behaviour, and to perform physics studies. I'll present the current status of my work on the simulation, data analysis, and tests of various prototypes.
Active Galactic Nuclei (AGN) stand as enigmatic cosmic powerhouses, harboring supermassive black holes at their centers. Blazars are AGN with a jet oriented toward the observer. Their emission spans from radio to very high-energy gamma rays. Understanding their spectral variability provides crucial insights into the underlying physics governing these astrophysical phenomena. The Cherenkov Telescope Array (CTA) is poised to revolutionize high-energy gamma-ray astronomy, offering unprecedented sensitivity and energy resolution. One of its key components, the Medium-sized Telescope, will be equipped with the nectarCAM camera, currently under development by the LPNHE.
In this talk, we will discuss the CTA's prospects for studying blazar variability and the software development efforts leading to nectarCAM's calibration.
The flares of active galactic nuclei (AGNs) can be used to detect or constrain Lorentz invariance violation (LIV) by measuring time lags in the detection of high-energy photons. An important source of uncertainty is our lack of knowledge of source-intrinsic processes. However, combining flares and sources allows us to increase the precision of these measurements as well as to limit the noise from intrinsic source effects. The Cherenkov Telescope Array (CTA) will be the next generation of TeV gamma-ray observatories. The first results obtained in the search for LIV in the data recorded by its first prototype, the Large-Sized Telescope (LST-1), will be presented.
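For reference, at linear order in the photon energy over the quantum-gravity scale $E_{\rm QG}$, the expected time lag between two photons with energy difference $\Delta E$ emitted simultaneously at redshift $z$ takes the standard form (subluminal case):

$$\Delta t \simeq \frac{\Delta E}{E_{\rm QG}}\,\frac{1}{H_0}\int_0^{z} \frac{(1+z')\,\mathrm{d}z'}{\sqrt{\Omega_m (1+z')^3 + \Omega_\Lambda}},$$

which is what makes the combination of flares from sources at different redshifts powerful: the LIV lag scales with this distance factor, while source-intrinsic lags do not.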
Virgo is a gravitational-wave detector located near Pisa, in Italy. It is a Michelson interferometer with Fabry-Perot arm cavities, and it detects gravitational waves passing through it using the dark-fringe interference signal at the output of the interferometer.
Virgo is an instrument that needs to be controlled and calibrated with great accuracy in order to achieve the precision required to detect ever more distant gravitational-wave sources.
In this talk I'll describe the calibration process of the Virgo interferometer as well as how the calibration is linked to the data reconstruction and its uncertainty computation.
The interstellar medium is made up of gas and dust. This medium is traversed by cosmic radiation and irradiated by stellar UV, except in dense clouds where UV is absent. The interaction of these rays with the dust and gas is crucial to the chemical evolution of interstellar and circumstellar environments. Heavy and slow cosmic rays interact with very small dust particles (~100 atoms) and multi-fragment them by Coulomb explosion, enriching the gas phase with complex molecules (Chabot, M. et al. 2019). The upper limit in dust size for which multifragmentation occurs is currently unknown. The NanoCR experiment aims to provide physics inputs to determine the Coulomb-explosion size limit. To do this, the charge-state distributions of model nanoparticles in single collisions with heavy ions are measured. The experimental set-up will be presented, along with the initial results on collisions between 100 nm nanoparticles and argon ions between 1.5 and 15 MeV.
The mysterious nature of Dark Matter (DM) has puzzled physicists the world over. One well-motivated class of DM candidates is Weakly Interacting Massive Particles (WIMPs). The XENON Dark Matter Project, dedicated to the direct search for WIMPs, currently operates the XENONnT experiment, relying on a dual-phase (liquid and gas) xenon time projection chamber (TPC) situated in the underground LNGS laboratory in Italy.
This presentation will be divided into three parts. The first part is an overview of the XENON project and of the latest results from the first science run of XENONnT. The second part concerns the search for light DM, related to my thesis. The last part presents my contribution to the data analysis for the next XENONnT science run.
The quest for PeVatrons, sources of cosmic rays accelerated up to PeV energies, saw an exciting development in 2021 when LHAASO detected 12 ultra-high-energy (UHE) gamma-ray Galactic sources. Among those sources, the supernova remnant G106.3+2.7 (also called the Boomerang SNR) is a promising candidate, with both hadronic and leptonic scenarios proposed for its UHE emission.
Gamma-ray astronomy performed with Imaging Atmospheric Cherenkov Telescopes (IACTs) is the tool of choice when it comes to looking at the most energetic sources of the Universe with the best angular resolution (~ 0.01 deg). We are currently observing the Boomerang SNR with LST-1, the Large Size Telescope prototype of the Cherenkov Telescope Array, together with the two IACTs of the MAGIC experiment. Observations at Large Zenith Angle allow us to explore the 1-50 TeV region of the energy spectrum with an angular resolution sufficient to resolve the source’s morphology.
Such observations raise challenges regarding the reconstruction and the analysis of the data, namely the rapid change of the energy threshold and of the signal properties with the telescope pointing. To improve the uniformity of the response as a function of the zenith angle, we worked on optimizing the Random Forest-based reconstruction pipeline and the data selection.
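As an illustration of the approach (not the actual LST-1 pipeline or its features), a Random Forest regression of the energy can take the pointing zenith angle as an additional input so that a single model remains uniform across zenith-angle ranges:

```python
# Toy Random Forest energy regression with the zenith angle as an input feature.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n = 5000
intensity = rng.uniform(2, 5, n)              # toy log10(image intensity)
length = rng.uniform(0.05, 0.5, n)            # toy image length [deg]
cos_zenith = rng.uniform(0.3, 1.0, n)
log_e_true = intensity - 2 + 0.3 * (1 - cos_zenith) + rng.normal(0, 0.1, n)

X = np.column_stack([intensity, length, cos_zenith])
rf = RandomForestRegressor(n_estimators=100).fit(X, log_e_true)
print("predicted log10(E/TeV):", rf.predict(X[:3]))
```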
Heavy quarkonia are considered among the most promising probes of the quark-gluon plasma (QGP) formed in ultra-relativistic heavy-ion collisions (URHIC). For a reliable use of such a probe, we need a rigorous formalism which can describe the real-time and out-of-equilibrium evolution of in-medium quarkonia while keeping full track of the quantum nature of the system, and which is derived from first principles. In addition, the formalism should treat the static as well as the dynamical effects of the QGP on heavy quarkonia on an equal footing. Among the possible formalisms, the open-quantum-system framework stands out due to its simplicity. Within this framework, the evolution of the system is governed by a quantum master equation which, by implementing semi-classical approximations (SCA), results in the standard and simpler semi-classical transport equations such as the Boltzmann and Fokker-Planck / Langevin equations. The latter had already shown, prior to the use of open-quantum-system techniques, good success in describing in-medium quarkonium transport, in spite of their model dependence in the treatment of some QGP effects. Therefore, since the semi-classical transport equations are approximations of the more fundamental quantum master equations, it is legitimate to wonder about their range of validity, and a quantitative comparison between the full quantum and semi-classical descriptions becomes mandatory to put our understanding of in-medium quarkonium transport on stronger ground. In this talk, we briefly review the derivation of the quantum master equation in the quantum Brownian regime and its associated semi-classical Fokker-Planck / Langevin equation. Then, in order to test the validity of the SCA, we discuss some comparative results obtained from a one-dimensional resolution of the two equations and draw some conclusions on the SCA validity.
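As a point of reference for the comparison, the one-dimensional classical Langevin equation and its associated Fokker-Planck equation read (with $k_B = 1$, drag coefficient $\eta$ and temperature $T$):

$$M\,\frac{\mathrm{d}v}{\mathrm{d}t} = -\,\eta\,v + \xi(t),\qquad \langle\xi(t)\,\xi(t')\rangle = 2\,\eta\,T\,\delta(t-t'),$$

$$\frac{\partial W(p,t)}{\partial t} = \frac{\partial}{\partial p}\left[\frac{\eta}{M}\,p\,W + \eta\,T\,\frac{\partial W}{\partial p}\right],$$

whose stationary solution is the Maxwell-Boltzmann distribution $W \propto e^{-p^2/2MT}$; the semi-classical equations compared in the talk are of this type, while the quantum master equation retains the full density-matrix evolution.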
KM3NeT/ORCA is a large-volume water-Cherenkov neutrino detector under construction at the bottom of the Mediterranean Sea, at a depth of 2450 meters. The main research goal of ORCA is the measurement of the neutrino mass ordering and of the atmospheric neutrino oscillation parameters. The detector is also sensitive to a wide variety of phenomena, including non-standard neutrino interactions, sterile neutrinos, and neutrino decay.
This contribution is divided into two parts. First, the use of a machine-learning framework of Deep Neural Networks (DNNs) that combine multiple energy estimates to generate a more precise reconstructed neutrino energy. Second, the implementation of Graph Neural Networks (GNNs) for multipurpose event reconstruction. Both approaches show how neural networks can outperform the current reconstructions and have a positive impact on the sensitivity to the oscillation parameters.
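A hedged sketch of the first ingredient, a small network that regresses a single energy from several estimators; the architecture, inputs and training details are placeholders, not the KM3NeT/ORCA configuration.

```python
# Toy DNN combining several energy estimators into one regressed energy.
import torch
import torch.nn as nn

# inputs: e.g. track-length-based, shower-based and hit-count-based estimates
model = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

estimates = torch.rand(512, 3) * 50.0           # toy energy estimates [GeV]
true_energy = estimates.mean(dim=1, keepdim=True) + torch.randn(512, 1)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
opt.zero_grad()
loss = nn.functional.mse_loss(model(estimates), true_energy)
loss.backward()
opt.step()
```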
The KM3NeT experiment is a next-generation neutrino telescope, consisting of two separate detection structures, organised as arrays of light sensors and immersed in the depths of the Mediterranean Sea. The two detectors are Oscillation Research with Cosmics in the Abyss (ORCA), located off the coast of France, and Astrophysics Research with Cosmics in the Abyss (ARCA), off the coast of Sicily. Identical in design but differing in scale, these two detectors observe neutrino interactions in sea water, at different energy ranges, through the Cherenkov light produced by the interaction products. Specifically, ORCA aims at detecting atmospheric neutrinos to study their oscillation parameters, while ARCA will focus at higher energies on astrophysical neutrinos and the characterisation of their sources. In this context, Fast Radio Bursts (FRBs) are good candidates for multi-messenger emission due to the huge energy involved in their bursts. I will present the method and criteria of a multi-messenger analysis intended to search for spatial and temporal coincidences between astrophysical neutrino signals in KM3NeT and an FRB catalogue of around 800 sources, of which 14 were observed in the period considered, ranging from January 2020 to November 2021, and were visible from the KM3NeT site.
The Deep Underground Neutrino Experiment (DUNE) is a next-generation long-baseline neutrino experiment in the United States.
Its main goal is to make precise measurements of the neutrino oscillation parameters and to determine the neutrino mass ordering and the CP-violation phase in the leptonic sector.
The experiment is made of two elements: the near detector, close to the neutrino beam source, and the far detector, 1300 km away. The latter will consist of four giant liquid-argon TPC modules, one of which will use the new vertical-drift technology. In this design, the anode is a stack of two perforated printed circuit boards that detect the ionization electrons produced by the interaction of charged particles with the liquid argon. The signal induced on the anode allows the electric charge to be measured. Understanding the signal formation on the anode is therefore very important to improve the energy reconstruction of DUNE.
In this presentation, I will show the numerical simulations I have developed to study the signal formation on the anode. The dependence of the signal strength and shape on the track angle will also be shown. The simulations will be compared to the data collected in 2023 with the Vertical Drift demonstrator at CERN.
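For context, such induced-signal calculations typically rely on the Shockley-Ramo theorem, which gives the current induced on an electrode by a drifting charge:

$$i_k(t) = q\,\vec{v}(t)\cdot\vec{E}^{\,w}_k\bigl(\vec{r}(t)\bigr),$$

where $q$ and $\vec{v}$ are the charge and drift velocity of the ionization electrons and $\vec{E}^{\,w}_k$ is the weighting field of electrode $k$, computed by setting that electrode to unit potential and all others to ground; whether the presented simulations use exactly this formulation is an assumption here.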
The Deep Underground Neutrino Experiment (DUNE) is a long-baseline neutrino experiment under construction that aims to address the main open questions in neutrino physics, such as the mass ordering and the precise measurement of the neutrino oscillation parameters, including the charge-parity violation phase, in order to determine whether CP violation occurs in the leptonic sector. It will consist of a Near Detector (ND) located at Fermilab (near Chicago) and a Far Detector (FD) in South Dakota, 1300 km away. The FD will consist of four detector modules, the second of which will be instrumented with a 17 kton Vertical Drift Liquid Argon Time Projection Chamber (VD LAr TPC), which allows for excellent imaging capabilities for neutrino event reconstruction. The passage of charged particles through LAr produces both ionization electrons and scintillation photons. While the former are detected by the TPC, a dedicated Photon Detection System (PDS) collects the light information.
Since the DUNE Far Detector is still under construction, simulation studies are a key instrument to assess the future detector performance. In particular, I will discuss studies related to the PDS performance, regarding light-event simulation and reconstruction using clustering algorithms, showing results on the efficiency of this method, mainly for the spatial reconstruction of neutrino events, and proposing the next steps of this method.
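A toy illustration of the clustering step (a DBSCAN-like grouping of photon hits in time and position is assumed here; the actual PDS reconstruction may differ):

```python
# Toy clustering of photon-detector hits into light events with DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(5)
# toy hits: (time [us], detector z position [cm]); two flashes plus noise
flash1 = np.column_stack([rng.normal(10, 0.1, 50), rng.normal(100, 20, 50)])
flash2 = np.column_stack([rng.normal(42, 0.1, 30), rng.normal(-250, 20, 30)])
noise = np.column_stack([rng.uniform(0, 60, 20), rng.uniform(-300, 300, 20)])
hits = np.vstack([flash1, flash2, noise])

# scale time and position so that one DBSCAN radius is meaningful for both
scaled = np.column_stack([hits[:, 0] / 0.5, hits[:, 1] / 50.0])
labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(scaled)
print("clusters found:", sorted(set(labels) - {-1}))
```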
The XENON program aims to directly detect dark matter. As dark matter interacts very weakly, the sensitivity of dark matter experiments has greatly increased over the last two decades. XENONnT, the current XENON detector, should be able to study solar neutrinos. This detector is designed to be sensitive to low-energy nuclear recoils, which is the expected signal from some dark matter candidates. By chance, it also happens to be the signature expected from a little-known interaction of neutrinos: coherent elastic scattering off atomic nuclei. It could lead to the first detection of this interaction with neutrinos coming from an astrophysical source, the Sun.
This talk will present the search for these events in XENONnT.