Organised by the "Fields and Particles" and "Nuclear Physics" divisions of the Société Française de Physique (SFP), the Journées de Rencontre des Jeunes Chercheurs (JRJC) 2022 welcome all PhD students (from the first to the last year) and young postdocs.
This year it will be held from October 23 to October 29, 2022, at the holiday resort La Rivière in Saint-Jean-de-Monts (85) – France.
The JRJC are an occasion for each participant to present their work in a convivial atmosphere and to gain from their colleagues an overview of the research currently carried out in France in their field and in neighbouring domains.
This year the following subjects are proposed:
- Nuclear Energy
- Nuclear Structure and Dynamics
- Nuclear Astrophysics
- Medical Physics
- Hadronic Physics
- Heavy Ion Collisions
- Cosmology
- Instrumentation
- Standard Model (electroweak)
- Beyond the Standard Model
- Theoretical Physics
- Neutrinos
- Astroparticles
- Heavy Flavour
Presentations can be given either in English or in French. The conference social program includes a half-day excursion in the nearby area, as well as one or two evening lectures that may be open to the public. The deadline for registration is September 9th, 2022. For any other information, please feel free to contact the secretariat or any member of the organising committee (see below).
Pauline Ascher (CENBG) | ascher@cenbg.in2p3.fr |
Francois Brun (CEA Saclay) | francois.brun@cea.fr |
Emmanuel Chauveau (CENBG) | chauveau@cenbg.in2p3.fr |
Rachel Delorme (LPSC) | rachel.delorme@lpsc.in2p3.fr |
Romain Gaior (LPNHE) | romain.gaior@lpnhe.in2p3.fr |
Maxime Guilbaud (SUBATECH) | guilbaud@subatech.in2p3.fr |
Julien Masbou (SUBATECH) | masbou@subatech.in2p3.fr |
Laure Massacrier (IJCLab) | massacrier@ijclab.in2p3.fr |
Sabrina Sacerdoti (APC) | sacerdoti@apc.in2p3.fr |
Thomas Strebler (CPPM) | strebler@cppm.in2p3.fr |
Antonio Uras (IP2I) | antonio.uras@cern.ch |
Dimitris Varouchas (IJCLab) | dimitris.varouchas@cern.ch |
Laura Zambelli (LAPP) | laura.zambelli@lapp.in2p3.fr |
What is measured at the LHC can be well explained by the Standard Model (SM) of particle physics. Nevertheless, several phenomena cannot be explained by the SM, for example the matter-antimatter asymmetry observed in our universe, and no dark matter candidate has been confirmed. In order to resolve these puzzles, it is important to identify good candidates for new physics (NP) searches. One of these is the $b \to s \ell^+ \ell^-$ transition. Such transitions are forbidden at tree level in the SM and can only proceed through loop diagrams, which makes the corresponding decays rare. In addition, New Physics particles could contribute to the loops.
The LHCb experiment has reported discrepancies with respect to the SM in rare $b$-hadron decays. Most of those measurements are performed in rare $b$-meson decays. It is therefore interesting to study whether rare $b$-baryon decays show the same behaviour. A Lepton Flavour Universality test has been performed in $\Lambda_b \to pK^-\ell^+\ell^-$ decays (arXiv:1912.08139) and was found to be compatible both with the SM and with the other rare $b$-meson decays.
The following work aims to learn more about the SM and possible NP by performing an angular analysis of this rare $\Lambda_b$ decay. The focus is placed on the $\Lambda(1520)$ resonance, owing to its abundance and to the existence of SM and NP predictions for the angular observables (arXiv:1903.00448, arXiv:2005.09602).
The Standard Model (SM) of particle physics states that the three charged leptons $e$, $\mu$ and $\tau$ have the same electroweak couplings (through the $W^{\pm}$ and $Z^0$ bosons), the only difference between them being their coupling to the Higgs field, i.e. their mass. Several independent experiments (BaBar, Belle and LHCb) have measured deviations from the SM prediction.
One set of anomalies arises in $b\to c \ell \nu$ transitions (the so-called charged-current anomalies). The goal of my thesis is to study these anomalies by comparing the decays $B^0 \to D^{*} \tau \nu_\tau$ and $B^0 \to D^{*} \mu \nu_\mu$, where the $\tau$ is reconstructed from $\pi^+\pi^-\pi^+(\pi^0)$, using data collected by the LHCb detector from 2011 to 2018. With this larger data sample, we will be able to significantly reduce the uncertainty of the measurement.
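This comparison is conventionally expressed as a lepton-flavour-universality ratio, in which many experimental and theoretical uncertainties cancel:
\[ R(D^{*}) = \frac{\mathcal{B}(B^0 \to D^{*} \tau \nu_\tau)}{\mathcal{B}(B^0 \to D^{*} \mu \nu_\mu)} \]
A significant deviation of $R(D^{*})$ from its SM prediction would signal a violation of lepton flavour universality.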
One of the privileged ways to search for signs of New Physics (NP) beyond the Standard Model (SM) is the study of $b\rightarrow s\ell^{+}\ell^{-}$ ($\ell$ = electron or muon) transitions, which involve Flavour-Changing Neutral Currents (FCNC) proceeding via box or loop diagrams. The LHCb experiment has recently published a set of measurements in tension with the SM predictions. The aim of this work is to perform an angular analysis of $B_s^0 \rightarrow \phi e^+e^-$ in the low dielectron mass region, in order to provide a measurement of the photon polarization.
Measurements of Higgs boson production cross-sections are carried out in the diphoton decay channel using 139 fb$^{-1}$ of $pp$ collision data at $\sqrt{s} = 13$ TeV collected by the ATLAS experiment at the LHC. The analysis is based on the definition of 101 distinct signal regions using machine-learning techniques. The inclusive Higgs boson signal strength in the diphoton channel is measured to be $1.04^{+0.10}_{-0.09}$. Cross-sections for gluon-gluon fusion, vector-boson fusion, associated production with a W or Z boson, and top associated production processes are reported. An upper limit of 10 times the Standard Model prediction is set for the associated production process of a Higgs boson with a single top quark, which has a unique sensitivity to the sign of the top quark Yukawa coupling. Higgs boson production is further characterized through measurements of Simplified Template Cross-Sections (STXS). In total, cross-sections of 28 STXS regions are measured. The measured STXS cross-sections are compatible with their Standard Model predictions, with a p-value of 93%. The measurements are also used to set constraints on Higgs boson coupling strengths, as well as on new interactions beyond the Standard Model in an effective field theory approach. No significant deviations from the Standard Model predictions are observed in these measurements, which provide significant sensitivity improvements compared to the previous ATLAS results.
The angle $\gamma$ of the Cabibbo-Kobayashi-Maskawa (CKM) matrix was until recently the least well-known parameter of the Standard Model of elementary particle physics, and it is still far from precisely known. The golden mode for measuring the angle $\gamma$ is the decay $B^+\rightarrow DK^+$, where $D$ can be either a $D^0$ or a $\overline{D^0}$ and decays to a final state accessible from both, so that the two paths interfere, which allows the value of $\gamma$ to be extracted. The principle of the measurement can be extended to different decay modes to obtain independent constraints on the angle $\gamma$. One such example is the $B^0\rightarrow DK^+ \pi^-$ decay. The information in the Dalitz plot of the three-body $B^0$ decay gives a larger sensitivity to $\gamma$ compared to a two-body decay mode. In this analysis, nine different $D$ final states are used with $B^0\rightarrow DK^+ \pi^-$ in order to maximise the sensitivity. Among them, the mode where the $D$ decays to $K^0_s \pi^+\pi^-$ gives the largest sensitivity and is thus the main pillar of this analysis. We perform a model-independent analysis exploiting binned Dalitz plots for both the $B^0$ and $D$ decays, and aim at $\sigma(\gamma)\sim5^\circ$ with the dataset taken during the Run 1 and Run 2 operation of the LHCb experiment.
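Schematically, in the standard notation of such analyses (shown for illustration; the analysis itself may use different conventions), the interference at the heart of the method can be written, for a final state $f$ common to $D^0$ and $\overline{D^0}$, as
\[ \mathcal{A}(B^+ \to [f]_D K^+) \propto \mathcal{A}_{\overline{D}^0}(f) + r_B\, e^{i(\delta_B + \gamma)}\, \mathcal{A}_{D^0}(f), \]
where $r_B$ and $\delta_B$ are the magnitude ratio and strong-phase difference of the favoured and suppressed $B$ decay amplitudes; the decay rate depends on $\gamma$ through this interference term.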
For over a decade, deviations from the Standard Model (the "B anomalies") have been observed in $b$-hadron decays, for example the departure from lepton flavour universality in $b \to s\ell\ell$ and $b \to c\tau\nu$ transitions. Many new-physics models proposed to explain these results feature enhanced couplings to the third-generation $\tau$ lepton, and therefore predict an enhanced branching fraction for the $B \to K\tau\tau$ decay.
The talk describes how the Belle and Belle II experiments search for signatures of the $B \to K\tau\tau$ decay using a hadronic B-tagging technique. The current B-tagging algorithm relies on machine learning and hence depends on the Monte Carlo modelling of hadronic B decays. Improvements of the B-tagging performance obtained by correcting the Monte Carlo description are also presented.
The JEM-EUSO collaboration develops a series of balloon-borne and orbital telescopes to detect transient UV emission from the Earth's atmosphere, with the primary goal of studying ultra-high-energy cosmic rays from space. These detectors are wide field-of-view telescopes with high temporal resolution (1-2.5 μs) and a sensitivity provided by a large aperture. One of these detectors is currently operating onboard the ISS (MINI-EUSO), one is planned to be launched in 2023 (EUSO-SPB2), and one is in the preparation stage (K-EUSO). These projects use the same photo-detection modules (PDMs), each composed of 36 multi-anode photomultiplier tubes with 2304 channels in total, used in single-photon-counting mode.
I will present the absolute calibration of the photodetection units of the EUSO-SPB2 mission, which also revealed some previously unknown sub-pixel structures associated with spatial variations in the photoelectron collection efficiency. The calibration of the three PDMs of EUSO-SPB2 was performed in a so-called "black box", using light sources with controlled intensity. The method uses an integrating sphere illuminated by LEDs with a wavelength of either 375 or 405 nm; a known fraction of the light flux is directed towards the photodetectors and another fraction towards an absolutely calibrated photodiode, read out by a power meter to monitor the light intensity at all times. The detection efficiency of all pixels was then obtained after determining, pixel by pixel, the optimal discriminator level in the front-end electronics, providing the highest efficiency while ensuring a negligible contribution of fake photoelectron counts from electronic noise.
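As a rough illustration of the photon-counting step (a schematic sketch only; the variable names and numbers below are invented, not taken from the EUSO-SPB2 analysis), the mean number of photoelectrons per time frame can be recovered from the fraction of frames with at least one count, assuming Poisson statistics, and divided by the photon flux expected from the photodiode to obtain an efficiency:

import numpy as np

# Hypothetical inputs: trigger flags per 2.5-us frame for one pixel, and the
# photon flux expected from the calibrated photodiode (both made up here).
rng = np.random.default_rng(0)
frames = rng.poisson(lam=0.05, size=100_000) > 0   # simulated discriminator flags
expected_photons_per_frame = 0.25                  # from the power-meter reading

# Poisson pile-up correction: a frame only tells us "at least one count",
# so the true mean rate is mu = -ln(1 - f), with f the occupied-frame fraction.
f = frames.mean()
mu = -np.log(1.0 - f)

efficiency = mu / expected_photons_per_frame
print(f"occupied fraction {f:.4f}, mu {mu:.4f}, efficiency {efficiency:.3f}")

The pile-up correction matters because the discriminator reports whether a frame contained at least one photoelectron, not how many.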
Recent developments in heavy-ion production have increased access to alpha-emitting radioisotopes and opened the door to their use in internal radiotherapy [1]. Targeted alpha therapy is of interest for dedicated applications such as the treatment of disseminated brain metastases [2][3], since the range of alpha radiation in biological matter covers only a few tens of micrometres. However, when alpha-emitting radionuclides undergo in vitro assessment, additional care must be taken compared to beta emitters because of the higher linear energy transfer of alpha particles. Indeed, the dose delivered to the cells becomes significantly dependent on the spatial distribution of the radionuclides in the culture medium [4]. Knowledge of this distribution would thus allow the assessment of dose-effect relationships and make comparisons to other irradiation methods more reliable.
We present here an in vitro dosimetry system using silicon semiconductor diodes placed below custom-made culture wells, which record the energy spectra of the alpha particles passing through the culture medium and the cell layer. A detector chamber protecting the electronics was designed to carry out the measurements inside a cell-culture incubator. A spectral deconvolution method was developed to extract the radionuclide spatial distribution from the energy spectra acquired during in vitro experiments and to compute the dose delivered to the cells. Since our custom-made wells are compatible with microscopy imaging, dose-effect relationships can be directly evaluated for all culture wells.
The reliability of the methodology has been assessed: dose computation errors were limited to 3% when the method was applied to simulated $^{212}$Pb irradiations. Applications of our methodology in preliminary experiments with $^{212}$Pb and $^{223}$Ra showed that the common hypothesis of a homogeneous distribution could lead to a dose underestimation of up to 50%. They also revealed that the different radionuclides of complex decay chains present different spatial distributions, which has further consequences on the dose computation and highlights the need for new experimental dosimetry methods.
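As a schematic illustration of the spectral deconvolution step (a minimal sketch under invented assumptions, not the actual analysis code: the response matrix and spectrum below are synthetic), the measured energy spectrum can be modelled as a linear combination of simulated spectra for sources at discrete depths in the medium, and the depth distribution recovered with a non-negative least-squares fit:

import numpy as np
from scipy.optimize import nnls

# Synthetic response matrix: column j is the simulated energy spectrum (50 bins)
# of alphas emitted at depth j in the culture medium (Gaussian peaks that shift
# and broaden with depth, as a stand-in for Monte Carlo responses).
n_bins, n_depths = 50, 10
energies = np.linspace(0.0, 1.0, n_bins)
peaks = np.linspace(0.8, 0.3, n_depths)      # deeper source -> lower residual energy
widths = np.linspace(0.02, 0.06, n_depths)
R = np.exp(-0.5 * ((energies[:, None] - peaks[None, :]) / widths[None, :]) ** 2)
R /= R.sum(axis=0)

# Synthetic "measured" spectrum from a known depth distribution plus noise.
true_w = np.array([0.0, 0.1, 0.3, 0.3, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
spectrum = R @ true_w + np.random.default_rng(1).normal(0, 1e-3, n_bins)

# Non-negative least squares recovers the depth distribution of the activity.
w, residual = nnls(R, spectrum)
print(np.round(w / w.sum(), 2))

In the real analysis, the response matrix would come from simulations of the actual well and detector geometry.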
REFERENCES
[1] F. Guerard et al., Q. J. Nucl. Med. Mol. Imaging 59, 161-7 (2015).
[2] A. Corroyer-Dulmont et al., Neuro-Oncology 22(3), 357-68 (2020).
[3] N. Falzone et al., Theranostics 8(1), 292-303 (2018).
[4] A.M. Frelin-Labalme et al., Med. Phys. 47(3), 1317-26 (2020).
*This project has received financial support from the CNRS through the MITI interdisciplinary programs.
Currently, the most conventional radiotherapies used for cancer treatment induce deleterious effects in the human body, especially when the target tumour is close to organs at risk.
The use of protons or carbon ions (hadron therapy) rather than X-rays (more conventional therapies) makes it possible to deliver a dose that conforms better to the tumour while limiting the dose to the surrounding healthy tissues. Indeed, the dose profile as a function of penetration depth in the body allows the tumour to be irradiated precisely while sparing healthy tissue. However, the fragmentation of the primary beam and of the target volume leads to the production of lighter fragments, which can contribute an unwanted dose to healthy tissues.
Thus, during a cancer treatment by hadron therapy, the nuclear reactions of the primary beam with the target volume need to be quantified in order to compute precisely the dose received by the patient. Dose calculations in ion-beam therapy are performed by high-performance algorithms that include the physical and biological phenomena, relying on data provided by Monte Carlo simulations. However, for this type of therapy, experimental data are lacking, which makes the dose calculations of these algorithms imprecise.
Regarding space radiation protection, cosmic rays interact with the shielding of spacecraft, which also leads to the production of secondary particles and hence to a dose received by the astronauts. As for hadron therapy, the Monte Carlo simulations lack experimental data to calculate these doses.
As my thesis is part of a collaboration between IPHC (Strasbourg) and GSI (Darmstadt), I carry out two different lines of research in parallel on these subjects.
Regarding my work at IPHC, I work on the quantitative and qualitative characterization of the secondary particles created during the interaction of a primary carbon-ion beam with a PMMA target (composed of the main atoms of the human body, and a common and efficient space-shielding material). For this, we use two measurement methods: ΔE-E and time of flight (TOF). This project is part of a joint experiment together with a characterization of the produced neutrons and of the chemical consequences of such radiation in the human body (Fig. 1: experimental setup with (a) the study of secondary-particle production and (b) the physico-chemical study). During the first months of my thesis, I thus took part in setting up these two configurations and performed the first measurements during beam times. The first results consist of the characterization of our detectors through their calibration curves for these beam types and energies.
Regarding my work at GSI, I have joined a project focusing more specifically on particle therapy for the treatment of lung cancer. Indeed, because of the patient's breathing motion and of the large density gradient between the lung (low density) and a tumour (high density), over- or under-dosages in the tumour or the healthy tissues are very likely (Fig. 2: dose distribution of an ion-beam treatment plan for a lung tumour, planned at the end of inhalation). This project therefore proposes a method that uses this large density gradient to determine, during the treatment of a patient, whether the beam is delivered as planned. Indeed, more secondary particles are created when the beam interacts with the tumour (high density) than with the lung (low density).
For this, the proposed method uses pixelated CMOS detectors. The method was tested this year on a foam target (lung) containing a PMMA cylinder (tumour), with promising results (Fig. 3: distribution of the production points of secondary protons along the beam axis). Following on from this, my work will be to propose a clinical application of this project, by designing and simulating (with GATE) a real device using this method that could be used clinically on patients.
Nuclear medical imaging is a technique that consists in following radiolabelled drugs in order to create images for diagnosis, therapeutic follow-up or research. Reducing the radiopharmaceutical activity administered to the patient and shortening the exposure time are two crucial indicators guiding the improvements of nuclear-medicine imaging. Our team has proposed an innovative method, the XEnon Medical Imaging System (XEMIS), based on a liquid-xenon Compton camera as the detection medium for 3γ imaging and on the use of the radioisotope 44Sc, with the aim of performing low-activity medical imaging. The XEMIS2 demonstrator is designed to obtain an image of a small animal in 20 minutes with the injection of a source whose activity is of the order of 20 kBq. The goal of my thesis is to contribute to the commissioning of the XEMIS2 camera and to obtain the first images. More precisely, up to now I have been analysing the scintillation and ionization signals, as well as the associated simulations. The commissioning of the XEMIS2 camera is planned for the beginning of 2023.
Searches for electric dipole moments (EDMs) of spin-1/2 particles such as the neutron are sensitive probes for CP violation beyond the Standard Model, a key to solving the baryon asymmetry problem. In this presentation I give a brief overview of n2EDM, an experiment aiming to measure the neutron EDM with a sensitivity of $1 \times 10^{-27}\,e\cdot\mathrm{cm}$. I then focus on two areas of my PhD work related to magnetic field uniformity in n2EDM: analysis of magnetic field maps of the apparatus, and calculation of a systematic effect generated by field non-uniformities referred to as the "false EDM".
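Schematically (standard nEDM formalism, not necessarily the exact n2EDM notation), the EDM is extracted from the shift of the spin-precession frequency when the electric field is flipped relative to the magnetic field:
\[ h\nu_{\uparrow\uparrow,\uparrow\downarrow} = \left| 2\mu_n B \pm 2 d_n E \right|, \qquad d_n = \frac{h\,(\nu_{\uparrow\downarrow} - \nu_{\uparrow\uparrow})}{4E}, \]
up to sign conventions; this is why any magnetic-field change correlated with the electric-field polarity, such as the false EDM, directly mimics the signal.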
In-beam gamma-ray spectroscopy with high-velocity recoil nuclei requires a very accurate Doppler correction. The Advanced GAmma Tracking Array (AGATA) is a new-generation gamma-ray spectrometer capable of tracking gamma rays with a position resolution down to about 5 mm, which allows for an accurate Doppler correction. AGATA is made of high-purity germanium crystals (about 50 available so far) assembled to form a sphere, with the goal of covering the full 4π solid angle (180 crystals required). Each crystal is electrically segmented into 36 segments.
To determine the gamma-ray interaction positions, the signals coming from the 36 segments are processed by the Pulse Shape Analysis (PSA) algorithm. This algorithm estimates the interaction positions by comparing the measured signals to a database of simulated signals. A tracking algorithm based on the Compton scattering formula is then applied to reconstruct the full path of the gamma ray in the detector, giving access to its first interaction point.
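In its simplest form (a toy sketch with made-up signal shapes, not AGATA's production algorithm), the database comparison amounts to a chi-square grid search over candidate positions:

import numpy as np

rng = np.random.default_rng(2)

# Toy database: for each of 1000 candidate grid positions, the simulated
# signal traces of the 36 segments, 60 time samples each.
n_pos, n_seg, n_t = 1000, 36, 60
db_signals = rng.normal(size=(n_pos, n_seg, n_t)).cumsum(axis=2)  # smooth-ish traces
db_positions = rng.uniform(-40, 40, size=(n_pos, 3))              # (x, y, z) in mm

# A measured event: the true traces of one grid point plus electronic noise.
true_idx = 123
measured = db_signals[true_idx] + rng.normal(0, 0.5, size=(n_seg, n_t))

# Chi-square of the measured traces against every database entry; the
# best-matching grid point is the estimated interaction position.
chi2 = ((db_signals - measured) ** 2).sum(axis=(1, 2))
best = int(np.argmin(chi2))
print(best == true_idx, db_positions[best])

The production PSA uses finely gridded simulated bases and optimized metrics; the toy above only illustrates the database-matching principle.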
The PSA precision is a key point of the AGATA analysis. One way to improve its capabilities is to use experimental data, instead of simulations, to build the PSA databases. A scanning table has been built in Strasbourg to measure these experimental databases, but the current algorithm used to treat these data is very time-consuming. This work will present a new approach that improves the existing analysis using machine-learning techniques. Finally, to characterize the improvement of the PSA, a new method based on gamma-ray imaging will be presented.
A non-monotonic behaviour of the net-proton kurtosis as a function of the collision energy for very central collisions has been suggested, and may be confirmed by recent results of the BES-II program, supporting the existence of the QCD critical point. The fluctuations at the origin of this peculiar behaviour are produced in the highly dynamic environment of ultra-relativistic collisions. In particular, the violent longitudinal expansion and the associated temperature cooling may have a non-trivial impact on how we interpret the experimental data. Whether the fluctuations are in or out of equilibrium during this expansion is a crucial question when discriminating between critical contributions and purely dynamical features.
Here, we inspect the diffusive dynamics of the net-density fluctuations of conserved charges in a Bjorken-type 1+1D expanding system. Between the initial time of the collision and the chemical freeze-out, the equilibrium thermodynamics is described by a potential derived from a Ginzburg-Landau free-energy functional parametrized by its second- and fourth-order susceptibilities. In the scaling region, the susceptibilities are mapped from the 3D Ising model and, at $\mu_B = 0$ MeV, from lattice QCD calculations. Between the chemical freeze-out and the kinetic freeze-out, the thermodynamics is determined by the hadron resonance gas of the 19 lightest species, at first order in the chemical potentials. The non-trivial interplay between the diffusive properties of the constituents and the longitudinal expansion of the medium allows us to study the critical fluctuations in the dynamically expanding medium, as well as their survival in the hadronic phase until kinetic freeze-out.
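Schematically (a generic parametrization consistent with the description above, not necessarily the authors' exact conventions), such a free-energy functional for the net-density fluctuation $\delta n$ reads
\[ \mathcal{F}[\delta n] = \int \mathrm{d}V \left[ \frac{(\delta n)^2}{2\chi_2} + \frac{\lambda_4}{4}\,(\delta n)^4 \right], \]
with the coefficients tuned so that the equilibrium cumulants reproduce the second- and fourth-order susceptibilities $\chi_2$ and $\chi_4$.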
We demonstrate the enhancement of the critical fluctuations for trajectories passing near the critical point. The signal is shown to depend strongly on the diffusive properties of the medium and on the chemical freeze-out temperature. After chemical freeze-out, we observe that the diffusion in the hadronic medium has a large impact on the amplitude of the critical fluctuations. We conclude that the signal survives longer in sectors related to the electric charge.
The High Acceptance Di-Electron Spectrometer (HADES) at GSI, Darmstadt, Germany, is an experimental setup dedicated to the study of hadronic matter in the region of large net baryon densities and moderate temperatures, using fixed-target heavy-ion collisions at incident energies of a few GeV/nucleon. Dilepton emission is a favoured probe for such studies, as it gives undistorted information on hadronic matter at all stages of the collision. The detection of e+e- pairs in proton-proton reactions provides a useful reference for the analysis of heavy-ion collisions and allows the study of specific dilepton production channels, such as vector meson decays ($\rho$/$\omega$/$\phi \rightarrow e^{+}e^{-}$) or baryon resonance Dalitz decays ($\Delta/N^{*}\rightarrow e^{+}e^{-}$).
Recently, the HADES collaboration measured proton-proton reactions at 4.5 GeV. I will show the status of the analysis of the e+e- channels, which combines information from various detectors (tracking system, RICH and electromagnetic calorimeter) and consists of several steps: tracking, lepton identification, gamma-conversion rejection, pairing, background subtraction, and efficiency and acceptance corrections. I will also present results from simulations, including all experimental effects, which are prepared to help interpret the data.
Symmetries play a crucial role in our current understanding of the fundamental properties of the Universe. The combination of the three discrete operations charge conjugation (C), parity (P) and time reversal (T) results in a CPT transformation. The CPT theorem states that CPT is an exact symmetry of any quantum field theory (QFT) whose Lagrangian is Hermitian, local, and Lorentz invariant. An observation of CPT symmetry violation would therefore lead to a serious crisis for QFTs. Experimentally, CPT symmetry can be tested by measuring the masses and lifetimes of particles and antiparticles: if CPT symmetry holds, the masses and lifetimes of a particle and its antiparticle should be equal.
In the multi-strange baryon sector, the only CPT test via the measurement of the mass difference of the $\Xi^{-}$ and $\overline{\Xi}^{+}$ dates from 2006 (Abdallah et al.), and of the $\Omega^{-}$ and $\overline{\Omega}^{+}$ from 1998 (Chan et al.). The data samples consisted of 2500 (2300) candidates for the $\Xi^{-}$ ($\overline{\Xi}^{+}$) analysis and of 6323 (2607) for the $\Omega^{-}$ ($\overline{\Omega}^{+}$) analysis. The latest update of the absolute masses of the $\Omega^{-}$ and $\overline{\Omega}^{+}$ is from Hartouni et al. (1985), based on a sample of 100 (72) $\Omega^{-}$ ($\overline{\Omega}^{+}$) baryons. Clearly, an update of the multi-strange baryon masses and their differences is called for.
In this contribution, a measurement of the mass differences of the $\Xi^{-}$ and $\overline{\Xi}^{+}$, and of the $\Omega^{-}$ and $\overline{\Omega}^{+}$ baryons in proton-proton collisions at a centre-of-mass energy of $\sqrt{s} = 13$ TeV with the ALICE experiment will be presented. The data samples are much larger than those used previously for such measurements: $\sim$1 000 000 $\Xi$ and $\sim$80 000 $\Omega$ candidates, with a low level of background.
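For reference, the mass of each (anti)baryon candidate is reconstructed from the energies and momenta of its decay products through the invariant mass,
\[ M = \sqrt{\Big(\sum_i E_i\Big)^2 - \Big\|\sum_i \vec{p}_i\Big\|^2}, \]
so that the particle-antiparticle mass difference follows directly from the reconstructed decay kinematics.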
Charged-particle pseudorapidity measurements help in understanding particle-production mechanisms in high-energy hadronic collisions, from proton-proton to heavy-ion systems. Performing such measurements at forward rapidity, in particular, gives access to the details of the phenomena associated with particle production in the fragmentation region. In ALICE, this measurement will be performed in LHC Run 3 exploiting the Muon Forward Tracker (MFT), a newly installed detector extending the inner-tracking pseudorapidity coverage of ALICE to the range $-3.6<\eta<-2.5$.
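Here the pseudorapidity is the standard collider variable,
\[ \eta = -\ln \tan\left(\frac{\theta}{2}\right), \]
with $\theta$ the polar angle of the particle with respect to the beam axis.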
An unexpected, significant excess of low-$p_{T}$ $J/\psi$ over the expected hadronic $J/\psi$ production was confirmed in peripheral Pb$-$Pb collisions and observed for the first time in semi-central Pb$-$Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with ALICE at the LHC. The measurements were performed in the dimuon decay channel at forward rapidity ($2.5 < y < 4$) and in the dielectron decay channel at mid-rapidity. Surprisingly, a large increase of the $J/\psi$ nuclear modification factor was observed in peripheral Pb$-$Pb collisions at low $p_{T}$, below 0.3 GeV/$c$. Most of the excess is believed to originate from a coherent photoproduction mechanism, well known in ultraperipheral Pb$-$Pb collisions but unexpected, until a few years ago, in peripheral collisions with nuclear overlap. Coherent photoproduction implies the coherent interaction of a quasi-real photon with the full Pb target nucleus, which remains intact. Many theoretical models developed for ultraperipheral collisions have tried to extend the description of the coherent $J/\psi$ photoproduction mechanism to Pb$-$Pb collisions with nuclear overlap. In this presentation, I will show the results obtained with ALICE and compare them to theoretical calculations. I will also introduce the most recent analysis: the measurement of the rapidity-differential coherent $J/\psi$ photoproduction cross section at forward rapidity in ALICE.
After the discovery of the Higgs boson by the ATLAS and CMS collaborations, the Standard Model is complete. However, there remain questions that the Standard Model cannot explain, such as dark matter. To understand these phenomena, we need to understand the properties of this new particle and to extend the Standard Model. Vector-like quarks are among the candidates that would extend our physics horizon beyond the Standard Model, addressing the renormalization of the Higgs mass. A dedicated analysis searching for single production of the vector-like quark T' in the fully hadronic final state was performed on the 2016 dataset collected by CMS. In this study, we investigate singly produced vector-like T' decaying into a top quark and a Higgs boson in the fully hadronic final state, increasing the sensitivity using neural-network techniques on the Run 3 dataset.
One of the most challenging problems of the Standard Model (SM) is the Higgs boson mass, which diverges when loop contributions are taken into account. The decay of new particles such as Vector-Like Quarks (VLQs) could offer an interesting explanation, as the final state into SM particles is well understood. We will present here the search for the decay of a VLQ T' into a top quark and a Higgs boson in a dileptonic final state. This analysis, which has never been done before, is performed with the CMS detector at the LHC.
My research project is carried out within the international CMS experiment, which analyses the high-energy proton-proton collisions produced by the LHC. The project is centred on the search for heavy, charged, long-lived particles (HSCP: Heavy Stable Charged Particles) predicted by some models beyond the Standard Model. A first part consists in analysing the data taken during the 2016-2018 period, with a view to publishing results that can be reinterpreted within a variety of phenomenological models. The results I have obtained so far show that nearly half of the signals searched for are not recorded by the final trigger system (HLT: High Level Trigger) of the experiment; part of my work therefore consists in developing a new trigger strategy for the data-taking period that will occur during my thesis, with the objective of improving the HSCP detection efficiency.
The Standard Model (SM) is unable to explain the predominance of matter over antimatter in our present universe. Matter and antimatter are linked by a CP-symmetry transformation, and current explanations involve a new source of CP-symmetry breaking. An effective field theory (EFT) will be used to describe CP-symmetry violation, which will be searched for by analysing the production and decay of single top quarks in the t-channel. A phenomenology study is conducted to assess the impact of the EFT on the production and decay of the single top quark. The analysis is based on the full LHC Run 2 dataset of proton-proton collisions at a centre-of-mass energy of 13 TeV, collected by the Compact Muon Solenoid (CMS) experiment.
The CMS tracker endcap will be upgraded to sustain the high-radiation environment of the High-Luminosity LHC (HL-LHC), a project called TEDD (Tracker Endcap Double-Discs). The TEDD is composed of several Dees, the mechanical structures that hold the detection modules. In this work, we analyse the metrological properties of the Dees and prepare for the future Dee production.
Even if the interest in studying charged leptons is not new, it has been growing over the past few years. This is mainly because of the lack of new physics signals at the LHC and the potential signal of the g-2 experiment at Fermilab. Technical advances giving access to the study of the tau lepton at the LHC and at Belle II, along with the possibility to produce high-intensity pulsed muon beams, also point in this direction.
In this scope, the COMET experiment (based at J-PARC) searches for the coherent conversion of a muon to an electron. COMET would improve the current limit, $\mathrm{CR}(\mu + \mathrm{Au} \to e + \mathrm{Au}) < 7 \times 10^{-13}$, by a factor of 100. The main difficulty for COMET is the prominence of different backgrounds, especially the one induced by cosmic muons: cosmic muons interacting with the detector and its close environment can produce signal-like electrons. To suppress this background, the COMET detector is surrounded by a Cosmic Ray Veto (CRV) that is as hermetic as possible.
To estimate the number of muons that would pass through unprotected areas, such as the beamline, simulations based on backward Monte Carlo techniques are performed. These predictions have to be tested with a muon telescope before any construction of the CRV.
In the framework of the ATLAS Run 3 data-taking period, an early-data analysis targeting emerging jets is in preparation. This analysis is the first effort to study this signature in the ATLAS collaboration.
Emerging jets are part of a global Beyond-the-Standard-Model (BSM) theory called dark QCD. This theory predicts the existence of a new dark sector containing QCD-like particles and interactions, separated from the Standard Model (SM) but accessible through a portal that can be produced in proton-proton collisions at the LHC. In addition, the emerging-jets model predicts that dark particles produced at the LHC can decay back to Standard Model particles with a long lifetime, leading to displaced objects (tracks, vertices) in the ATLAS detector. This results in a highly exotic type of signature that until recently was poorly studied. This Run 3 analysis will benefit from a new trigger dedicated to this signature and from software upgrades for the reconstruction of large-radius objects. An overview of the current state of this analysis will be presented.
In the context of nuclear power reactor operation, decay heat is the thermal power that continues to be generated after shutdown. It is due to the radioactive decay of fission products, minor actinides, and the delayed fission of fissile nuclides. A proper characterization of decay heat is hence essential for reactor safety system design, spent fuel transportation, and repository management.
Decay heat can be calculated using reactor codes that simulate nuclide depletion by solving the Bateman equations, coupled to the summation method. As the name indicates, the summation method sums the decay-power contributions of the above-mentioned depleted-fuel components at a given time. Since the data required for this calculation, i.e., decay constants, fission yields, and mean decay energies, are evaluated from experimental data, they carry uncertainties that propagate to the decay-heat calculation.
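As a schematic illustration (a minimal sketch with invented nuclide data, not the actual thesis code), the summation method evaluates $P(t) = \sum_i \lambda_i N_i(t) \bar{E}_i$, and the nuclear-data uncertainties can be propagated by Monte Carlo sampling of the decay constants and mean decay energies:

import numpy as np

rng = np.random.default_rng(3)

# Invented three-nuclide inventory: decay constants (1/s), mean decay
# energies (MeV), initial numbers of atoms, and a relative 1-sigma uncertainty.
lam = np.array([1e-2, 5e-4, 1e-5])
E_mean = np.array([1.2, 0.8, 0.3])         # MeV per decay
N0 = np.array([1e18, 5e18, 2e19])
sigma_rel = 0.05                           # 5% on both lambda and E_mean

def decay_heat(t, lam, E_mean):
    """Summation method: P(t) = sum_i lambda_i * N_i(t) * E_i (independent decays)."""
    N_t = N0 * np.exp(-lam * t)
    mev_to_w = 1.602e-13                   # MeV/s -> W
    return np.sum(lam * N_t * E_mean) * mev_to_w

# Monte Carlo propagation: sample the nuclear data, recompute the decay heat.
t = 3600.0                                 # one hour after shutdown
samples = [
    decay_heat(t, lam * rng.normal(1, sigma_rel, 3), E_mean * rng.normal(1, sigma_rel, 3))
    for _ in range(5000)
]
print(f"P({t:.0f} s) = {np.mean(samples):.3e} W +/- {np.std(samples):.3e} W")

Here the decays are treated as independent for simplicity; a real calculation would solve the full Bateman equations for the coupled decay chains and sample the fission yields as well.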
To this end, the objective is to analyse the impact of these uncertainties on the decay-heat calculation, and a Monte Carlo method is chosen to propagate the decay-heat uncertainties through the fuel depletion. As a first step, a benchmarking exercise on a sample from a PWR assembly was carried out to estimate the spent-fuel nuclide inventory at different cooling times. The benchmarking was done as a contribution to the NEA working group on criticality safety, related to the uncertainty on the nuclide inventory. The sample chosen for this purpose is the ARIANE GU3 sample. Two Monte Carlo codes (Serpent and OpenMC) and two nuclear data libraries (JEFF3.2 and ENDF7.1) were used for the depletion calculation. The results are compared with experimental values and with the other participants' results. It has been shown that for most of the nuclides, the simulation results agree, within the error margins, with the experimental values and the other computational outputs. The subsequent steps will be to develop a code capable of sampling the decay-data uncertainties (fission yields, decay constants, mean decay energies) and of calculating the decay-heat uncertainties. It will first be applied to the ARIANE GU3 sample and to fission-pulse calculations, and later to the molten salt fast reactor concept.
Understanding dense matter is a major challenge at the present time. On the one hand, QCD, the fundamental interaction of nuclear matter, is known to be non-perturbative in such low-energy regimes; on the other hand, numerical approaches to solving QCD, known as lattice QCD, are blocked by the so-called "sign problem".
Effective nuclear models can therefore be employed to tackle the problem, and efforts have been made to connect these descriptions to the fundamental theory of QCD, in particular to its chiral properties. In this talk, I present one of these models, the Relativistic Hartree-Fock model with chiral symmetry and confinement (RHF-CC).
The EDELWEISS collaboration performs searches for light Dark Matter (DM) particles with high-purity germanium bolometers collecting both charge and phonon signals. Our recent results (PhysRevD.106.062004), obtained with detectors equipped with NbSi Transition Edge Sensors (TES) operated underground at the Laboratoire Souterrain de Modane (LSM), have shown the high relevance of this technology for future dark matter searches. As in most cryogenic dark matter experiments, this study was limited by unknown low-energy backgrounds. In this context, the EDELWEISS collaboration, as part of its SubGeV program, is working on a new design of germanium bolometers using NbSi TES: CRYOSEL. These innovative TES phonon sensors, called Superconducting Single Electron Devices (SSED), will be sensitive to the athermal phonons induced by the amplification of a single charge drifting in the strong electric field generated in the detector; they will hence be able to discriminate against our main low-energy background, which is not affected by this amplification.
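The amplification referred to here is the Neganov-Trofimov-Luke effect (standard for such detectors; the numbers below are indicative): a recoil of energy $E_R$ creates $n_{eh} = E_R/\varepsilon$ electron-hole pairs ($\varepsilon \approx 3$ eV in germanium for electron recoils), which drift across a bias voltage $\Delta V$ and yield a total phonon energy
\[ E_{\mathrm{phonon}} = E_R + n_{eh}\, e\, \Delta V, \]
so that a large $\Delta V$ amplifies the phonon signal of even a single drifting charge.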
Almost a century ago, astrophysical observations led to the suspicion that dark matter (DM) exists in gravitationally bound astrophysical objects such as our galaxy, the Milky Way. Since then, it has become a pillar of modern cosmology. Yet, although many efforts have been made to detect it, no DM signal has been observed in direct- or indirect-detection experiments.
Many candidates for DM exist, the most favoured being the Weakly Interacting Massive Particle (WIMP).
The DarkSide collaboration builds direct detectors of WIMPs using the TPC technology, operated at cryogenic temperature with liquid argon. Its next detector is DarkSide-20k (DS20k, with 20 t of liquid argon in the fiducial volume), which should start taking data in 2027 for a decade.
Once the detector is built, it is essential to understand its response to both signal and background perfectly. Indeed, DM searches belong to the physics of rare events, so the separation between background and signal is key; this is why the calibration of dark matter direct detectors is of such high stakes. I took part in the DS20k calibration by simulating the whole calibration process and establishing a strategy to make it as efficient as possible. I used six photon sources, simulating the light background, and three neutron sources. The calibration with neutrons is of high importance because neutrons are the residual background in our experiment: they are able to mimic the interaction between a WIMP and an argon nucleus.
I will present the detector, its calibration facilities and the results of these simulations.
The XENONnT experiment is a direct dark matter detection experiment using a time projection chamber filled with liquid and gaseous xenon. Its main goal is to detect the collisions of WIMPs with xenon nuclei. WIMPs (Weakly Interacting Massive Particles) are theoretical particles that are candidates for dark matter. Because of their weak interaction with matter, a very low background is necessary to observe their collisions with the liquid xenon. One of the difficulties of the instrument being its large size and its long data-taking periods, monitoring is crucial for its proper operation. Moreover, the detector response must be known precisely in order to reconstruct the various events as well as possible. For this, various calibration sources are used. Among them, Kr83m is an internal calibration source emitting gammas at an energy close to that expected for a WIMP-xenon collision. My presentation will therefore focus on the use of the signals from Kr83m events to monitor the spatial stability of the detector response.
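Schematically (a toy sketch; the array names, binning and numbers are invented, not XENONnT code), such monitoring amounts to mapping the mean Kr83m signal amplitude in spatial bins and tracking the relative deviation of each bin over time:

import numpy as np

rng = np.random.default_rng(4)

# Toy Kr83m events: reconstructed radius-squared, depth z, and signal area.
n = 200_000
r2 = rng.uniform(0, 1, n)                       # normalized r^2 in [0, 1]
z = rng.uniform(-1, 0, n)                       # normalized drift depth
area = rng.normal(100, 5, n) * (1 + 0.03 * z)   # mild simulated z-dependence

# Bin the detector volume and compute the mean signal area per (r^2, z) cell.
r2_bins = np.linspace(0, 1, 11)
z_bins = np.linspace(-1, 0, 11)
sums, _, _ = np.histogram2d(r2, z, bins=(r2_bins, z_bins), weights=area)
counts, _, _ = np.histogram2d(r2, z, bins=(r2_bins, z_bins))
mean_map = sums / counts

# Relative deviation of each cell from the volume-averaged response; repeating
# this per calibration run tracks the spatial stability in time.
rel_dev = mean_map / np.average(mean_map, weights=counts) - 1
print(f"max |deviation| = {np.abs(rel_dev).max():.3%}")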
Dark matter (DM) constitutes about 85% of the total matter content of the Universe, and yet we do not know anything about its actual nature. In this talk, after a historical introduction, I will present my work on DM indirect detection, more specifically the computation of X-ray constraints on sub-GeV (or "light") DM. Photons of the Galactic ambient bath see their energy boosted up to X-ray energies when they scatter off electrons or positrons produced by the annihilation or decay of light DM particles. The X-ray fluxes produced by this process can be predicted and compared with data from X-ray observatories (e.g., INTEGRAL) to obtain competitive constraints on light DM.
The discovery and confirmation of the cosmic microwave background (CMB) is landmark evidence for the Big Bang model. Following the measurements of the CMB fluctuation power spectrum and other experiments, the ΛCDM model was established and is considered the most successful cosmological model. Inflation, a period of accelerated expansion in the very early Universe, provides a mechanism for generating the structures that are the seeds of the CMB fluctuations. However, the physics of inflation is not understood, and probing it requires knowledge of particle physics in environments of extremely high temperature and density that are hard to reach in the laboratory. Most inflation models predict the presence of primordial gravitational waves (GWs), which are quantified by the parameter $r$: the ratio between tensor and scalar fluctuations. The primordial GWs generate a curl pattern in the CMB polarization, named the B-mode, which is the best probe of inflation. The CMB B-mode polarization is challenging to measure and remains undetected today. LiteBIRD (the Lite (Light) satellite for the study of B-mode polarization and Inflation from cosmic background Radiation Detection) aims to achieve a precision of $\delta r < 0.001$, given the scientific goal of constraining inflation models. This precision is challenging to reach and requires accurate control of systematic effects. In this work, we study the calibration requirement on the beam far sidelobes, which are expected to be the primary source of systematics for LiteBIRD. The far sidelobes pick up the Galactic-plane emission and contaminate the high-Galactic-latitude area of the sky; an error in our knowledge of the far sidelobes will cause an incorrect estimate of the foregrounds and will further affect the recovery of the CMB B-mode map. We simulate the calibration error, propagate it through the complete analysis pipeline and evaluate the bias $\delta r$. Given the error budget, we derive the calibration requirement on the beam far sidelobes for LiteBIRD.
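Schematically (a generic matched-template estimate, not necessarily the exact pipeline used here), the bias induced by a residual systematic B-mode spectrum $C_\ell^{\mathrm{sys}}$ can be quantified by fitting it against the tensor template $C_\ell^{\mathrm{tens}}(r{=}1)$:
\[ \delta r \simeq \frac{\sum_\ell C_\ell^{\mathrm{sys}}\, C_\ell^{\mathrm{tens}}(r{=}1)\,/\,\sigma_\ell^2}{\sum_\ell \left[ C_\ell^{\mathrm{tens}}(r{=}1) \right]^2 /\,\sigma_\ell^2}, \]
with $\sigma_\ell$ the per-multipole uncertainty; the calibration requirement follows from demanding that $\delta r$ stay within the error budget.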
The Lyman-α forest is detected as a series of absorption lines in quasar spectra, caused by the Lyman-α transitions of neutral hydrogen in the low-density, high-redshift intergalactic medium (IGM). It is a biased continuous tracer of the quasi-linear matter density field, and the auto-correlation function of the forests, as well as their cross-correlation with quasars, has been used to detect the Baryon Acoustic Oscillation (BAO) signal. Damped Lyman-α Systems (DLAs) are one of the most important systematics in the Lyman-α BAO analysis. DLAs are strong absorption regions in Lyman-α forests caused by neutral hydrogen along the sightline with extremely high column densities, usually $\log(N_{\mathrm{HI}}) \geq 20$. We present an accurate model to characterize the impact of DLAs on the measurement of the Lyman-α correlation function, as well as on the BAO fitting.
Since the first direct evidence of gravitational waves in 2015, the LIGO and Virgo collaborations have provided important results for astrophysics, cosmology and fundamental physics. The upgrades of the detectors through the years have increased their sensitivity and detection range, making their calibration ever more challenging. This talk reports the latest updates on a calibration method based on the local variation of the gravitational field, using Newtonian Calibrators (NCal) on the Virgo detector for the upcoming observing run O4.
Since 2015, 90 gravitational-wave (GW) signals, mainly produced by the mergers of binary black holes (BBHs), have been detected by the LIGO-Virgo-KAGRA (LVK) collaboration. Besides being one of the most important discoveries in physics of the 21st century, the detection of GWs also marks the beginning of a new era, opening a new window to study our universe. The LVK collaboration uses two pipelines (IcaroGW and GWcosmo) to jointly estimate the cosmological parameters (such as the Hubble constant or the baryonic matter density) and the population parameters of the sources (BBH mass function, redshift evolution, etc.).
The aim of this work is to generalize the IcaroGW hierarchical inference by adding BBH spin models to the analysis. These parameters are important to take into account, since the spins of BBH systems could correlate with some cosmological parameters and hence have an impact on the constraints that GWs put on cosmology.
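Schematically (the standard selection-corrected hierarchical likelihood used in GW population inference; the notation is mine, not necessarily IcaroGW's), the hyper-parameters $\Lambda$ (population and cosmology) are inferred from the set of events $\{d_i\}$ through
\[ \mathcal{L}(\{d_i\} \mid \Lambda) \propto \prod_{i=1}^{N} \frac{\int \mathrm{d}\theta\; p(d_i \mid \theta)\, p_{\mathrm{pop}}(\theta \mid \Lambda)}{\xi(\Lambda)}, \qquad \xi(\Lambda) = \int \mathrm{d}\theta\; P_{\mathrm{det}}(\theta)\, p_{\mathrm{pop}}(\theta \mid \Lambda), \]
where $\theta$ are the single-event parameters (now including spins) and $\xi(\Lambda)$ corrects for selection effects.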
Now that we find ourselves in the epoch of gravitational-wave (GW) astronomy, we can explore new avenues by which to test general relativity (GR) and search for extra dimensions. As a first step, we consider the quasinormal-mode (QNM) spectrum of a 4D Schwarzschild black hole embedded in a 7D partially compactified space-time of mixed scalar curvature. This also allows us to explore the properties of a space-time unrepresented in the Beyond-the-Standard-Model literature, whose higher-dimensional part is a nilmanifold (twisted torus) characterised by negative Ricci curvature. We compute the QNM frequencies in this setup using three numerical techniques and import constraints from the LIGO-Virgo-KAGRA collaboration to place bounds on a possible observable from extra dimensions. Our next step is to study the finite-temperature effective potential of a simplified 5D model to determine whether a first-order phase transition is possible, and whether it would be strong enough to generate a detectable GW signature.
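For orientation (the standard 4D Schwarzschild case; the modified potentials of the 7D setup are beyond this sketch), QNMs are solutions of the radial wave equation with purely outgoing boundary conditions,
\[ \frac{\mathrm{d}^2 \psi}{\mathrm{d}r_*^2} + \left[ \omega^2 - V(r) \right]\psi = 0, \qquad V(r) = \left(1 - \frac{2M}{r}\right)\left( \frac{\ell(\ell+1)}{r^2} + \frac{2M}{r^3} \right) \]
for a massless scalar perturbation, with $r_*$ the tortoise coordinate; the discrete complex frequencies $\omega$ form the QNM spectrum.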
The development of new facilities at CERN to study the properties of antimatter has revived the interest in the physics of interactions between matter and antimatter. These systems can constitute an efficient tool to determine the properties of antimatter, but they can also be used to improve our knowledge of matter, which is the main motivation of the antiProton Unstable Matter Annihilation (PUMA) project. This experimental project aims to study the neutron-skin densities of short-lived isotopes using low-energy antiprotons as a probe. To understand how the nuclear skin densities can be related to the measured antiproton-nucleon annihilations, an accurate description of the antiproton-nucleus interaction is required. The main goal of this thesis is the development of a microscopic ab initio approach to study the simplest cases of antiproton-nucleus annihilation. For this purpose, low-energy antiproton-nucleus scattering is studied by solving the Faddeev-Yakubovsky equations. Nucleon-antinucleon annihilation is a very complex process involving many meson-producing channels. While it is usually treated using optical potentials, the calculations of the present work are carried out in a coupled-channel formalism. This alternative approach will allow us to check the model dependence of the calculated observables on the input nucleon-antinucleon interaction and will contribute to the evaluation of the theoretical uncertainties.
Loosely bound nuclei are currently at the centre of interest in low-energy nuclear physics. The deeper understanding of their properties provided by the shell model for open quantum systems changes our comprehension of many phenomena and offers new horizons for spectroscopic studies, from the driplines to well-bound nuclei, for states in the vicinity of and above the first particle-emission threshold. In this talk, I will present the recent progress in the open-quantum-system description of nuclear states and reactions based on the Gamow shell model, which provides a comprehensive description of bound states, resonances and many-body scattering states in a single theoretical framework. Selected examples of the unified description of spectra and low-energy reactions, and in particular of the appearance of salient near-threshold correlations/clustering, will be demonstrated.
As neutrino oscillation physics enters the precision era, the modeling of neutrino-nucleus interactions constitutes an increasingly challenging source of systematic uncertainty for new measurements. To confront such uncertainties, a new generation of detectors is being developed, aiming to measure the complete (exclusive) final state of particles resulting from neutrino interactions. In order to fully benefit from the improved detector capabilities, precise simulations of the nuclear effects on the final-state nucleons are needed.
To address this problem, we have studied the in-medium propagation of knocked-out protons, i.e., final-state interactions (FSI), comparing the NuWro and INCL cascade models. INCL is a nuclear-physics model primarily designed to simulate nucleon-, pion- and light-ion-induced reactions on nuclei. This study of INCL in the framework of neutrino interactions highlights various novelties in the model, including the production of nuclear clusters (e.g., deuterons, $\alpha$ particles) in the final state.
We present a characterization of the hadronic final state after FSI, comparisons to available measurements of transverse kinematic imbalance, and an assessment of the observability of nuclear clusters.
The study presented here is a crucial milestone toward the precise simulation of FSI in neutrino-nucleus interactions and a complete estimation of related uncertainties.
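For reference (the standard definition used in the literature; the notation is mine), the transverse kinematic imbalance mentioned above is built from the momentum components of the outgoing lepton and nucleon transverse to the neutrino direction, e.g.
\[ \delta \vec{p}_T = \vec{p}_T^{\,\ell'} + \vec{p}_T^{\,N}, \]
whose magnitude would vanish for a charged-current quasi-elastic interaction on a free nucleon at rest, so that its distribution directly probes nuclear effects and FSI.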
The T2K experiment is a long-baseline neutrino-oscillation experiment located in Japan and dedicated to measuring the neutrino-oscillation parameters. The muon-neutrino beam produced at J-PARC is measured first by a group of near detectors and then, after travelling ~295 km, by a far water-Cherenkov detector, where the appearance of electron neutrinos in a muon-neutrino beam was observed for the first time. The near detectors comprise the INGRID on-axis detector, designed to monitor the position and stability of the beam, and the ND280 magnetized off-axis detector, whose main purpose is to measure and constrain the neutrino flux before oscillations. One of the current goals of the T2K experiment is to measure CP violation in the lepton sector and to improve the current knowledge of neutrino cross-section models. Such measurements require both larger statistics and a better understanding of the systematic uncertainties. Thus, the T2K upgrade program foresees an increase of the beam power and a modernization of the ND280 near detector. The upgrade includes replacing the pi0 detector with a Super-FGD target that will improve hadron reconstruction; it will be sandwiched between two high-angle TPCs, allowing high-angle leptons to be reconstructed. In addition, the entire structure will be covered with six Time-Of-Flight (TOF) planes, which will reduce the background from outside the Super-FGD. The talk will cover the performance of the HA-TPC prototypes that have been tested in a number of test-beam campaigns. In addition, it will concentrate on the validation of GUNDAM, a new generic ND280 fitter developed to perform the T2K oscillation analysis for current and future ND280 configurations.
Predicted in 1974 by Daniel Z. Freedman and discovered in 2017 by the COHERENT experiment, coherent elastic neutrino-nucleus scattering (often denoted CENNS or CEvNS) is a promising process for the study of neutrinos and of physics beyond the Standard Model at low energy. The RICOCHET experiment, currently under construction, is one of the experiments aiming to measure this process precisely with cryogenic germanium and zinc bolometers, in order to test the Standard Model and to try to uncover a potential signal of new physics. This presentation will cover the phenomenology of coherent neutrino scattering, the design of the RICOCHET experiment, and its sensitivity to this process and to its various new-physics channels.
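For reference (the standard leading-order SM expression), the CEvNS differential cross section on a nucleus of mass $M$ with $N$ neutrons and $Z$ protons reads
\[ \frac{\mathrm{d}\sigma}{\mathrm{d}E_R} = \frac{G_F^2 M}{4\pi} \, Q_W^2 \left( 1 - \frac{M E_R}{2 E_\nu^2} \right) F^2(q^2), \qquad Q_W = N - (1 - 4\sin^2\theta_W)\, Z, \]
with $E_R$ the nuclear recoil energy, $E_\nu$ the neutrino energy and $F(q^2)$ the nuclear form factor; the $N^2$ scaling of the rate is what makes the process "coherent".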