Dear Colleagues,
The third ECFA workshop on e+e- Higgs, Electroweak and Top Factories will take place in the center of Paris and will be held in person.
The Workshop will last from Wednesday, October 9th, 2024, 09:00 to Friday, October 11th, 16:00.
Registration is now open.
The scientific program will continue to develop in the coming months, according to the draft block schedule given in the timetable and following the call for abstracts, which is now open.
This workshop will be the last in the series of workshops on the physics, experiments and detectors for future e+e- factories before the start of the next update of the European Strategy for Particle Physics. It thus provides a crucial opportunity for the community working on future e+e- factories to gather and discuss the latest results and developments in these activities, in view of the submission of a report as input to the strategy update.
The central entry point of the ECFA study is accessible through this link
Previous editions:
We report on the latest sensitivity studies of FCC-ee to the measurement of the branching ratios of Higgs boson decays to quark-antiquark pairs and gluons.
The studies use simulated events scaled to integrated luminosities of 10.8/ab at sqrt(s)=240 GeV and 3.0/ab at sqrt(s)=365 GeV.
Jet flavour tagging is exploited to distinguish among different Higgs boson decays.
Various final states (H(jj) + ee/mumu, H(jj) + jj and H(jj) + missing energy) are reconstructed and a joint interpretation of their results is performed.
The expected precision on the branching ratios of decays to b, c and g is at the percent level or better, while the expected precision on the H->ss branching ratio is comparable in size to the value predicted by the Standard Model.
The Higgs to ss analysis is one of the ECFA HTE focus topics. We are working on the H->ss analysis using ILD full simulation, building on a previous ILD study of the H->bb/cc/gg branching ratio measurement and the latest DNN-based jet flavor tagging tool. This talk will focus on the application of strange tagging to the H->ss analysis, including performance studies of strange tagging as well as results on the H->ss sensitivity at ILD. The dependence on PID performance and detector configuration may be discussed as well.
(Details of analysis may be shown on a separate poster to be submitted by our student.)
The FCC-ee is a potential future Higgs factory that can continue to probe the validity of the electroweak theory. One of the key tasks is to study how the next generation of particle detectors can be optimised to study the Higgs boson in detail. This study uses the ZH process, where both the Higgs and the Z decay to a pair of jets, to investigate the impact of changing the flavour tagger performance on the measurements of the Higgs coupling with the IDEA detector concept proposed for FCC-ee. The ParticleNet tagger currently in use as a baseline in FCC-ee studies is re-trained for various possible IDEA vertex detector configurations. The ZH fully hadronic analysis is rerun for each re-trained tagger, and the expected 68% confidence level uncertainty on the Higgs coupling to the b-, c- and s-quarks and the gluon is used as the metric to determine the impact of the flavour tagger's performance.
The ParticleNet tagger is a graph neural network devoted to the tagging of jets from the hadronization of multiple flavors at the FCC-ee. Its impressive and unprecedented tagging performance allows for accessing rare and challenging hadronic final states. This study shows a fast-simulation-based characterization of the evolution of the ParticleNet performance as a function of the IDEA vertex detector single-hit resolution, material radiation length and number of layers. Furthermore, an attempt to study the impact on physics applications such as the all-hadronic and Higgs-to-invisible ZH final states will be shown.
We explore the possibility of CP violation in the complex-singlet extension of the 2HDM. The addition of the complex singlet paves the way for additional sources of CP violation compared to the 2HDM. If a Z2 symmetry is imposed on the complex singlet, the model can accommodate a dark matter candidate as well. We identify the regions of parameter space that can fit the DM observables and at the same time generate sufficient CP violation. The amount of CP violation is severely constrained by electric dipole moment experiments, which we take into account. Finally, we probe the CP violation in this model at present and future collider experiments.
We study possible CP-violation effects in the Higgs to Z-boson coupling at a future e+e- collider, e.g. the International Linear Collider (ILC). We find that the azimuthal angular distribution of the muon pair produced in e+e- -> HZ -> H mu+ mu- can be sensitive to such a CP-violation effect when transversely polarized initial beams are applied. Based on this angular distribution, we construct a CP-sensitive asymmetry and obtain it from a WHIZARD simulation. By comparing the SM prediction with the 2-sigma range of this asymmetry, we estimate the limit on the CP-odd coupling in the HZZ interaction, including studies with unpolarized and longitudinally polarized beams as well.
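For reference, one common way to write a CP-odd admixture in the HZZ vertex and to build an azimuthal asymmetry from the muon-pair plane is sketched below; the notation ($\tilde{b}_Z$, $\phi$) is illustrative and not necessarily that used in this study:

```latex
\mathcal{L}_{HZZ} \;\supset\; \frac{m_Z^2}{v}\, H Z_\mu Z^\mu
  \;+\; \frac{\tilde{b}_Z}{2v}\, H\, \epsilon^{\mu\nu\rho\sigma} Z_{\mu\nu} Z_{\rho\sigma},
\qquad
A_{CP} \;=\; \frac{N(\sin\phi>0) - N(\sin\phi<0)}{N(\sin\phi>0) + N(\sin\phi<0)},
```

where $\phi$ is the azimuthal angle of the $\mu^+\mu^-$ plane measured relative to the transverse beam polarization. Such an asymmetry vanishes in the CP-even SM limit and, for small couplings, grows linearly with the CP-odd coefficient.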
The ILD detector concept was originally developed for the International Linear Collider (ILC). Detailed simulations gauged against the performance of prototype components have shown that ILD in its ILC incarnation is ideally suited to pursue the physics program of a linear Higgs factory as well as of a higher-energy e+e− collider. Recently, the ILD collaboration has started to investigate how the detector concept would need to be modified in order to operate successfully in the experimental environment of a circular Higgs factory such as the FCC-ee. In particular, the interaction region, or machine-detector interface (MDI), requires substantial changes to make room for accelerator elements and to withstand backgrounds. This contribution presents the progress in the adapted reconstruction to account for the modified tracking detectors and to enable the assessment of the modified detector design in background and physics performance studies.
Jet flavour tagging is crucial in experimental high-energy physics. A tagging algorithm, DeepJetTransformer, is presented, which exploits a transformer-based neural network that is substantially faster to train.
The DeepJetTransformer network uses information from particle flow-style objects and secondary vertex reconstruction, as is standard for $b$- and $c$-jet identification, supplemented by additional information, such as reconstructed V$^0$s and $K^{\pm}/\pi^{\pm}$ discrimination, typically not included in tagging algorithms at the LHC. The model is trained as a multiclassifier to identify all quark flavours separately and performs excellently in identifying $b$- and $c$-jets. An $s$-tagging efficiency of $40\%$ can be achieved with a $10\%$ $ud$-jet background efficiency. The impact of including V$^0$s and $K^{\pm}/\pi^{\pm}$ discrimination is presented.
The network is applied on exclusive $Z \to q\bar{q}$ samples to examine the physics potential and is shown to isolate $Z \to s\bar{s}$ events. Assuming all other backgrounds can be efficiently rejected, a $5\sigma$ discovery significance for $Z \to s\bar{s}$ can be achieved with an integrated luminosity of $60~\text{nb}^{-1}$, corresponding to less than a second of the FCC-ee run plan at the $Z$ resonance.
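For intuition, efficiencies like those quoted above can be turned into a back-of-the-envelope discovery significance with the standard Asimov formula. All event yields below are hypothetical placeholders, not numbers from this study:

```python
import math

def asimov_significance(s, b):
    """Median discovery significance for s signal events on b background
    events (Asimov approximation, Cowan et al.)."""
    if b <= 0:
        return float("inf")
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Illustrative only: suppose n_ss produced s-sbar jet pairs pass with the
# quoted 40% s-tagging efficiency, while n_ud light-jet pairs pass with
# the quoted 10% background efficiency.
eff_s, eff_ud = 0.40, 0.10
n_ss, n_ud = 1.0e4, 5.0e4          # hypothetical produced yields
s, b = eff_s * n_ss, eff_ud * n_ud
print(asimov_significance(s, b))
```

Scaling the produced yields with the integrated luminosity then gives the luminosity at which the significance crosses 5, which is the logic behind the quoted $60~\text{nb}^{-1}$ figure.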
This talk gives a brief overview of the novel reconstruction tool CPID (Comprehensive Particle Identification), covering its structure and module library, its use in physics analyses and by developers, as well as first applications.
We are updating the jet flavor tagging based on the Particle Transformer (ParT) presented last year. We implemented b and c tagging and obtained a significant performance improvement over the previous software (also developed by the authors), LCFIPlus, as already reported at the previous workshop. We optimized the network parameters and input structure to further improve the performance and are investigating the limitations of the current network. Implementation of the inference process in the ILD framework is also under way (to be finished by the workshop) to enable its use in physics analyses. We are also investigating strange tagging with ParT, extending the classification to five jet categories (b, c, g, s, ud) for use in the Higgs to ss analysis.
In addition, we are working on particle flow with GNN and Transformer architectures. An initial GNN study gives compelling performance compared with the current PandoraPFA algorithm, which will also be reported.
The precision study of the Higgs boson is a primary goal for future e+e- colliders. Accurate identification of its decay products is crucial for these measurements. Utilizing full simulation of proposed detector concepts provides a realistic estimate of the expected physics performance. In this talk, I will present the first results on jet flavor tagging in full simulation for the proposed CLD detector at FCC-ee, achieved using a transformer-based neural network.
In place of traditional cut-and-count analyses, machine learning methods can provide powerful ways to analyse physics data. In this work, we present techniques involving boosted decision trees (BDTs) and deep neural networks (DNNs) to improve on the existing projected 95% CL limits for the HNL discovery potential at the FCC-ee, specifically for HNLs decaying into a final state of an electron and two jets. Considering HNLs in the mass range 10-80 GeV, with couplings $10^{-10} < |U_{eN}|^2 < 10^{-3}$, we report an increased sensitivity of up to two orders of magnitude in the couplings when compared to previous cut-and-count analyses.
I will review the status and latest developments of the Sherpa event generator and its application in particular to future Higgs/EW/Top factories. The newly released version 3.0 of Sherpa provides much needed upgrades while continuing the traditional focus of the framework on higher order corrections both in QCD and EW calculations that will be crucial for a successful physics program at a future lepton collider.
Several key observables of the high-precision physics program at
future lepton colliders will critically depend on the knowledge of the machine absolute luminosity. The determination of the luminosity relies on the knowledge of some process which is in principle not affected by unknown physics, so that its cross section can be computed within a well-established theory, like the Standard Model. Quantifying possible New Physics (NP) effects on such processes is therefore crucial. We present an exploratory investigation on possible NP contamination on reference processes and possible strategies to remove the uncertainties originating from such contamination.
In this presentation, I will provide an update on the technical benchmarking of Monte Carlo generators. I will showcase some preliminary results and discuss the roadmap for their contribution to the final ECFA report.
Beam-beam interactions constitute an important source of beam-induced background (BIB) at any $e^{+}e^{-}$ collider, with implications for the design and optimization of detectors at these machines and, ultimately, their physics reach. In this talk, we will present the status of BIB simulations for the Cool Copper Collider (C$^3$). We will report results for the simulation of incoherent $e^{+}e^{-}$ pair production, hadron photoproduction and halo muon production from interactions with collimator material, and discuss technical challenges with these simulations relevant for any $e^{+}e^{-}$ machine. Using full detector simulation for the SiD detector concept and utilizing the Key4hep framework, we assess the impact of these backgrounds on the occupancy of the various sub-detector systems, most notably the vertex detector, and evaluate the effects of variations in the bunch time-structure of the beams. Finally, we will report the technical progress towards a full, out-of-time pileup mixing procedure of these backgrounds with hard-scatter events using well-established iLCSoft tools. We will conclude by discussing the compatibility of the C$^3$ beam configuration with ILC-like detectors, as well as lessons learned in the process that are useful for background simulation and detector studies at future $e^{+}e^{-}$ colliders.
I will discuss the measurement of the tau polarisation at a Higgs Factory in the process e+ e- -> tau+ tau-, both at the nominal centre-of-mass energy and the radiative return to the Z. Aspects of required detector performance will also be discussed.
The data sample of $6\times10^{12}$ Z boson decays expected to be produced at the FCC-ee offers unprecedented opportunities for the precise measurement of physics observables. One of the areas in which large improvements are foreseen is the determination of tau lepton properties (lifetime, leptonic/hadronic widths, mass), allowing for key tests of lepton universality. These measurements will benefit from a low-background environment, initial-state energy-momentum constraints and a high Lorentz boost. They present strong challenges to match the $\mathcal{O}(10^{-5})$ statistical uncertainties, imposing strict detector requirements and novel experimental methods to limit systematic effects. In this presentation we will explore the measurement of the tau polarisation at the FCC-ee, focusing on some of the main experimental inputs and systematics, with full simulation studies to emphasize the capabilities of the FCC-ee for tau identification. The performance of different tau lepton reconstruction approaches in some of the leading decay modes ($\tau^{\pm}\rightarrow\pi^{\pm}\nu$, $\tau^{\pm}\rightarrow\pi^{\pm}\pi^{0} (\rho) \nu$, $\tau^{\pm}\rightarrow \pi^{\pm}\pi^{\mp}\pi^{\pm}(a_{1})\nu$) will also be discussed.
Electroweak precision measurements are stringent tests of the Standard Model and sensitive probes of New Physics. Accurate studies of the Z-boson couplings to the first-generation quarks could reveal potential discrepancies between the fundamental theory and experimental data. Future e+e- colliders running at the Z pole and around the ZH threshold would be an excellent tool to perform such a measurement, unlike the LHC, where hadronic Z decays are only accessible in boosted topologies. The measurement is based on a comparison of radiative and non-radiative hadronic decays. Due to the difference in quark charges, the relative contribution of events with final-state radiation (FSR) directly reflects the ratio of decays involving up- and down-type quarks. Such an analysis requires proper modeling and statistical discrimination between photons coming from different sources, including initial-state radiation (ISR), FSR, parton showers and hadronisation. In our contribution, we show how to extract the values of the Z couplings to light quarks and present the estimated uncertainties of the measurement.
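At leading order, the intuition behind this quark-charge weighting can be summarized as follows (a sketch only, not the analysis's full treatment of ISR/FSR separation): the FSR photon rate in hadronic Z decays scales with the squared quark charges,

```latex
\frac{N_{\gamma}^{\mathrm{FSR}}}{N_{\mathrm{had}}}
  \;\propto\;
  \frac{\sum_{q} Q_q^{2}\,\Gamma_{q\bar{q}}}{\sum_{q} \Gamma_{q\bar{q}}},
\qquad
Q_u = Q_c = +\tfrac{2}{3},\quad Q_d = Q_s = Q_b = -\tfrac{1}{3},
```

so that, since $Q_u^2 = 4\,Q_d^2$, the measured radiative fraction constrains the relative partial widths of up-type and down-type hadronic decays.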
At the future e+e- circular collider FCC-ee, a long data-taking period is also foreseen at the ttbar production threshold and slightly above, up to sqrt(s)=365 GeV, with more than 300 000 ZH events expected at these energies. We study the precision on the Higgs mass which can be reached with this dataset, and combine it with the measurements obtained with the same recoil mass technique in the e+e- and mu+mu- final states at sqrt(s)=240 GeV, which are also presented in detail. We also present the precision which can be obtained on the total ZH cross-section measurement at sqrt(s)=240 and 365 GeV.
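The recoil mass technique referred to above rests on the following standard relation, with $\sqrt{s}$ the collider energy and $E_{\ell\ell}$, $\vec{p}_{\ell\ell}$ the measured energy and momentum of the $Z\to\ell^+\ell^-$ pair:

```latex
m_{\mathrm{recoil}}^{2}
  \;=\; \left(\sqrt{s} - E_{\ell\ell}\right)^{2} - \left|\vec{p}_{\ell\ell}\right|^{2}
  \;=\; s - 2\sqrt{s}\,E_{\ell\ell} + m_{\ell\ell}^{2},
```

so the Higgs mass is inferred entirely from the beam energy and the lepton pair, independently of how the Higgs itself decays.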
The Higgs mass, one of the fundamental parameters of the Standard Model, has already been measured with a precision of 140 MeV with the data collected so far at the LHC. However, when looking for small deviations from the SM, the current precision, or the projected precision of the Higgs mass measurement at the LHC and HL-LHC, may not be enough. One prominent example is the SM prediction of the Higgs partial decay widths H → WW* and H → ZZ*, for which the Higgs mass uncertainty becomes one of the leading sources of parametric theory error. It is expected that at future e+e- colliders the Higgs mass precision can be significantly improved by the "recoil mass method", at least statistically. This research proposes a new method which may complement the recoil mass method in terms of systematic errors. The new method employs the signal channel of the Higgs decaying to a pair of fermions, in particular τ leptons or a bb quark pair, and makes use of transverse momentum conservation alone instead of the four-momentum conservation used in the recoil mass method. The key experimental observables are the momentum directions of the tau leptons, without any input from energy measurements; these directions can be measured by reconstructing the decay vertices of the tau leptons. The new method can in principle be applied at linear colliders and at the LHC as well. A further possible improvement, in the case of τ → 3-prong decays, is to reconstruct the decay vertex and use it together with the IP to directly obtain the τ direction. The method was studied by performing realistic detector simulation and physics analysis within the ILC and ILD frameworks, and it can be used in conjunction with other methods to improve Higgs mass measurements at colliders.
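The transverse-momentum-based reconstruction described above can be sketched as follows (neglecting tau masses and ISR; the notation is illustrative). With unit vectors $\hat{n}_1, \hat{n}_2$ along the two tau flight directions, transverse momentum balance against the reconstructed Z gives

```latex
E_1\,\hat{n}_{1,T} + E_2\,\hat{n}_{2,T} \;=\; -\,\vec{p}_{Z,T},
```

two linear equations for the two unknown energies $E_1, E_2$, after which

```latex
m_H^{2} \;\simeq\; \left(E_1 + E_2\right)^{2} - \left|E_1\,\hat{n}_1 + E_2\,\hat{n}_2\right|^{2},
```

so that no calorimetric energy measurement of the tau decay products enters, only directions.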
Large scale Monte Carlo studies are only possible with sufficient computing power. To make efficient use of these distributed resources, the DIRAC framework, and its instance for future High Energy Lepton Collider Studies, iLCDirac, offers the end users and production managers a user friendly interface. While studies for the ILC and CLIC have made use of iLCDirac for years, this presentation will detail how it was now adopted also for FCCee studies. The presentation will give a brief overview of the interface to end users and production managers, show how the system was employed in recent months to provide Monte Carlo samples for the ECFA Higgs/Electroweak/Top studies, and how these samples can be used. Recent developments about the integration of the full Key4hep software stack, and more specifically about the introduction of FCCee production workflows, will also be discussed.
Full simulation studies are an essential tool to estimate the physics reach of future colliders. Developing optimal reconstruction tools for future colliders is one of the main goals of Key4hep. To properly estimate performance, it is of particular importance to correctly treat beam-induced backgrounds and to estimate how they affect reconstruction efficiencies and resolutions for sophisticated algorithms such as particle flow clustering, a key ingredient for optimal jet energy resolution. This presentation will cover the development and integration of the background overlay processor, the interface to PandoraPFA for arbitrary detectors and related algorithms such as digitisers, and the issues discovered and resolved along the way.
We present an ML-based end-to-end algorithm for adaptive reconstruction in different FCC detectors. The algorithm takes detector hits from different subdetectors as input and reconstructs higher-level objects. For this, it exploits a geometric graph neural network trained with object condensation, a graph segmentation technique. We apply this approach to study the pattern recognition performance in the IDEA and CLD detectors using hits from the pixel vertex detectors and the drift chamber. We also build particle candidates from detector hits and tracks in the CLD detector. Our algorithm outperforms current baselines in efficiency and energy reconstruction and enables pattern recognition in the IDEA and CLD detectors. This approach is easily adaptable to new geometries and therefore opens the door to reconstruction-performance-aware detector optimization.
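The object-condensation objective mentioned above can be illustrated with a minimal numpy sketch (illustrative only; the actual model, inputs and loss weighting used in this study differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 6 hits embedded in a 2-D learned clustering space, belonging
# to two objects (labels 0 and 1); beta is the network's per-hit confidence.
coords = rng.normal(size=(6, 2))
labels = np.array([0, 0, 0, 1, 1, 1])
beta = rng.uniform(0.1, 0.9, size=6)

def condensation_potential(coords, labels, beta):
    """Minimal object-condensation potential (after Kieseler, EPJC 2020):
    each hit is attracted to the highest-beta hit ("condensation point")
    of its own object and repelled, within a unit margin, from the
    condensation points of all other objects."""
    q = np.arctanh(beta) ** 2 + 1e-3    # per-hit charge, grows with confidence
    loss = 0.0
    for obj in np.unique(labels):
        mask = labels == obj
        alpha = np.flatnonzero(mask)[np.argmax(beta[mask])]   # condensation point
        d = np.linalg.norm(coords - coords[alpha], axis=1)
        v_att = d ** 2 * q[alpha]                      # pull own hits inward
        v_rep = np.maximum(0.0, 1.0 - d) * q[alpha]    # hinge repulsion for others
        loss += np.sum(q * np.where(mask, v_att, v_rep))
    return loss / len(coords)

print(condensation_potential(coords, labels, beta))
```

Minimizing this potential drives hits of the same object to cluster around their condensation point while separate objects are pushed apart, which is what turns the GNN embedding into a graph segmentation.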
Charged long-lived particles can produce a characteristic "kinked" track when they decay. Since the ILD TPC can measure more than 200 hits along a particle's trajectory, it is a very powerful tool for detecting such kinked tracks. In this study, based on full detector simulation, a new, improved kink-finding method was developed for the TPC. The potential to constrain different BSM scenarios will also be presented.