Oct. 11–16, 2026
Time zone: Europe/Paris

Scientific Challenges

The three challenge tracks are designed to span the main areas where AI is transforming accelerator science. Final challenge definitions will be refined based on the profiles of accepted participants and the confirmed datasets.


Challenge 1 — Surrogate Modeling for Accelerators

Goal: Build fast, reliable surrogate models for complex accelerator subsystems, enabling rapid simulation and online use in control or optimization loops.

Example topics:

  • Beam dynamics with collective effects (space charge, impedance, etc.)
  • RF cavities and LLRF system response

Example of competing approaches:
Team A approach: Physics-informed / hybrid models — embedding known physical laws or structure into the ML architecture
Team B approach: Pure data-driven or foundation-model approaches — leveraging large-scale pretraining or flexible architectures without explicit physics

Key questions: How much accuracy is lost by removing physics? How much data is needed for each approach? How do the two compare under out-of-distribution conditions?
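The physics-vs-data trade-off can be illustrated on a toy problem. The sketch below is entirely our own illustration (the decay model, polynomial basis, and penalty weight are assumptions, not workshop material): a polynomial surrogate for a decaying signal y(t) = exp(-t) is fitted once from data alone and once with an added penalty enforcing the known law y' = -y at collocation points, in the spirit of the Team A / Team B split.

```python
import numpy as np

# Illustrative sketch only: compare a purely data-driven least-squares
# surrogate ("Team B" flavor) with a physics-informed variant ("Team A"
# flavor) for y(t) = exp(-t), whose known physics is y' = -y.

rng = np.random.default_rng(0)
degree = 4

t_train = np.linspace(0.0, 1.0, 8)                 # sparse, in-distribution data
y_train = np.exp(-t_train) + 0.01 * rng.normal(size=t_train.size)
t_col = np.linspace(0.0, 3.0, 60)                  # collocation grid, extends past the data

def basis(t):
    """Polynomial features 1, t, ..., t^degree."""
    return np.vander(t, degree + 1, increasing=True)

def d_basis(t):
    """Derivatives of the polynomial features."""
    cols = [np.zeros_like(t)] + [k * t ** (k - 1) for k in range(1, degree + 1)]
    return np.column_stack(cols)

def fit(lam_phys):
    """Least squares on data MSE + lam_phys * mean squared residual of y' + y."""
    X = basis(t_train)
    B = d_basis(t_col) + basis(t_col)              # each row evaluates y' + y
    A = X.T @ X / len(t_train) + lam_phys * B.T @ B / len(t_col)
    return np.linalg.solve(A, X.T @ y_train / len(t_train))

w_data = fit(0.0)   # data only
w_phys = fit(1.0)   # data + physics residual

# Compare out of distribution, beyond the training interval [0, 1]
t_test = np.linspace(1.5, 3.0, 20)
err_data = np.abs(basis(t_test) @ w_data - np.exp(-t_test)).mean()
err_phys = np.abs(basis(t_test) @ w_phys - np.exp(-t_test)).mean()
```

On this toy, the physics-penalized fit extrapolates far better outside the training range, which is exactly the out-of-distribution question the track poses.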


Challenge 2 — Anomaly Detection & Machine Protection

Goal: Detect, classify, and explain abnormal machine behavior from operational time-series data, in support of machine protection and predictive maintenance.

Data type: Time series from diagnostics and control systems

Example of competing approaches:
Team A approach: Statistical / signal-based methods combined with ML — classical signal processing, threshold methods, hybrid classical-ML pipelines
Team B approach: Deep learning and self-supervised learning — representation learning, autoencoders, contrastive approaches

Key questions: What counts as an anomaly? How do we evaluate detection without ground-truth labels? Which approach generalizes to unseen fault types?
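A minimal statistical baseline of the Team A kind can be sketched in a few lines. Everything below is invented for illustration (the synthetic channels, the injected fault, and the threshold rule are our assumptions): flag samples of a multichannel signal whose PCA reconstruction error exceeds a robust threshold learned from a healthy reference period.

```python
import numpy as np

# Illustrative sketch: PCA reconstruction-error anomaly detection on
# synthetic multichannel "diagnostics" data with one injected fault.

rng = np.random.default_rng(1)

# 3 correlated channels, 500 samples
t = np.arange(500)
base = np.sin(2 * np.pi * t / 50)
X = np.stack([base + 0.05 * rng.normal(size=t.size) for _ in range(3)], axis=1)
X[300:310, 2] += 1.5            # injected fault on channel 2

# fit PCA on a "healthy" reference period only
ref = X[:200]
mu = ref.mean(axis=0)
U, S, Vt = np.linalg.svd(ref - mu, full_matrices=False)
P = Vt[:1].T @ Vt[:1]           # projector onto the first principal component

# reconstruction error per sample over the whole run
resid = (X - mu) - (X - mu) @ P
score = np.linalg.norm(resid, axis=1)

# robust threshold: median + 5 * IQR of the reference scores
q25, q75 = np.quantile(score[:200], [0.25, 0.75])
thresh = np.median(score[:200]) + 5 * (q75 - q25)
flags = np.where(score > thresh)[0]
```

The fault breaks the cross-channel correlation rather than any single-channel bound, so a per-channel threshold would miss it while the reconstruction error catches it; this is the kind of structure-aware baseline deep methods are then compared against.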


Challenge 3 — Control, Optimization & Decision Support

Goal: Develop AI-assisted approaches for accelerator tuning, operational intelligence, and decision support — reducing the burden on human operators and improving machine performance.

Methods considered: Reinforcement learning, Bayesian optimization, imitation learning from expert operators

Example of competing approaches:
Team A approach: Model-based / constrained approaches — using a surrogate or physics model in the loop, with explicit operational constraints
Team B approach: Model-free or data-centric approaches — learning directly from logged operational data or online interaction without an explicit forward model

Key questions: How do we ensure safety and constraint satisfaction? How much prior knowledge is needed? Can the learned policy transfer across machines or operating modes?
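The model-based, constrained flavor (Team A) can be sketched with a tiny Bayesian-optimization loop. The toy objective, kernel settings, and the "quad strength" framing below are all our assumptions, not workshop material: a Gaussian-process surrogate drives a lower-confidence-bound search, and the operational constraint is enforced by restricting candidates to an allowed range.

```python
import numpy as np

# Illustrative sketch: constrained Bayesian optimization of a 1-D setting
# with a minimal Gaussian-process surrogate (RBF kernel, unit prior variance).

rng = np.random.default_rng(2)

def objective(x):
    """Stand-in for a measured quantity to minimize (unknown to the optimizer)."""
    return (x - 0.6) ** 2 + 0.01 * rng.normal()

def rbf(a, b, ls=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

# operational constraint: only settings inside [0, 1] may be probed
cand = np.linspace(0.0, 1.0, 101)

X = [0.1, 0.9]                       # initial safe probes
y = [objective(x) for x in X]

for _ in range(10):
    Xa, ya = np.asarray(X), np.asarray(y)
    K = rbf(Xa, Xa) + 1e-4 * np.eye(len(Xa))      # jitter for stability
    Ks = rbf(cand, Xa)
    mu = Ks @ np.linalg.solve(K, ya)               # posterior mean at candidates
    var = 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)
    # lower-confidence-bound acquisition: exploit low mean, explore high variance
    lcb = mu - 2.0 * np.sqrt(np.maximum(var, 0.0))
    X.append(float(cand[np.argmin(lcb)]))
    y.append(objective(X[-1]))

best = X[int(np.argmin(y))]
```

Because the acquisition is only ever evaluated on the constrained candidate grid, every probed setting satisfies the constraint by construction; a model-free method would instead have to learn the boundary from interaction, which is one of the safety questions above.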