Description
The LHCb experiment has deployed machine learning and artificial intelligence models in its real-time data processing since the start of Run 1 data taking. Contrary to common fears when the LHC was starting up, these models have proven not only more powerful than "classical" alternatives but in many cases also more robust to changing detector performance. Their judicious use has also made the real-time processing faster, over time enabling LHCb to deploy its physics-analysis-quality reconstruction in the second stage of that processing. We will describe the development of machine learning and AI models and algorithms within LHCb's real-time community, including the early use of quantised models and of models explicitly trained to be insensitive to physics quantities of interest, and place this work in the context of the physics breakthroughs it has enabled.

Looking towards the future, the LHCb collaboration is proposing to build a second upgrade of its detector: an ultimate flavour factory with unparalleled breadth of reach for heavy flavour and forward physics at the LHC. We will sketch the unprecedented challenges this proposal will pose to the real-time reconstruction and selection of physics-quality signals, and the ways in which we anticipate machine learning and AI models playing a central role in allowing LHCb to rise to this challenge.
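As a rough illustration of what a selection model trained to be insensitive to a physics quantity can look like, the sketch below trains a toy classifier with a generic distance-correlation penalty that pushes its background response towards independence from a stand-in "mass" variable. This is a minimal sketch under stated assumptions, not LHCb's production code or its specific decorrelation technique; the toy data, feature names, and the LAMBDA weight are all illustrative assumptions.

# Minimal sketch (not the LHCb implementation): a classifier trained so that its
# output on background stays decorrelated from a physics quantity of interest,
# here a toy "mass" feature, via a distance-correlation penalty in the loss.
import torch
import torch.nn as nn

torch.manual_seed(0)

def distance_correlation(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Sample distance correlation between two 1-D tensors (0 = independent)."""
    x, y = x.view(-1, 1), y.view(-1, 1)
    a, b = torch.cdist(x, x), torch.cdist(y, y)          # pairwise distance matrices
    A = a - a.mean(0, keepdim=True) - a.mean(1, keepdim=True) + a.mean()
    B = b - b.mean(0, keepdim=True) - b.mean(1, keepdim=True) + b.mean()
    dcov2 = (A * B).mean()                                # squared distance covariance
    return dcov2 / ((A * A).mean().sqrt() * (B * B).mean().sqrt() + 1e-12)

# Toy dataset: four kinematic-like features, a "mass" column, and a binary label.
n = 2048
features = torch.randn(n, 4)
mass = torch.rand(n) * 5.0                                # quantity the model should not learn
labels = (features[:, 0] + 0.1 * torch.randn(n) > 0).float()

model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
LAMBDA = 10.0                                             # strength of the decorrelation term (assumed)

for epoch in range(20):
    opt.zero_grad()
    logits = model(features).squeeze(1)
    scores = torch.sigmoid(logits)
    background = labels == 0
    # Classification loss plus a penalty that discourages any dependence of the
    # background score distribution on the mass variable.
    loss = bce(logits, labels) + LAMBDA * distance_correlation(
        scores[background], mass[background]
    )
    loss.backward()
    opt.step()

The same idea can be realised with adversarial training or uniformity-boosting instead of a distance-correlation penalty; the common goal is a selection whose efficiency does not sculpt the distribution of the quantity that will later be fitted.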