Description
Machine learning methods have provided significant improvements to data analysis across a multitude of scientific fields. However, as ML finds more and more applications in science, the challenge of quantifying machine learning uncertainties moves to the forefront. This is especially notable in High Energy Physics (HEP), where high-precision measurements require precise knowledge of uncertainties. Moreover, systematic uncertainties, such as detector effects and calibration factors, are ubiquitous in HEP.
This creates a need for methods that are not only accurate and precise in the presence of such imperfectly understood systematic effects, but that can also provide reliable estimates of the uncertainty in their own predictions. Several methods have been proposed for ML uncertainty quantification; however, measuring and comparing the performance of these methods is highly non-trivial.
In this talk, we present several metrics for uncertainty quantification, compare their distinct advantages, and benchmark them on example uncertainty quantification challenges.
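To make the idea of such a metric concrete, a minimal sketch follows. It is not taken from the talk: it illustrates one common calibration-style metric, the empirical coverage of predicted 1-sigma intervals, on hypothetical toy data. All names and numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): true values, model predictions with Gaussian
# errors of width 0.5, and a claimed per-event uncertainty of 0.5.
y_true = rng.normal(0.0, 1.0, size=10_000)
y_pred = y_true + rng.normal(0.0, 0.5, size=10_000)
sigma_pred = np.full_like(y_pred, 0.5)

def empirical_coverage(y, mu, sigma, n_sigma=1.0):
    """Fraction of true values falling within mu +/- n_sigma * sigma."""
    inside = np.abs(y - mu) <= n_sigma * sigma
    return inside.mean()

cov = empirical_coverage(y_true, y_pred, sigma_pred)
# For well-calibrated Gaussian errors, roughly 68.3% of events should lie
# inside the 1-sigma band; large deviations signal over- or under-confidence.
print(f"1-sigma empirical coverage: {cov:.3f}")
```

Coverage is only one possible metric; it checks calibration on average but says nothing about, e.g., the sharpness of the predicted intervals, which is why comparing UQ methods requires a suite of complementary metrics.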