Speaker
Description
Over the past decade, Deep Learning has become an essential approach in many fields, from classical image processing to highly specialized scientific domains. It often shows very promising performance, surpassing classical methods in some applications and even human performance on certain tasks. However, because there is no theory that can guarantee their performance, the question of the reliability of these models has been raised. In particular, it would be a desirable property if these models could provide a confidence level associated with their predictions. A possible formalism for addressing this question comes from the study of uncertainty quantification in deep learning. In this talk, we first give some definitions of uncertainties and the associated metrics, for both classification and regression problems. We then describe some state-of-the-art methods that have been developed to estimate uncertainties for Deep Learning models, and discuss their limitations. Finally, we present a methodology for validating the estimated uncertainties through empirical tests.
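The abstract does not name specific estimation methods, but Monte Carlo dropout is a representative example of the family of techniques such talks typically survey: dropout is kept active at prediction time, and the spread of several stochastic forward passes serves as an uncertainty estimate. The toy one-layer model, its weights, and all function names below are illustrative assumptions, not the speaker's method:

```python
import random
import statistics

def forward(x, w, b, drop_p=0.5, rng=random):
    """One stochastic forward pass of a toy one-layer regressor, dropout kept on."""
    h = [max(0.0, wi * x + b) for wi in w]                    # ReLU hidden units
    h = [hi if rng.random() > drop_p else 0.0 for hi in h]    # dropout at test time
    return sum(h) / (len(h) * (1.0 - drop_p))                 # rescale to keep the expectation

def mc_dropout_predict(x, w, b, n_samples=200, seed=0):
    """Average several stochastic passes; the std is the uncertainty estimate."""
    rng = random.Random(seed)
    samples = [forward(x, w, b, rng=rng) for _ in range(n_samples)]
    return statistics.mean(samples), statistics.stdev(samples)

w = [0.5, -0.2, 0.8, 0.1]            # toy weights, assumed for illustration
mean, std = mc_dropout_predict(2.0, w, 0.1)
```

A larger `std` across the sampled passes signals inputs on which the model's prediction is less stable, which is the intuition these methods build on.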
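For the final point, validating estimated uncertainties by empirical tests, a common concrete check in classification is calibration: a model claiming 90% confidence should be right about 90% of the time. The expected calibration error (ECE) sketched below is one standard such test; the abstract does not specify which validation metrics the talk uses, so this is an assumed example:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence; ECE is the bin-weighted |accuracy - confidence| gap."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)   # clamp conf == 1.0 into the last bin
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)    # mean claimed confidence in the bin
        accuracy = sum(1 for _, ok in b if ok) / len(b)
        ece += (len(b) / n) * abs(accuracy - avg_conf)
    return ece

# An overconfident model: claims 0.9 but is right only half the time -> large gap.
ece = expected_calibration_error([0.9] * 10, [True] * 5 + [False] * 5)
```

An ECE near zero indicates well-calibrated confidence levels; a large value means the reported confidences cannot be trusted at face value, which is exactly the kind of empirical failure such a validation methodology is designed to expose.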