Speaker
Description
Predictions from empirical evidence come with many sources of potential uncertainty and error. First, the specific choices of models and concepts that we impose on the observations act as a strong prism on the resulting conclusions. Uncertainty about which functional form to use in a model naturally results in uncertainty in the conclusions. Outside of mature (post-paradigmatic) quantitative sciences such as physics, even the choice of which ingredients to put in the model (which quantities to measure) is open.
I will discuss how AI, or machine learning, brings a new angle to these questions, because it tackles complex observations with very flexible models. I believe that it opens new doors to scientific evidence by putting the burden of validity on model outputs rather than on model ingredients.
However, a model fitted on data should ideally express its uncertainty as a probability of the output given the input. This is particularly important in high-stakes applications such as health. I will discuss how controlling this uncertainty requires controlling a quantity known as calibration, but also going further and controlling the remainder, the "grouping loss", which leads to challenging estimation problems.
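As a sketch of the decomposition behind this last point (standard proper-scoring-rule notation, added here for illustration rather than taken from the talk): write S for the model's confidence score, Q = E[Y | X] for the true posterior probability, and C = E[Y | S] for the calibrated score. For a proper loss with associated divergence d (e.g. squared error for the Brier score), the expected loss decomposes as

  E[d(S, Y)] = E[d(S, C)] + E[d(C, Q)] + E[d(Q, Y)]
               (calibration) (grouping)  (irreducible)

The calibration term involves only S and Y and can be estimated by binning the scores; the grouping-loss term involves the unobserved posterior Q, which is what makes its estimation challenging.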