Jerzy DUDEK (IPHC and Université de Strasbourg)
It is (too) often thought, and even said, that a theory has predictive power when the comparison between the data and the theory's curve "looks good". [Something that looks good to one person may be quite unsatisfactory to another, and with such a definition we would have an infinity of different predictive powers in circulation - unacceptable in the XXIst century!]

This presentation is oriented towards experimental audiences. The idea originates from a sub-field of Applied Mathematics known as the "Inverse Problem", although this term has in fact little in common with the "inverse problem" of quantum mechanics, in which one reconstructs the potential of the Schrödinger equation from energy spectra and scattering information.

In short: we formulate an approach according to which each theory (in particular the nuclear ones) provides not only its results (numbers) but also the probabilities that these numbers appear in nature. For instance: what is the probability that the results for 132Sn, obtained with a Hamiltonian optimized for 208Pb, will in f a c t hold true [when the experiments are finally done]?

From this posing of the problem it becomes clear that we will present a stochastic analysis of the parameter adjustments of theories [why do we find in the literature over 120 different parametrisations of the Skyrme-HF Hamiltonian - and yet the predictions for exotic nuclei obtained with them are so very different?] and general hints on: What to do? - but first of all - What NOT to do? - with a given theory, if one does not want to l o s e the predictive power from the start.
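The notion of attaching probabilities to a theory's predictions can be illustrated with a minimal sketch (a hypothetical toy example, not the author's actual method or model): fit the parameters of a simple model to noisy "calibration" data, estimate the parameter covariance, and propagate sampled parameter sets to an extrapolated point, so the theory returns a distribution rather than a single number.

```python
import numpy as np

# Toy illustration (assumed setup, not the author's nuclear Hamiltonian):
# a linear model y = a + b*x stands in for a theory whose parameters (a, b)
# are adjusted to data, and x_new stands in for an "exotic" extrapolation
# point far from the calibration region.

rng = np.random.default_rng(0)

# Synthetic calibration data with Gaussian noise as experimental error.
x = np.linspace(0.0, 10.0, 20)
sigma = 0.3
y = 2.0 + 0.5 * x + rng.normal(0.0, sigma, size=x.size)

# Least-squares "parameter adjustment" of the model.
A = np.vstack([np.ones_like(x), x]).T
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Parameter covariance from the standard linear-regression formula
# sigma^2 (A^T A)^{-1}, assuming known, uniform measurement errors.
cov = sigma**2 * np.linalg.inv(A.T @ A)

# Monte Carlo: sample parameter sets consistent with the fit and
# propagate each one to the extrapolated point.
x_new = 15.0
samples = rng.multivariate_normal(coef, cov, size=10000)
pred = samples[:, 0] + samples[:, 1] * x_new

print(f"prediction at x={x_new}: {pred.mean():.2f} +/- {pred.std():.2f}")
```

The spread of `pred` is the point of the exercise: two parametrisations that fit the calibration data equally well can still disagree strongly once extrapolated, which is exactly the situation described for the many Skyrme-HF parametrisations.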