Speaker
Description
Sampling from high-dimensional distributions is an important tool in Bayesian inference problems, such as cosmological field-level inference and Bayesian neural networks (BNNs).
Hamiltonian Monte Carlo and its tuning-free implementation NUTS have pushed the limits of the typical dimensionalities at which sampling is feasible. I will show that this limit can be pushed further by dispensing with the Metropolis-Hastings adjustment, at the cost of introducing asymptotic bias. I will show how this bias can be controlled so that it remains negligible compared to the Monte Carlo error, resulting in tuning-free implementations of unadjusted Hamiltonian, Langevin, and Microcanonical Langevin Monte Carlo. I will also show how the unadjusted approach can be used to improve sampling performance under massive parallelization. Finally, I will present applications to real-world problems, including BNNs.
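To make "unadjusted" concrete, the following is a minimal illustrative sketch in JAX of an unadjusted Langevin step: each iteration applies a gradient drift plus Gaussian noise, with no Metropolis-Hastings accept/reject correction, which is what introduces the asymptotic bias discussed above. This is not the speaker's implementation; the names (logdensity, ula_step, step_size) and the Gaussian target are assumptions chosen for the example.

    import jax
    import jax.numpy as jnp

    def logdensity(x):
        # Example target: standard Gaussian in d dimensions (an assumption).
        return -0.5 * jnp.sum(x**2)

    def ula_step(x, key, step_size=1e-2):
        # Unadjusted Langevin update: gradient drift plus Gaussian noise,
        # with the Metropolis-Hastings correction omitted. The omission
        # leaves an asymptotic bias that shrinks with the step size.
        noise = jax.random.normal(key, x.shape)
        return x + step_size * jax.grad(logdensity)(x) + jnp.sqrt(2 * step_size) * noise

    key = jax.random.PRNGKey(0)
    x = jnp.zeros(100)  # 100-dimensional chain state
    for _ in range(1000):
        key, subkey = jax.random.split(key)
        x = ula_step(x, subkey)

Because every step is accepted, each iteration costs only a gradient evaluation and the chain is trivially vectorizable over many parallel walkers, which is the property the parallelization results in the talk build on.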