Likelihood-free in Paris

Europe/Paris
École Normale Supérieure, Paris


45 rue d'Ulm, Paris, France
Description

Likelihood-free in Paris

 https://indico.in2p3.fr/e/LFIParis


A gathering to discuss and discover: simulation-based inference, implicit inference, deep learning for simulations, and beyond!

Motivated by challenges in the Cosmology/Astrophysics community (especially upcoming experiments and surveys), the meeting is nevertheless open to anyone, from any field, who is excited by these new approaches.

The meeting is being held in person at the historic 45 rue d'Ulm site of the École Normale Supérieure. On the first evening, a cocktail reception will be held on campus; the conference dinner will take place nearby on the second evening.

We strongly encourage talk abstract submissions from early-career scientists as well as more established researchers.

 


Location

On the first day, registration will be held in the Rotonde at 45 rue d'Ulm, with the talks held in the Salle Dussane. There should be signs and people to help you get from the gatehouse to the Rotonde, but here is a map just in case: Plan_45ULM_RDC.pdf

On the second day we will be at 29 rue d'Ulm and on the third day we will return to 45 rue d'Ulm as before.

 

 

 


Invited speakers & panellists

  • Tom Charnock
  • Alan Heavens
  • Shirley Ho
  • Daniela Huppenkothen
  • Raul Jimenez
  • François Lanusse
  • Gilles Louppe
  • Ben Wandelt

 


Important dates

  • Abstract submission deadline: 3rd March
  • Registration deadline: 3rd March
  • LFIParis meeting: 20th-22nd April 2022

 


Covid restrictions

As the rules currently stand, a "pass sanitaire" (proof of vaccination) is required to attend.
At the moment we are optimistic, and all spaces at ENS are booked with the expectation of holding the meeting in person. If, closer to the time, we find that an in-person meeting is not possible, we will announce a move to a virtual format.

 


Code of conduct

All attendees must follow the IAU Code of Conduct: https://www.iau.org/static/archives/announcements/pdf/ann16007a.pdf

As we are guests of the ENS, we reserve the right to select (or refuse) attendees.

Participants
  • Adam Coogan
  • Alan Heavens
  • Alessio Spurio Mancini
  • Alex Cole
  • Alexandre Adam
  • Amandine Le Brun
  • Anchal Saxena
  • André ZAMORANO VITORELLI
  • Axel Lapel
  • Beatriz Tucci
  • Benjamin Miller
  • Benjamin Remy
  • Benjamin Wandelt
  • Christoph Weniger
  • Constant Auclair
  • Cyrille Doux
  • Daniela Huppenkothen
  • David Yallup
  • Dirk Scholte
  • Elias Dubbeldam
  • Erwan Allys
  • Francesca Gerardi
  • Francois Boulanger
  • Francois Lanusse
  • Gabriel Jung
  • Gilles Louppe
  • Guilhem Lavaux
  • Hadi Sotoudeh
  • Jamal El Kuweiss
  • James Alvey
  • Jason McEwen
  • Jean-Luc Starck
  • Jed Homer
  • Joel GEHIN
  • Julia Linhart
  • Justin Myles
  • Justine Zeghal
  • Ken Ganga
  • Kiyam Lin
  • Konstantin Karchev
  • Leander Thiele
  • Luca Tortorelli
  • Majd Shalak
  • Malavika Vasist
  • Mario Morvan
  • Martin Bucher
  • Matthew Docherty
  • Maximilian von Wietersheim-Kramsta
  • Natalia Korsakova
  • Natalia Porqueres
  • Niall Jeffrey
  • Nick Kaiser
  • Nicolas Cerardi
  • Nicolas Chartier
  • Nina Bonaventura
  • Noemi Anau Montel
  • Pablo Lemos
  • Pablo Richard
  • Raul Jimenez
  • Roberto Trotta
  • Roger de Belsunce
  • Ronan Legin
  • Silvia Galli
  • Simon Prunet
  • Steven Gratton
  • T. Lucas Makinen
  • Tony Bonnaire
  • Virginia Ajani
  • Will Handley
  • Yongseok Jo
  • Zheng Zhang
  • Ève Campeau-Poirier
    • Registration
    • Welcome: Conference start
    • Invited talk: Ben Wandelt
    • Talks: Wednesday A
      Session chair: Francois Boulanger
      • 1
        Normalising Flows for data analysis of the Laser Interferometer Space Antenna

        In gravitational-wave astronomy, the main quantity we infer from an observation is the posterior distribution of the parameters describing the gravitational-wave model for a particular source.
        The posterior distribution is estimated using Bayes' theorem, assuming that we know the model for the signal and that the noise is Gaussian with a known power spectral density.
        The traditional way to perform this estimate is to apply sampling techniques such as MCMC or Nested Sampling. However, these require many evaluations of the likelihood and can take days or months to converge to a good solution.
        At the same time, we expect to observe electromagnetic counterparts to Massive Black Hole Binaries (MBHBs) during the inspiral and merger phases, which can be quasi-transient and evolve very quickly. Many MBHB signals will spend only a couple of hours in the LISA sensitivity band.
        Therefore, to inform electromagnetic observatories and enable multi-messenger observations, it is particularly important to perform very fast parameter estimation and to predict the location of the source from the gravitational-wave signal alone.

        I will present how we can perform parameter estimation with normalising flows for signals such as MBHBs. Moreover, I will describe the particular difficulties of our problem and possible ways forward to resolve them. (The underlying Bayesian setup is sketched after this abstract.)

        Speaker: Natalia Korsakova (APC)
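
        For context, here is a minimal sketch of the Bayesian setup referred to above, in generic notation (it is not specific to the speaker's pipeline). The posterior follows from Bayes' theorem with a Gaussian (Whittle-type) noise likelihood:

        $$ p(\theta \mid d) = \frac{p(d \mid \theta)\, p(\theta)}{p(d)}, \qquad \ln p(d \mid \theta) = -2 \sum_k \int_0^{\infty} \frac{\big|\tilde{d}_k(f) - \tilde{h}_k(f;\theta)\big|^2}{S_{n,k}(f)}\, \mathrm{d}f + \mathrm{const}, $$

        where $\tilde{d}_k$ is the frequency-domain data in channel $k$, $\tilde{h}_k(f;\theta)$ the waveform template and $S_{n,k}(f)$ the noise power spectral density. A conditional normalising flow $q_\phi(\theta \mid d)$ trained on simulated pairs $(\theta, d)$ approximates this posterior directly, so inference for a new observation only requires forward passes through the network rather than fresh likelihood evaluations.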
      • 2
        Inferring Planetary Transits Parameters with Physics-constrained Deep Learning Models

        Deep learning models offer several appealing properties for solving the inverse problem on transit light curves: they can learn arbitrary time-correlated noise distributions, provided there are enough examples; they are commonly scalable with respect to the number of examples and free parameters; and they are highly flexible, allowing any differentiable module to be integrated. We discuss various existing or promising approaches to using neural networks for inferring planetary transit parameters, all of which circumvent the need for explicit likelihood estimation. In particular, we present work in which an explicit forward transit model is integrated as an additional constraint in the loss function of a deep learning framework (a minimal sketch of such a physics-constrained loss follows this abstract). We show on simulated data how this approach reduces the prediction bias compared to otherwise physics-agnostic models, and finally discuss its applicability to real data and, more generally, the limitations of deep learning for this problem.

        Speaker: Mr Mario Morvan (University College London)
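
        As a companion to the abstract above, here is a minimal, illustrative PyTorch sketch of a physics-constrained loss of the kind described; TransitRegressor and forward_transit_model are hypothetical placeholders, not the speaker's actual architecture or simulator.

        # Hedged sketch of a physics-constrained loss for transit parameter regression.
        # `forward_transit_model` stands in for a differentiable transit simulator.
        import torch
        import torch.nn as nn

        class TransitRegressor(nn.Module):
            def __init__(self, n_bins, n_params=4):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(n_bins, 128), nn.ReLU(),
                    nn.Linear(128, n_params),  # e.g. depth, duration, mid-time, impact parameter
                )

            def forward(self, flux):
                return self.net(flux)

        def physics_constrained_loss(model, flux, true_params, forward_transit_model, lam=1.0):
            pred_params = model(flux)
            # Supervised term: match the known parameters of the simulated training examples.
            param_loss = nn.functional.mse_loss(pred_params, true_params)
            # Physics term: the predicted parameters, pushed through the differentiable
            # forward model, should reproduce the observed light curve.
            reconstruction = forward_transit_model(pred_params)
            physics_loss = nn.functional.mse_loss(reconstruction, flux)
            return param_loss + lam * physics_loss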
      • 3
        Simulation-based Inference for exoplanet characterization

        With the advent of new ground- and space-based instruments that image exoplanets and record their spectra across a broader wavelength range and at higher spectral resolution, complex atmospheric models are becoming crucial for a thorough characterization. This includes a detailed description of the clouds and their physics. However, the microphysics of the clouds is not observable, and the resulting latent parameters make the likelihood intractable, restricting characterization to either simplistic cloud models or time-consuming approximations of the likelihood function. As the parameter space expands, this framework is bound to reach its limits. Hence, in this work we suggest leveraging a novel deep learning approach called Neural Posterior Estimation (NPE). NPE is a simulation-based inference (SBI) algorithm that directly estimates the posterior, sidestepping the need to compute the likelihood. Once trained, the network provides an estimate of the posterior distribution for any given observation. The key factor in this approach is that the density estimator is amortized, meaning that, once trained, the inference itself does not require further simulations and can be repeated several times with different observations, hence saving a lot of time. (A minimal NPE sketch follows this abstract.)

        Speaker: Malavika Vasist (University of Liege)
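
        As a companion to the abstract above, a minimal amortized NPE sketch using the public sbi Python package (API as of roughly 2022); the prior and simulate_spectrum below are toy placeholders, not the speaker's atmospheric retrieval setup.

        # Hedged sketch: amortized Neural Posterior Estimation with the `sbi` package.
        import torch
        from sbi.inference import SNPE
        from sbi.utils import BoxUniform

        prior = BoxUniform(low=torch.zeros(5), high=torch.ones(5))  # toy atmospheric parameters

        def simulate_spectrum(theta):
            # Placeholder forward model; in practice a radiative-transfer simulator.
            mixing = torch.linspace(0.1, 1.0, 5).unsqueeze(1) * torch.ones(5, 100)
            return theta @ mixing + 0.01 * torch.randn(theta.shape[0], 100)

        theta = prior.sample((10_000,))
        x = simulate_spectrum(theta)

        inference = SNPE(prior=prior)
        density_estimator = inference.append_simulations(theta, x).train()
        posterior = inference.build_posterior(density_estimator)

        # Amortization: inference on a new observation needs no further simulations.
        x_obs = simulate_spectrum(prior.sample((1,)))
        samples = posterior.sample((1000,), x=x_obs)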
    • Talks: Wednesday (shorter)
      • 4
        Emulating 2-body decaying dark matter with neural networks

        The exact nature of dark matter (DM) remains unknown, and the 2-body decaying dark matter model, one of the minimal extensions of the standard cold dark matter model ($\Lambda$CDM), has been shown to be an interesting DM candidate, notably for its potential to relax the well-known $\sigma_8$ tension. Moreover, some studies have even reported a preference for this model over standard $\Lambda$CDM. In this model, DM particles decay into dark radiation and stable daughter particles, which receive a velocity kick as a consequence of the decay. A well-established way of simulating such a model is to use an $N$-body code. However, directly running $N$-body simulations within an MCMC framework is computationally prohibitive even on the most powerful machines available. A faster prediction method that bypasses the $N$-body part of the forward modelling is therefore indispensable.

        In our work, we combine two machine-learning techniques to build a fast prediction tool for inferring the nonlinear effects of 2-body decaying dark matter. First, we run $\sim$100 $N$-body simulations using Pkdgrav3 to obtain the matter power spectrum down to very small scales. We then compress the data using Principal Component Analysis and train sinusoidal representation networks (SIRENs) that reproduce the nonlinear power spectra; we call the result an emulator (a minimal SIREN layer is sketched after this abstract). The emulator can probe the model for various redshifts, spatial scales and the three 2-body decaying dark matter parameters. Our architecture emulates the dark matter impact with errors below $1$% at both the $1\sigma$ and $2\sigma$ levels, meeting the requirements of most currently ongoing and planned probes, such as KiDS-1000, DES and Euclid. We also present constraints on the 2-body decaying dark matter model derived from the latest observations, namely KiDS-1000 and Planck 2018.

        Speaker: Jozef Bucko (Institute for Computational Science, University of Zurich)
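
        As a companion to the abstract above, a minimal sketch of a SIREN-style network (sinusoidal activations) of the kind used for the emulator; layer sizes and the input/output convention are illustrative assumptions, not the speakers' exact configuration.

        # Hedged sketch of a SIREN (sinusoidal representation network) emulator head.
        import torch
        import torch.nn as nn

        class SineLayer(nn.Module):
            def __init__(self, in_features, out_features, omega_0=30.0):
                super().__init__()
                self.omega_0 = omega_0
                self.linear = nn.Linear(in_features, out_features)

            def forward(self, x):
                return torch.sin(self.omega_0 * self.linear(x))

        class SirenEmulator(nn.Module):
            """Maps (redshift, scale, 3 decaying-DM parameters) to PCA coefficients
            of the nonlinear power-spectrum response (assumed output convention)."""
            def __init__(self, in_features=5, hidden=128, n_pca=10):
                super().__init__()
                self.net = nn.Sequential(
                    SineLayer(in_features, hidden),
                    SineLayer(hidden, hidden),
                    nn.Linear(hidden, n_pca),
                )

            def forward(self, params):
                return self.net(params)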
      • 5
        Towards a Quasi-Universal Field-Level Cosmological Emulator

        We train convolutional neural networks to correct the output of fast and approximate N-body simulations at the field level. Our model, Neural Enhanced COLA (NECOLA), takes as input a snapshot generated by the computationally efficient COLA code and corrects the positions of the cold dark matter particles to match the results of full N-body Quijote simulations. We quantify the accuracy of the network using several summary statistics, and find that NECOLA can reproduce the results of the full N-body simulations with sub-percent accuracy down to $k\simeq1~h{\rm Mpc}^{-1}$. Furthermore, the model, which was trained on simulations with a fixed value of the cosmological parameters, is also able to extrapolate to simulations with different values of $\Omega_{\rm m}$, $\Omega_{\rm b}$, $h$, $n_s$, $\sigma_8$, $w$, and $M_\nu$ with very high accuracy: the power spectrum and the cross-correlation coefficients are within $\simeq1\%$ down to $k=1~h{\rm Mpc}^{-1}$. Our results indicate that the correction to the power spectrum from fast/approximate simulations or field-level perturbation theory is rather universal. Our model represents a first step towards the development of a fast field-level emulator to sample not only primordial mode amplitudes and phases, but also the parameter space defined by the values of the cosmological parameters.

        Speaker: Mr Neerav Kaushal (Michigan Technological University)
    • Invited talk: Francois Lanusse
    • Pot: drinks reception
    • Talks: Thursday A
      • 6
        Nested Sampling and Likelihood-Free Inference

        Nested Sampling is an established numerical technique for optimising, sampling, integrating and scanning a priori unknown probability distributions. Whilst typically used in the context of traditional likelihood-driven Bayesian inference, its capacity as a general sampler means that it is capable of exploring distributions on data [2105.13923] and on joint spaces [1606.03757]. (The core evidence integral it computes is sketched after this abstract.)

        In this talk I will give a brief outline of how nested sampling differs from other techniques and what it can uniquely offer in tackling the challenge of likelihood-free inference, and I will discuss ongoing work with collaborators applying it in a variety of LFI-based approaches.

        Speaker: Will Handley (University Of Cambridge)
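
        For reference, the central quantity nested sampling computes, in standard notation (not specific to the talk): the Bayesian evidence

        $$ \mathcal{Z} = \int \mathcal{L}(\theta)\, \pi(\theta)\, \mathrm{d}\theta = \int_0^1 \mathcal{L}(X)\, \mathrm{d}X, \qquad X(\lambda) = \int_{\mathcal{L}(\theta) > \lambda} \pi(\theta)\, \mathrm{d}\theta, $$

        where $X$ is the prior volume enclosed by the likelihood contour $\mathcal{L} = \lambda$. Because the algorithm only needs to draw points from $\pi$ and rank them by $\mathcal{L}$, the same machinery can explore any distribution for which sampling and a ranking function are available, which is what makes it usable on data distributions and joint spaces as described above.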
      • 7
        Compromise-Free Likelihood-Free Inference

        Likelihood-Free inference allows scientists to perform traditional analyses such as parameter estimation and model comparison in situations where the explicit computation of a likelihood is impossible. Amongst all methods, Density Estimation LFI (DELFI) has excelled due to its efficient use of simulations.

        However, despite its undeniable promise, current DELFI applications rely on a key approximation: the use of a point-estimate density estimator. The goal of this work is, instead of finding the fastest methods available, to ask the question: “How far can one get using current computing power without making any compromises or approximations?” By doing this, we hope to gain a better understanding of the method, and develop the basis for future DELFI algorithms.

        Speaker: Pablo Lemos (UCL)
      • 8
        Bayesian Neural Networks with Nested Sampling

        The frontier of likelihood-free inference typically involves density estimation with Neural Networks at its core. The resulting surrogate model used for inference faces the well-established challenges of capturing the modelling and parameter uncertainties of the network. In this contribution I will review progress made in building Neural Networks trained with Nested Sampling, which represents a novel form of Bayesian Neural Network. This new paradigm can uniquely capture the modelling uncertainty and provides a new perspective on the fundamental structure of Neural Networks.

        Speaker: David Yallup (University of Cambridge)
      • 9
        HARMONIC: Bayesian model comparison for simulation-based inference

        Simulation-based inference techniques will play a key role in the analysis of upcoming astronomical surveys, providing a statistically rigorous method for Bayesian parameter estimation. However, these techniques do not provide a natural way to perform Bayesian model comparison, as they do not have access to the Bayesian model evidence.

        In my talk I will present a novel method to estimate the Bayesian model evidence in a simulation-based inference scenario, which makes use of the learnt harmonic mean estimator (sketched after this abstract). We recently implemented this method in a public software package, HARMONIC, which allows one to obtain estimates of the evidence from posterior distribution samples, irrespective of the method used to sample the posterior distribution. I will showcase the performance of HARMONIC in multiple simulation-based inference scenarios where the estimated evidence can be compared with exact analytical results, including an example of model selection in the analysis of gravitational waveforms.

        The versatility of the model evidence estimation framework provided by HARMONIC, coupled with the robustness of simulation-based inference techniques, creates a new complete Bayesian pipeline for parameter estimation and model comparison from next-generation astronomical surveys.

        Speaker: Alessio Spurio Mancini (University College London)
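
        For reference, the learnt harmonic mean estimator used by HARMONIC can be written compactly (standard form from the literature; notation generic). Given posterior samples $\theta_i \sim p(\theta \mid d)$,

        $$ \hat{\rho} = \frac{1}{N} \sum_{i=1}^{N} \frac{\varphi(\theta_i)}{\mathcal{L}(\theta_i)\, \pi(\theta_i)} \;\longrightarrow\; \frac{1}{\mathcal{Z}}, $$

        where $\varphi(\theta)$ is a normalised target density learnt from the samples and chosen with narrower tails than the posterior so that the estimator has finite variance; the original harmonic mean estimator corresponds to $\varphi = \pi$. In a simulation-based setting the unnormalised posterior density $\mathcal{L}\pi$ would come from a learnt surrogate (e.g. a neural likelihood or posterior estimate) rather than an explicit likelihood.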
    • Talks
      • 10
        Data-driven reconstruction of Gravitational Lenses using Recurrent Inference Machine II

        Modeling strong gravitational lenses in order to quantify the distortions of the background sources and reconstruct the mass density in the foreground lens has traditionally been a major computational challenge. This requires solving a high-dimensional inverse problem with an expensive, non-linear forward model: a ray-tracing simulation. As the quality of gravitational lens images increases with current and upcoming facilities like ALMA, JWST, and 30-meter-class ground-based telescopes, the task of fully exploiting the information they contain requires more flexible model parametrization, which in turn often renders the problem intractable. We propose to solve this inference problem using an automatically differentiable ray-tracer, combined with a neural network architecture based on the Recurrent Inference Machine, to learn the inference scheme and obtain the maximum-a-posteriori (MAP) estimate of both the pixelated image of the undistorted background source and a pixelated density map of the lensing galaxy. I will present the results of our method applied to the reconstruction of simulated lenses using IllustrisTNG mass density distributions and HST background galaxy images. I will also discuss how our method shows promise for producing MAP estimates for the Cosmic Horseshoe (SDSS J1148+1930), which has challenged traditional reconstruction methods for over 15 years. Finally, I will discuss avenues for possible extensions of this framework to produce posterior samples in a high-dimensional space using simulation-based inference.

        Speaker: Alexandre Adam (Université de Montréal)
    • Invited talk: Gilles Louppe
    • Talks: Thursday B
      • 11
        The Measurement of Galaxy Population properties with Forward-Modelling and Approximate Bayesian Computation

        New methodologies to characterise and model the galaxy population as a function of redshift, able to overcome biases and systematic effects, are becoming increasingly necessary for modern galaxy surveys. In this talk, I am going to describe a novel method we developed for the measurement of galaxy population properties (Tortorelli+18, Tortorelli+20, Tortorelli+21) that relies on forward-modelling and Approximate Bayesian Computation (ABC, Akeret+15; a minimal ABC rejection sketch follows this abstract). The method builds upon realistic image and spectrum simulators (UFig and Uspec, Bergè+13, Fagioli+20), at the heart of which is a simple yet realistic parametric model for the galaxy population. The model parameters can be constrained through ABC by defining and minimising physically-motivated distance metrics between real and simulated photometric and spectroscopic data. By forward-modelling CFHTLS (Cuillandre+12) and PAUS survey (Martì+14) photometric data and SDSS spectroscopic data, we constrained the model parameters using ABC and measured, for the first time with likelihood-free inference (LFI), astrophysically relevant quantities such as the B-band galaxy luminosity function and the stellar population of galaxies. With the use of a simple yet realistic generative model and LFI, we show that it is possible to reproduce the diversity of galaxy properties seen in modern wide-field surveys.

        Speaker: Luca Tortorelli (University Observatory, Ludwig-Maximilians Universitaet Muenchen)
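
        As a companion to the abstract above, a minimal ABC rejection sketch (illustrative only; the speaker's analysis uses the UFig/Uspec simulators and physically motivated distance metrics rather than this toy setup):

        # Hedged sketch of rejection ABC with a toy simulator and summary-based distance.
        import numpy as np

        rng = np.random.default_rng(0)

        def simulate(params, n=1000):
            # Placeholder generative model for a galaxy-population summary.
            mu, sigma = params
            return rng.normal(mu, abs(sigma), n)

        def distance(sim, obs):
            # Distance between simple summary statistics (mean and scatter here).
            return np.hypot(sim.mean() - obs.mean(), sim.std() - obs.std())

        observed = rng.normal(1.0, 0.5, 1000)            # stand-in for real data
        epsilon, accepted = 0.05, []

        for _ in range(20_000):
            theta = rng.uniform([-2.0, 0.1], [2.0, 2.0])  # draw from the prior
            if distance(simulate(theta), observed) < epsilon:
                accepted.append(theta)

        posterior_samples = np.array(accepted)            # approximate ABC posterior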
      • 12
        Unbiased likelihood-free inference of the Hubble constant from light standard sirens

        Late-time measurements of the Hubble Constant (H0) are in strong disagreement with estimates provided by early-time probes. As no consensus on an explanation for this tension has been reached, new independent measurements of H0 are needed to shed light on its nature. In this regard, multi-messenger observations of gravitational-wave standard sirens are very promising, as each siren provides a self-calibrated estimate of its luminosity distance. However, H0 estimates from such objects must be proven to be free from systematics, such as the Malmquist bias. In the traditional Bayesian framework, accounting for selection effects in the likelihood requires calculating the fraction of detections as a function of the model parameters; a potentially costly and/or inaccurate process. This problem can be bypassed by performing a fully simulation-based and likelihood-free inference (LFI), training neural density estimators to approximate the likelihood function instead. In this work, I have applied LFI, coupled to neural-network-based data compression, to a simplified light standard siren model for which the standard Bayesian analysis can also be performed. I have demonstrated that LFI provides statistically unbiased estimates of the Hubble constant even in the presence of selection effects, and matches the standard analysis's uncertainty to 1-5%, depending on the training set size.

        Speaker: Francesca Gerardi (UCL)
      • 13
        Marginal likelihood-free cosmological parameter inference from type Ia supernovae

        Type Ia supernovae (SNIa) are standardisable candles that allow tracing the expansion history of the Universe and constraining cosmological parameters, particularly dark energy. State-of-the-art Bayesian hierarchical models scale poorly to future large datasets, which will mostly consist of photometric-only light curves, with no spectroscopic redshifts or SN typing. Furthermore, likelihood-based techniques are limited by their simplified probabilistic descriptions and the need to explicitly sample the high-dimensional latent posteriors in order to obtain marginals for the parameters of interest.
        Marginal likelihood-free inference offers full flexibility in the model and thus allows for the inclusion of such effects as complicated redshift uncertainties, contamination from non-SNIa sources, selection probabilities, and realistic instrumental simulation. All latent parameters, including instrumental and survey-related ones, per-object and population-level properties, are then implicitly marginalised, while the cosmological parameters of interest are inferred directly.
        As a proof of concept, we apply neural ratio estimation (NRE; a generic training sketch follows this abstract) to a Bayesian hierarchical model for measured SALT parameters of supernovae in the context of the BAHAMAS model. We first verify the NRE results on a simulated dataset the size of the Pantheon compilation ($\sim 1000$ objects) before scaling to $\mathcal{O}(10^6)$ objects as expected from LSST. We show, lastly, that with minimal additional effort we can also obtain marginal posteriors for all of the individual SN parameters (e.g. absolute brightness and redshift) by exploiting the conditional structure of the model.

        Speaker: Konstantin Karchev (SISSA / GRAPPA)
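
        As a companion to the abstract above, a generic neural ratio estimation training sketch (not the speaker's BAHAMAS pipeline): a classifier is trained to distinguish joint pairs $(\theta, x)$ from pairs with shuffled $\theta$, and its logit then approximates $\log p(\theta \mid x) / p(\theta)$.

        # Hedged NRE sketch in PyTorch; network sizes are illustrative.
        import torch
        import torch.nn as nn

        class RatioEstimator(nn.Module):
            def __init__(self, dim_theta, dim_x, hidden=128):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(dim_theta + dim_x, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, 1),
                )

            def forward(self, theta, x):
                return self.net(torch.cat([theta, x], dim=-1)).squeeze(-1)  # log-ratio logit

        def nre_step(model, optimizer, theta, x):
            # Joint pairs get label 1; pairs with shuffled theta (marginal) get label 0.
            theta_marginal = theta[torch.randperm(theta.shape[0])]
            logits = torch.cat([model(theta, x), model(theta_marginal, x)])
            labels = torch.cat([torch.ones(len(theta)), torch.zeros(len(theta))])
            loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return loss.item()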
      • 14
        Time delay cosmography with a neural ratio estimator

        The latest measurements of the Hubble constant, H$_0$, by local probes like supernovae and early-Universe probes like the Cosmic Microwave Background are still in ~$5 \sigma$ tension with each other. Time-delay cosmography with strong gravitational lensing is one of the alternative independent methods that could shed light on this tension. The upcoming Legacy Survey of Space and Time should observe at least 3000 lensed quasars with well-measured time delays. However, analyzing this many systems with the traditional method is not feasible due to computational costs. Fortunately, machine learning methods provide an opportunity to accelerate this procedure.

        Here, we discuss our ongoing work on estimating H$_0$ in a simulation-based inference framework using neural ratio estimators. This allows implicit marginalization over large sets of nuisance parameters, while providing an efficient way to estimate this low-dimensional variable. We discuss our simulation pipeline and the inference structure, show preliminary results on simulated data, and point to future directions and the challenges of applying the method to real data.

        Speaker: Ève Campeau-Poirier (Université de Montréal)
    • Talks: Thursday C
      • 15
        Accelerating Simulation-Based Inference with Differentiable Simulators.

        Recent advances in simulation-based inference algorithms using neural density estimators have demonstrated an ability to achieve high-fidelity posteriors. However, these methods require a large number of simulations, and their applications are extremely time-consuming.

        To tackle this problem, we are investigating SBI methodologies that can make use not only of samples from a simulator (which is the case when using a black-box simulator), but also of the derivative of a given sample. While state-of-the-art neural density estimators such as normalizing flows are powerful tools for approximating densities, their architecture does not always allow the simulation gradients to be included.

        In this work we present a dedicated approach to density estimation that allows us to incorporate the gradients of a simulator, and thus reduces the number of simulations needed to achieve a given posterior estimation quality. We also compare the simulation cost of existing SBI methods to our method.

        Speaker: Justine Zeghal (APC)
      • 16
        Sampling high-dimensional posterior with a simulation based prior

        We present a novel methodology to address high-dimensional posterior inference in a situation where the likelihood is analytically known, but the prior is intractable and only accessible through simulations. Our approach combines Neural Score Matching for learning the prior distribution from physical simulations, and a novel posterior sampling method based on Hamiltonian Monte Carlo and an annealing strategy to sample the high-dimensional posterior.

        In the astrophysical problem we address, measuring the lensing effect on a large number of galaxies makes it possible to reconstruct maps of the matter distribution on the sky, also known as mass maps. However, because of missing data and noise-dominated measurements, the recovery of mass maps constitutes a challenging ill-posed inverse problem.

        Reformulating the problem in a Bayesian framework, we target the posterior distribution of the mass maps conditioned on the observed galaxy shapes. The likelihood, encoding the forward process from the mass map to the shear map, is analytically known, but there is no closed-form expression for the full non-Gaussian prior over the mass maps. Nonetheless, we can sample mass maps from an implicit full prior through cosmological simulations, taking into account non-linear gravitational collapse and baryonic effects. We use a recent class of deep generative models based on Neural Score Matching to learn the full prior of the mass maps. We can then sample from the posterior distribution with MCMC algorithms, using an annealing strategy for efficient sampling in a high-dimensional space (a simplified, Langevin-style variant of such an annealed, score-based sampler is sketched after this abstract).

        We are thus able to obtain samples from the full Bayesian posterior of the problem and can provide mass-map reconstructions alongside uncertainty quantification.

        Speaker: Benjamin Remy (CEA Paris-Saclay)
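
        As a companion to the abstract above, an illustrative sampler combining an analytic likelihood score with a learnt prior score. The speakers use an HMC-based scheme with annealing; the simpler annealed Langevin variant below only sketches the same score decomposition, $\nabla \log p(\kappa \mid y) = \nabla \log p(y \mid \kappa) + \nabla \log p(\kappa)$. Here prior_score_net stands in for a trained neural score-matching model, and the forward operator is a symmetric toy case.

        # Hedged sketch of annealed, score-based posterior sampling for mass maps.
        import torch

        def likelihood_score(kappa, y, forward_op, noise_sigma):
            # Gaussian likelihood y = A(kappa) + n, with A assumed linear and symmetric here.
            residual = y - forward_op(kappa)
            return forward_op(residual) / noise_sigma**2

        def annealed_langevin(y, prior_score_net, forward_op, noise_sigma,
                              sigmas=(1.0, 0.3, 0.1, 0.03), n_steps=200, eps=1e-4):
            kappa = torch.randn_like(y)
            for sigma in sigmas:                          # anneal from coarse to fine scales
                step = eps * (sigma / sigmas[-1]) ** 2
                noise_scale = torch.sqrt(torch.tensor(step))
                for _ in range(n_steps):
                    score = (likelihood_score(kappa, y, forward_op, noise_sigma)
                             + prior_score_net(kappa, sigma))
                    kappa = kappa + 0.5 * step * score + noise_scale * torch.randn_like(kappa)
            return kappa                                  # an approximate posterior sample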
      • 17
        Truncated Marginal Neural Ratio Estimation with swyft

        Parametric stochastic simulators are ubiquitous in science, often featuring high-dimensional input parameters and/or an intractable likelihood. Performing Bayesian parameter inference in this context can be challenging. We present a neural simulation-based inference algorithm which simultaneously offers simulation efficiency and fast empirical posterior testability, which is unique among modern algorithms. Our approach is simulation efficient by simultaneously estimating low-dimensional marginal posteriors instead of the joint posterior and by proposing simulations targeted to an observation of interest via a prior suitably truncated by an indicator function. Furthermore, by estimating a locally amortized posterior our algorithm enables efficient empirical tests of the robustness of the inference results. Since scientists cannot access the ground truth, these tests are necessary for trusting inference in real-world applications. We perform experiments on a marginalized version of the simulation-based inference benchmark and two complex and narrow posteriors, highlighting the simulator efficiency of our algorithm as well as the quality of the estimated marginal posteriors.

        Our implementation of the above algorithm is called swyft. It (a) estimates likelihood-to-evidence ratios for arbitrary marginal posteriors, which typically require fewer simulations than the corresponding joint; (b) performs targeted inference by prior truncation, combining simulation efficiency with empirical testability; (c) seamlessly reuses simulations drawn from previous analyses, even with different priors; and (d) integrates dask and zarr to make complex simulations easy.

        Relevant code and papers can be found online here:
        https://github.com/undark-lab/swyft
        https://arxiv.org/abs/2107.01214

        Speaker: Benjamin Kurt Miller (University of Amsterdam)
      • 18
        Cosmological Applications of Truncated Marginal Neural Ratio Estimation

        I will describe some applications of Truncated Marginal Neural Ratio Estimation (TMNRE) to cosmological simulation-based inference. In particular, I will report on using SBI for CMB power spectra (based on https://arxiv.org/abs/2111.08030) and realistic 21cm simulations (work in progress). Along the way, I plan to discuss some thoughts on how to incorporate active learning scenarios with high-dimensional nuisance parameter spaces, as well as criteria we need to trust results generated via simulation-based inference.

        Speaker: Alex Cole (University of Amsterdam)
    • Talks
      • 19
        Hierarchical Probabilistic U-Net (HPU-Net) for generating high-dimensional posterior samples

        Deep generative models have proved to be powerful tools for likelihood-free inference, providing a promising avenue to address the problem of doing inference in very high-dimensional parameter space, particularly in the context of the upcoming generation of sky surveys. In this talk, I will present our ongoing exploration of the Hierarchical Probabilistic U-Net (HPU-Net) for generating high-dimensional posterior samples. I will summarize the experiments we conducted with HPU-Net and the methods we employ to assess the quality of its generated samples. We will also present the results of training this model in an adversarial setup and how it affects the quality of samples. We hope to apply this tool to the problem of reconstructing the initial conditions of the Universe, among others.

        Speaker: Mohammad-Hadi Sotoudeh
    • Debate
    • Dinner (preregistration required): Au Bistrot de la Montagne
    • Talks: Friday A
      • 20
        GLASS: A General Likelihood Approximate Solution Scheme

        We present a technique for constructing suitable posterior probability distributions in situations for which the sampling distribution of the data is not known. This is very useful for modern scientific data analysis in the era of “big data”, for which exact likelihoods are commonly either unknown, computationally prohibitively expensive or inapplicable because of systematic effects in the data. The scheme involves implicitly computing the changes in an approximate sampling distribution as model parameters are changed, via explicitly-computed moments of statistics constructed from the data.

        Speaker: Dr Steven Gratton (Kavli Institute for Cosmology Cambridge)
      • 21
        Towards universal simulation-based inference with TMNRE

        The correct interpretation of detailed astrophysical and cosmological data requires confrontation with equally detailed physical and detector simulations. From a statistical perspective, these simulations typically take the form of Bayes networks with an often very large number of uncertain or random parameters, which are generally intractable to analyse using likelihood-based techniques. In these cases, specific scientific questions are answered by engineering dedicated data analysis techniques and/or simplified simulation models. Examples include point-source detection algorithms, template regression, or methods based on one-point or two-point statistics. Importantly, these algorithms define the space of inference problems that can be solved and hence bound the information that we can extract from data, and they often introduce biases that can be difficult to control.
        Here I will argue that targeted simulation-based inference might provide a path towards precise and accurate answers for any possible inference problem, using relatively few simulations from the complete simulation model only. I will demonstrate this in the context of models of the gamma-ray sky, using Truncated Marginal Neural Ratio Estimation (TMNRE) to perform inference for point sources, point-source populations and diffuse emission components. I will highlight the importance of selecting the right network architectures and of validating the results, and conclude with open problems and challenges.

        Speaker: Christoph Weniger (GRAPPA)
      • 22
        Measuring individual dark matter halos in strong lenses with truncated marginal neural ratio estimation

        Strong lensing is a unique gravitational probe of low-mass dark matter (DM) halos, whose characteristics are connected to the unknown fundamental properties of DM. However, measuring the properties of individual halos in lensing observations with likelihood-based techniques is extremely difficult, since it requires marginalizing over the numerous parameters describing the configuration of the lens, the source and the low-mass halo population. In this talk I introduce an approach that addresses this challenge using a form of simulation-based inference called truncated marginal neural ratio estimation (TMNRE). TMNRE enables marginal posterior inference for the properties of an individual subhalo directly from a lensing image using a neural network, trained on data tailored to the image over a series of rounds. Using high-resolution mock observations generated with parametric lens and source models, I first show that TMNRE can infer a subhalo's properties in scenarios where likelihood-based methods are applicable. I then show that TMNRE makes it possible to extend this analysis to further marginalize over the properties of a population of low-mass halos, where likelihood-based methods are intractable. This paves the way towards robust marginal inference of individual subhalos in real lensing images, and complements efforts to directly measure the DM halo mass function from images.

        Speaker: Adam Coogan (Université de Montréal and Mila)
      • 23
        Using Neural Ratio Estimation and Probabilistic Image Segmentation to detect Dark Matter Subhalos

        Analyzing the light from strongly-lensed galaxies makes it possible to probe low-mass dark matter (DM) subhalos. These detections can give us insight into how DM behaves at small scales. Traditional likelihood-based analysis techniques are extremely challenging and time-consuming: one has to marginalize over all lens and source model parameters, which makes them practically intractable. Near-future telescopes will provide a large amount of observational data, so a fast, automated approach is needed. We are using the likelihood-free simulation-based inference (SBI) method Neural Ratio Estimation (NRE), in which neural networks learn the posterior-to-prior ratio.

        I will describe how I am combining NRE with a U-Net to directly detect the masses and positions of multiple subhalos at once. A U-Net is a CNN developed for image segmentation; it is used to classify each pixel of an image, with the network combining down- and upsampled information. Whereas 'traditional' image segmentation is only interested in binary predictions, 'probabilistic' image segmentation is able to calculate the pixel posteriors of the subhalo coordinates. To do this, one needs to correctly calibrate the results. With this approach, we can obtain, for every single pixel, the probability that it hosts a subhalo of a certain mass.

        Speaker: Mr Elias Dubbeldam (GRAPPA institute (University of Amsterdam))
    • Talks
      • 24
        Lifting weak lensing degeneracies with field-based inference

        With Euclid and the Rubin Observatory starting their observations in the coming years, we need highly precise and accurate data analysis techniques to optimally extract the information from weak lensing measurements. However, the traditional approach based on fitting some summary statistics is inevitably suboptimal as it imposes approximations on the statistical and physical modelling. I will present a new method of cosmological inference from shear catalogues, BORG-WL, which is simulation-based and uses a full physics model. BORG-WL jointly infers the cosmological parameters and the dark matter distribution using an explicit likelihood at the field level. By analysing the data at the pixel level, BORG-WL lifts the weak lensing degeneracy, yielding marginal uncertainties on the cosmological parameters that are up to a factor 5 smaller than those from standard techniques on the same data. I will discuss the current status and ways to meet the challenges of this approach and compare it to simulation-based inference with implicit likelihoods.

        Speaker: Natalia Porqueres (Imperial College)
    • Invited talk: Daniela Huppenkothen
    • Talks
      • 25
        Interpreting non-Gaussian posterior distributions of cosmological parameters with normalizing flows

        Modern cosmological experiments yield high-dimensional, non-Gaussian posterior distributions over cosmological parameters. These posteriors are challenging to interpret, in the sense that classical Monte-Carlo estimates of summary statistics, such as tension metrics, become numerically unstable. In this talk, I will present recent work where normalizing flows (NF) are used to obtain analytical approximations of posterior distributions, thus enabling fast and accurate computations of summary statistics as a quick post-processing step. First (arXiv:2105.03324), we develop a tension metric, the shift probability, and an estimator based on NFs, that work for non-Gaussian posteriors of both correlated and uncorrelated experiments. This allows us to test the level of agreement between two experiments such as the Dark Energy Survey (DES) and Planck using their full posteriors, but also the internal consistency of DES measurements. Second (arXiv:2112.05737), we use the NF differentiable approximation to define a local metric in parameter space. This allows us to define a covariant decomposition of the posterior, which is useful to characterize what different experiments truly measure. As an application, we estimate the Hubble constant, $H_0$, from large-scale structure data alone. These tools are available in the Python package tensiometer.

        Speaker: Cyrille Doux (Laboratoire de Physique Subatomique et de Cosmologie)
      • 26
        The Essence of the Cosmos: how much cosmological information is trapped in large-scale structure, and can it be extracted?

        How much cosmological information is embedded in large-scale structure, and can we extract it? Modern cosmological surveys aim to capture rich images or "fields" of evolving cosmic structure, but these are often too massive to be interrogated pixel-by-pixel at the field level. We demonstrate that simulation-based compression and inference can be equivalent to all-pixel field likelihoods. We compare simulation-based inference with maximally informative summary statistics, compressed via Information Maximising Neural Networks (IMNNs; the objective is sketched after this abstract), to exact field likelihoods. We find that (a) summaries obtained from convolutional neural network compression do not lose information and therefore saturate the known field information content, (b) simulation-based inference using these maximally informative nonlinear summaries recovers, nearly losslessly, the exact posteriors of field-level inference, bypassing the need to determine or invert covariance matrices or to assume Gaussian summary statistics, and (c) even for this simple example, inference with an implicit, simulation-based likelihood incurs a much smaller computational cost than inference with the explicit likelihood. This work uses a new IMNN implementation in Jax that can take advantage of a fully-differentiable simulation and inference pipeline. We further highlight extensions of this pipeline to cases where the cosmological field information is not known a priori, such as in full N-body gravitational and hydrodynamical simulations.

        Speaker: Mr Lucas Makinen (Imperial College London)
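
        For reference, a compact form of the Information Maximising Neural Network objective referred to above (standard formulation; notation generic). A network $f_w$ compresses data $d$ into summaries $t = f_w(d)$; from simulations one estimates the summary covariance $C_f$ and the derivatives of the mean summary with respect to the parameters, $\mu_{f,\alpha} = \partial \langle t \rangle / \partial \theta_\alpha$, giving the Fisher matrix

        $$ F_{\alpha\beta} = \mu_{f,\alpha}^{T}\, C_f^{-1}\, \mu_{f,\beta}. $$

        The weights $w$ are trained to maximise $\ln |F|$ (with a regularisation term that keeps $C_f$ well conditioned), and the resulting summaries are then used as the compressed data vector for simulation-based inference.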
      • 27
        Simulation-based inference from the CMB

        In this seminar, I will discuss challenges arising in cosmological data analysis: either likelihoods are intractable or systematics in the data cannot be properly modelled. How can we make reliable inferences from noise- and systematics-dominated signals, such as the optical depth to reionization (tau) or the tensor-to-scalar ratio (r) from large angular scale CMB data? I will present methods ranging from likelihood approximations to density-estimation likelihood-free approaches to constrain cosmological parameters. I will discuss the advantages and drawbacks of these methods and apply them to current observational data. The methods developed will be required for next-generation CMB surveys, such as LiteBIRD and the Simons Observatory.

        Speaker: Roger de Belsunce (University of Cambridge)
      • 28
        Towards a Likelihood-Free Inference Analysis of KiDS-1000 Cosmic Shear

        Likelihood-free inference (LFI) makes it possible to work with non-trivial likelihood functions, while fully propagating all uncertainties from the data vectors to the final inferred parameters. Nevertheless, this necessitates computationally optimised yet realistic forward simulations, which are not trivial to procure for a cosmic shear analysis (Jeffrey et al. 2020).

        In this work, we propose such a forward simulation pipeline, which produces observable Pseudo-Cls from lognormal random galaxy and shear fields. The pipeline reproduces a realistic KiDS-1000 shear catalogue by sampling galaxies and their shapes from the galaxy and shear fields while factoring in the survey's mask and redshift distributions. For added realism, other observational effects, such as variable survey depth, can be included in the simulations. For our LFI pipeline, we opt to obtain Pseudo-Cl cosmic shear observables from these catalogues, since they allow for similar accuracy and precision in the cosmological inference as other probes (Loureiro et al. 2021) while being more efficient to calculate. We find that the pipeline is internally consistent and produces realistic data vectors towards a likelihood-free analysis of KiDS-1000 cosmic shear.

        Speakers: Maximilian von Wietersheim-Kramsta (University College London), Kiyam Lin (UCL)
      • 29
        Towards a Likelihood-Free Inference Analysis of KiDS-1000 Cosmic Shear

        Cosmological weak lensing in the era of modern high-precision cosmology has proven itself to be an excellent probe of key parameters of the standard ΛCDM (Lambda Cold Dark Matter) model. However, the cosmological inference task of working with weak lensing data involves a complex statistical problem to solve within the likelihood function (Jeffrey et al. 2021). Likelihood-free inference (LFI) allows us to overcome the likelihood problem contained within the non-trivial stochastic modelling processes. For cosmic shear analysis, however, it is a challenge to procure forward simulations that are both accurate and computationally optimised.

        In this work, we propose to make use of a simulation pipeline that produces Pseudo-Cl cosmic shear observables, as they contain a similar level of information to other probes (Loureiro et al. 2021) whilst being more efficient to calculate. We demonstrate the power of a machine learning-based likelihood-free inference methodology, in the form of the PyDELFI package by Alsing et al. (2019) combined with score compression (sketched after this abstract), to recover cosmological parameter posteriors that are as good as those inferred through traditional methods, but with a nearly tenfold reduction in the number of necessary evaluations. We find that the performance of our chosen likelihood-free inference methodology is robust both to a poor choice of the fiducial cosmology used in the score compression and to poor compression through an inaccurate data covariance matrix.

        Speakers: Kiyam Lin (UCL), Mr Maximilian Von Wietersheim-Kramsta (UCL)
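
        For reference, the score compression referred to above can be written compactly (standard Gaussian-likelihood form; notation generic). For a data vector $d$ with model mean $\mu(\theta)$ and covariance $C$, evaluated at a fiducial cosmology $\theta_*$,

        $$ t_\alpha = \nabla_\alpha \mu^{T}\, C^{-1}\, \big(d - \mu\big) \Big|_{\theta_*}, $$

        plus a term involving $\partial C / \partial \theta_\alpha$ if the covariance depends on the parameters. This is the score $\partial \ln \mathcal{L} / \partial \theta_\alpha$ of a Gaussian likelihood, and it reduces the data vector to one number per parameter with, to linear order, no loss of information; the compressed summaries are then fed to the density estimator (here PyDELFI).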
    • Talks
      • 30
        Simulation-based inference of dark matter properties in strong gravitational lenses

        Precision analysis of strong gravitational lensing images can in principle characterize the population of small-scale dark halos and consequentially constrain the fundamental properties of dark matter (DM). In reality, this analysis is extremely challenging, because the signal we are interested in has a sub-percent level influence on high-variance data dominated by statistical noise.
        Robustly inferring collective substructure properties from gravitational lensing images requires marginalizing over all source and lens parameters, as well as those of numerous subhalos and line-of-sight halos. Conventional likelihood-based methods would therefore be extremely time-consuming, necessitating the exploration of a very high-dimensional parameter space, which is often intractable. Instead, we use a likelihood-free simulation-based inference (SBI) method called truncated marginal neural ratio estimation (TMNRE) that leverages neural networks to directly obtain low-dimensional marginal posteriors from observations.
        We present a new multi-stage method that combines parametric lensing models and TMNRE to constrain the DM halo mass function cutoff scale. We apply our proof-of-concept pipeline to realistic, high-resolution, mock observations, showing that it enables robust inference through marginalization over source and lens parameters, and large populations of realistic substructures that would be undetectable on their own. These first results demonstrate that this method is imminently applicable to existing lensing data and to the large sample of very high-quality observational data that will be delivered by near-future telescopes.

        Speaker: Noemi Anau Montel
      • 31
        Information content on primordial non-Gaussianity from the non-linear dark matter field

        Constraining primordial non-Gaussianity using large-scale structure data usually requires accurate predictions of the matter bispectrum, significantly limiting the range of scales which can be considered (linear and mildly non-linear regimes).
        In this talk, I will present a simulation-based inference approach which allows us to probe the non-linear regime. We combine the modal bispectrum estimator (a standard method to extract bispectral information from data) with an optimal compression scheme (using the score function) to build a quasi maximum-likelihood estimator for $f_\mathrm{NL}$ for several primordial shapes (local, equilateral, orthogonal). I will then show the constraints we obtained from the Quijote simulations, including a joint analysis with the power spectrum to disentangle the impact of several cosmological parameters from primordial non-Gaussianity.

        Speaker: Gabriel Jung (University of Padova)
      • 32
        Simulation-Based Inference in Strong Gravitational Lensing

        In the coming years, a new generation of sky surveys, in particular the Euclid Space Telescope and the Rubin Observatory's Legacy Survey of Space and Time (LSST), will discover more than 200,000 new strong gravitational lenses, an increase of more than two orders of magnitude compared to currently known samples. Accurate and fast analysis of such large volumes of data within a clear statistical framework is crucial for all sciences enabled by strong lensing. In this talk, I will discuss the critical role of simulation-based inference (SBI) in the context of strong gravitational lensing analysis for these surveys. I will present our results on obtaining the posteriors of the macro-parameters of individual strong lenses using machine learning models and share our ongoing work on inferring population-level statistics using hierarchical models.

        Speaker: Ronan Legin (Université de Montréal)
      • 33
        Scattering transform and generative models for LFI application

        Scattering transforms are a new class of statistics that have recently been developed in data science. They share ideas with convolutional networks, allowing in-depth characterization of non-Gaussian fields, but do not require any training stage. In particular, these statistics make it possible to build realistic generative models from a single image, which can be used as forward models for LFI applications (the first two orders of scattering coefficients are sketched after this abstract). In this talk, I will show an application of such generative forward models to CMB foreground removal on a single-frequency, BICEP-like sky patch in polarization.

        Speaker: Erwan Allys (LPENS, Paris)
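
        For reference, the first two orders of scattering coefficients referred to above (standard definitions; $\psi_\lambda$ are oriented wavelets and $\star$ denotes convolution):

        $$ S_1(\lambda_1) = \big\langle\, | x \star \psi_{\lambda_1} |\, \big\rangle, \qquad S_2(\lambda_1, \lambda_2) = \big\langle\, \big|\, | x \star \psi_{\lambda_1} | \star \psi_{\lambda_2} \big|\, \big\rangle, $$

        where $\langle \cdot \rangle$ is a spatial average. The modulus taken after each wavelet convolution couples different scales without any trained weights, which is why these statistics can characterise non-Gaussian fields, and serve as generative models, from a single image.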
    • LFIParis summary