Description
Mapping the matter content of the Universe with weak gravitational lensing is one of the main probes for upcoming large cosmological surveys aiming to better constrain Dark Energy parameters. As the Legacy Survey of Space and Time (LSST) will detect fainter objects, the increased object density will lead to more overlapping sources: around 60% of LSST galaxies are expected to be blended, making blending one of the major systematics to address. Classical methods for solving the inverse problem of source separation, so-called "deblending", either fail to capture the diverse morphologies of galaxies or are too slow to analyze billions of galaxies. To overcome these challenges, we propose a deep learning-based approach that can handle the size and complexity of the data.
Taking forward the work on Debvader, a deblender that uses a modified form of Variational Autoencoder (VAE), our algorithm, called MADNESS, deblends by finding the Maximum A Posteriori (MAP) solution parameterized by the latent-space representation of galaxies learned with a deep generative model. We first train a VAE as a generative model and then model the underlying latent-space distribution so that it can be sampled to simulate galaxies. To deblend, we run gradient descent to find the MAP estimate, i.e., the particular latent-space realization z that minimizes the sum of the negative log-likelihood of the observed blend and the negative log probability of z being a galaxy.
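To make the optimization concrete, here is a minimal sketch of this kind of latent-space MAP estimation in PyTorch. The names `decoder` and `latent_prior`, the Gaussian pixel-noise model, the aligned-stamp blend model, and all parameter values are illustrative assumptions, not the actual MADNESS implementation.

```python
# Hypothetical sketch: MAP deblending by gradient descent in the latent
# space of a trained generative model. `decoder` stands in for the VAE
# decoder and `latent_prior` for a density model over z (e.g. an object
# exposing .log_prob); both are assumptions for illustration.
import torch

def map_deblend(blend_image, decoder, latent_prior, n_sources,
                latent_dim=16, n_steps=200, lr=0.05, noise_sigma=1.0):
    """Minimize -log p(blend | z) - log p(z) over the latent vectors z
    (one per source), i.e. find the MAP estimate of the posterior."""
    # One latent vector per overlapping source, optimized jointly.
    z = torch.zeros(n_sources, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)

    for _ in range(n_steps):
        opt.zero_grad()
        # Decode each latent vector into a galaxy stamp and sum them to
        # model the blend (fluxes add linearly; source positions are
        # ignored here for simplicity).
        model_image = decoder(z).sum(dim=0)
        # Gaussian negative log-likelihood of the observed pixels.
        nll = 0.5 * ((blend_image - model_image) ** 2).sum() / noise_sigma**2
        # Negative log probability of each z under the latent-space
        # density model, i.e. of z corresponding to a galaxy.
        nlp = -latent_prior.log_prob(z).sum()
        (nll + nlp).backward()
        opt.step()

    # MAP estimate: decoded stamps of the deblended galaxies.
    with torch.no_grad():
        return decoder(z), z
```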
In my talk, I will outline the methodology of our algorithm and evaluate its performance using flux reconstruction as a metric.
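As an illustration of that metric (the exact definition used in the talk may differ), a common choice is the relative flux residual per galaxy, with flux measured as the pixel sum of a postage stamp:

```python
# Hypothetical sketch of a flux-reconstruction metric: relative residual
# between the flux recovered for a deblended galaxy and the true flux of
# the corresponding isolated galaxy.
import numpy as np

def relative_flux_residual(deblended_stamp, true_stamp):
    """(F_deblended - F_true) / F_true, with flux as the pixel sum."""
    f_rec = np.sum(deblended_stamp)
    f_true = np.sum(true_stamp)
    return (f_rec - f_true) / f_true
```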