This meeting is dedicated to CosmoStat members and their collaborators, especially TITAN, ARGOS, TOSCA and FornaX DEEP field members. The deadline to register is Jan 10, 2025.
Capturing the full information in weak lensing data requires new analysis techniques going beyond the standard two-point statistics, which discard the non-Gaussian information in the data. I will present a field-level approach that directly analyses the shear maps at the pixel level and provides uncertainties on the cosmological parameters up to a factor of 5 smaller than those from two-point statistics applied to the same data. I will discuss the current status of this approach and the challenges to be met for its first application to real data.
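As context, a minimal sketch of the kind of posterior a field-level analysis targets, assuming a forward-modelled shear field with Gaussian pixel noise (the symbols below are generic placeholders, not necessarily the exact formulation used in the talk):

```latex
p(\theta, \delta_{\mathrm{init}} \mid \gamma_{\mathrm{obs}}) \;\propto\;
  \exp\!\Big[-\tfrac{1}{2}\,
    \big(\gamma_{\mathrm{obs}} - \gamma_{\mathrm{model}}(\theta, \delta_{\mathrm{init}})\big)^{\top}
    N^{-1}
    \big(\gamma_{\mathrm{obs}} - \gamma_{\mathrm{model}}(\theta, \delta_{\mathrm{init}})\big)\Big]\,
  p(\delta_{\mathrm{init}})\, p(\theta)
```

The inference runs jointly over the cosmological parameters \theta and the initial density field \delta_{\mathrm{init}}, rather than compressing the shear maps into summary statistics.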
Weak gravitational lensing is a powerful tool for probing the distribution of dark matter in the Universe. Mass mapping algorithms, which reconstruct the convergence field from galaxy shear measurements, play a crucial role in extracting higher-order statistics from weak lensing data in order to constrain cosmological parameters. However, little research has examined whether the choice of mass mapping algorithm affects the inference of cosmological parameters from weak lensing higher-order statistics. This study evaluates the impact of the mass mapping algorithm on the resulting cosmological constraints. We employ the Kaiser-Squires, iterative Kaiser-Squires, Wiener filter, and MCALens mass mapping algorithms to reconstruct the convergence field from simulated weak lensing data generated from the cosmo-SLICS simulations. From these maps, we compute the peak counts, wavelet peak counts, and starlet l1-norm as our data vectors. A Bayesian analysis with MCMC sampling is performed to estimate the posterior distributions of cosmological parameters, including the matter density, the amplitude of matter fluctuations, and the dark energy equation-of-state parameter. Our results indicate that the choice of mass mapping algorithm significantly affects the constraints on cosmological parameters and that the accuracy of the reconstruction is critical for cosmological inference from weak lensing data. Advanced algorithms such as MCALens, which offer a superior reconstruction of the convergence field, can therefore substantially enhance the precision of cosmological parameter estimates.
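For reference, a minimal sketch of the direct (non-iterative) Kaiser-Squires inversion named above, assuming flat-sky shear maps binned onto a regular grid; the function name and conventions are illustrative, and a real pipeline would also handle masks, borders, and noise:

```python
import numpy as np

def kaiser_squires(gamma1, gamma2):
    """Kaiser-Squires inversion: binned shear maps -> E/B-mode convergence maps."""
    ny, nx = gamma1.shape
    l1 = np.fft.fftfreq(nx)[np.newaxis, :]
    l2 = np.fft.fftfreq(ny)[:, np.newaxis]
    l_sq = l1**2 + l2**2
    l_sq[0, 0] = 1.0                        # avoid division by zero at l = 0

    gamma_hat = np.fft.fft2(gamma1 + 1j * gamma2)
    kernel = ((l1**2 - l2**2) - 2j * l1 * l2) / l_sq
    kappa_hat = kernel * gamma_hat
    kappa_hat[0, 0] = 0.0                   # the mean convergence is unconstrained

    kappa = np.fft.ifft2(kappa_hat)
    return kappa.real, kappa.imag           # E-mode, B-mode
```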
In this talk, I will present a plug-and-play (PnP) approach for reconstructing convergence maps from noisy shear measurements. The method aims to provide accurate estimates efficiently while eliminating the need to train a deep learning model for each new galaxy survey or region of the sky. Instead, the approach requires training a denoiser just once, on simulated convergence maps corrupted with Gaussian white noise. Additionally, we propose applying a distribution-free uncertainty quantification (UQ) method, conformalized quantile regression (CQR), to this mass mapping framework. Using a calibration set also derived from simulations, CQR provides coverage guarantees independent of any specific prior data distribution. We benchmark our results against CQR applied to existing mass mapping approaches, such as Kaiser-Squires, Wiener filtering, MCALens, and DeepMass. Our findings show that while the miscoverage rate remains constant across methods, the choice of mass mapping method significantly affects the size of the error bars.
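As a rough illustration of the CQR step described above, a sketch of the calibration and interval-adjustment logic; the variable names and the per-pixel formulation are assumptions, not the paper's exact implementation:

```python
import numpy as np

def cqr_calibrate(lo_cal, hi_cal, y_cal, alpha=0.1):
    """CQR calibration: compute the correction guaranteeing >= 1 - alpha coverage.

    lo_cal, hi_cal: predicted lower/upper bounds on the calibration set
    (e.g. per-pixel bounds on simulated convergence maps); y_cal: true values.
    """
    scores = np.maximum(lo_cal - y_cal, y_cal - hi_cal)     # conformity scores
    n = scores.size
    level = np.ceil((n + 1) * (1 - alpha)) / n               # finite-sample correction
    return np.quantile(scores.ravel(), min(level, 1.0))

def cqr_interval(lo_new, hi_new, qhat):
    """Widen (or tighten) a new prediction interval by the calibrated correction."""
    return lo_new - qhat, hi_new + qhat
```

The guarantee is distribution-free: it relies only on exchangeability between the calibration set and new data, not on any specific prior.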
Deep learning has shown great promise for improving medical image reconstruction, often surpassing traditional model-based iterative reconstruction (MBIR) methods. However, concerns remain about the stability and robustness of these approaches, particularly when they are trained on limited data. The Plug-and-Play (PnP) framework offers a promising solution, showing that stable reconstruction can be ensured provided the plugged network satisfies suitable conditions. Yet PnP remains underexplored in PET reconstruction.
This talk introduces a convergent PnP algorithm for low-count PET reconstruction, leveraging the Douglas-Rachford splitting method and a network trained specifically for the reconstruction task. We evaluate bias versus standard deviation trade-offs against MBIR, post-reconstruction processing, and PnP with a Gaussian denoiser across multiple regions, including an unseen pathological case. Our findings emphasize the importance of how convergence conditions are imposed: while spectral normalization underperformed, our deep equilibrium model remained competitive with convolutional architectures and generalized better to the unseen pathology. Our method achieved lower bias, and reduced standard deviation at matched bias, compared to MBIR. These results demonstrate PnP's potential to improve image quality and quantification accuracy in PET systems.
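For readers unfamiliar with the splitting, a generic Plug-and-Play Douglas-Rachford template along the lines described above; prox_data and denoiser are placeholders, and the actual algorithm presented in the talk may differ in detail:

```python
import numpy as np

def pnp_douglas_rachford(z0, prox_data, denoiser, n_iter=100):
    """Generic PnP Douglas-Rachford iteration (sketch).

    prox_data: proximal operator of the PET data-fidelity term (e.g. the
    negative Poisson log-likelihood), assumed computable, for instance with
    a few inner iterations. denoiser: the plugged network, standing in for
    the prox of the prior; convergence results require conditions on it
    such as (firm) non-expansiveness.
    """
    z = z0.copy()
    for _ in range(n_iter):
        x = prox_data(z)               # enforce data consistency
        y = denoiser(2.0 * x - z)      # reflected step through the denoiser
        z = z + y - x                  # Douglas-Rachford update
    return x
```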
With the advent of surveys like Euclid and Vera C. Rubin, astrophysicists will have access both to deep, high-resolution images and to multi-band images. However, no single dataset offers both simultaneously. It is therefore vital to devise image deconvolution algorithms that exploit the best of both worlds and can jointly analyse datasets spanning a range of resolutions and wavelengths. In this work, we introduce a novel multi-band deconvolution technique aimed at improving the resolution of ground-based astronomical images by leveraging higher-resolution space-based observations. The method capitalises on the fact that the Vera C. Rubin r, i, and z bands lie within the Euclid VIS band. The algorithm jointly deconvolves all the data, bringing the r-, i-, and z-band Vera C. Rubin images to the resolution of Euclid. We illustrate the effectiveness of our method in terms of resolution and morphology recovery, flux preservation, and generalisation to different noise levels.
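A toy sketch of one gradient step of a joint multi-band deconvolution objective of the kind described above, with all images placed on a common pixel grid for simplicity; the band weights, the quadratic data terms, and the absence of resampling and of a regularisation prior are all simplifying assumptions, not the actual method:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def joint_deconv_step(x, y_rubin, y_vis, psf_rubin, psf_vis, weights, lr=0.5):
    """One gradient step of a toy joint Rubin + Euclid VIS deconvolution.

    x: dict of high-resolution candidate images, one per Rubin band (r, i, z);
    y_rubin: dict of observed Rubin images; y_vis: the Euclid VIS image;
    psf_rubin: dict of per-band Rubin PSFs; psf_vis: the VIS PSF;
    weights: per-band weights modelling how r, i, z fluxes combine into VIS.
    """
    conv = lambda img, psf: np.real(ifft2(fft2(img) * fft2(psf)))
    corr = lambda img, psf: np.real(ifft2(fft2(img) * np.conj(fft2(psf))))  # adjoint

    # The VIS residual couples all bands: the model VIS image is a weighted
    # sum of the candidate band images convolved with the VIS PSF.
    vis_resid = sum(weights[b] * conv(x[b], psf_vis) for b in x) - y_vis

    new_x = {}
    for b in x:
        band_resid = conv(x[b], psf_rubin[b]) - y_rubin[b]
        grad = corr(band_resid, psf_rubin[b]) + weights[b] * corr(vis_resid, psf_vis)
        new_x[b] = x[b] - lr * grad
    return new_x
```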
This project explores different denoising methods for spectral-cube data in order to extract the maximum possible signal from ALMA and JWST observations. Mock IFU spectral cubes are generated from state-of-the-art high-resolution simulations, FIRE (Feedback In Realistic Environments), and the methods under study include blind source separation (BSS), wavelet transforms, and machine learning. The subsequent project (data in prep) involves using the spectral information from mock cubes to learn simulation-based galaxy properties with deep learning and comparing them with the scaling-relation results obtained from real spectral observations. Analyzing these results will allow us to learn more about the physics of galaxy evolution at high redshift. Planned analysis methods include machine learning interpretability and symbolic regression, among others.
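As an example of the wavelet branch of this comparison, a simple per-channel wavelet-thresholding baseline using PyWavelets; the wavelet choice, threshold rule, and function name are illustrative, not the project's actual pipeline:

```python
import numpy as np
import pywt

def wavelet_denoise_cube(cube, wavelet="db4", level=3, k=3.0):
    """Hard-threshold wavelet denoising applied slice by slice to an IFU cube.

    cube: array of shape (n_channels, ny, nx). The per-channel noise sigma is
    estimated from the finest-scale coefficients (median absolute deviation),
    and coefficients below k * sigma are set to zero.
    """
    out = np.empty_like(cube, dtype=float)
    for i, plane in enumerate(cube):
        coeffs = pywt.wavedec2(plane, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745     # MAD noise estimate
        new_coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(c, k * sigma, mode="hard") for c in detail)
            for detail in coeffs[1:]
        ]
        rec = pywt.waverec2(new_coeffs, wavelet)
        out[i] = rec[: plane.shape[0], : plane.shape[1]]        # crop possible padding
    return out
```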
Wide-field astronomical images contain a mixture of overlapping sources of very different natures, including stars, galaxies, diffuse emission, and artifacts, complicating scientific analyses. Single-channel source separation methods based on deep learning offer a direct and powerful approach to disentangling these components using only individual observations. In this talk, I will present a software prototype implementing such methods and explore its application to key science cases, highlighting the potential for exciting new discoveries.
Les Baux de Paris
71 Rue Mouffetard, 75005 Paris, France
https://maps.app.goo.gl/smHc1Dg28EMbSx4L7