23–25 Oct. 2018
Institut d'Astrophysique de Paris
Time zone: Europe/Paris

A deep learning approach for the classification of supernovae and the estimation of photometric redshifts

Not scheduled
15m
Amphithéâtre (Institut d'Astrophysique de Paris)

98bis Boulevard Arago, 75014 Paris

Speaker

Johanna Pasquet (CPPM)

Description

Future large surveys like the Large Synoptic Survey Telescope (LSST) aim to increase the precision and accuracy of observational cosmology. In particular, LSST will observe a large number of well-sampled type Ia supernovae, which will be one of the major probes of dark energy. However, the spectroscopic follow-up needed to identify supernovae and estimate redshifts will be limited. New automatic classification and regression methods that exploit photometric information alone therefore become necessary.
We have developed two separate deep convolutional architectures, one to classify supernova light curves and one to estimate the photometric redshifts of galaxies. PELICAN (deeP architecturE for the LIght Curve ANalysis) is designed to characterize and classify light curves using a spectroscopic training dataset that is small and non-representative of the testing dataset. It takes only multi-band light curves as input. Trained on a database of 2,000 simulated LSST supernova light curves, PELICAN detects 85% of type Ia supernovae with a precision higher than 98%.
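For illustration only, the sketch below shows a minimal 1-D convolutional network in PyTorch that classifies multi-band light curves (type Ia vs. other). The input shape, layer widths, number of time samples and all names here are assumptions made for the sketch; they do not reproduce PELICAN's published design.

    # Illustrative sketch only: a minimal 1-D CNN for multi-band light-curve
    # classification. N_BANDS and N_EPOCHS are assumed values, not PELICAN's.
    import torch
    import torch.nn as nn

    N_BANDS = 6      # assumed number of photometric bands (LSST ugrizy)
    N_EPOCHS = 128   # assumed number of time samples per light curve

    class LightCurveCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(N_BANDS, 32, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(32, 64, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.MaxPool1d(2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * (N_EPOCHS // 4), 64),
                nn.ReLU(),
                nn.Linear(64, 2),  # logits: [non-Ia, type Ia]
            )

        def forward(self, x):
            # x: (batch, N_BANDS, N_EPOCHS) flux measurements per band
            return self.classifier(self.features(x))

    model = LightCurveCNN()
    logits = model(torch.randn(8, N_BANDS, N_EPOCHS))  # dummy batch
    print(logits.shape)  # torch.Size([8, 2])

In such a setup, the softmax of the two logits gives the probability that a light curve belongs to a type Ia supernova, and a detection threshold on that probability trades completeness against precision.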
The second Convolutional Neural Network (CNN) estimates photometric redshifts and the associated probability distribution functions (PDFs) of galaxies. We have tested it on the Main Galaxy Sample of the 12th data release of the Sloan Digital Sky Survey. It takes 64x64 ugriz images as input and is trained on 90% of the sample. We obtain a standard deviation σ of (z_spec - z_phot)/(1 + z_spec) of 0.0091, with an outlier fraction of 0.3%. This improves on the current state-of-the-art value (σ ~ 0.0120) obtained by Beck et al. (2016, MNRAS, 460, 1371).
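Again for illustration only, the sketch below estimates a photometric-redshift PDF by classifying 64x64 ugriz images into narrow redshift bins, then computes the two metrics quoted above on dummy data. The bin count, redshift range and the 0.05 outlier threshold are assumptions, not the published configuration.

    # Illustrative sketch only: a CNN that outputs a redshift PDF over bins,
    # plus the σ and outlier-fraction metrics from the abstract.
    import numpy as np
    import torch
    import torch.nn as nn

    N_BINS = 180   # assumed number of redshift bins
    Z_MAX = 0.4    # assumed upper redshift of the sample
    bin_centers = (np.arange(N_BINS) + 0.5) * Z_MAX / N_BINS

    class PhotoZCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 5 = ugriz
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten(),
                nn.Linear(64 * 16 * 16, N_BINS),  # logits over redshift bins
            )

        def forward(self, x):
            # x: (batch, 5, 64, 64); softmax of the output is the redshift PDF
            return self.net(x)

    model = PhotoZCNN()
    pdfs = torch.softmax(model(torch.randn(4, 5, 64, 64)), dim=1)
    z_phot = (pdfs.detach().numpy() * bin_centers).sum(axis=1)  # PDF mean

    # Metrics from the abstract, computed here on dummy spectroscopic redshifts:
    z_spec = np.random.uniform(0.02, Z_MAX, size=4)
    dz = (z_spec - z_phot) / (1 + z_spec)
    sigma = dz.std()
    outlier_fraction = (np.abs(dz) > 0.05).mean()  # 0.05 threshold is an assumption

Treating redshift estimation as classification over narrow bins is what yields a full PDF rather than a single point estimate; a point value such as z_phot can then be derived from the PDF, here as its mean.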

Primary author

Johanna Pasquet (CPPM)

Presentation materials

No documents.