The next biannual meeting of the Rubin-LSST France community will be held 13-15 December 2023, at CC-IN2P3 in Lyon.
The location of CC-IN2P3 can be found on this map, and details on how to reach it by several means of transportation are available online.
For planning your journey, you may also find useful the interactive map provided by the Lyon public transportation company (tcl.fr). The nearest tram stop is Université Lyon 1 (tramway lines T1 and T4). The nearest subway stop is Charpennes (lines A and B), where you can connect to lines T1 and T4 or walk for about 12 minutes to the venue.
The agendas of the most recent editions of this meeting can be found online.
In this talk, I will review the status, recent achievements and scientific results from the Fink project, and discuss the future roadmap.
Reproducibility is a very important topic in today's research world.
With the upcoming wealth of data and the amount of code needed to process it, using an adequate set of tools is key to helping you and your colleagues ensure that the produced results are reproducible, or at least to keeping track of how they were achieved. This is what MLOps intends to do.
In this presentation, I will define what MLOps is and motivate the use of these tools in a daily context, both for machine learning and for general research purposes.
In the interest of improving photo-z estimation with template-fitting methods, data from observations with FORS2 has been used to generate templates for use in a LEPHARE-like code. An in-house estimator gives access to the full posterior distribution of the redshift and aims to help us understand how to generate and select the best templates, and how to extract the point estimate closest to the true redshift.
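As an illustration of the last step, a hypothetical helper (names and the uniform redshift grid are assumptions, not taken from the talk) extracting common point estimates from a redshift posterior could look like:

```python
import numpy as np

def photoz_point_estimates(z_grid, pdf):
    """Point estimates from a photo-z posterior p(z) sampled on a
    uniform redshift grid: mode, mean, and median.
    Illustrative helper only, not the in-house estimator."""
    z_grid = np.asarray(z_grid, float)
    pdf = np.asarray(pdf, float)
    dz = z_grid[1] - z_grid[0]           # uniform grid assumed
    pdf = pdf / (pdf.sum() * dz)         # normalize the posterior
    mode = z_grid[np.argmax(pdf)]        # maximum a posteriori
    mean = (z_grid * pdf).sum() * dz     # posterior mean
    cdf = np.cumsum(pdf) * dz            # cumulative distribution
    median = np.interp(0.5, cdf, z_grid)
    return mode, mean, median
```

For a well-behaved unimodal posterior the three estimates agree; they diverge for multimodal posteriors, which is precisely where the choice of point estimate matters.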
Upcoming deep optical surveys such as the Vera C. Rubin Observatory Legacy Survey of Space and Time will scan the sky to unprecedented depths and detect billions of galaxies. This number of detections will, however, cause the apparent superposition of galaxies on the images, called blending, and generate a new systematic error due to the confusion of sources. As a consequence, measurements of individual galaxy properties such as redshifts or shapes will be impacted, and some galaxies will not be detected at all. Galaxy shapes, however, are key quantities, used to estimate the masses of large-scale structures, such as galaxy clusters, through weak gravitational lensing.
This talk will present a new catalog-matching algorithm, called friendly, for the detection and characterization of blends in simulated LSST data from the DESC Data Challenge 2. The purpose of this matching algorithm is to combine several matching procedures and to use well-defined blended systems in order to study their impact on weak gravitational lensing profiles and, subsequently, on galaxy cluster mass estimates and on cosmological parameters.
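The friendly code itself is not described in the abstract; as a purely illustrative sketch of the basic ingredient that catalog-matching procedures build on, a brute-force nearest-neighbour sky match (function name and interface are hypothetical) might look like:

```python
import numpy as np

def match_catalogs(ra1, dec1, ra2, dec2, max_sep_arcsec=1.0):
    """Toy nearest-neighbour sky match (brute force, for illustration).

    Returns, for each source in catalogue 1, the index of its nearest
    neighbour in catalogue 2, or -1 if none lies within max_sep_arcsec.
    Coordinates are in degrees.
    """
    ra1, dec1 = np.radians(ra1), np.radians(dec1)
    ra2, dec2 = np.radians(ra2), np.radians(dec2)
    # Angular separation via the haversine formula (numerically stable).
    dra = ra1[:, None] - ra2[None, :]
    ddec = dec1[:, None] - dec2[None, :]
    a = (np.sin(ddec / 2.0) ** 2
         + np.cos(dec1)[:, None] * np.cos(dec2)[None, :]
         * np.sin(dra / 2.0) ** 2)
    sep = 2.0 * np.arcsin(np.sqrt(a))                 # radians
    idx = np.argmin(sep, axis=1)                      # nearest neighbour
    best = sep[np.arange(len(ra1)), idx]
    idx[best > np.radians(max_sep_arcsec / 3600.0)] = -1
    return idx
```

Real blend-oriented matchers go well beyond this one-to-one scheme, e.g. by keeping all neighbours within a radius so that many-to-one associations flag blended systems.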
Understanding the impact of the astrophysical environment on type Ia supernova (SN Ia) properties is crucial to minimize systematic uncertainties in cosmological analyses based on this probe. We investigated the dependence of the SN Ia SALT2.4 light-curve stretch on the distance to the nearest galaxy cluster, to study a potential effect of the intracluster medium (ICM) environment on SN Ia intrinsic properties. We used the largest SN Ia sample to date, the ZTF DR2 sample, and cross-matched it with existing X-ray, Sunyaev-Zel'dovich, and optical cluster catalogs in order to study the dependence between the stretch and the distance from each SN Ia to its nearest detected cluster. In this presentation, I will show how clusters can help in understanding SN Ia astrophysical systematics, and how SNe Ia offer a new avenue for studying the evolution of the star formation rate in clusters. Our work supports previous evidence that the age of the stellar population is the underlying driver of the bimodal shape of the SN Ia stretch distribution. It also indicates that high-redshift SN Ia searches targeted towards clusters to maximize the detection probability should be considered with caution, as the stretch distribution of the detected sample would be strongly biased towards the old sub-population of SNe Ia. Furthermore, we show that the effect of the ICM environment on SN Ia properties appears to be significant up to the splashback radius of clusters. This is compatible with previous works, based on observations and simulations, reporting a galaxy age gradient with respect to cluster-centric distance in massive halos. The next generation of large-area surveys will provide an order-of-magnitude increase in the size of SN Ia and cluster catalogs. This will enable more detailed analyses of the impact of halo mass on the intrinsic properties of SNe Ia, and of the fraction of quenched galaxies in the outskirts of clusters, where direct measurements are challenging.
Analyzing the distribution of peculiar velocities for a sample of objects enables us to measure the growth rate of cosmic structure ($f\sigma_8$), which is directly linked to the theory of gravity assumed in the cosmological model. In this work we measure the peculiar velocities of SNe Ia by comparing their estimated distances with their redshifts: we recover the SN Ia peculiar velocities from the residuals of the Hubble diagram and use them to measure $f\sigma_8$.
We used mocks from the Outer Rim N-body simulation to obtain realistic velocities for the simulated SNe Ia. Using the LSST observing strategy and the SNsim survey simulator, we simulated realistic light curves. We present the first preliminary results on $f\sigma_8$ from LSST SNe Ia, under simple assumptions on the selection function.
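At leading order in redshift, a Hubble-diagram residual $\Delta\mu$ maps directly to a line-of-sight peculiar velocity. A minimal sketch of this standard low-z conversion (function name, sign convention, and the linear-order approximation are assumptions for illustration, not the analysis code of the talk):

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def peculiar_velocity(z_obs, mu_obs, mu_cosmo):
    """Low-redshift estimator of the line-of-sight peculiar velocity
    from a Hubble-diagram residual (valid at linear order, z << 1):

        Delta_mu = mu_obs - mu_cosmo
        v_p ~ -(ln 10 / 5) * c * z * Delta_mu   [km/s]

    A supernova brighter than the Hubble-diagram prediction
    (Delta_mu < 0) yields a positive (receding) velocity here.
    """
    dmu = np.asarray(mu_obs) - np.asarray(mu_cosmo)
    return -(np.log(10.0) / 5.0) * C_KMS * np.asarray(z_obs) * dmu
```

The prefactor shows why this measurement is hard: at $z = 0.05$, a typical SN Ia scatter of 0.1 mag corresponds to roughly 700 km/s of velocity noise, comparable to the signal itself.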
Cosmological parameters can be inferred from the measurement of cluster abundance in the unbinned regime. The standard unbinned likelihood is based on Poisson statistics and does not include the Super-Sample Covariance (SSC), usually assumed to be negligible, which arises from the fluctuation and clustering of the underlying matter density field. In this talk, I present a formalism to account for SSC in the unbinned regime and compare it to the standard unbinned approach.
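For context, the standard unbinned Poisson log-likelihood that serves as the baseline can be written in its textbook form (notation here is generic, not necessarily that of the talk):

```latex
\ln \mathcal{L}_{\rm Poisson}
  = \sum_{i=1}^{N} \ln \lambda(x_i) - \int \lambda(x)\,\mathrm{d}x ,
```

where $\lambda(x)$ is the expected cluster density in the space of observables $x$ (e.g. mass proxy and redshift), the sum runs over the $N$ detected clusters, and the integral gives the total expected number of clusters. Including SSC amounts to conditioning this Poisson likelihood on the background matter density fluctuation $\delta_b$ and marginalizing over its (approximately Gaussian) distribution, which correlates the counts across the survey volume.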
The Cluster Finder Comparison Project aims at building a pipeline for consolidating, validating, and comparing various cluster-finder algorithms on the cosmoDC2 and DC2 simulated catalogs. This project not only gives a better understanding of how the different cluster-finder algorithms perform, but also provides consistent estimations of metrics such as purity and completeness, which are to be used in cosmological predictions. In this talk we will discuss the current state of the project.
The abundance of galaxy clusters is a powerful probe for cosmology, especially in large optical surveys where hundreds of thousands of clusters can be detected. One of the main techniques allowing us to evaluate the large number of member galaxies at low cost is the photometric estimation of redshifts, i.e. photo-zs. Here we intend to evaluate the propagation of photo-z uncertainties to the determination of the cluster redshift and mass proxy, and consequently to the cosmological constraints from cluster abundance. This preliminary work evaluates the impact using dark matter halos in the DC2 simulation and FlexZBoost photometric redshifts. Ultimately, the goal will be to evaluate this effect on optically detected galaxy clusters and to optimize the application of different photo-zs for cluster cosmology.
Last September, new optical elements were installed and tested for spectroscopy on AuxTel. I will discuss the improvements expected as a result of these modifications, and show the first results.
In this contribution we will present the data products that the Rubin project will deliver as part of the annual LSST data releases, as well as the services and tools to access them at the archive center and at other data access centers around the world. We will provide the reference documentation prepared by Rubin for the needs of the science collaborations.
We will also mention the data products we can expect to have available at CC-IN2P3 given the known budgetary constraints.
The end goal is to start a dialogue with the scientist members of LSST France to identify which of those products are necessary at CC-IN2P3 for conducting the research relevant to them, as well as the tools and services required to scientifically exploit those products.
Our aim is to extract SED templates from spectral and photometric data observed at high redshift, in order to update the database of SED templates used for photo-z SED template fitting.
At present, we have a set of 550 spectra observed with the FORS2 instrument on the UT1 telescope at the VLT by the astronomer Edmond Giraud (LUMP, Eric Nuss and J. Cohen-Tanugi), at an average redshift of 0.3.
These spectra have been supplemented by photometric observations from the Galex and KIDS-VISTA surveys.
We will show how we can fit DSPS model parameters and dust parameters to these data to extract dereddened spectra, which can be compared with those obtained with StarLight by Eric Nuss.
This presentation is related to Joseph Chevalier's presentation on obtaining the best SED templates for photo-z codes such as LePhare++.
We are working on a technique for estimating per-object color corrections, based on spectroscopic observations with the Auxiliary Telescope (AuxTel), to be applied to LSST photometry. The method will rely on comparing the colors obtained from standard spectra observed through a standard atmosphere with those obtained through simulated atmospheres of varying transparency, as functions of their chemical components.
With the upcoming Legacy Survey of Space and Time, the number of observed type Ia supernovae is expected to increase substantially, reducing statistical uncertainties and thus making flux calibration the predominant source of uncertainty in constraining the dark-energy equation-of-state parameter w.
The atmosphere is one of the last remaining sources of systematic uncertainty limiting the accuracy of photometric observations. In the context of the StarDICE experiment, which aims to refine the spectrophotometric reference CALSPEC star catalog down to the millimagnitude level, atmospheric effects need to be corrected with high precision. Gray extinction is one such atmospheric effect, causing a wavelength-independent flux attenuation that is challenging to quantify. One proposed solution is the use of an uncooled thermal infrared camera imaging the long-wave infrared range (10-12 µm), corresponding to the atmospheric transparency window. In this presentation, I will cover the basic concept of the instrument, the ongoing calibration data analysis, and some preliminary results from recent data obtained in parallel with the StarDICE ugrizy photometric observations.
The number of type Ia supernova observations will grow significantly within the next decade, especially thanks to the Legacy Survey of Space and Time undertaken by the Vera C. Rubin Observatory in Chile. With this improvement, statistical uncertainties will decrease and flux calibration will become the main uncertainty in the characterization of dark energy. The StarDICE experiment proposes to overcome this limitation by measuring the spectra of stars from the CALSPEC catalog at the millimagnitude level, making them the new calibration reference for the LSST.
The StarDICE experiment is currently operating at the Observatoire de Haute-Provence and has been taking data since the beginning of 2023. To reach sub-percent precision, the instrument throughput will be calibrated and monitored with an LED-based artificial star source, itself calibrated against NIST photodiodes. In this talk, I will present the ongoing analysis of the slitless spectrophotometric data, and the photometric analysis of the data obtained with the ugrizy filters.
Shear estimation began in 1995 with the KSB proposal, which essentially consists of using a combination of the second moments of the observed image of the galaxy and the PSF. Numerous other methods have been proposed over the years, and in most cases, the measurements derived from these methods have to be corrected using simulations, and therefore depend on the assumptions of these simulations, particularly concerning galaxy and PSF profiles. Whether these methods measure shapes by maximum likelihood, or by a more or less complex combination of second moments, the corrections to be applied depend on the details of galaxy and PSF shapes.
Although we still use simulations, we aim here to develop an approach that is independent of the galaxy and PSF profiles, since the estimator's dependence on them is measured on the images themselves.
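As an illustration of the moment-based approach mentioned above (not the authors' estimator), the unweighted second moments of an image and the derived ellipticity components can be computed as follows; real pipelines additionally apply a weight function and PSF corrections, which is precisely where the method dependence discussed in the abstract enters:

```python
import numpy as np

def moments_ellipticity(img):
    """Unweighted second moments of an image and the derived
    ellipticity components (e1, e2), as used as the starting
    point of moment-based shear estimation (illustrative only)."""
    y, x = np.indices(img.shape, dtype=float)
    flux = img.sum()
    # Centroid (first moments).
    xc, yc = (img * x).sum() / flux, (img * y).sum() / flux
    # Second moments about the centroid.
    qxx = (img * (x - xc) ** 2).sum() / flux
    qyy = (img * (y - yc) ** 2).sum() / flux
    qxy = (img * (x - xc) * (y - yc)).sum() / flux
    denom = qxx + qyy
    return (qxx - qyy) / denom, 2.0 * qxy / denom
```

A circular source gives (e1, e2) close to (0, 0); a source elongated along the x axis gives e1 > 0.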
I will present the weak-lensing analysis of the six very massive and complex Frontier Fields clusters, using the BUFFALO HST observations, and I will discuss how the different measurement and modeling assumptions impact their mass estimation.
We look for signatures of the Hu-Sawicki $f(R)$ modified gravity theory, proposed to explain the observed accelerated expansion of the universe, in observations of the galaxy distribution, the cosmic microwave background (CMB), and gravitational lensing of the CMB. We study constraints obtained by using observations of only the CMB primary anisotropies, before adding the galaxy power spectrum and its cross-correlation with CMB lensing. We show that cross-correlation of the galaxy distribution with lensing measurements is crucial to breaking parameter degeneracies, placing tighter constraints on the model. In particular, we set a strong upper limit of $\log_{10}|f_{R0}| < -4.61$ at the 95% confidence level. This means that while the model may explain the accelerated expansion, its impact on large-scale structure closely resembles that of General Relativity. Studies of this kind with future data sets will probe smaller potential deviations from General Relativity.
Gravitational-wave (GW) multi-messenger (MM) observations of binary neutron star (BNS) mergers are extremely challenging. With current GW interferometers, BNS detection rates are low. This will significantly improve with next-generation GW interferometers, such as the Einstein Telescope (ET), which will detect thousands of BNS mergers beyond the Local Universe, revolutionizing MM astrophysics.
Electromagnetic (EM) counterparts of ET BNS detections will likely be faint and will have to be found within large error regions, among a huge number of contaminants. Photometric observations with Rubin will be fundamental to detect counterpart candidates. To best exploit such observations, it is necessary to identify and characterize the EM counterparts. For this purpose, spectroscopic observations are mandatory in most cases, and they are currently the bottleneck of GW-MM science.
In this context, I am exploring the possibility of using next-generation Integral Field Spectroscopy (IFS) and Multi-Object Spectroscopy (MOS) facilities for this purpose, considering the synergy with Rubin photometric observations.
I will present the results of the work I am carrying out within the Wide-field Spectroscopic Telescope (WST) science team and the MM division of the ET Observing Science Board, to prepare WST observations for identifying the EM counterparts of ET BNS detections.
The detection of new astronomical sources is one of the most anticipated outcomes of the next generation of large-scale sky surveys. Experiments such as the Vera C. Rubin Observatory Legacy Survey of Space and Time are expected to continuously monitor large areas of the sky, which will undoubtedly lead to the detection of unforeseen astrophysical phenomena. At the same time, the volume of data gathered every night will increase to unprecedented levels, rendering serendipitous discoveries unlikely. In the era of big data, most detected sources will never be visually inspected, and the use of automated algorithms is unavoidable. I will present the anomaly detection module developed for the Fink community broker, one of the official LSST brokers, to search for unusual astrophysical events in the Zwicky Transient Facility alert stream and, in the future, in the LSST stream. I will talk about recent updates to the module and present the most recent discoveries. Further plans to incorporate active anomaly detection algorithms will also be discussed.
Gamma-Ray Bursts (GRBs) are among the most energetic phenomena in the Universe. Viewed off-axis, a GRB's prompt gamma-ray emission is negligible, leaving only its afterglow, hence called a "GRB orphan afterglow" (OA). To identify OAs in Rubin LSST data, we plan to use the characteristic features of their light curves, which depend on several parameters defined by the chosen model, here the forward-shock model associated with the electron synchrotron model. In this work, we generated a population of short GRBs and simulated their afterglow light curves with the afterglowpy package. We then used the rubin_sim package to simulate "pseudo-observations" of these OAs with Rubin LSST. Features describing the shape of the light curves are then computed for the pseudo-observed OA light curves and for a sample of ELAsTiCC data, allowing us to establish cuts removing as many non-OA events as possible. This work will ultimately allow us to implement a filter in the Fink alert broker.
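The exact feature set is not specified in the abstract; a toy example of light-curve shape features (a hypothetical single-band feature extractor, not the one used in this work) could be:

```python
import numpy as np

def lightcurve_features(t, mag):
    """Toy shape features for a single-band light curve (magnitudes):
    amplitude, time of peak, and mean rise / decline rates in mag/day.
    Illustrative only; not the feature set used in the analysis."""
    t, mag = np.asarray(t, float), np.asarray(mag, float)
    order = np.argsort(t)
    t, mag = t[order], mag[order]
    i_peak = int(np.argmin(mag))          # brightest point (smallest mag)
    amplitude = mag.max() - mag.min()
    # Mean slope before and after peak (negative slope = brightening).
    rise = (mag[i_peak] - mag[0]) / (t[i_peak] - t[0]) if i_peak > 0 else 0.0
    decline = ((mag[-1] - mag[i_peak]) / (t[-1] - t[i_peak])
               if i_peak < len(t) - 1 else 0.0)
    return {"amplitude": amplitude, "t_peak": t[i_peak],
            "rise_rate": rise, "decline_rate": decline}
```

Cuts on such features (e.g. on the decline rate, since off-axis afterglows fade with a characteristic power-law decay) can then separate OA candidates from the bulk of contaminating transients.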
The upcoming Vera C. Rubin Observatory, with its deep and wide survey of the sky, will revolutionize time-domain astronomy. Compared to current sky surveys such as the Zwicky Transient Facility (ZTF), the Rubin Observatory will provide a volume of transient and variable objects at least ten times larger, reaching ten million transient alerts per night. To overcome this challenge, the Fink alert broker has been designed to process the Rubin alert stream in real time.
In this talk, we will cover the status of Rubin's brokers, and in particular the latest results from Fink on ZTF alert data. We will explicitly focus on the real-time multi-messenger astronomy effort carried out in the collaboration. We will describe the preliminary results in searching for real-time coincidences between ZTF alerts and events distributed via the General Coordinates Network (GCN) circulars, and the prospects for Rubin.
Finally, we will describe a new network of telescopes, GVOM, dedicated to the follow-up of alert data from the Fink broker. The GVOM network is a partnership between Fink and the SVOM mission focusing on fast transients.