Speaker
Description
In my talk I will concentrate on machine learning models that learn probability distributions, and on applying them to important unsolved problems in LISA data analysis.
The most common approach to parameter estimation is to define a likelihood function and produce posterior samples with some form of sampling technique. The disadvantage of sampling methods is that they are slow. We propose a Bayesian parameter estimation method based on normalising flows, a technique that provides an extremely fast mapping from a base distribution to the posterior conditioned on the data. The mapping is learned in advance on a training dataset, and the trained map is then applied to the real data. We apply this method to data from the first LISA Data Challenge (LDC) in order to evaluate how well the estimated posteriors agree with those from standard approaches. The main purpose of fast parameter estimation is to enable multi-messenger observations by alerting other observatories to perform follow-ups.
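As a rough illustration of the idea, the sketch below shows a conditional normalising flow built from affine coupling layers in PyTorch: the flow is trained on simulated (parameters, data) pairs and then maps Gaussian base samples to posterior samples conditioned on an observed data segment. The layer sizes, parameter dimension, and training recipe are illustrative assumptions, not the actual pipeline described in the talk.

```python
import math
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """One coupling layer: the scale and shift applied to half of the
    parameters are predicted from the other half and the data context."""
    def __init__(self, dim, context_dim, hidden=64, flip=False):
        super().__init__()
        assert dim % 2 == 0, "sketch assumes an even parameter dimension"
        self.half, self.flip = dim // 2, flip
        self.net = nn.Sequential(
            nn.Linear(self.half + context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),            # half scales + half shifts
        )

    def _scale_shift(self, untouched, context):
        s, t = self.net(torch.cat([untouched, context], -1)).chunk(2, -1)
        return torch.tanh(s), t                # bounded scale keeps training stable

    def forward(self, x, context):             # parameters -> base space, with log|det J|
        x1, x2 = x.chunk(2, -1)
        if self.flip:
            x1, x2 = x2, x1
        s, t = self._scale_shift(x1, context)
        y2 = (x2 - t) * torch.exp(-s)
        y = torch.cat((y2, x1) if self.flip else (x1, y2), -1)
        return y, -s.sum(-1)

    def inverse(self, y, context):             # base space -> parameters (sampling)
        y1, y2 = y.chunk(2, -1)
        if self.flip:
            y1, y2 = y2, y1
        s, t = self._scale_shift(y1, context)
        x2 = y2 * torch.exp(s) + t
        return torch.cat((x2, y1) if self.flip else (y1, x2), -1)

class ConditionalFlow(nn.Module):
    """Maps a Gaussian base distribution to the posterior conditioned on data."""
    def __init__(self, dim, context_dim, n_layers=6):
        super().__init__()
        self.dim = dim
        self.layers = nn.ModuleList(
            ConditionalAffineCoupling(dim, context_dim, flip=i % 2 == 1)
            for i in range(n_layers)
        )

    def log_prob(self, theta, context):
        z, logdet = theta, 0.0
        for layer in self.layers:
            z, ld = layer(z, context)
            logdet = logdet + ld
        log_base = -0.5 * (z ** 2).sum(-1) - 0.5 * self.dim * math.log(2 * math.pi)
        return log_base + logdet

    def sample(self, context, n):
        z = torch.randn(n, self.dim)
        ctx = context.expand(n, -1)
        for layer in reversed(self.layers):
            z = layer.inverse(z, ctx)
        return z

# Training amounts to minimising -flow.log_prob(theta, d).mean() over simulated
# (theta, d) pairs; at inference time flow.sample(d_obs, 10_000) returns posterior
# samples in a single forward pass, without any on-the-fly sampling.
```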
Another challenge for the extraction of LISA sources will be the presence of multiple signals simultaneously in the data stream. To address this we propose to use alternative data representations that project the data onto a new basis. One method that allows this is the autoencoder (AE). In the simplest case, an AE can be viewed as a non-linear mapping that compresses the original signal to a lower-dimensional representation and then recovers it back to the original number of dimensions. We project the data in such a way that the representation is sensitive to one or the other type of signal.
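A minimal sketch of such an autoencoder is shown below, assuming fixed-length whitened data segments; the layer widths, latent dimension, and the idea of training on segments dominated by a single signal type are illustrative assumptions rather than the actual architecture used in this work.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Non-linear map to a low-dimensional representation and back."""
    def __init__(self, n_samples=1024, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_samples, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),        # lower-dimensional representation
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_samples),         # back to the original dimensions
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training the AE on segments dominated by one signal type makes the projection
# selective: that type is reconstructed well, while other content is suppressed.
model = Autoencoder()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 1024)                      # stand-in for whitened data segments
optimiser.zero_grad()
loss = nn.functional.mse_loss(model(x), x)     # reconstruction error
loss.backward()
optimiser.step()
```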
We will finish by discussing how these approaches can be combined in an effort to tackle more realistic LISA data analysis problems.