Thanks to everyone who has registered. Registration is now closed and the schedule is online.
Location
Observatoire astronomique de Strasbourg, 11 rue de l'Université, Strasbourg, FRANCE. See the following link for more details about getting here: Access Information
The meeting will be held in the main building with the big dome (plenary sessions in the amphitheatre, hack-a-thon sessions in the amphitheatre and in the meeting room on the ground floor).
Dinner and Lunches
A working dinner is organised for Wednesday 5 February at 19h30 at Le Gruber (https://www.legruber.com), 11 rue du Maroquin.
Buffet lunches will be available on 4, 5 and 6 February.
Short intro from everybody
We will give an overview of the O3 LIGO-Virgo low-latency multi-messenger program, the main strategies, and the ongoing implementations for working with gravitational-wave sky localizations in the context of the ESCAPE project.
Using SPLAT, TOPCAT and SAMP in Docker containers. How to do it, advantages and disadvantages, problems to solve, and how to proceed?
A short study on how to offer one's service(s) through EOSC.
The data explosion in astronomy requires the development of new techniques on both the infrastructure and the analysis side. In particular, the increase in data complexity demands a parallel effort to deliver efficient and standardized solutions for accessing and managing data, tools and software. The aim of the ESCAPE project is to build a broad European collaboration to face the new challenges posed by data-driven research: complex data workflows, infrastructural issues, and data and software interoperability. I will present the prototype that resulted from the first year of work within the project in the form of a live demo. The prototype is a tool for dimensionality reduction and visualization of spectra with an autoencoder and other analogous models, meant to allow users to inspect and interact with astronomical data, and in particular spectra, in a novel way.
STMOCs (Space-Time MultiOrder Coverages) offer a unified solution for handling spatial and temporal coverages. This is one of the keys to efficient and fast interoperability tools. We will briefly present the principles of STMOCs, the existing implementations, and some scientific use cases.
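The space-time coverage idea can be illustrated with a toy model in which a coverage is a set of (time cell, space cell) pairs, so that combining two coverages reduces to set operations. This is only a sketch of the principle under simplified assumptions; real STMOCs use hierarchical HEALPix spatial cells and multi-order time ranges, as defined by the IVOA MOC standard and implemented for instance in mocpy.

```python
# Toy illustration of a space-time coverage: a set of
# (time_cell, space_cell) index pairs. Real STMOCs use
# hierarchical HEALPix spatial cells and multi-order time
# ranges; this sketch only shows the principle that the
# intersection of two coverages reduces to set intersection.

def coverage(pairs):
    """Build a space-time coverage from (time_cell, space_cell) pairs."""
    return frozenset(pairs)

# Observation A: space cells 10-12 observed during time cells 0-1.
obs_a = coverage((t, s) for t in (0, 1) for s in (10, 11, 12))
# Observation B: space cells 12-14 observed during time cells 1-2.
obs_b = coverage((t, s) for t in (1, 2) for s in (12, 13, 14))

# Where and when do the two observations overlap?
overlap = obs_a & obs_b
print(sorted(overlap))  # [(1, 12)]
```

The efficiency of the real structure comes from the multi-order encoding: large well-covered regions are stored as a few low-order cells instead of many pairs, while set semantics are preserved.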
The provenance data model proposed as a recommendation to the IVOA is now the basis of several implementations. I will focus on the capture of relevant provenance information within data processing packages such as gammapy and ctapipe, and in the job execution tool OPUS.
In this presentation, an overview is given of the efforts of the KM3NeT collaboration to include the requirements for FAIR data from the early stages of the construction of the high-energy neutrino experiment. Producing scientific results in the fields of both particle physics and astrophysics, the KM3NeT software and data management build on HEP technologies while moving towards integration into the Virtual Observatory. As test data is taken by the first deployed detection units of the large-scale experiment in the Mediterranean Sea, the specific challenges of uniting the scientific requirements for the publication of data used in different areas of physics become clearer and will be presented.
In this talk, I will briefly review the goals of the effort to update Vocabularies in the VO and the current ideas on how to reach them. Based on this, I will discuss how the first few Vocabulary Enhancement Proposals (which are the means of vocabulary management foreseen in the Working Draft) went and what conclusions I draw from that.
In this talk I will go over our recent efforts in applying deep learning techniques to make sense of big astronomical archives. In particular, we will see a demo of the first prototype of RETR-SPECT, a retrieval engine for spectra. I will follow with an illustrative example of how we can learn from (big) data in astronomy.
----------
The full meeting was a good occasion to gather requests from the community concerning the provenance of their data. In the context of the needed FAIRisation of science data, interest in provenance is increasingly manifest. This led to new concrete use cases:
* How to attach provenance information to a VOEvent about a solar (coronal) event? The idea is to highlight the software used to issue the event, how it was used and what data lies behind it, as well as the contact persons.
* How to attach provenance information to a VizieR catalogue? The idea is to provide a link to the origin of the catalogue (article, data...).
* Use a CWL workflow description to create provenance information.
We observed that the information was generally already at hand, but a simple way to express it as provenance is missing (a dedicated XML or JSON structure, an attached PROV file, an access point to a provenance service?).
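As an illustration of the "dedicated JSON structure" option, the fragment below sketches how provenance could be attached to a catalogue following the W3C PROV-JSON layout. Only the top-level keys (entity, activity, agent, wasGeneratedBy, used, wasAttributedTo) come from PROV-JSON; all identifiers, labels and the ingestion activity are hypothetical placeholders for the example.

```python
import json

# Minimal PROV-JSON-style structure attaching provenance to a
# catalogue record. The top-level keys follow W3C PROV-JSON;
# every identifier and label below is a hypothetical example.
prov = {
    "entity": {
        "cat:example-catalogue": {"prov:label": "Example catalogue"},
        "art:source-article": {"prov:label": "Source article"},
    },
    "activity": {
        "act:catalogue-ingestion": {"prov:label": "Catalogue ingestion"},
    },
    "agent": {
        "per:contact-person": {"prov:label": "Contact person"},
    },
    "wasGeneratedBy": {
        "_:g1": {"prov:entity": "cat:example-catalogue",
                 "prov:activity": "act:catalogue-ingestion"},
    },
    "used": {
        "_:u1": {"prov:activity": "act:catalogue-ingestion",
                 "prov:entity": "art:source-article"},
    },
    "wasAttributedTo": {
        "_:a1": {"prov:entity": "cat:example-catalogue",
                 "prov:agent": "per:contact-person"},
    },
}

# Such a structure serialises directly to JSON and could be
# shipped alongside the catalogue or referenced from a VOEvent.
serialised = json.dumps(prov, indent=2)
```

The appeal of this option is that the information listed in the use cases above (software, input data, contact persons) maps one-to-one onto PROV entities, activities and agents.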
The discussion was also the occasion to prepare the next IVOA steps:
* A ProvTAP working draft is to be prepared for the end of March. We discussed the main issues that required a decision.
* ProvSAP is increasingly seen as complementary to ProvTAP, and more oriented towards graph exploration. As such, a working draft should be completed. A very simple interface seems preferable: ProvSAP queries are based on an identifier and only a few parameters.
* The implementation note accompanying the Provenance DM should be updated. A general introduction will present common guidelines for implementing the model, and each implementation will then be described.
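The "identifier plus a few parameters" interface envisaged for ProvSAP can be sketched with the standard library alone. The endpoint URL and the parameter names (ID, DEPTH, RESPONSEFORMAT) are assumptions made for illustration, not the interface fixed by the working draft.

```python
from urllib.parse import urlencode

# Sketch of a ProvSAP-style query URL: one entity identifier plus
# a few optional parameters. The endpoint and the parameter names
# (ID, DEPTH, RESPONSEFORMAT) are assumptions for illustration;
# the actual interface will be defined by the working draft.
def provsap_url(base, entity_id, depth=1, fmt="PROV-JSON"):
    params = {"ID": entity_id, "DEPTH": depth, "RESPONSEFORMAT": fmt}
    return base + "?" + urlencode(params)

url = provsap_url("https://example.org/provsap", "ivo://example/obs/123")
print(url)
```

Keeping the query down to an identifier and a depth makes the service easy to call from a browser or a script, which fits the graph-exploration use case mentioned above.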
----------