Distributed Computing in Astrophysics
Salle de la Convivialité
Centre François Arago (FACe)
While the performance of computing hardware continues to evolve rapidly, this evolution is no longer driven primarily by the development of a single processor, but rather by increasing the number of coordinated processing units. Distributed hardware requires specially designed software that supports dedicated parallel algorithms, is reliable and resilient, and can handle simultaneous access to large and diverse data sets. Distributed systems come with different degrees of integration: cores within a single CPU or GPU, nodes in a computing cluster, or worldwide heterogeneous systems such as grids and clouds. Each of these systems places different requirements on the software.
In this 2-day workshop we will learn about the status and future plans of distributed computing in the context of astrophysical projects. In addition, we will have a hands-on session to experience how distributed computing with the Hadoop system works, and how astrophysical analysis on a virtual system is already successfully in place today in the HERA analysis service.
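To give a flavour of the kind of exercise the Hadoop hands-on session covers, the sketch below shows a minimal word count in the Hadoop Streaming model, where the mapper and reducer are plain scripts that read from standard input and write tab-separated key/value records to standard output. This is only an illustration under those assumptions, not material from the workshop itself; the file names mapper.py and reducer.py are hypothetical.

    #!/usr/bin/env python
    # mapper.py (hypothetical name): emit one "word<TAB>1" record per input word.
    import sys

    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

    #!/usr/bin/env python
    # reducer.py (hypothetical name): sum the counts for each word.
    # Hadoop sorts records by key, so all counts for a word arrive consecutively.
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t", 1)
        if word != current_word:
            if current_word is not None:
                print(f"{current_word}\t{current_count}")
            current_word, current_count = word, 0
        current_count += int(count)
    if current_word is not None:
        print(f"{current_word}\t{current_count}")

Scripts of this kind are typically submitted to a cluster via the Hadoop Streaming jar (hadoop jar hadoop-streaming.jar -input ... -output ... -mapper mapper.py -reducer reducer.py), but the same logic can be tested locally with a plain Unix pipeline: cat input.txt | python mapper.py | sort | python reducer.py.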
The workshop is organised by Volodymyr Savchenko, Karsten Kretschmer, Cécile Cavet and Volker Beckmann of the François Arago Centre at the APC laboratory.
We acknowledge the financial support of DIM-ACAV and of the LabEx UnivEarthS for this workshop.