The four experiments at the Large Hadron Collider (LHC) at the CERN laboratory near Geneva, Switzerland, are generating, distributing and processing massive amounts of data around the clock in a highly distributed computing environment: the Worldwide LHC Computing Grid (WLCG). The storage capacity required worldwide has now exceeded 1 exabyte. Processing and analyzing the data, and generating large-scale simulations, requires close to 400,000 core-years. Global data transfers are reaching 35 GB/s.
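To put these figures in perspective, here is a short back-of-the-envelope sketch using only the numbers quoted above; the variable names are illustrative, and the assumption that the core-years are spent over a single calendar year is ours, not the abstract's:

```python
# Rough conversions of the WLCG scale figures quoted above.
SECONDS_PER_DAY = 86_400

transfer_rate_gb_s = 35        # global transfer rate, GB/s
storage_eb = 1                 # worldwide storage, exabytes
compute_core_years = 400_000   # processing, analysis and simulation needs

# 35 GB/s sustained corresponds to roughly 3 PB moved every day.
pb_per_day = transfer_rate_gb_s * SECONDS_PER_DAY / 1_000_000
print(f"~{pb_per_day:.1f} PB transferred per day")

# If spent over one year, 400,000 core-years is equivalent to
# ~400,000 CPU cores running continuously (an assumption for scale).
print(f"~{compute_core_years:,} cores running around the clock")

# 1 EB = 1,000,000 TB, i.e. about 250,000 typical 4 TB disks.
disks_4tb = storage_eb * 1_000_000 / 4
print(f"~{disks_4tb:,.0f} four-terabyte disks")
```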

After a brief review of the LHC experiments and the scientific motivations driving the data storage and computing needs, we will describe the WLCG collaboration and its infrastructure and services. Canada is a key player within the WLCG and the ATLAS experiment; we will therefore focus on ATLAS and highlight its distributed computing environment for data distribution, software distribution, and the workload management system used for data processing, simulation and analysis tasks.

Speaker

Dr. Reda Tafirout

Research Scientist, TRIUMF