Thesis

Accelerating Trigger Algorithms through parallelization for the Upgrade of the ATLAS Experiment at the LHC

Details

  • Call:

    IDPASC Portugal - PHD Programme 2014

  • Academic Year:

    2014 /2015

  • Domain:

    Experimental Particle Physics

  • Supervisor:

    José Soares Augusto

  • Co-Supervisor:

    Patricia Conde Muino

  • Institution:

    Faculdade de Ciências - Universidade de Lisboa

  • Host Institution:

    Laboratório de Instrumentação e Física Experimental de Partículas

  • Abstract:

    The Large Hadron Collider (LHC), at CERN, is the most powerful proton-proton collider ever built. When it restarts operation in 2015, it will collide protons at a center-of-mass energy of 13 TeV, 40 million times per second, producing a data flow of the order of 1 PB/s. Of all these collisions, only a very small fraction is actually interesting for physics analysis. The Trigger and Data Acquisition system of ATLAS has the main role of selecting and storing about 400 interactions per second for further analysis, using a combination of hardware- and software-based filtering systems.

    The LHC will be further upgraded in 2018 to deliver an even higher rate of collisions, increasing the data volume by an order of magnitude or more. Finding the interesting physics embedded in such a huge amount of data requires complex, computationally intensive algorithms, and several algorithms applied to LHC data are already quite demanding today. To maintain the performance of the trigger system, effectively selecting the relevant physics processes while rejecting the backgrounds at the required rate, parallelization of the algorithms that mine the data coming from the LHC detectors will be mandatory.

    In recent years, several pervasive and affordable platforms capable of accelerating algorithms through parallelization have appeared. The most notable are the FPGA (Field-Programmable Gate Array), a purely hardware reconfigurable platform, and the GPU (Graphics Processing Unit), a massively parallel processing device, originally targeted at visualization, found at the heart of modern PC graphics boards. For raw computing tasks, such as the execution of trigger algorithms, GPUs are far better suited than FPGAs: modern GPUs perform calculations in double-precision (64-bit) format, whereas implementing similar arithmetic units in FPGAs is very resource- and time-consuming, which favors a GPU implementation.
    This proposal is focused on the acceleration of algorithms used at the trigger level of the ATLAS experiment at the LHC (CERN), through the parallelization of repetitive tasks and the use of hardware accelerators such as Graphics Processing Units (GPUs). Several groups at CERN and elsewhere are already studying GPU acceleration of algorithms such as the Z finder, the Kalman filter, and the integration of particle trajectories through detectors. This is an important area of work, especially in light of the increase in data flow, of around one order of magnitude, expected for the upgrade of the LHC. This project will contribute to the development of the ATLAS Upgrade trigger algorithms and, as such, will be integrated into the Software Upgrade program of the ATLAS experiment.
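    A minimal sketch of why the repetitive tasks mentioned above parallelize so well: a trigger-style selection cut applies the same independent test to every candidate, so on a GPU each thread could handle one entry. The sketch below expresses this data-parallel pattern with a NumPy vectorized operation; the transverse-momentum values and the 25 GeV threshold are illustrative assumptions, not taken from any ATLAS trigger menu.

    ```python
    import numpy as np

    # Hypothetical per-candidate transverse momenta in GeV; the exponential
    # shape and seed are illustrative only.
    rng = np.random.default_rng(42)
    pt = rng.exponential(scale=20.0, size=1_000_000)

    PT_THRESHOLD = 25.0  # GeV, an assumed cut for illustration

    # Each candidate is tested independently of all others, so the selection
    # is "embarrassingly parallel": a GPU kernel would map one thread per entry.
    passed = pt > PT_THRESHOLD
    accepted = pt[passed]

    print(f"accepted {accepted.size} of {pt.size} candidates")
    ```

    The same one-thread-per-element structure carries over directly to a CUDA kernel, which is what makes such filtering stages natural first targets for GPU offloading.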