Processing LHC data in the UK

D Colling et al. Philos Trans A Math Phys Eng Sci. 2012 Dec 10;371(1983):20120094. doi: 10.1098/rsta.2012.0094. Print 2013 Jan 28.

Abstract

The Large Hadron Collider (LHC) is one of the greatest scientific endeavours to date. The construction of the collider itself and the experiments that collect data from it represent a huge investment, both financially and in terms of human effort, in our hope to understand the way the Universe works at a deeper level. Yet the volumes of data produced are so large that they cannot be analysed at any single computing centre. Instead, the experiments have all adopted distributed computing models based on the LHC Computing Grid. Without the correct functioning of this Grid infrastructure, the experiments would not be able to understand the data that they have collected. Within the UK, the Grid infrastructure needed by the experiments is provided by the GridPP project. We report on the operations, performance and contributions made to the experiments by the GridPP project during 2010 and 2011, the first two significant years of the running of the LHC.


Figures

Figure 1.
The LCG adopted a modified version of the 2006 benchmark specification, called HEPSPEC06, to gauge what sites provide (or have pledged to provide) against what the experiments calculate they need. To show the spread in CPU resources across GridPP Tier-2 sites, this figure plots the HEPSPEC06 contribution of each site, ordered by size. For comparison, the Tier-1 at RAL provides of order 64 000 HEPSPEC06. (Online version in colour.)
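
To make the comparison concrete, the following is a minimal Python sketch of the kind of aggregation behind this figure. The site names and HEPSPEC06 values are invented for illustration; only the RAL Tier-1 scale (of order 64 000 HEPSPEC06) comes from the caption.

    # Sketch of the aggregation behind figure 1. Site names and values
    # are invented; only the RAL Tier-1 scale is taken from the caption.
    tier2_hepspec06 = {
        "Site A": 12000,
        "Site B": 8500,
        "Site C": 4200,
        "Site D": 1100,
    }
    TIER1_RAL_HEPSPEC06 = 64000

    # Order sites by contribution, largest first, as in the figure.
    for site, hs06 in sorted(tier2_hepspec06.items(),
                             key=lambda kv: kv[1], reverse=True):
        print(f"{site}: {hs06} HEPSPEC06")

    total = sum(tier2_hepspec06.values())
    print(f"Tier-2 total: {total} HEPSPEC06 "
          f"({total / TIER1_RAL_HEPSPEC06:.2f} x the RAL Tier-1)")
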
Figure 2.
The terabytes of disk deployed at each GridPP Tier-2 site as of the beginning of 2012. The sites are ordered by available CPU (see figure 1); comparison with figure 1 shows that the ratio of disk to CPU is not constant across sites. This reflects the fact that different sites support (and therefore pledge resources to) different combinations of the LHC experiments, whose computing models require different ratios of resources. LHCb, for example, requires very little disk at Tier-2 sites, while ATLAS requires close to 2 TB of disk per HEPSPEC06 of CPU. (Online version in colour.)
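
A minimal Python sketch of the ratio argument in this caption, assuming the per-experiment figures paraphrased above (ATLAS close to 2 TB of disk per HEPSPEC06 of CPU, LHCb very little disk at Tier-2); the per-site CPU pledges and the small LHCb ratio are invented for illustration.

    # Sketch of why disk-to-CPU ratios differ between Tier-2 sites. The
    # ATLAS ratio is paraphrased from the caption; the LHCb ratio and the
    # CPU pledges below are invented placeholders.
    DISK_TB_PER_HS06 = {"ATLAS": 2.0, "LHCb": 0.05}

    def disk_needed(cpu_pledges_hs06):
        """Disk (TB) implied by a site's per-experiment CPU pledges (HEPSPEC06)."""
        return sum(cpu * DISK_TB_PER_HS06[exp]
                   for exp, cpu in cpu_pledges_hs06.items())

    # Two sites with identical total CPU but different experiment mixes:
    atlas_heavy = {"ATLAS": 4000, "LHCb": 1000}
    lhcb_heavy = {"ATLAS": 1000, "LHCb": 4000}
    for mix in (atlas_heavy, lhcb_heavy):
        print(mix, "->", disk_needed(mix), "TB of disk")

With the same total CPU, the ATLAS-heavy mix implies several times the disk of the LHCb-heavy one; this is the non-constant disk-to-CPU ratio visible in the figure.
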
Figure 3.
Aggregate traffic into and out of the UK Tier-1 centre over the OPN router. (Online version in colour.)
Figure 4.
Performance of the ATLAS CASTOR instance in 2011. (Online version in colour.)
Figure 5.
The number of completed simulation and reconstruction jobs run by ATLAS in 2011, broken down by regional cloud. The UK is the third largest contributor. (Online version in colour.)
Figure 6.
The number of completed analysis jobs run by ATLAS on the Grid in 2011, broken down by regional cloud. The UK is the third largest contributor. (Online version in colour.)
Figure 7.
The throughput of data distributed to the UK, broken down by activity. The growth due to increased use of data brokerage is evident. (Online version in colour.)
Figure 8.
The reconstruction time for each event as a function of the number of interactions in each read-out of the experiment. (Online version in colour.)
Figure 9.
CMS PhEDEx transfer rates to sites around the world. (Online version in colour.)
Figure 10.
Analysis jobs run by CMS in 2011. (Online version in colour.)
Figure 11.
Analysis jobs run in the UK by CMS in 2011. (Online version in colour.)
Figure 12.
LHCb baseline computing model. (Online version in colour.)
Figure 13.
LHCb data transfer rates across the Grid by country. (Online version in colour.)
