In a previous post, I mentioned that ATLAS will be collecting enormous amounts of data, approximately 6 Petabytes/year (i.e., 6,000,000 Gigabytes). How in the world are we going to handle it, and how do we make it available to all physicists on ATLAS? I spoke to one of my colleagues at Indiana University, Fred Luehring, who has major responsibilities for the US part of the Grid, to get some details.
First, let me mention some ATLAS jargon; if you wish, you can skip the next paragraph for now and come back to it when you run into strange acronyms.
The raw data that we collect is called ByteStream; basically, it is a stream of 1’s and 0’s, and comes to approximately 3-6 Megabytes/event. This gets “massaged” into Raw Data Objects (RDO); the only difference is that the data now has a “structure” that can be analyzed with software written in C++, and it is now about 1 Megabyte/event. At this point, the ATLAS reconstruction software (also written in C++) runs over these RDOs and produces tracks in the inner detector, electron and muon candidates, jets, etc., and outputs two other structured formats, ESD (Event Summary Data) and AOD (Analysis Object Data), which contain different levels of detail; they are approximately 500 Kilobytes and 100 Kilobytes/event, respectively. As you can tell by the name, most physicists will run on AODs; there are other, smaller formats, but I will skip them.
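If it helps to keep these acronyms straight, here is the same chain summarized in a few lines of code. This is only a rough summary; the per-event sizes are the approximate numbers quoted above (I have taken the middle of the 3-6 Megabyte range for the raw data), not official figures.

```python
# A rough summary of the ATLAS event-data formats described above.
# Sizes are approximate, per-event numbers taken from the text.
FORMATS = [
    # (name, what it is, approx. size per event in Megabytes)
    ("ByteStream", "raw stream of 1s and 0s from the detector", 4.5),  # ~3-6 MB
    ("RDO", "Raw Data Objects: same content, structured for C++ software", 1.0),
    ("ESD", "Event Summary Data: detailed output of reconstruction", 0.5),
    ("AOD", "Analysis Object Data: what most physicists run on", 0.1),
]

for name, description, mb_per_event in FORMATS:
    print(f"{name:10s} ~{mb_per_event:>4.1f} MB/event  -  {description}")
```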
In a “normal” year, we expect to collect about 2 billion events. How do we handle all this data? To do so, physicists and computer scientists have been working on the Grid. It is basically a whole lot of computers spread all over the world, networked with very fast connections; typical data transfer rates are 1-10 Gigabits/sec (in contrast, broadband connections to your home are about a thousand times slower).
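To get a feel for those numbers, here is some back-of-the-envelope arithmetic. These are my own rough estimates, using the per-event sizes above and assuming about 10 Megabits/sec for home broadband:

```python
# Back-of-the-envelope arithmetic using the rough figures in the text.
events_per_year = 2e9        # ~2 billion events in a "normal" year
raw_mb_per_event = 3.0       # lower end of the 3-6 MB/event raw-data size
raw_bytes_per_year = events_per_year * raw_mb_per_event * 1e6
print(f"Raw data per year: ~{raw_bytes_per_year / 1e15:.0f} Petabytes")   # ~6 PB

# How long would it take to move a single Petabyte?
one_pb_in_bits = 1e15 * 8
grid_link = 10e9             # 10 Gigabits/sec, a typical Grid connection
home_link = 10e6             # ~10 Megabits/sec, roughly a thousand times slower
print(f"1 PB over a 10 Gb/s Grid link: ~{one_pb_in_bits / grid_link / 86400:.0f} days")
print(f"1 PB over home broadband:      ~{one_pb_in_bits / home_link / 86400 / 365:.0f} years")
```

So even on the Grid’s fast links, a Petabyte takes on the order of a week to move, which is why the network is as much a part of the design as the computers themselves.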
You may ask why we need the Grid. Why can’t all the collaborating institutes just buy computers and send them to CERN, and let CERN set up a gigantic processing center? That is one approach. However, funding agencies don’t like this mode of operation. They would much rather keep the hardware in their respective countries and build upon existing infrastructure at universities and laboratories, which includes people, hardware, buildings, etc. Another advantage of the Grid approach is the built-in redundancy; if one site goes down, jobs can be steered to others. Also, if a Grid site is appropriately configured, other scientists can use the computers when ATLAS is not using them (in an opportunistic manner), although we keep the system pretty busy. In the US, each LHC experiment has its own grid sites, whereas in Europe they tend to be shared.
For ATLAS, we have a processing center at CERN; this is called Tier 0. At this center, we will do the first-pass reconstruction of the data (and make ESDs), and also check it very quickly to ensure that the detector is working well. At the next level, we have about 12 Tier 1 centers in some of the bigger member countries; the biggest one is in the US, and others are in the UK, France, Germany, Italy, Spain, the Netherlands, Taiwan, Canada, and the Scandinavian countries. You can see an artist’s rendition in Figure 1 (you can also read about a recent Grid test here).

Figure 1: An artist’s rendition of the Grid.
Two or three of these Tier 1 centers also store a backup copy of the raw data. The Tier 1 centers basically reprocess the data and make the AODs that will be used by most physicists. Further down the chain are the Tier 2 centers; in the US we have about five of them for ATLAS use, and Fred is co-Principal Investigator of the Tier 2 center that is jointly hosted by Indiana University and the U. of Chicago. Large-scale user analysis will be done at these sites; once the AODs are made at the Tier 1 centers, they are shipped to the Tier 2s. At the next level are the Tier 3 centers; basically, these are small- to medium-size clusters at the institution level, such as the one used by my research group at Indiana University. We will do the final analysis steps there.
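Putting the pieces together, here is how I picture the hierarchy in code form. This is a simplification of what was described above; the site counts and the data products listed are the approximate ones mentioned in the text.

```python
# A simplified picture of the ATLAS computing tiers described above.
TIERS = {
    "Tier 0": {"where": "CERN",
               "role": "first-pass reconstruction and rapid data-quality checks",
               "data": ["raw data", "ESD"]},
    "Tier 1": {"where": "about 12 national centers (US, UK, France, Germany, ...)",
               "role": "reprocessing and AOD production; some hold a raw-data backup",
               "data": ["raw data copy", "ESD", "AOD"]},
    "Tier 2": {"where": "regional sites, e.g. the Indiana U. / U. of Chicago center",
               "role": "large-scale user analysis",
               "data": ["AOD"]},
    "Tier 3": {"where": "institute-level clusters",
               "role": "final analysis steps",
               "data": ["smaller analysis formats"]},
}

for tier, info in TIERS.items():
    print(f"{tier} ({info['where']}): {info['role']}; holds {', '.join(info['data'])}")
```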
ATLAS policy is that we send our jobs to the data, i.e., my analysis code will have to run at one of the Tier 2 sites. People on the experiment have written software that allows me to do this in a very simple way; basically, it is a one-line command. The software then chooses a site where the job is to be run and stores the output there, which I retrieve once the job is finished.
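I won’t reproduce the actual ATLAS submission tools here, but conceptually that one-line command does something like the toy sketch below. The catalogue, the dataset name, and the site names are all invented for illustration; the real brokering is far more sophisticated.

```python
# Toy sketch of "send the job to the data" -- not the real ATLAS software.
# The catalogue, dataset name, and site names below are invented for illustration.
DATA_CATALOGUE = {
    "myDataset.AOD": ["Indiana/Chicago Tier 2", "Another Tier 2", "Yet another Tier 2"],
}

def submit(analysis_code: str, input_dataset: str) -> str:
    """Pick a site that already hosts the input dataset and 'run' the job there."""
    sites_with_data = DATA_CATALOGUE[input_dataset]
    chosen_site = sites_with_data[0]   # real brokering also weighs load, free disk, etc.
    print(f"Running {analysis_code} at {chosen_site} on {input_dataset}")
    return f"{chosen_site}:/output/{analysis_code}.results"   # fetched when the job is done

output_location = submit("myAnalysis.py", "myDataset.AOD")
```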
To authenticate myself to the various resources, I have to apply for a Grid Certificate from an issuing authority; in the US, it is the Department of Energy. This is like having a passport. I also need to have permission to run my job at the various sites. I do this by applying to the ATLAS virtual organization; this is like getting a visa to visit a foreign country. I need both.
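In computing terms, the certificate answers “who are you?” and the virtual-organization membership answers “what are you allowed to do?”. A toy sketch of that double check (all the names here are made up, and real Grid middleware is far more involved):

```python
# Toy illustration of the passport/visa analogy described above.
# The identity string and lists below are made up for illustration.
CERTIFIED_USERS = {"CN=Some Physicist"}    # holders of a valid Grid Certificate (passport)
ATLAS_VO_MEMBERS = {"CN=Some Physicist"}   # registered with the ATLAS virtual organization (visa)

def may_use_atlas_resources(user: str) -> bool:
    authenticated = user in CERTIFIED_USERS    # proves who I am
    authorized = user in ATLAS_VO_MEMBERS      # proves ATLAS lets me run jobs
    return authenticated and authorized        # I need both

print(may_use_atlas_resources("CN=Some Physicist"))   # True
```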
Currently, the Tier 0 center is made up of about 5,000 to 10,000 cores. Each core is “standard”, i.e., a processor speed of about 2-3 GHz, about 2-4 Gigabytes of RAM, running Linux; the cost per core is about $300. In the US, we have about 10,000 cores distributed over the Tier 1 and Tier 2 centers, and the rest of the world contributes another 20,000 or so. Of course, we also need the necessary amount of disk space; currently, in the US we have about 10 Petabytes of storage. As mentioned above, we also need very fast connections. The various Tier 1 sites are connected to the Tier 0 center with 10 Gigabit/sec links, whereas the Tier 2/Tier 1 connection speeds depend on the country; in the US it is 10 Gigabits/sec, while in some countries it is as low as 1 Gigabit/sec. So, you see, it is quite an enterprise! Unfortunately, the Grid does not make coffee, as originally advertised, but we are working on it.
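Before signing off, one more bit of back-of-the-envelope arithmetic on the hardware numbers above (all figures approximate, and I have taken the midpoint of the Tier 0 range):

```python
# Rough totals from the approximate figures quoted above.
cost_per_core = 300                        # dollars per "standard" core
cores = {
    "Tier 0 at CERN": 7_500,               # midpoint of "5,000 to 10,000"
    "US Tier 1 + Tier 2 centers": 10_000,
    "Rest of the world": 20_000,
}
total_cores = sum(cores.values())
print(f"~{total_cores:,} cores in all, or roughly ${total_cores * cost_per_core:,} in processors alone")
```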
— Vivek Jain, Indiana University