If you are a frequent reader of High Energy Physics blogs, you might know that our calendar is filled with a plethora of different meetings. We also have meetings to discuss how to reduce the number of meetings! This structure guarantees constant communication among experts and non-experts and allows everyone to keep improving their knowledge of the experiment, the physics, and so on. Besides meetings, ATLAS also hosts the "ATLAS e-News" (Google can find it for you), which has a somewhat different goal: to keep all ATLAS members informed about what goes on in the experiment, the latest news, and so forth. Here is a summary of what we've just published about our data quality control.
The assessment of data quality is an integral and essential part of any High Energy Physics experiment. It is even more so for ATLAS, given the extreme complexity of the detector and the challenging experimental environment. Ultimately, it is these up-front checks that assure the scientific validity of the data. The status of ATLAS data taking is evaluated from information provided by the data acquisition and trigger systems, together with the analysis of events reconstructed online and offline; all of this constitutes the Data Quality Assessment.

Let me explain a little what we mean by this. Data are recorded by the ATLAS detector: when a particle flies through the calorimeter, for instance, it interacts with the material and produces a signal which is read out by the calorimeter electronics. If the event is interesting, it fires the trigger system, which causes the event to be saved and stored for future analysis. What about the jargon words "online" and "offline"? By "online" we mean the data processing carried out while collisions happen (almost in real time), while "offline" indicates any study you would perform once the data are stored.

While taking data, quality control is essential to make sure you are actually recording meaningful data. We should always bear in mind that millions of channels are read out, and any of them could have an intermittent failure at any level. In the almost-real-time analysis, raw detector data are accessed in the event flow and examined for hardware and configuration failures. Over 10 million histograms are produced by more than 150 monitoring applications, and the ATLAS software automatically checks 50,000 of these histograms every minute. It also visualizes the results in graphical form, adopting a hardware view to make them easier to interpret.

Once the data are stored, a more detailed validation is carried out on computer farms where the full event gets reconstructed. This happens within an hour of the start of data taking, and an even deeper study provides results within 24 hours. All this will become routine as soon as the LHC turns on!
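In case you are curious what "automatically checking a histogram" could mean in practice, here is a minimal sketch of the basic idea: compare each monitored histogram to a reference taken from known-good running, and raise a flag when the two disagree. This is not the actual ATLAS code (the real monitoring framework is far more sophisticated); all the function names and thresholds below are made up for illustration, and I've used Python for readability.

# Illustrative sketch only: compare a monitored histogram to a reference
# and turn the result into a traffic-light data quality flag. All names
# and thresholds are hypothetical, not taken from the ATLAS software.

def chi2_per_bin(observed, reference):
    """Average chi-square per bin between two histograms (lists of bin counts)."""
    chi2, nbins = 0.0, 0
    for obs, ref in zip(observed, reference):
        if ref > 0:  # skip empty reference bins
            chi2 += (obs - ref) ** 2 / ref
            nbins += 1
    return chi2 / nbins if nbins else 0.0

def dq_flag(observed, reference, warn=2.0, error=5.0):
    """Assign a data quality flag based on agreement with the reference."""
    score = chi2_per_bin(observed, reference)
    if score < warn:
        return "GREEN"   # consistent with the reference: good data
    if score < error:
        return "YELLOW"  # suspicious: an expert should take a look
    return "RED"         # badly off: e.g. a dead or noisy channel

# Toy example: a noisy readout channel shows up as a spike in one bin.
reference = [100, 102, 98, 101, 99]
observed = [97, 105, 96, 400, 101]
print(dq_flag(observed, reference))  # prints "RED"

Multiply this by tens of thousands of histograms per minute and you get an idea of why the checks have to be automated, with humans looking only at the ones that come back yellow or red.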