OK, so I’ll try to give you a flavour of how the data we collect gets turned into a published result. As the title indicates, it takes a while! When an experiment first turns on, this process takes longer than when it has been running for a while. It also depends on the complexity of the analysis one is doing. To become familiar with some of the terms I mention below, you should take the electronic tour of the ATLAS experiment; slides 7 and 8 will give you an overview of how different particle species are detected and what the various sub-systems look like. For more details you should go take the whole tour; it is meant for non-scientists.
For each event, the data recorded by ATLAS is basically a stream of bytes indicating whether a particular sensor was hit in the tracking detectors, the amount of energy deposited in the calorimeter, the location of a hit in the muon system, etc. Each event is then processed through the reconstruction software. For instance, the part of the software that deals with the tracking detectors will find hits that could be due to a charged particle like a pion or a muon or an electron; in a typical event there may be as many as 100 such particles, mostly pions. By looking at the curvature of the trajectory of a particle as it bends in the magnetic field, we determine its momentum (see Seth’s post on tracking). Similarly, the software dealing with the calorimeter will look at the energy deposits and try to identify clusters that could be due to a single electron or to a spray of particles (referred to as a “jet”), and so on. I believe the ATLAS reconstruction software runs to more than 1 million lines of code!
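To give a flavour of that curvature-to-momentum step, here is a toy sketch (not ATLAS software; the function name and sample radius are invented) of the textbook relation pT ≈ 0.3 · q · B · r, with pT in GeV/c, the field B in tesla, and the radius of curvature r in metres:

```python
# Toy illustration of momentum from track curvature (not ATLAS code).
# A charged particle in a solenoidal field follows a helix; projected onto
# the plane perpendicular to the field, pT [GeV/c] = 0.3 * |q| * B[T] * r[m].
def transverse_momentum_gev(radius_m, b_field_t=2.0, charge=1):
    """Estimate transverse momentum in GeV/c from the radius of curvature."""
    return 0.3 * abs(charge) * b_field_t * radius_m

# In ATLAS's 2 T inner-detector solenoid, a track curving with a radius of
# about 1.67 m corresponds to roughly a 1 GeV/c particle.
print(round(transverse_momentum_gev(1.67), 2))
```

The tighter the curl, the lower the momentum, which is why very stiff (high-momentum) tracks are the hardest to measure precisely.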
However, before the reconstruction software can do its magic, a lot of other things need to be done. All the sub-detectors have to be calibrated. What this means is that we need to know how to convert, say, the size of the electronic signal left behind in the calorimeter into energy units such as MeV (million electron volts – the mass of the electron is 0.5 MeV). This work is being done now using data taken with test beams, simulation, and cosmic rays. Similarly, we have to know the location of the individual elements of the tracking detectors as precisely as possible. For instance, by looking at the path of an individual track we can figure out precisely where detector elements are relative to one another; this step is known as alignment. Remember, the Pixel Detector can measure distances of the order of 1/10th the thickness of a human hair, so knowing its position is critical. This work is going on as we speak, but we will need real data for the final calibrations and alignment.
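As a cartoon of what calibration means in practice, here is a minimal sketch; the cell names, pedestal value, and calibration constants are all invented for illustration, but the shape of the step (subtract the electronics baseline, then scale raw counts to energy) is the general idea:

```python
# Hypothetical calibration sketch (all names and numbers invented):
# convert a raw electronic signal (ADC counts) from a calorimeter cell
# into deposited energy in MeV, using a per-cell constant measured
# beforehand with test beams, simulation, and cosmic rays.
CALIB_MEV_PER_COUNT = {"cell_A": 0.5, "cell_B": 0.45}  # invented constants

def counts_to_mev(cell_id, adc_counts, pedestal=30):
    """Subtract the electronics pedestal, then scale counts to MeV."""
    return (adc_counts - pedestal) * CALIB_MEV_PER_COUNT[cell_id]

# A raw reading of 1030 counts in cell_A: (1030 - 30) * 0.5 MeV/count.
print(counts_to_mev("cell_A", 1030))
```

The real job is determining those constants (one per readout channel, and there are very many channels), which is why calibration takes so much time and real data.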
At this point, let’s say, I decide to use the data to prove/disprove the latest version of string theory or extra dimensions or what have you. What steps do I need to take? Well, first I have to understand what prediction this theory makes: is it saying that there will be multiple muons in an event, or only one very energetic jet, etc.? If the signature is unique, then my life is considerably simpler; essentially, I will write some software to go through each event and pick out those that match the prediction (you can think of this as finding the proverbial (metal) needle in a haystack). If I find some candidate events, the excitement level starts to increase!
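That "go through each event and pick out those that match" step can be sketched in a few lines; the event structure, signature, and thresholds below are invented for illustration (say, a hypothetical signature of two energetic muons):

```python
# Cartoon of the needle-in-a-haystack step (event format invented):
# each event is a list of (particle_kind, pT_in_GeV) pairs; keep events
# with at least two muons above a momentum threshold.
def passes_selection(event, min_muons=2, min_pt_gev=20.0):
    """Return True if the event matches the hypothetical signature."""
    energetic_muons = [pt for kind, pt in event
                       if kind == "muon" and pt > min_pt_gev]
    return len(energetic_muons) >= min_muons

events = [
    [("muon", 35.0), ("muon", 28.0), ("pion", 5.0)],  # candidate!
    [("muon", 12.0), ("pion", 40.0)],                  # fails the cut
]
candidates = [e for e in events if passes_selection(e)]
print(len(candidates))  # → 1
```

The real selection code is vastly more involved, of course, but the logic is the same: loop over events, apply cuts, keep the survivors.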
But before I contact my travel agent to buy a ticket to Stockholm (for the Nobel Prize ceremony), I need to do a lot more work. I have to check whether some garden-variety physics effect (which usually occurs much more frequently) will produce a similar signature. This can happen because our reconstruction software could mis-identify a pion as a muon, or make a wrong measurement of an electron’s energy, or because, if we produce enough of these garden-variety events, a few of them may look like new physics. So, I have to think of all the standard processes that can mimic what I am searching for. One way to do this is to run my analysis software on simulated events; since we know what a garden-variety process looks like, we generate tons of fake data and see if some events look like the new effect that I am looking for. Of course, physicists being skeptics, we also have to check that our simulation is correct! So, that takes more time and effort.
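A back-of-the-envelope version of this background check looks something like the following; all the numbers here are invented, but they show why rare mis-identifications matter: even a tiny mis-ID probability times an enormous number of garden-variety events can produce several fake "candidates", and Poisson statistics then tells me how often pure background would match or beat what I saw.

```python
import math

# Invented numbers: a common process yields a million recorded events,
# and each has a 3-in-a-million chance of being mis-reconstructed so
# that it mimics my signal.
n_background = 1_000_000
fake_prob = 3e-6
expected_fakes = n_background * fake_prob  # mean of the Poisson background

def p_at_least(k, mu):
    """Poisson probability of observing k or more events with mean mu."""
    return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i)
                     for i in range(k))

print(round(expected_fakes, 1))                  # about 3 fakes expected
print(round(p_at_least(7, expected_fakes), 4))   # chance background gives >= 7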
If the signal I am searching for is not so distinctive, then I have to be much cleverer. I have to figure out how to tease out the signal from a large background of garden-variety physics (you can think of this as finding a wooden needle in a haystack). Also, since there is no fixed recipe for doing an analysis, I can sometimes run into obstacles, or my results may look “strange”; I then have to step back and think about what is going on.
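One common trick for teasing signal out of background, sketched here with entirely invented numbers, is to scan over possible values of a selection cut and pick the one that maximizes an approximate significance such as s/√b (expected signal over the square root of the expected background):

```python
import math

# Invented toy: (expected signal, expected background) counts surviving
# each hypothetical cut value; tightening the cut loses signal but kills
# far more background.
cut_scan = {
    20.0: (50.0, 2500.0),  # loose cut: significance 50/50  = 1.0
    40.0: (40.0, 900.0),   # tighter:   significance 40/30  ~ 1.33
    60.0: (25.0, 100.0),   # tightest:  significance 25/10  = 2.5
}

def significance(cut):
    """Approximate expected significance s / sqrt(b) at a given cut."""
    s, b = cut_scan[cut]
    return s / math.sqrt(b)

best_cut = max(cut_scan, key=significance)
print(best_cut)  # → 60.0
```

In a real analysis the optimization is done on simulation before looking at the data, precisely to avoid fooling oneself.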
After I get some preliminary results, I have to convince my colleagues that they are valid, which involves giving regular progress reports within my analysis group; these are usually phone meetings, since everyone is on a different continent. I then write an internal note, which is reviewed by experts within the group. If the experts are happy, the note is bumped up to the Physics coordinator. If I pass this hurdle, the note is released to the entire collaboration for further review. Throughout this process, people ask pointed questions, ask me to do all sorts of checks, or tell me that I am completely crazy, or whatever. Given that every physicist thinks that he/she is smarter than the next, this process can be a little cantankerous at times.
Then the note is sent to a peer-reviewed journal for publication, where the external referee(s) can make you jump through hoops, essentially challenging the validity of your work; sometimes their objections are valid, sometimes not. I know because I have been on both sides of this process.
Depending on the complexity of my analysis, the time from start to finish can be anywhere from a few months to a year or more (causing a few more grey hairs, or in my case a few less hairs!).
– Vivek Jain, Indiana University