I’ve been following the discussion over the CNGS neutrino velocity measurement from the OPERA collaboration with great interest. A lot of excellent stuff has already been written on this blog on the subject. A couple of my favorite posts are this early take by Kathy Copic, and Michael Schmitt’s post, which has some interesting links. I’m not going to even attempt to duplicate their efforts. Instead, I just want to share a few of my impressions from the last several days.
The very short version is that the OPERA collaboration has measured the travel time (a little under 2.5 milliseconds) of neutrinos from CERN, where they are produced, to their detector in the Gran Sasso laboratory in Italy, with an uncertainty of 5 parts per million. They also measure the distance traveled (730 km) with better than a part per million accuracy. They divide and find that the neutrinos seem to arrive 60 ns earlier than expected, corresponding to a velocity greater than the speed of light by about 2.5 parts in 100,000.
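For concreteness, the rounded numbers in that paragraph can be checked with a few lines of arithmetic (these are the approximate values quoted above, not the collaboration's exact figures):

```python
# Sanity-check of the rounded numbers quoted above (not the paper's exact values).
C = 299_792_458.0   # speed of light, m/s
distance_m = 730e3  # CERN to Gran Sasso baseline, ~730 km
early_s = 60e-9     # neutrinos arrive ~60 ns earlier than light would

t_light = distance_m / C         # light travel time over the baseline
frac_excess = early_s / t_light  # (v - c)/c to first order in the small shift

print(f"light travel time: {t_light * 1e3:.3f} ms")  # ~2.435 ms
print(f"(v - c)/c = {frac_excess:.2e}")              # ~2.5e-05
```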
Yeah, exactly.
It’s a startling result. The collaboration was right to submit their work (http://arxiv.org/abs/1109.4897) for public review at this point, I think, and they’ve done a good job of not overstating their claims. There has been a huge response from the particle physics community and the tone is skeptical but collegial. I tried to go to the seminar here at CERN on Friday but it was already standing room only, so I fled back to my office to watch the webcast (see Aidan’s liveblog of the seminar). There was nearly an hour of questions afterwards, many of which were quite good.
The questions at first centered on the distance measurement. That might seem like the weak link in the measurement, until you realize that the distance is measured rather more precisely than the time of flight (see the numbers in my first paragraph). The speaker explained that the distance measurement is based on well-established geodesy techniques and confirmed that the precision of the velocity measurement is really determined by the precision of the time-of-flight measurement.
The rest of the questions that I thought were good revolved around one key fact: when OPERA measures the time of flight of the neutrinos, they don’t measure the time of flight of an individual neutrino. Rather, a batch of neutrinos is produced in a short window of time (10.5 μs) by a bunch of protons hitting a target. The time-of-flight measurement is then based on fitting the recorded event times with a template built from the measured time structure of the proton bunch at CERN.
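To make the template idea concrete, here is a minimal toy sketch, entirely my own illustration and not the collaboration's analysis. The bunch shape, edge widths, and the 1 μs offset are invented for the example: draw sparse "neutrino" event times from a shifted copy of a flat-top bunch profile, then scan candidate time offsets and keep the one that maximizes an unbinned likelihood.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical bunch profile: a flat top of ~10.5 us with soft sigmoid edges.
def bunch_profile(t, length=10.5, edge=0.3):
    rise = 1.0 / (1.0 + np.exp(-t / edge))
    fall = 1.0 / (1.0 + np.exp((t - length) / edge))
    return rise * fall

true_shift = 1.0  # microseconds; an invented "travel time" offset to recover

# Draw event times from the shifted profile by accept-reject sampling.
def draw_events(n):
    out = []
    while len(out) < n:
        t = rng.uniform(-2.0, 14.0, size=4 * n)
        keep = rng.uniform(0.0, 1.0, size=t.size) < bunch_profile(t - true_shift)
        out.extend(t[keep][: n - len(out)])
    return np.array(out)

events = draw_events(5000)

# Scan candidate shifts; for each, evaluate an unbinned log-likelihood with
# the (normalized) shifted template as the probability density of event times.
shifts = np.linspace(0.0, 2.0, 201)
grid = np.linspace(-2.0, 14.0, 2000)
dg = grid[1] - grid[0]
logL = []
for s in shifts:
    norm = np.sum(bunch_profile(grid - s)) * dg  # simple Riemann-sum normalization
    logL.append(np.sum(np.log(bunch_profile(events - s) / norm + 1e-300)))

best = shifts[np.argmax(logL)]
print(f"fitted shift: {best:.2f} us (true: {true_shift:.2f} us)")
```

With a few thousand events, almost all of the information about the shift comes from the events near the two edges; the flat middle of the bunch constrains the offset hardly at all, which is exactly why the edges matter so much in the discussion below.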
One question asked at the talk is whether the time structure of the neutrino bunch is somehow modified between departure and arrival, possibly by correlations between the position in time within the bunch and the way that the beam spreads out as it travels to Gran Sasso. It’s an open question, as far as I can tell, and it strikes me as a potentially important one.
The majority of the community’s attention right now seems to be on the statistical analysis of the data as a possible source of unaccounted-for systematic uncertainty in the determination of the travel time. Because they are measuring the travel time of a bunch of neutrinos, they are essentially measuring the timing of the leading and trailing edges of the bunch at CERN and then again at Gran Sasso. But there’s a lot packed into that “essentially”, which people are now unpacking. Related to the concern above, the seminar speaker said in response to a question that the bunch length — the distance between the leading and trailing edges — is fixed in the analysis, so it’s not possible to account for the bunch somehow stretching or shrinking between CERN and Gran Sasso. The speaker also pointed to the good chi-squared (a standard measure of goodness of fit) as evidence for the quality of the fit, i.e. how well the model matches the shape of the data. A questioner pointed out that the chi-squared is not a useful measure of goodness of fit in this case: the information it contains is diluted by the good fit to the points in the middle of the distribution, which don’t actually contribute much information on where the edges are. My variation on this theme is to wonder whether you get a different answer by fitting only the leading half or only the trailing half of the distribution.
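Here’s a toy version of that check — again my own illustration with invented numbers, not OPERA’s fit. Simulate an “observed” bunch profile that has been both shifted by 0.5 μs and stretched by 0.3 μs in transit, then fit the shift twice: once using only the leading half and once using only the trailing half. The two fits disagree by exactly the amount of stretching, even though a fit to the full template would average over it.

```python
import numpy as np

# Hypothetical flat-top bunch with sigmoid leading and trailing edges.
def edge_profile(t, length=10.5, edge=0.3):
    rise = 1.0 / (1.0 + np.exp(-t / edge))
    fall = 1.0 / (1.0 + np.exp((t - length) / edge))
    return rise * fall

bins = np.linspace(-2.0, 13.0, 151)
centers = 0.5 * (bins[:-1] + bins[1:])

# "Observed" profile: shifted by 0.5 us AND stretched to 10.8 us in transit.
observed = edge_profile(centers - 0.5, length=10.8)

def chi2(shift, mask):
    # Least-squares distance between the unstretched template and the data,
    # restricted to the bins selected by `mask`.
    model = edge_profile(centers - shift)
    return np.sum((observed[mask] - model[mask]) ** 2)

shifts = np.linspace(0.0, 1.0, 1001)
lead = centers < 5.25  # leading half of the bunch
trail = ~lead          # trailing half

best_lead = shifts[np.argmin([chi2(s, lead) for s in shifts])]
best_trail = shifts[np.argmin([chi2(s, trail) for s in shifts])]
print(f"leading-half fit:  {best_lead:.2f} us")   # 0.50: tracks the leading edge
print(f"trailing-half fit: {best_trail:.2f} us")  # 0.80: shift plus the stretch
```

A fixed-length full-template fit would land somewhere in between, and its chi-squared would still look reasonable because the flat middle of the distribution fits well regardless — which is the dilution point raised at the seminar.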
It’s going to be interesting to see how this shakes out over time. In the meanwhile, I think that so far it’s a pretty great example of science working the way it’s supposed to.