It’s Labor Day weekend. You’ve probably been on vacation sometime in the past month. So have a lot of physicists (and a lot of bloggers). But the LHC hasn’t been on vacation, by any means. As we last saw, the accelerator was running flat out to produce enough collisions to give us a shot at a Higgs-boson observation. Now we know that it worked; the Higgs results that the CMS and ATLAS experiments released made use of data collected through the middle of June, and that was enough to claim a discovery. But here’s what’s happened since:
(This plot is a direct link from the LHC luminosity page; it will get updated to the latest version if you read this post sometime later.) Since a technical stop in late June, the LHC has roughly doubled the number of collisions provided. In fact, about a week ago the LHC achieved its highest collision rate ever, which should allow us to accelerate the production of integrated luminosity. We have three months left in the proton-proton run (before a one-month heavy-ion run, and then a two-year shutdown), and we can be optimistic about how much more data we might record in that time.
What are the implications of doubling the size of the dataset? Here is a totally unofficial, totally back-of-the-envelope estimate. In a perfect world, uncertainties due to counting statistics — i.e. those set by the amount of data you are using to make a measurement — fall like the inverse square root of the number of events. Thus, doubling the data should reduce your uncertainties by a factor of the square root of two, or about 1.4. In our imperfect world, it’s a lot more complicated than that; the uncertainty on a measurement comes from more than just counting statistics. But let us suspend disbelief for a moment and consider the case of the measurement of the probabilities for the presumed Higgs boson to decay into its various final states. Here are the CMS measurements, based on the data that was in hand for the publications that have been submitted:
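To make the back-of-the-envelope arithmetic concrete, here is a toy sketch of that scaling. It assumes purely statistical uncertainties (no systematics), which, as noted above, is an idealization; the function name is just illustrative, not anything from an experiment’s actual analysis code.

```python
import math

def scaled_uncertainty(sigma, n_old, n_new):
    """Statistical-only uncertainty after growing a dataset from
    n_old to n_new events, assuming pure 1/sqrt(N) scaling."""
    return sigma * math.sqrt(n_old / n_new)

# Doubling the dataset shrinks a purely statistical error bar
# by a factor of sqrt(2) ~ 1.41, i.e. to about 71% of its size.
print(scaled_uncertainty(1.0, 1, 2))  # -> 0.7071067811865476
```

In reality the systematic component of each error bar would not shrink this way, so the gain from new data is always somewhat less than the naive factor of 1.4.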
A value of one on the x axis would constitute a measurement in agreement with the predictions of the standard model. With the uncertainties shown here, you would say that at the very least the measurements do not disagree with the predictions. But now imagine those same error bars shrunk by that factor of 1.4 — to about 70% of their current size — while the central values stay in the same place. Then the probability to decay to photons would start to look discrepantly large, while that for taus would look discrepantly small, and that would start getting really interesting — perhaps the “Higgs” we’re observing is not the standard-model Higgs.
Of course, this is pure speculation — when the additional data are incorporated into the analysis, all the points might start moving closer to the expected values. But the additional data we are recording will help clarify the picture no matter what. 2012 has already been an exhilarating year, but as we head into the final one-third of it, we can imagine that it will get more exciting still, thanks to the excellent operation of the Large Hadron Collider.