
Posts Tagged ‘faster than light’

A Grumpy Note on Statistics

Tuesday, March 13th, 2012

Last week’s press release from Fermilab about the latest Higgs search results, describing the statistical significance of the excess events, said:

Physicists claim evidence of a new particle only if the probability that the data could be due to a statistical fluctuation is less than 1 in 740, or three sigmas. A discovery is claimed only if that probability is less than 1 in 3.5 million, or five sigmas.

This actually contains a rather common error — not in how we present scientific results, but in how we explain them to the public. Here’s the issue:

Wrong: “the probability that the data could be due to a statistical fluctuation”
Right: “the probability that, were there no Higgs at all, a statistical fluctuation that could explain our data would occur”

Obviously the first sentence fragment is easier to read — sorry![1] — but, really, what’s the difference? Well, if the only goal is to give a qualitative idea of the statistical power of the measurement, it likely doesn’t matter at all. But technically it’s not the same, and in unusual cases things could be quite different. My edited (“right”) sentence fragment is only a statement about what could happen in a particular model of reality (in this case, the Standard Model without the Higgs boson). The mistaken fragment implies that we know the likelihood of different possible models actually being true, based on our measurement. But there’s no way to make such a statement based on only one measurement; we’d need to include some of our prior knowledge of which models are likely to be right.[2]

Why is that? Well, consider the difference between two measurements, one of which observed the top quark with 5 sigma significance and the other of which found that neutrinos go faster than light with 5 sigma significance. If “5 sigma significance” really meant “the probability that the data could be due to a statistical fluctuation,” then we would logically find both analyses equally believable if they were done equally carefully. But that’s not how those two measurements were received, because the real interpretation of “5 sigma” is as the likelihood that we would get a measurement like this if the conclusion were false. We were expecting the top quark, so it’s a lot more believable that the excess is associated with the top quark than with an incredibly unlikely fluctuation. But we have many reasons to believe neutrinos can’t go faster than light, so we would sooner believe that an incredibly unlikely fluctuation had happened than that the measurement was correct.[3]
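The asymmetry between the two cases can be made concrete with a toy Bayesian update. Everything below is my own illustration: the priors and the likelihood of the data under the signal hypothesis are invented numbers, chosen only to show how the same 5-sigma p-value can support one claim and not the other.

```python
import math

# One-sided Gaussian tail probabilities, as quoted in the press release:
# 3 sigma ~ "1 in 740", 5 sigma ~ "1 in 3.5 million"
def one_in_n(sigmas):
    return 1.0 / (0.5 * math.erfc(sigmas / math.sqrt(2)))

print(round(one_in_n(3)))   # 741
print(round(one_in_n(5)))   # ~3.5 million

# Toy Bayes update: P(real signal | data), with a made-up likelihood of
# seeing this data if the signal is real (0.5) and a made-up prior.
def posterior(prior_signal, p_fluctuation, p_data_given_signal=0.5):
    num = p_data_given_signal * prior_signal
    den = num + p_fluctuation * (1.0 - prior_signal)
    return num / den

p5 = 0.5 * math.erfc(5 / math.sqrt(2))   # 5-sigma p-value, ~2.9e-7
print(posterior(0.5, p5))    # ~0.9999994: top quark, generous prior
print(posterior(1e-9, p5))   # ~0.0017: FTL neutrinos, tiny prior
```

Same p-value in both calls; only the prior changes, and with it the conclusion. That is exactly why the p-value alone cannot be "the probability that the data could be due to a statistical fluctuation."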

Isn’t it bad that we’d let our prior beliefs bias whether we think measurements are right or not? No, not as long as we don’t let them bias the results we present. It’s perfectly fair to say, as OPERA did, that they were compelled to publish their results but thought they were likely wrong. Ultimately, the scientific community does reach conclusions about which “reality” is more correct on a particular question — but one measurement usually can’t do it alone.

———————————

[1] For what it’s worth, I actually spent a while thinking and chatting about how to make the second sentence fragment simpler, while preserving the essential difference between the two. In this quest for simplicity, I’ve left off any mention of gaussian distributions, the fact that we really give the chance of a statistical fluctuation as large or larger than our excess, the phrase “null hypothesis,” and doubtless other things as well. I can only hope I’ve hit that sweet spot where experts think I’ve oversimplified to the point of incorrectness, while non-expert readers still think it’s completely unreadable. 😉

[2] The consensus among experimental particle physicists is that it’s not wise to include prior knowledge explicitly in the statistical conclusions of our papers. Not everyone agrees; the debate is between Frequentist and Bayesian statistics, and a detailed discussion is beyond the scope of both this blog entry and my own knowledge. A wider discussion of the issues in this entry, from a Bayesian perspective, can be found in this preprint by G. D’Agostini. I certainly don’t agree with all of the preprint, but I do owe it a certain amount of thanks for help in clarifying my thinking.

[3] A systematic mistake in the result, or in the calculation of uncertainties, would be an even likelier suspect.


This week the OPERA experiment released a statement about their famous “faster than light” neutrino measurement. In September scientists announced that they had measured the speed of neutrinos traveling from CERN to Gran Sasso and found that they arrived slightly sooner than they should according to special relativity. There was a plethora of scientific papers, all kinds of rumors and speculation, and most physicists simply refused to believe that anything had traveled faster than light. After months of diligent study, OPERA announced that they may have tracked down two sources of experimental error, and they are doing their best to investigate the situation.

But until we get the results of OPERA’s proposed studies we can’t say for sure that their measurement is right or wrong. Suppose that they reduce the lead time of the neutrinos from 60ns to 40ns. That would still be a problem for special relativity! So let’s investigate how we can get faster than light neutrinos in special relativity, before we no longer have the luxury of an exciting result to play with.

The OPERA detector (OPERA Collaboration)

Special relativity was developed over a hundred years ago to describe how electromagnetic objects act. The electromagnetic interaction is transferred with electromagnetic waves and these waves were known to travel extremely quickly, and they seemed to travel at the same speed with respect to all objects, no matter how those objects were moving. What Einstein did was to say that the constancy of the speed of light was a fundamental law of nature. Taking this to its logical conclusion meant that the fastest speed possible was the speed of light. We can call the fastest possible speed \(s\) and the speed of light \(c\). Einstein then says \(c=s\). And that’s how things stood for over a century. But since 1905 we’ve discovered a whole range of new particles that could cast doubt on this conclusion.

When we introduce quantum mechanics to our model of the universe we have to take interference of different states into account. This means that if more than one interaction can explain a phenomenon then we need to sum the amplitudes for all these interactions, and this means we can expect some strange effects. A famous example of this is the neutral kaon system. The two lightest neutral kaons are called \(K^0\) and \(\bar{K}^0\) and the quark contents of these mesons are \(d\bar{s}\) and \(s\bar{d}\) respectively. Now from the “outside” these mesons look the same as each other. They’ve got the same mass, they decay to the same particles and they’re made in equal numbers in high energy processes. Since they look identical they interfere with each other, and this gives us clues about why we have more matter than antimatter in the universe.

Since we see interference all over the place in the Standard Model it makes sense to ask if we see interference with a photon. It turns out that we do! The shape of the Z mass peak is slightly asymmetric because of interference between virtual Z bosons and virtual photons. There are plenty of other particles that the photon can interfere with, including the \(J/\psi\) meson and the \(\rho\) meson. In fact, any neutral vector meson with no net flavor will do. Einstein didn’t know about any of these particles, and even if he had he never really accepted the conclusions of quantum mechanics, so it’s no surprise that his theory would require that the speed of light is the fastest speed (that is, \(c=s\)). But if the photon interferes with other particles then it’s possible that the speed of light is slightly lower than the fastest possible speed (\(c<s\)). Admittedly, the difference in speed would have to be very small!

In terms of quantum mechanics we would have something like this:
\[
\lvert \text{light} \rangle_{\text{Einstein}} = \lvert \gamma \rangle
\]
\[
\lvert \text{light} \rangle_{\text{reality}} = a_\gamma \lvert \gamma \rangle + a_{J/\psi} \lvert J/\psi \rangle + a_Z \lvert Z \rangle + \ldots
\]

As you can see there are a lot of terms in this second equation! The contributions would be tiny because of the large difference in mass between the massive particles and the photon. Even so, it could be enough to make sure that the speed of light is ever so slightly slower than the fastest possible speed.
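For a rough sense of scale, here is a deliberately naive sketch (my own toy arithmetic with a purely hypothetical admixture amplitude; nothing here is a real field-theory calculation): pretend the admixture gives the photon a tiny effective mass, then apply ordinary relativistic kinematics.

```python
# Toy numbers only: suppose an amplitude a for the photon to be a massive
# state of mass m gives it an effective mass of roughly |a| * m.
def fractional_slowdown(m_eff_eV, energy_eV):
    # for m << E: v/s = sqrt(1 - (m/E)^2) ~ 1 - (m/E)^2 / 2,
    # so the fractional deficit below the top speed s is (m/E)^2 / 2
    return 0.5 * (m_eff_eV / energy_eV) ** 2

a = 1e-20          # hypothetical admixture amplitude (pure invention)
m_Z = 91.2e9       # Z boson mass in eV
E = 1.0e9          # a 1 GeV photon

print(fractional_slowdown(abs(a) * m_Z, E))   # ~4e-37, hopelessly tiny
```

Even with a generous amplitude the deficit is absurdly far below anything measurable, which is the point of the paragraph above.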

At this point we need to make a few remarks about what this small change in speed would mean for experiments. It would not change our measurements of the speed of light, since the speed of light is still extremely fast and no experiment has ever shown a deviation from it. Unless somebody comes up with an ingenious experiment to show that the difference between the speed of light and the fastest possible speed is non-zero, we would probably never notice any variation in the speed of light. It’s a bit unfortunate that since 1983 it’s been technically impossible to measure the speed of light, since it is now used in the definition of our unit of length.

Now that we know photons can interfere with other particles it makes sense to ask the same question about neutrinos. Do they interfere with anything? Yes, they can interfere, so of course they do! They mix with neutrinos of other flavors, but beyond that there are not many options. They can interfere with a W boson and a lepton, but there is a huge penalty to pay in the mass difference. The wavefunction looks something like this:
\[
\lvert \nu_e \rangle(t) = a_{\nu_e}(t) \lvert \nu_e \rangle + a_{\nu_\mu}(t) \lvert \nu_\mu \rangle + a_{\nu_\tau}(t) \lvert \nu_\tau \rangle + a_{We}(t) \lvert W e \rangle
\]
(I’ve had to add a time dependence due to neutrino mixing, but it’s essentially no more complicated than what we had for the photon.)
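The flavour part of that time dependence is ordinary neutrino oscillation. As an aside, the standard two-flavour formula (a textbook result, with CERN-to-Gran-Sasso-style numbers I have plugged in myself, not values taken from the post) gives a feel for the sizes involved:

```python
import math

# Standard two-flavour oscillation probability:
# P = sin^2(2 theta) * sin^2(1.27 * dm2 * L / E),
# with dm2 in eV^2, L in km, E in GeV
def p_oscillation(sin2_2theta, dm2_eV2, L_km, E_GeV):
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# CERN -> Gran Sasso: L = 730 km, E ~ 17 GeV, atmospheric dm2 ~ 2.4e-3 eV^2
print(p_oscillation(1.0, 2.4e-3, 730.0, 17.0))   # ~0.017
```

At these energies and this baseline only a percent-level fraction of the beam oscillates, which is why OPERA’s main physics goal (tau appearance) is such a hard measurement.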

That means that the photon could get slowed down slightly by its interference with other particles (including particles in the vacuum), while neutrinos would be slowed even less by their own, much smaller, interference terms. That way we could get neutrinos traveling faster than the speed of light while special relativity remains intact. (In this description of the universe we can do what used to seem impossible: we can boost into the rest frame of a photon. What would it mean to do that? Well, I suppose it would mean that in this frame the photon would have to be an off-shell massive particle at rest.)

The SN 1987A supernova, a rich source of slower-than-light electron neutrinos (Hubble, ESA/NASA)

Now I’ll sit back and see people smarter than I am pick holes in the argument. That’s okay, this isn’t intended to be a serious post, just a bit of fun! There are probably predictions of all kinds of weird effects such as shock waves and time travel that have never been observed. And there are plenty of bits I’ve missed out such as the muon neutrinos traveling faster than electron neutrinos. It’s not often we get an excuse to exercise our analytic muscles on ideas like this though, so I think we should make the most of it and enjoy playing about with relativity.


New Information on “FTL Neutrinos”

Thursday, February 23rd, 2012

We have new information, but my position on the OPERA experiment’s FTL neutrino measurement hasn’t changed.

First, here’s what we know. Members of the OPERA experiment have been working diligently to improve their measurement, better understand their uncertainties, and look for errors. Yesterday, the discovery of some possible problems was leaked anonymously (and vaguely) in Science Insider. This compelled OPERA to release a statement clarifying the status of their work: there are two possible problems, which would have opposite effects on the results. (Nature News has a good summary here.)

The important thing to learn here, I think, is that the work is actually ongoing. The problems need further study, and their overall impact needs to be assessed. New measurements will be performed in May. What we’ve gotten is a status update whose timing was forced by the initial news article, not a definitive repudiation of the measurement.

Of course, we already knew with incredible confidence that the OPERA result is wrong. I wrote about that last October, but I also wrote that we still need a better understanding of the experiment. Good scientific work can’t be dismissed because we think it must have a mistake somewhere. I’m standing by that position: it’s worth waiting for the final analysis.


To start, let me say that there are extremely strong reasons to believe that the OPERA experiment’s measurement of neutrinos travelling faster than light is flawed. We knew that from the moment it came out, because it contradicts General Relativity (GR), which is an extraordinarily well-tested theory. Not only that, but the most obvious ways to modify GR to allow the result to be true give you immediate problems that contradict other measurements. To my knowledge, there’s no complete theoretical framework that makes predictions consistent with existing tests of GR and allows the OPERA result to be right.

But in my view of how experimental physics is done, history has shown us that once in a great while, something is discovered that nobody thought of and nobody can fit into the existing theoretical mold. The measurements that led to the discovery of GR in the first place provide a good example of this. Such shifts are extremely rare, but I don’t like the idea of ignoring a result because it doesn’t fit with the theories we have.

No, we have to address the measurement itself, and satisfy ourselves that there really was a mistake. There are many ideas for what might have gone wrong, and as far as I know, the discussion is ongoing. I’m not an expert on it, but I know enough to disagree with some of the blogosphere discussion lately that has pronounced that the case is closed. There seem to be two categories of claims going around:

  1. Articles that point out that the OPERA result is inconsistent with other measurements, as in this piece by Tommaso Dorigo (who is, incidentally, my colleague now that I’ve joined CMS). These are of course correct within the context of GR or any straightforward modifications thereof, as I said right at the start of this post. The question is whether there’s some modification that can accommodate the results consistently, and that’s a very hard thing to exclude. (There is some good discussion in the comments of Tommaso’s post about this, in fact.)
  2. Articles claiming that the OPERA result has been refuted because someone posted an idea on the arXiv server. A current example is this preprint, which asserts that a 60 nanosecond delay might be explained by OPERA having made a relatively trivial mistake in their GPS calculations. Of course, it’s possible that a trivial mistake has been made. But I’m not inclined to consider it definitive, especially because the author has already partially backpedaled upon learning more about how GPS works.

It’s great that people are sending ideas for what might have gone wrong with the result, or how it might be explained. But let’s wait for the discussion to settle down — and, indeed, for OPERA to finalize their paper — before we conclude that the case is closed. I do expect the result to be disproven, but what I want to see is one of these things:

  1. OPERA finds that there really was a problem with their measurement, revises it, and the “superluminal” effect goes away.
  2. Another experiment makes the same measurement, and gets a result consistent with GR.

Either way, I’ll consider the case closed, but there’s no reason to get ahead of ourselves. Doing science usually doesn’t mean knowing the answer in time for tomorrow’s news.


Live blog: neutrinos!

Friday, September 23rd, 2011

This is a live blog for the CERN EP Seminar “New results from OPERA on neutrino properties”, presented by Dario Autiero. Live webcast is available. The paper is available on the arXiv.

The crowd in the auditorium (Thanks to Kathryn Grim)

15:39: So here I am at CERN, impatiently waiting for the Colloquium to start on the OPERA result. The room is already filling up and the chatter is quite loud. I’m here with my flatmate Sudan, and we have a copy of the paper on the desk in front of us. I just bumped into a friend, Brian, and wished him luck finding a chair! (He just ran to get me a coffee. Cheers Brian!)

15:53: Wow, the room is really crowded now! People are sitting on the steps, in the aisles, and more are coming in. The title slide is already up on the projector, and some AV equipment is being brought in. I was just chatting to Sudan and Brian, and we were commenting that this is probably the biggest presentation that the world’s biggest physics lab has seen in a long time! As Sudan says, “The whole world is going to be watching this man.”

15:55: Burton and Pauline are here too, getting some photos before the talk begins. Expect to see more (less hastily written) blog posts about this talk!

15:59: We’re not allowed to take photos of the talk itself, but there will be a video feed that you can watch. See this link for details about the live webcast.

16:03: The talk begins. A fairly straightforward start so far. As usual, the speaker introduces the OPERA Collaboration, and gives a bit of background. Nothing ground breaking so far!

16:06: The analysis was performed blind, which means that the physicists checked and double-checked their systematic uncertainties before looking at the data. This is common best practice in these kinds of experiments and a good way to eliminate a lot of experimenter bias. The speaker is now discussing past results, some of which show no faster-than-light speed, and one of which (from MINOS) shows a small effect of less than 2σ.

16:16: Autiero is currently discussing the hardware of the experiment. It looks like a standard neutrino observatory setup: large amounts of dense matter (Pb), scintillation plates, and tracking hardware for the muons produced when the neutrinos interact. By the time the beam reaches Gran Sasso it is about 2 km wide! At CERN the neutrinos are produced by firing protons at a target, producing pions and kaons, which are then allowed to decay to muons and muon neutrinos. The hadrons are stopped with large amounts of carbon and iron, so that only the neutrinos and some muons survive. By the time the beam reaches Gran Sasso the muons have long since interacted and are no longer present. The neutrinos have about 17 GeV of energy when they leave CERN, so they are very energetic!

16:29: The discussion has moved onto the timing system, probably the most controversial aspect of the experiment. The timing challenge is probably the most difficult part of the whole analysis, and the part that particle physicists are least familiar with. Autiero points out that the same methods of timing are commonly used in metrology experiments. For OPERA, the location of each end of the experiment in space and time is determined using GPS satellites in the normal way, and then a “common view” is defined, leading to 1ns accuracy in synchronization. It looks like variations in the local clocks are corrected using the common view method. The time difference between CERN and Gran Sasso was found to be 2.3 ± 0.9 ns, consistent with the corrections.
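The cancellation that makes common-view GPS work is simple arithmetic. In this sketch the clock offsets are invented (chosen so the result matches the 2.3 ns figure quoted above); the point is only that the satellite’s own clock error drops out:

```python
# All numbers in ns, and all hypothetical.
sat_error = 37.0      # unknown error of the shared satellite clock
cern_offset = 12.0    # CERN local clock offset from "true" time
lngs_offset = 9.7     # Gran Sasso local clock offset

# Each site records (satellite reading) - (local clock reading)
cern_record = sat_error - cern_offset
lngs_record = sat_error - lngs_offset

# Differencing the two records cancels sat_error entirely
clock_difference = lngs_record - cern_record
print(round(clock_difference, 1))   # 2.3 -- independent of sat_error
```

However wrong the satellite clock is, it appears identically in both records, so only the relative offset of the two site clocks survives the subtraction.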

16:36: Things are made trickier by the need to identify where in the “spill” of protons a neutrino came from. For a given neutrino it’s pretty much impossible to get ns-precision timing, so probability density functions are used and the time interval for a given proton spill is folded into the distribution. We also don’t know where each neutrino is produced within the decay tube. The average uncertainty in this time is about 1.4 ns. Autiero is now talking about the time-of-flight measurement in more detail, showing the proton spills and neutrino measurements overlaid.

16:39: Geodesy is important to this analysis. OPERA need to know the distance between CERN and Gran Sasso to good precision (and they need to know the distances underground, which makes things more complicated). They get a precision of 20 cm in 730 km. Not bad! Autiero is now showing the position information, showing evidence of continental drift and even an earthquake. This is very cool!

16:47: Two techniques are used to verify timing, using Caesium clocks and optical fibers. These agree to ns precision. The overall timing system is rather complicated, and I’m having trouble following it all!

16:48: I just got a message from a friend who saw this blog via Twitter. Hello Angela! Welcome to all the readers from Twitter!

16:52: Currently discussing event selection at Gran Sasso. Events must have a highly relativistic muon associated with them. (The speed of the muon and slight difference in direction of flight can only increase the measured time of flight.)

16:54: Autiero is telling us about how the analysis is blinded. They used very old calibrations, intentionally giving meaningless results. A novel approach to blinding!

16:56: No evidence of variation with respect to time of day or time of year. So that’s the “Earth moved!” theory sunk.

17:01: Unblinding: a Δt = −987.8 ns correction to the time of flight after applying corrections (i.e. using up-to-date calibration). Total systematic uncertainty is 7.4 ns. Time of flight obtained using maximum likelihood. The measured difference in time of flight between the speed of light and the speed of neutrinos is

\[
\delta t\,(c - \nu) = (60.7 \pm 6.9\,(\text{stat}) \pm 7.4\,(\text{syst}))\ \text{ns}
\]

\[
\frac{c - v_{\nu}}{c} = -(2.4 \pm 0.28\,(\text{stat}) \pm 0.30\,(\text{syst})) \times 10^{-5}
\]

17:03: ~16,000 events observed. OPERA has spent six months checking and rechecking systematic uncertainties. Cannot account for discrepancy in terms of systematic uncertainties.

17:04: “Thank you”. Huge ripple of applause fills the auditorium.

Questions

(These questions and answers are happening fast. I’ll probably make an error or omission here and there. Apologies. Consult the webcast for a more accurate account or for any clarifications.)

17:05: Questions are to be organized. Questions about the distance interval, then the time interval, then the experiment itself. There will be plenty of questions!

17:08: Question: How can you be sure that the timing calibrations were not subject to the same systematic uncertainties whenever they were made? Answer: Several checks were made. One suggestion is to drill a direct hole. This was considered, but has an associated uncertainty of the order of 5%, too large for this experiment.

17:12: Question: Geodesy measurements were taken at one time. There are tidal effects (for example, measured at LEP.) How can you be sure that there are no further deviations in the geodesy? Answer: Many checks made and many measurements checked.

17:14: Question: Looking for an effect of 1 part in \(10^5\). Two measurements not sufficient. Movement of the Moon could affect measurements, for example. Answer: Several measurements made. Data taken over three years, tidal forces should average out.

17:15: Question: Is the 20cm uncertainty in 730km common? Answer: Similar measurements performed elsewhere. Close to state of the art. Even had to stop traffic on half the highway to get the measurement of geodesy!

17:16: Question: Do you take into account the rotation of the Earth? Answer: Yes, it’s a sub ns effect.

17:23: Question: Uncertainty at CERN is of the order of 10μs. How do you get uncertainty of 60ns at Gran Sasso? Answer: We perform a maximum likelihood analysis averaging over the (known shape) of the proton spill and use probability density functions.

(Long discussion about beam timings and maximum likelihood measurement etc.)

17:31: Question: There is a large uncertainty from the internal timers at each site (the antenna gives a large uncertainty), and the timing measurements don’t all agree. How can you be sure of the calibration? Answer: There are advanced ways to calibrate the measurements. We perform an inclusive measurement using optic fibers. Any comment from our timing friends in the audience? Audience member: Your answer is fine. It’s good to get the opportunity to work on timing at CERN.

17:33: Question: What about variation with respect to time of day/year? Answer: Results show no variation in day/night or Summer vs Spring+Fall.

17:35: Question: How can you be sure of geodesy measurements if they do not agree? Answer: The measurements shown are for four different points, not the same point measured four times. Clocks are also continually resynchronized.

17:37: Question: Do temperature variations affect GPS signals? Answer: Local temperature does not affect GPS measurements. Two frequencies are used to get the position in ionosphere. 1ps precision possible, but not needed for OPERA.

17:41: Question: Can you show the tails of the timing distributions with and without the correction? Is selection biasing the shapes of the fitted distributions? Answer: Not much dependence on spatial position from BCT at CERN. (Colleague from audience): The fit is performed globally. More variation present than is shown in the slides, with more features to which the fit is sensitive.

17:43: Question: Two factors in the fit: delay and normalization. Do you take normalization into account? Answer: Normalization is fixed to number of events observed. (Not normalized to the cross section.)

17:45: Question: Do you take beam stretching/squeezing into account? Answer: Timing is measured on BCT. No correlation between position in Gran Sasso and at CERN.

17:47: Question: Don’t know where muons were generated (could be in rock.) How is that taken in to account? Answer: We look at events with and without selections on muons.

17:49: Question: Do you get a better fit if you fit to the whole range and different regions? What is the χ2/n for the fits? Answer: We perform the fit on the whole range and have the values of χ2/n, but I can’t remember what they are, and they are not on the slides.

17:50: Question: What about any energy dependence of the result? Answer: We don’t claim energy dependence or rule it out with our level of precision and accuracy.

17:52: Question: Is a near experiment possible? Answer: This is a side analysis. The main aim is to search for τ appearance. (Laughter and applause from audience.) We cannot compromise our main physics focus. E-mail questions welcome!

17:53: End, and lots of applause. Time for discussion over coffee! Thanks for reading!

The start of the neutrinos’ journey, taken from the OPERA paper. (http://arxiv.org/abs/1109.4897)
