
Aidan Randle-Conde | Université Libre de Bruxelles | Belgium


Live blog: neutrinos!

This is a live blog for the CERN EP Seminar “New results from OPERA on neutrino properties”, presented by Dario Autiero. Live webcast is available. The paper is available on the arXiv.

The crowd in the auditorium (Thanks to Kathryn Grim)

15:39: So here I am at CERN, impatiently waiting for the Colloquium to start on the OPERA result. The room is already filling up and the chatter is quite loud. I’m here with my flatmate Sudan, and we have a copy of the paper on the desk in front of us. I just bumped into a friend, Brian, and wished him luck finding a chair! (He just ran to get me a coffee. Cheers Brian!)

15:53: Wow, the room is really crowded now! People are sitting on the steps, in the aisles, and more are coming in. The title slide is already up on the projector, and some AV equipment is being brought in. I was just chatting to Sudan and Brian, and we were commenting that this is probably the biggest presentation that the world’s biggest physics lab has seen in a long time! As Sudan says, “The whole world is going to be watching this man.”

15:55: Burton and Pauline are here too, getting some photos before the talk begins. Expect to see more (less hastily written) blog posts about this talk!

15:59: We’re not allowed to take photos of the talk itself, but there will be a video feed that you can watch. See this link for details about the live webcast.

16:03: The talk begins. A fairly straightforward start. As usual, the speaker introduces the OPERA Collaboration, and gives a bit of background. Nothing groundbreaking so far!

16:06: The analysis was performed blind, which means that the physicists checked and double-checked their systematic uncertainties before looking at the data. This is a common best practice in these kinds of experiments and a good way to eliminate experimenter bias. The speaker is now discussing past results, some of which show no faster-than-light effect, and one of which (from MINOS) shows a small effect of less than 2σ significance.

16:16: Autiero is currently discussing the hardware of the experiment. It looks like a standard neutrino observatory setup: large amounts of dense matter (Pb), scintillation plates, and tracking hardware for the muons which get produced when the neutrinos interact. By the time the beam reaches Gran Sasso it is about 2km wide! At CERN the neutrinos are produced by firing protons at a target, producing pions and kaons, which are then allowed to decay to muons and muon neutrinos. The hadrons are stopped with large amounts of carbon and iron, so that only the neutrinos and some muons survive. By the time the neutrino beam reaches Gran Sasso the muons have long since interacted and are no longer present in the beam. The neutrinos have 17GeV of energy when they leave CERN, so they are very energetic!

16:29: The discussion has moved on to the timing system, probably the most controversial aspect of the experiment. The timing challenge is the most difficult part of the whole analysis, and the part that particle physicists are least familiar with. Autiero points out that the same methods of timing are commonly used in metrology experiments. For OPERA, the location of each end of the experiment in space and time is determined using GPS satellites in the normal way, and then a “common view” is defined, leading to 1ns accuracy in synchronization. It looks like variations in the local clocks are corrected using the common view method. The time difference between CERN and Gran Sasso was found to be 2.3 ± 0.9 ns, consistent with the corrections.
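
The common-view trick is easy to sketch: both sites compare their local clock to the same satellite at the same instant, so the satellite’s own clock error cancels when the two readings are differenced. A toy illustration in Python (all numbers are invented for the example, not OPERA’s calibration data):

```python
# Toy "common view" GPS time transfer. Both sites record (local clock - satellite
# time) for the same satellite at the same moment; differencing the two records
# cancels the shared satellite clock error, leaving only the inter-site offset.
# All numbers below are made up for illustration.

sat_clock_error_ns = 250.0           # unknown shared satellite clock error
cern_clock_offset_ns = 12.0          # hypothetical true offsets of each local clock
gran_sasso_clock_offset_ns = 9.7

# What each site actually measures:
cern_reading = cern_clock_offset_ns - sat_clock_error_ns
gran_sasso_reading = gran_sasso_clock_offset_ns - sat_clock_error_ns

# Common-view difference: the satellite error drops out entirely.
relative_offset_ns = cern_reading - gran_sasso_reading
print(round(relative_offset_ns, 3))  # 2.3 (ns): pure inter-site clock offset
```

In the real system the readings are noisy and are averaged over many satellite passes, but the cancellation above is the core of the ~1ns synchronization quoted in the talk.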

16:36: Things are made trickier by identifying where in the “spill” of protons a neutrino came from. For a given neutrino it’s pretty much impossible to get ns precision timing, so probability density functions are used and the time interval for a given proton spill is folded into the distribution. We also don’t know where each neutrino is produced within the decay tube. The average uncertainty in this time is about 1.4ns. Autiero is now talking about the time of flight measurement in more detail, showing the proton spills and neutrino measurements overlaid.
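
To make that folding idea concrete, here is a toy maximum-likelihood fit (my own sketch, not OPERA’s analysis): emission times are unknown within the spill, so the spill shape is treated as a probability density and a single time offset is fitted to all arrival times at once.

```python
import math
import random

# Toy maximum-likelihood extraction of a time shift, in the spirit of the
# analysis described above (a sketch, not OPERA's code). Emission times are
# unknown within a ~10 us spill, so we use the spill shape as a PDF and fit
# one global shift to all arrival times.

random.seed(1)
SPILL_NS = 10_500.0       # flat toy spill, roughly a 10.5 us extraction
TRUE_SHIFT_NS = 60.0      # hypothetical early-arrival shift to recover

# Simulated arrivals: uniform emission in the spill, minus the shift,
# plus 10 ns of detector timing jitter.
arrivals = [random.uniform(0.0, SPILL_NS) - TRUE_SHIFT_NS + random.gauss(0.0, 10.0)
            for _ in range(5000)]

def log_likelihood(shift_ns):
    """Summed log PDF of the (flat) spill evaluated at shifted arrival times."""
    ll = 0.0
    for t in arrivals:
        inside = 0.0 <= t + shift_ns <= SPILL_NS
        ll += math.log(1.0 / SPILL_NS) if inside else math.log(1e-12)
    return ll

# Scan candidate shifts from 0 to 120 ns and keep the most likely one.
best = max((0.5 * k for k in range(241)), key=log_likelihood)
print(best)  # lands near the injected 60 ns shift
```

With a flat spill this reduces to counting events inside the shifted window; the real fit uses the measured spill waveform, which gives the likelihood far more shape to grab onto.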

16:39: Geodesy is important to this analysis. OPERA need to know the distance between CERN and Gran Sasso to good precision (they need to know the distances underground, which makes things more complicated.) They get a precision of 20cm in 730km. Not bad! Autiero is now showing the position information, showing evidence of continental drift and even an earthquake. This is very cool!
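
To put those geodesy numbers in perspective, a quick back-of-the-envelope conversion (my arithmetic, using round numbers):

```python
# 20 cm uncertainty on a 730 km baseline: fractional precision, and the
# equivalent light travel time (relevant for a ~60 ns effect).
C = 299_792_458.0        # speed of light, m/s
baseline_m = 730_000.0
uncertainty_m = 0.20

fractional = uncertainty_m / baseline_m         # ~2.7e-7
time_equiv_ns = uncertainty_m / C * 1e9         # ~0.67 ns

print(f"{fractional:.1e}")        # 2.7e-07
print(f"{time_equiv_ns:.2f} ns")  # 0.67 ns
```

So the distance uncertainty corresponds to well under a nanosecond of light travel time, comfortably below the ~60ns effect under discussion.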

16:47: Two techniques are used to verify timing, using Caesium clocks and optical fibers. These agree to ns precision. The overall timing system is rather complicated, and I’m having trouble following it all!

16:48: I just got a message from a friend who saw this blog via Twitter. Hello Angela! Welcome to all the readers from Twitter!

16:52: Currently discussing event selection at Gran Sasso. Events must have a highly relativistic muon associated with them. (The speed of the muon and slight difference in direction of flight can only increase the measured time of flight.)

16:54: Autiero is telling us about how the analysis is blinded. They used very old calibrations, intentionally giving meaningless results. A novel approach to blinding!

16:56: No evidence of variation with respect to time of day or time of year. So that’s the “Earth moved!” theory sunk.

17:01: Unblinding: Δt = -987.8ns correction to time of flight (i.e. using up-to-date calibration). Total systematic uncertainty is 7.4ns. Time of flight obtained using maximum likelihood. Measured difference in time of flight between the speed of light and the speed of the neutrinos is

\delta t = (60.7 \pm 6.9\,(\mathrm{stat.}) \pm 7.4\,(\mathrm{syst.}))\ \mathrm{ns}

\frac{c-v_{\nu}}{c} = -(2.48 \pm 0.28\,(\mathrm{stat.}) \pm 0.30\,(\mathrm{syst.}))\times 10^{-5}

17:03: ~16,000 events observed. OPERA has spent six months checking and rechecking systematic uncertainties. Cannot account for discrepancy in terms of systematic uncertainties.
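
As a quick consistency check on the quoted numbers (my own arithmetic, not from the talk), 60.7 ns at light speed over the 730 km baseline reproduces the fractional speed difference:

```python
# Convert the measured early arrival into a distance and a fractional
# speed difference over the CERN-Gran Sasso baseline.
C = 299_792_458.0        # speed of light, m/s
baseline_m = 730_000.0
delta_t_s = 60.7e-9      # quoted early arrival

head_start_m = C * delta_t_s            # distance light "loses": ~18.2 m
fractional = head_start_m / baseline_m  # ~2.5e-5, the size of the quoted effect

print(f"{head_start_m:.1f} m")   # 18.2 m
print(f"{fractional:.2e}")       # 2.49e-05
```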

17:04: “Thank you”. Huge ripple of applause fills the auditorium.


(These questions and answers are happening fast, so I will probably make an error or omission here and there. Apologies! Consult the webcast for a more accurate account or for any clarifications.)

17:05: Questions are to be organized. Questions about the distance interval, then the time interval, then the experiment itself. There will be plenty of questions!

17:08: Question: How can you be sure that the timing calibrations were not subject to the same systematic uncertainties whenever they were made? Answer: Several checks were made. One suggestion is to drill a direct hole. This was considered, but has an associated uncertainty of the order of 5%, too large for this experiment.

17:12: Question: Geodesy measurements were taken at one time. There are tidal effects (for example, measured at LEP.) How can you be sure that there are no further deviations in the geodesy? Answer: Many checks made and many measurements checked.

17:14: Question: Looking for an effect of 1 part in 10^5. Two measurements not sufficient. Movement of the Moon could affect measurements, for example. Answer: Several measurements made. Data taken over three years, tidal forces should average out.

17:15: Question: Is the 20cm uncertainty in 730km common? Answer: Similar measurements performed elsewhere. Close to state of the art. Even had to stop traffic on half the highway to get the measurement of geodesy!

17:16: Question: Do you take into account the rotation of the Earth? Answer: Yes, it’s a sub ns effect.

17:23: Question: Uncertainty at CERN is of the order of 10μs. How do you get uncertainty of 60ns at Gran Sasso? Answer: We perform a maximum likelihood analysis averaging over the (known shape) of the proton spill and use probability density functions.

(Long discussion about beam timings and maximum likelihood measurement etc.)

17:31: Question: Large uncertainty from internal timers at each site (antenna gives large uncertainty.) Measurements of timing don’t all agree. How can you be sure of the calibration? Answer: There are advanced ways to calibrate measurements. Perform inclusive measurement using optic fibers. Comment from timing friends in the audience? Audience member: Your answer is fine. Good to get opportunity to work on timing at CERN.

17:33: Question: What about variation with respect to time of day/year? Answer: Results show no variation in day/night or Summer vs Spring+Fall.

17:35: Question: How can you be sure of geodesy measurements if they do not agree? Answer: The measurements shown are for four different points, not the same point measured four times. Clocks are also continually resynchronized.

17:37: Question: Do temperature variations affect GPS signals? Answer: Local temperature does not affect GPS measurements. Two frequencies are used to get the position in ionosphere. 1ps precision possible, but not needed for OPERA.

17:41: Question: Can you show the tails of the timing distributions with and without the correction? Is selection biasing the shapes of the fitted distributions? Answer: Not much dependence on spatial position from BCT at CERN. (Colleague from audience): The fit is performed globally. More variation present than is shown in the slides, with more features to which the fit is sensitive.

17:43: Question: Two factors in the fit: delay and normalization. Do you take normalization into account? Answer: Normalization is fixed to number of events observed. (Not normalized to the cross section.)

17:45: Question: Do you take beam stretching/squeezing into account? Answer: Timing is measured on BCT. No correlation between position in Gran Sasso and at CERN.

17:47: Question: Don’t know where muons were generated (could be in rock.) How is that taken in to account? Answer: We look at events with and without selections on muons.

17:49: Question: Do you get a better fit if you fit to the whole range and different regions? What is the χ²/n for the fits? Answer: We perform the fit on the whole range and have the values of χ²/n, but I can’t remember what they are, and they are not on the slides.

17:50: Question: What about any energy dependence of the result? Answer: We don’t claim energy dependence or rule it out with our level of precision and accuracy.

17:52: Question: Is a near experiment possible? Answer: This is a side analysis. The main aim is to search for τ appearance. (Laughter and applause from audience.) We cannot compromise our main physics focus. E-mail questions welcome!

17:53: End, and lots of applause. Time for discussion over coffee! Thanks for reading!

The start of the neutrinos’ journey, taken from the OPERA paper. (http://arxiv.org/abs/1109.4897)



  • Andrew Foland

    To locate the detector (http://operaweb.lngs.infn.it/Opera/publicnotes/note132.pdf) requires importing, by traverse survey, coordinates from GPS monuments at the outside of the lab into the interior where the detector is. Internal to the traverse survey coordinate system I am sure they measure everything very, very accurately, in line with the quoted errors. My question is, in the initial setup of the traverse survey coordinate system, what degree of misalignment to the global coordinate system would be required to account for the results, and is it a plausible amount?

    FWIW I believe the answers to these questions are “2 degrees” and “no”, but it seems like it would be worth knowing what the experimenters think of these questions.

  • It’s great to find your post! Keep updating. Thanks!

  • Jim Truskoski

    I find this result fascinating; however, how is the actual speed of light determined? Is it possible that the test results show that the real speed of light is 60 nanoseconds faster than we thought?

  • Rich

    Physicists discussing surveying…. They may be out of their depth…. 😀

  • Hello back! It’s hard to concentrate on work with all this going on, but I’ll manage it.

  • Don Denesiuk

    Very interesting indeed. Are there any other laboratories that have the resources to attempt to duplicate these results?

  • Quantum Diaries

    MINOS also observed similar results a few years back, but their certainty was much smaller. T2K has also said they plan to replicate the experiment. http://www.latimes.com/news/science/la-sci-0923-speed-of-light-20110923,0,497738.story

  • Quantum Diaries

    Thanks for the live blog, Aidan. It’s great to get the results right from the source.

  • Totally enjoyed this post – the webcast got a bit iffy here during the questions so it was nice to be able to catch up via the blog. Am definitely impressed by the experiment and how much care was taken to ensure results. Could you give a little more info about the double-path fiber measurement they used to perform the time calibration? Wasn’t quite following there. Thanks!

  • Torbjörn Larsson, OM

    So … has everybody caught where they goofed yet?*

    It is an easy one. According to the paper the distance measurement procedure uses the geodetic distance in the ETRF2000 (ITRF2000) system as given by some standard routine. The European GPS ITRF2000 system is used for geodesy, navigation, et cetera and is conveniently based on the geoid.

    I get the difference between measuring distance along an Earth radius perfect sphere (roughly the geoid) and measuring the distance of travel, for neutrinos the chord through the Earth, as 22 m over 730 km. A near light speed beam would appear to arrive ~ 60 ns early, give or take.

    Of course, they have had a whole team on this for 2 years, so it is unlikely they goofed. But it is at least possible. I read the paper, and I don’t see the explicit conversion between the geodesic distance and the travel distance anywhere.

    Unfortunately the technical details of the system and the routine used to give distance from position are too much to check this quickly. But the difference is a curious coincidence with the discrepancy against well-established relativity.

    * Extraordinary claims need extraordinary evidence. Other outstanding concerns are:

    1. This needs to be repeated.

    2. It is not a clear photon vs neutrino race. Physicist Ellis and others here noted that the time differential for the supernova SN 1987A was a few hours, but at the distance of ~ 200 000 ly it should have been years if the suggested hypothesis would be correct.

    3. Analogous to the experiments where light waves seemingly travels faster than photon speed in vacuum, they don’t measure travel times of individual neutrinos but averages over a signal envelope. That must be carefully measured to establish that particles (or information, for that matter) travels faster than relativity allows.

    Especially since the neutrino beam oscillates between different kinds of particles!

  • Andrew Foland

    The coordinate system used in their geodesy note linked above is Cartesian. Should you doubt that, you can take the square root of the sum of the squares of the locations in Table 1 and find that it equals the earth’s radius. So they did not goof on curvilinear coordinates (at least, not in this way.)

  • theSkipper

    Torbjörn, I don’t know how you got 22m. I’ve just done this calculation: the difference I get between chord length and arc length is nearly 400m (with arc length 730km, Earth radius 6370km) not 22m, and so far more than 60ns equivalent.

  • Shouldn’t measuring the distance between two locations on Earth by the use of a satellite in orbit involve parabolas somehow, simply because of the Earth’s gravitational well? And so, like an object in a parabolic side-view mirror, the two locations might be closer than they appear?

  • Shawn Corrado

    Point #2 seems almost too easy to me, but seems completely logical. Is there a reason this is not being brought up more?

  • scott gray
  • Joseph Bridgewater

    Can we review the design and simulation models for the FPGA used for time stamping?

  • Christer Svensson

    Conserving Einstein’s hypothesis
    by means of “Micro black holes in motion at C and higher velocity”, while mbh in themselves represent an alternate model for the redshift phenomenon by mbh-photon interaction.
    mbh-neutrino coupling is a conceptual candidate that at the speed of light allows for motional time dilatation or motional gravitational time dilatation. Modeling mbh-neutrino interaction would propose that neutrino oscillation constitutes evidence of mbh-neutrino interaction through mbh spin charge distribution compressing the neutrino timeline.

    Best regards Christer Svensson

  • Pingback: CERN in a Tizzy « Truth not Beauty

  • So many comments! Unfortunately I’ve not had time to catch up with them until now. And answering comments during the seminar was out of the question, it took all my concentration to listen and write this post!

    Andrew, a back of the envelope calculation gives a distance of about 20m. (Take 60ns and multiply by c.) I’m not sure how we’d translate this into a misalignment (as an angle, for example) except to say that this kind of discrepancy would probably be noticed by geodesy experts pretty quickly. Also, the timing available is 1000 times more precise than OPERA needed, so the GPS experts could probably add a few more decimal places to some of their measurements.

    Jim, the speed of light has been measured many times with increasing precision. Measurements from 1972 obtained a precision of one part in 10^8. (The discrepancy OPERA saw was 1,000 times larger than this.) Since 1983 it’s become meaningless to “measure” the speed of light, since it was set to a constant value against which every other speed and distance is measured. Ideally, physicists would drill a long hole from CERN to Gran Sasso and fire a laser down it. This would give them a direct measurement of the distance in light-seconds which they could compare to what they see with the neutrinos. (If only we could be sure of seeing a given neutrino, we could race the neutrinos and photons! Alas, the couplings of nature don’t allow us to do this.)

    Rich, my own suspicion is that the surveyors may be getting things perfect, but forgetting to include the effects of the Earth’s motion through space. As the Earth passes through space the points of CERN and Gran Sasso move about by about 11m, and since the neutrinos are traveling close to light speed for long distances we need to take this motion into account. While the geodesy experts may have the best knowledge in the world about separation of points on the Earth, including differences due to relativistic motion on Earth might not be something they usually include in their calculations!

    Sarai, I had a bit of trouble with that point as well, to be honest! I just consulted the webcast to refresh my memory (see part 34). A fiber optic cable is used to connect two pieces of equipment and light is sent down the fiber optic from the first piece of equipment to the second. Suppose the signal takes tA nanoseconds to go from the first piece of equipment to the second piece of equipment, and the light takes tB nanoseconds to travel that distance. At the second piece of equipment the physicists measure the difference in time between the signal arriving and the light arriving and call this t(A-B). Then the sender and receiver are changed (so now the second piece of equipment sends the light and the first piece of equipment receives it) and the second piece of equipment sends light when it receives the signal. The physicists at the first piece of equipment record the difference in time between sending the signal and receiving the light, and call this t(A+B). The time it takes the signal to go between the two pieces of equipment is then (t(A+B)-t(A-B))/2. This is applied to all stages in the chain. It’s double checked by using Cesium clocks, which use phase information and are usually precise to about 1ns. From what I remember, Cesium clocks are still state of the art when it comes to time measurements, and Cesium’s properties are used in the definition of the second.

    Torbjörn, you raise some interesting points. Some preliminary thoughts on them: 1) It’s possible that MINOS could repeat this measurement (they already measured about (64 ± 60)ns, if I remember rightly) and some Japanese experiments could repeat the measurement too, although I understand they still have to recover fully from the damage caused by the earthquake. 2) The neutrinos from the supernova were of a different initial flavor, so they would have different flavor composition while traversing space. This may have an effect on their speed. The effect may also depend on the neutrinos being undisturbed for long periods of time (so their wavefunctions remain intact, allowing them to be virtual until they’re seen) and that might not be possible over hundreds of thousands of ly of distance. 3) Good point! A theoretical treatment would require careful analysis of quantum mechanical effects and there are a lot of subtleties when dealing with large ensembles of particles which may not interact with their environment. (In fact writing down an accurate wavefunction for a single electron propagating through empty space is already a very difficult business!) Food for thought…

    Conan, I’m not sure how parabolas play a role in this (then again it is late at night where I am!) Could you expand a bit more about why we need parabolic measurements?

    Christer, could you provide some more details? There are quite a few proposed variations of relativity, but without details it’s difficult to determine what would constitute a viable alternative to special relativity.

    To everyone else, thank you so much for your comments and your contribution to the discussion! And thanks for reading. There is plenty of reading material out there for those who are interested in finding out more, and a lot of it has already been presented on this blog. If you’re looking for somewhere to start, then Wikipedia is usually a good guide. Read their articles to get a feel for the subject, and (more importantly) follow the links to their sources! Wikipedia is only as good as the information it cites, so follow the trail back to the sources, to the experimental websites and to the articles. If you have questions, just ask!

  • jal

    “2. It is not a clear photon vs neutrino race. Physicist Ellis and others here noted that the time differential for the supernova SN 1987A was a few hours, but at the distance of ~ 200 000 ly it should have been years if the suggested hypothesis would be correct.”

    If I make the assumption that light goes slower when it goes through a different medium and that on a distance of ~ 200 000 ly, surely, light would have the same difficulty of reaching us at a constant speed of c, because there would be atoms, clouds of atoms etc. that would have slowed light momentarily. Therefore, the time difference of a few hours would indicate that the neutrinos are not traveling at the speed of light but rather slower.

    Therefore, both experiments need further explanation.

  • Asko Perho

    Having 5 questions and assumptions about radiation speed, masses, energies… To whom could I send my little A4 letter?

  • I missed the presentation, and didn’t see it in the paper …
    How well do they know the stability of the beam dispersion (not pointing)? A systematic variation of 0.6% from beginning to end of the proton spill would give an effect of a similar size, due to variation of the neutrino beam intensity at the OPERA detector.

    An interesting puzzle.

  • inga karliner

    re “Rich, my own suspicion is that the surveyors may be getting things perfect, but forgetting to include the effects of the Earth’s motion through space. As the Earth passes through space the points of CERN and Gran Sasso move about by about 11m, and since the neutrinos are traveling close to light speed for long distances we need to take this motion into account. While the geodesy experts may have the best knowledge in the world about separation of points on the Earth, including differences due to relativistic motion on Earth might not be something they usually include in their calculations!”
    – John Ellis asked if the Earth’s rotation was taken into account. John also pointed out that the effective Earth radius is different at CERN and Gran Sasso.

  • Hi Inga. I was there at the talk and heard John Ellis’s question: he’s referring to the rotation of the Earth about its own axis, which accounts for a shift of 1m, whereas I’m referring to the motion of the Earth along its orbit, which accounts for a shift of 10m. It’s important not to get these two effects confused. (When the two effects are combined they contribute a shift of -11 to +11m, depending on how they align with respect to each other.)

  • Hi Jon, I think they got the shape of the proton spills measured quite precisely. The width of the proton spill is 10μs, with an uncertainty of about 10ns. That’s an uncertainty of 0.1%. The properties of the proton spills are monitored using high precision Beam Current Transformers (referred to as BCT in the talk.) The webcast has been archived on the CERN Document Server. The timing of the protons is discussed around part 25. Does this answer your question?

  • Jeffrey Matheson

    I am not a scientist, but I have a couple of questions. One, was the law of diminishing returns discussed in conjunction with your findings? Two, could the neutrinos have gone through parts of the earth that have properties similar to the oil or water that is supposed to make them go faster? I was just wondering.

  • Of course, they monitor the spill. However, (I think) this monitor involves measuring the fluxes of particles from the target, which is insensitive to small changes in the beam dispersion. Perhaps they are making more detailed measurements that monitor the dispersion more directly. That is what I am wondering.

  • Hi Jeffrey, good to hear from you. How is the law of diminishing returns relevant to this result? The rate of production of neutrinos and the rate of interaction of neutrinos doesn’t vary with time. (Nature doesn’t “care” how long the experiment has been running for.) In fact the performance of an experiment like this usually improves with time as the physicists understand the experiment better and the engineers improve the hardware.

    As far as I know neutrinos don’t travel faster in a medium. Are you thinking of some other particles?

  • Hi Jon. I’m afraid I don’t have any more details than what I’ve already said and I’m not expert enough to comment on the details of proton spill dispersion. There may be more details in the paper. It’s an interesting point, so if you find the answer please let us know about it!

  • Pingback: 超光速中微子 (Superluminal neutrinos) « 瑞狮

  • Kit Adams

    Regarding Jeffery’s 2nd point, I suspect there is no way to measure the speed of light through dense matter without the electromagnetic interaction dominating overwhelmingly, except using neutrinos. That same dense mass-energy (dominated by the kinetic energy of quarks and gluons) is modifying space-time, effectively compressing it to create a gravitational field, via some mechanism which is currently not understood at the microscopic level. Maybe that unknown mechanism affects the speed of light relative to that in vacuum.

  • Pingback: Neutrins superlumínics: retorn al futur? (Superluminal neutrinos: back to the future?) | L'ase quàntic

  • Bert Morrien

    What about the Heisenberg uncertainty principle: does its effect change the event time distribution in the same way at both ends of the baseline, so that it averages out and does not add to the time uncertainty?

  • Hi Bert, the uncertainty principle would affect the event time distribution by spreading out the timing at both ends, and the effect at each end would be independent. If the uncertainty in timing at one end is \(\delta t\) then the uncertainties would add in quadrature to give an overall average uncertainty of \(\sqrt{2}\delta t\). In order to get a timing uncertainty of 60ns, we’d need to have an energy uncertainty of at most \(10\,\mathrm{neV}\), much smaller than the experimental sensitivity at either end of the baseline. It looks like the uncertainty principle is not the cause of the 60ns time difference.

  • Pingback: Faster-Than-Light Neutrinos? | Cosmic Variance | Theoretical Physics

  • Bert Morrien

    I can think of two mechanisms that shorten the tail of the neutrino pulse, resulting in a backward shift of the center of this pulse, thereby causing a measurement of the neutrino speed that is too high.

    1. Quote from report 1109.4897
    “The point where the parent meson produces a neutrino in the decay tunnel is unknown”
    A meson may not even decay in the decay tunnel; slamming into the rock, it may interact with the rock in such a way that no detectable neutrino is produced.
    2. It is expected that mesons in the tunnel decay in mode M1, which produces a detectable neutrino. There might be a lower probability that the meson decays in a mode M2 that does not produce detectable neutrinos. The longer the meson does not decay, the greater the chance of decaying in mode M2, i.e. at the end of the tunnel.

    Question: Are these two mechanisms real and if so, taken into account?

  • Bert Morrien

    Correction on September 29, 2011 at 4:57 pm entry:
    I must apologize, because the two mechanisms have no bearing on the neutrino pulse other than that the amplitude may become a bit smaller.

  • Jordi

    This is a very important scientific finding.
    It is likely another proof that Einstein was right.
    I would like to ask the CERN-OPERA scientists the following questions.
    They have found a small excess of the speed of neutrinos over light at this shallow Earth-crust depth. Do you think you may find higher speeds if you shoot neutrinos right through the Earth’s core? Or, even better, if in the future it is possible to have the beam go through the Sun?
    Did you detect any change in the neutrino properties after crossing the Earth’s crust?



  • Hi Bert. There’s no need to apologize! Science is about never having to say “Sorry”. (In fact, there’s a really cool quote along the lines of “Science: If you don’t make mistakes, you’re not doing it right. If you don’t acknowledge mistakes, you’re definitely not doing it right. If you don’t correct mistakes you’re not doing it at all!”)

    Anyway, any effect associated with the point of production of the neutrinos would tend to increase the amount of time between production and detection, as the other particle (the meson or the muon) would travel more slowly than the neutrino. In effect, the physicists measured the average time it takes particles to travel from “somewhere in the decay tube” to Gran Sasso. The point of production of the neutrinos is taken into account, and it can’t lead to a noticeable decrease in the time it takes the neutrinos to reach Gran Sasso. Any effect that happens in or near the decay tube (e.g. neutrinos coming from slow mesons, neutrinos being produced when muons hit the rocks) can only lead to delays in neutrino arrival.

    Thanks for the comments!

  • Hi Jordi, some great questions there! I just want to add a few points (although someone from OPERA would be able to provide more information.)

    Ideally we would like to have very long baseline neutrino experiments to measure these effects, but in reality this is not the best idea, for two reasons. First of all, mixing properties are very sensitive to the length of the path the neutrinos take, so we tend to choose lengths that maximize the effect we want to measure. Secondly, it’s impossible to focus a beam of neutrinos, so once they get produced their beam keeps getting wider and wider. Going from CERN to Gran Sasso the beam becomes 2km wide. If we made the neutrinos go through the Earth the beam would become about 110km wide (about 68 miles), which means that the experiment would need to be about 2500 times more massive to detect the same number of neutrinos! The next generation of neutrino experiments may try this approach, but right now I don’t think there are any mature plans for such an experiment. Given how rarely neutrinos interact with matter, I don’t think there’s any reason to expect that traveling through different types of matter will have any effect. After all, it’s all just protons, neutrons and electrons in different arrangements.
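    Just to make that scaling explicit: a detector has to grow with the beam’s cross-sectional area, i.e. with the square of the beam width. A quick back-of-the-envelope check, using the rough widths quoted above:

```python
# Back-of-the-envelope check of the beam-spread argument (rough numbers only).
# To catch the same number of neutrinos from a wider beam, the detector must
# grow with the beam's cross-sectional area, i.e. with the width squared.

width_cngs_km = 2.0      # quoted beam width at Gran Sasso
width_earth_km = 110.0   # quoted beam width for a path through the Earth

mass_factor = (width_earth_km / width_cngs_km) ** 2
print(f"detector mass factor: ~{mass_factor:.0f}x")  # a few thousand times more massive
```

    The same ballpark as the rough factor quoted above, and a good illustration of why nobody rushes to build through-the-Earth baselines.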

    The detection of the neutrinos is a vital part of the work at Gran Sasso, and that requires a careful measurement of the neutrino properties (in particular, its flavor). Understanding the properties of the interactions at Gran Sasso is also important for removing any backgrounds. You can rest assured that so far the speed of the neutrinos is the only property that has shown serious disagreement with previous results, otherwise we would have heard about the other results already. (In fact, the speed of the neutrinos is a “bonus” measurement. The main purpose of OPERA is to measure how often muon neutrinos turn into tau neutrinos.) Let’s hope this stimulates further work and a closer look at the interactions! If the neutrinos violate Lorentz invariance then we may expect to see, for example, a decrease in helicity suppression.

  • Bert Morrien

    Adrian, you are very friendly. That encourages me to make the following remarks about the data selection window. Quoting the paper:
    CNGS events are preselected by requiring that they fall within a window of ± 20 μs with respect
    to the SPS kicker magnet trigger-signal, delayed by the neutrino time of flight assuming the speed of light and corrected for the various delays of the timing systems at CERN and OPERA.
    The relative fraction of cosmic-ray events accidentally falling in this window is 1e-4, and it is therefore negligible [1, 28].
    End Quote.

    I assume that the data selection window is centered around the expected center of the event time distribution, so that sporadic cosmic-ray (CR) events do not cause a shift in the result.
    The 1e-4 relative CR fraction is unclear. Relative to what?
    If relative to the total number of detected events, then there were only about 16 CR events, no problem.
    However, if there is a 1e-4 chance of detecting a CR event every time the window is open, then it may be that a deviation of the position of the window will have a notable effect.
    Am I right that only 16 cosmic-ray events were detected in those three years?

  • Hi Bert. I looked up the information in reference 28 (Link) and it looks like the 1e-4 corresponds to the fraction of events in the final dataset which are used in the analysis. In total there were about 16,000 events, so I think there were only about 3 cosmic ray events in the final dataset. Not many at all!

    (The paragraph from reference 28 states:

    The event analysis was performed in two ways. In the first one the event timing information
    was treated as a basic selection tool, since the time window of beam events is well sized
    in a 10.5μs interval, while the uniform cosmic-ray background corresponded to 10−4 of the
    collected statistics (figure 4).

    Figure 4 then shows the number of events collected as a function of time before selections are applied. The same plot is shown in slide 15 of the talk. The cosmic ray background is indeed small!)

  • Bert Morrien

    Aidan, sorry for misspelling your name.
    It’s clear that my 16 CR events should read as 1.6, but I don’t know how you came up with 3.
    I’ve scanned the reference 28 document for “Cosmic” and besides becoming mighty impressed by the scale of the experiment, I still feel uneasy for 2 reasons.
    1. If you ignore the prominent peaks in fig. 4 (left graph), then there is a steady level of events of about 1 on average. These are probably CR events and most of them can be ignored because they fall outside the window. However, within the window this average event level of about 1 is still present within the peaks of about 92 and 72 events respectively. (It strikes me that 1 is close to 1e-6 times 16000, but I assume this is a coincidence.)
    2. If CR events are said to be negligible, what is the relevance of Figure 6. “Angular distribution of beam-induced and cosmic-muon events taken with the electronic detectors”?

  • Hi Bert, no worries about the name- I’ve been called worse! (I made a mistake in getting 3, I multiplied by 1.6 twice instead of once!)

    From what I understand, selections are applied to the neutrino candidates, and after these selections are applied cosmic rays account for 1 part in 1e4 of events. The physicists can verify this by selecting events which fall outside the window in figure 4 (which are almost all cosmic rays) and determining the efficiency of selecting cosmic rays. (In fact, it doesn’t matter what the source of these candidates is, as long as they follow the same distribution. For all we know there could be a nuclear reactor nearby giving off a constant stream of neutrinos, and this would all cancel out once the physicists measure the backgrounds.)

    To answer your second question, figure 6 is presented before the analysis is even performed. It looks like it is presented to show how well (or not) the physicists understand the composition of events in the sample. From the plot and the inset, it looks like they have accounted for the signal and the cosmic ray background. Although they do not say so explicitly, the physicists place selections on the direction of the incoming neutrino, and in order to have confidence in this selection we must be confident that we understand the angular distribution of the incoming neutrinos. (Placing a requirement on the energy of the incoming particle, as well as the geometry and timing of the detector, implicitly places a selection on the incoming angle.) In similar experiments observing solar neutrinos, similar angular requirements were made. In particular, the momenta of candidates were required to make a small angle with the line of sight to the sun. I know I wouldn’t be confident in the result if this plot had not been shown!

  • Bert Morrien

    Hi Aidan,
    Do you mean that background events are only for a small part caused by cosmic-rays?
    Then I should have used the term background instead of cosmic-rays, which makes my concern bigger, because the background seems much higher than 1e-4.
    Fig. 4 shows two peaks with in total 85+73=158 events of which about one should be a background event, 1/158=6e-3, much higher than 1e-4.
    I presume that for the original goal of OPERA the exact timing of the window is not very important; in the report it is only mentioned superficially.
    Therefore, assuming that the window has no preference for early background events may not be justified, given the almost unbelievable test result.
    I understand that the raw test data is recorded, so it seems feasible to “repeat” this test with a different window position, one
    that in the worst case has no preference for early events.
    It should even be possible to use no window, but to correlate the data with the proton distribution waveforms so that the data is not only selected but also weighted.
    At a certain delay this gives a maximum output; that delay is also equal to the flight time.
    But I may be talking nonsense.

  • Hi Bert. No, I just mean that background is background, and it may have many sources (cosmic rays, neutrinos from the sun, from the atmosphere, radioactive rocks, nuclear reactors, a crate of bananas in a nearby market etc. I recently read about a dark matter search experiment where some of the equipment was made from a metal which had a radioactive isotope!) Most of the background probably is cosmic rays, but there may be other sources.

    The proportion of background events in figure 4 is not actually relevant to the final analysis. When physicists perform an analysis like this we usually make several different selections and we show plots at various stages in the analysis. There’s no reason to believe that the events shown in figure 4 are the events that pass all of the selection criteria, and it must be the case that many of these events don’t make it into the final sample. (As you say, if they did then we would have a lot more background!)

    It’s common practice to define a “signal region” and a “sideband region”. A “sideband region” is usually defined by reversing one or more of the selection criteria, so that it excludes the signal region, but is otherwise very similar to it. In order to study a background it’s important that we store information about background events, even if we don’t include them in our final result. By allowing some background events to survive some (but not all) of our selection criteria we get an independent sample of background events which we can use to estimate the background events that do pass all our selection criteria.

    In this case, the physicists could use the timing information to determine the shape of the random background, from cosmic rays and any other sources, by calling the regions outside the timing window the sideband. This gives information about how often these random events occur. The physicists can then apply the other selection criteria (i.e. only select events where they see a muon or tau candidate, make sure the tracks are of a certain quality/energy etc.) and calculate how many random background events survive these selection criteria. They can then use this information to estimate how many random background events make it through the final selection in the signal region.
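    As a toy illustration of that sideband method (all numbers here are invented for illustration; they are not OPERA’s):

```python
# Toy sideband estimate: use the event rate outside the beam-timing window to
# predict how many random (cosmic-ray etc.) events leak into the window itself.

sideband_events = 400         # invented: events counted outside the timing window
sideband_livetime_s = 2000.0  # invented: total live time of the sideband
window_livetime_s = 10.0      # invented: total live time inside the signal window

# Random backgrounds are flat in time, so the rate transfers directly.
rate_hz = sideband_events / sideband_livetime_s
expected_background = rate_hz * window_livetime_s
print(f"expected random background in window: {expected_background:.1f} events")
```

    The real analysis is of course more involved (selection efficiencies, non-flat components), but this rate-transfer step is the core of any sideband estimate.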

    In principle the physicists at Gran Sasso could have recorded all events, without a window, but in practice there are usually problems with this approach. There may be issues associated with disk space or bandwidth (I find this unlikely, given how rare candidates are) or with timing calibration (this may be more likely, as accurate timing information of events is expensive in terms of maintenance and labor.) And we must remember that the main purpose of OPERA is to measure the ratio of tau events to mu events, and this isn’t helped much by adding more to the background sample. The timing window is probably motivated by practical concerns (there may be people on shift who have to monitor the data taking, so only take data when there are incoming neutrinos. The hardware and software may have small buffers so they can’t take data all the time.)

    With regards to weighted data… that would make a lot of people more skeptical. Weighting data usually leads to non trivial bias. (A simple and obviously biased example would be to weight events so that those that arrive before they “should” get a higher weighting. Then no matter how good the measurements are, these outliers will always dominate the measurement, increasing the average speed significantly.) Scientists are already (rightly) skeptical of this result, and adding weighting isn’t going to help OPERA convince people of their result.

    A lot of what I’ve said here is a collection of educated guesses based on experience. I’m not part of the OPERA experiment, so I don’t have access to the details of the experiment (beyond what we can read in the papers and the talk.) If you want definitive answers to any of these questions then you really need to talk to someone from the OPERA experiment.

    Anyway, don’t worry too much about figure 4. It just shows what the distribution of some (but not all) of the candidates looked like after some (but not all) of the selection criteria are applied. It’s by no means a complete picture of what is happening with respect to timing and event selection.

  • Bert Morrien

    Hi Aidan,
    OK, they dropped not only events outside the window but also those with unexpected properties like angular momentum, etc.
    With weighting I meant the following.
    Assuming that the probability of detecting a neutrino is proportional to the neutron density and in turn to the proton density waveform
    and further assuming that this waveform is delayed with the exact flight time, each time an event is detected the amplitude of the waveform is sampled. The samples are summed, the sum is set to 0 before the first waveform starts and at the end of the last waveform the sum will reach a value S.
    I assume (but cannot prove) that the sum must be lower than S for all other delays.
    Since the exact flight time is not known, one can repeat this procedure with various delays and select the delay with the highest sum.
    This should be perfectly possible because I assume all 16000 event times were recorded as well as the associated proton density waveforms.
    It is my feeling that the high statistics described in the paper also have elements of assumption in them.
    Well, with 20 years of electronic design, 20 years of software engineering and 55 years of Scientific American readership, I cannot think of other possible doubts in the experiment, so I’ll leave it at that.
    Aidan, thank you for your responses; e-mail me if you want.

  • Jordi

    Thanks Aidan for your accurate answer.
    I hope it will soon be possible to focus neutrinos.
    My concern was that, perhaps, the increased speed was due to the gravitational field of the Earth. The denser field closer to the core of the Earth or the Sun could perhaps be responsible for the acceleration, and perhaps for a deviation. As neutrinos interact very little with matter, perhaps as the field becomes denser the interaction is stronger, and perhaps this affects speed and direction.
    My question about the change in properties was for that reason: if neutrinos have variations in speed, maybe something else could have changed in them.
    If this is true, neutrino beams could be used to track gravitational fields in space.
    Never mind, I think it’s a very important result.
    Thanks again for answering


  • Bert Morrien

    Still a remark about the data analysis in report 1109.4897
    For each event the corresponding proton extraction waveforms were taken, summed up and normalised to build two PDFs, one for the first and one for the second SPS extraction, see fig. 9 or fig. 11, red lines.
    The events were used to construct an event time distribution (ETD), see fig. 11, black dots, apparently the number of events in 150 ns intervals, starting at a fixed time tA after the kicker magnet signal.
    My point is that the PDFs are different from the individual proton extraction waveform (PEW) associated with a particular event. In my opinion, this makes using the PDF for maximum likelihood analysis questionable.
    Via the same line of reasoning, grouping the events should also not be done if the PEW amplitude may vary too much in the grouping time interval.
    Another way to construct the PDF is to sample the PEW for each event and construct the PDF by positioning this sample at a fixed time tB after the kicker magnet signal.
    The problem is that the PDF is correct only if tA is exactly equal to tB + flighttime.
    If the flighttime obtained as described in the report is used as a start, maybe one can obtain a better result from the maximum likelihood analysis via a number of iterations.
    I presume all relevant raw experimental data is still available, so a new analysis should be entirely feasible.

  • Hey Bert! I’m not sure I follow the weighting scheme exactly, but it seems to me that it would lead to bias. (If I understand you correctly, the window is varied until the number of candidates at Gran Sasso that match the waveform at CERN is maximized.) If that’s the case then this value can be maximized by applying no window at all, simply recording all candidates at Gran Sasso. However, if we require a window to be imposed, and this window is found by maximizing the number of events (given some constraint, for example that the window cannot exceed a certain period of time), then the choice of window will be influenced more by early and late arrivals than by “on time” arrivals. This becomes dangerous if some of these events are background events.

    Moving on to the PDFs, we come face to face with quite a few problems associated with any neutrino analysis, if you’ll allow a little diversion (they really are the most awkward particles to work with!) In general the experiment will not know where a particular neutrino was produced. In principle, in an ideal experiment, it would be possible to point the neutrino back using information about its momentum, and match that up to a muon that points back to the same point in space. Unfortunately that requires extremely precise measurements of the neutrino momentum. (Let’s say we want 1mm precision at CERN, which is already very generous. If the incoming momentum of the neutrino is 20GeV and the distance traveled is 730km, then the precision needed is about 0.2eV, which is far below the resolution of most large modern detectors.) The other problem in a “perfect” experiment is just one of statistics. Since the neutrino interacts so rarely, OPERA would need to keep information about millions of millions of muons and mesons in order to be sure that they could match up the neutrino and muon or neutrino and meson. That’s just not practical. In any case, I don’t think OPERA kept information about the paths of the individual mesons or muons used to produce the neutrinos.

    Given that OPERA don’t know where a given neutrino was produced, they must resort to statistical methods. Here’s how I think they would produce their PDFs. They can measure the waveforms of the proton spills very precisely. They then use Monte Carlo simulation to estimate the time of arrival at Gran Sasso, the probabilities of interactions and the results of the interactions. From this they can produce a PDF for neutrino interaction on a per-spill basis. They can then “integrate” up both PDFs by simply adding them. This would take the variation on a spill-by-spill basis into account and give a meaningful PDF that they can work with at Gran Sasso. (From experience on BaBar and ATLAS I can say that this is what usually happens as we look at data taken over large periods of time. Parts of the detector break or get old, and we have to update our PDFs to take it into account, otherwise we get inconsistent results.) The likelihood is usually obtained through an iterative process, and these kinds of fits often take several days of CPU power to complete. The likelihood analysis is a bit contentious in this analysis. I heard some colleagues commenting that the fit does look better with the 60ns lead, but does it really look 6σ better? I’m not sure if that’s a fair comment, to be honest, as they were referring to confidence of the fit and not the final result.
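    The summing-and-normalising step can be sketched in a few lines (a toy with random stand-in waveforms; the spill count and binning are invented):

```python
import numpy as np

# Toy PDF construction: sum the per-event proton waveforms, then normalise the
# sum so it can be used as a probability density w(t). The waveforms here are
# random stand-ins, not real spill shapes.
rng = np.random.default_rng(2)

n_spills, n_bins = 100, 1050
waveforms = rng.random((n_spills, n_bins))

w = waveforms.sum(axis=0)
w = w / w.sum()  # normalise: bin contents now sum to 1

print(f"PDF bins: {len(w)}, total probability: {w.sum():.3f}")
```

    The real PDFs would of course come from measured waveforms plus the Monte Carlo corrections described above, but the summing itself is this simple.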

    Just another note: I’ve noticed that you have referred to neutrons quite a lot in your comments, and you seem to have had experience with them before. The most common source of neutrinos on Earth is probably the neutron decay process n→p e ν in radioactive isotopes, but that’s not how things work for OPERA. The source of neutrinos at CERN is mesons (quark-antiquark pairs.) Charged pions and kaons decay almost exclusively to a muon plus a muon neutrino, and that’s where we get the neutrinos from. At CERN, the protons are collided with a fixed target, and then mass spectroscopy is used to separate out the charged pions and kaons. These are focused in a beam and then decay (in the decay tube.) Because of the kinematics of the decay the neutrino comes out highly collimated with respect to the initial beam of pions and kaons, which leads to small transverse spreading of the beam of neutrinos. (Even so, it’s still 2km wide by the time it gets to Gran Sasso.) Incidentally, the kinematics that lead to a very collimated beam also lead to a very strong preference for μν decays at the expense of eν decays.

    Thanks again for your comments!

  • Hi Jordi! I’d been considering the effect of the Earth’s gravitational field as well (any excuse to read my general relativity textbooks…) but came to the conclusion that any effect would be tiny. In principle it’s possible to make a geodesic between two points with a shorter path by passing through a gravitational well. What matters then is the distance from the center of the Earth to the object (in Schwarzschild coordinates) compared to the Schwarzschild radius of the Earth. The Schwarzschild radius of the Earth is about 9mm, so the difference would be about 1 part in 1e9, which is tiny and would account for ps differences, a thousand times smaller than what OPERA see.

    I was thinking about neutrino observatories last night and it occurred to me that in principle we could use neutrinos to “see” the big bang. (We use the cosmic microwave background to see the echo of the big bang.) If that were possible we could get all sorts of information about the very early universe. Unfortunately it’s so impractical that I think it’s science fiction. Still, with enough of them we could get information about gravitational lensing and map out gravitational fields, which would be cool!

  • Bert Morrien

    Hi Aidan,
    You didn’t quite grasp my alternative data analysis scheme,
    so I’ll try to make myself more clear.
    I am quite confident that this scheme will work.

    Alternative data analysis
    The idea is to use all available information to enable discriminating the correct flighttime from incorrect ones.
    1. The PEWs are delayed with the expected flighttime (dPEW), which is first assumed to be correct.
    2. An accumulator is available that is cleared before the first neutrino is detected.
    3. At an event, i.e. the point in time a neutrino is detected, the amplitude of the dPEW is added to the accumulator.
    Note that between the dPEWs the proton density is assumed zero, so that events in these
    regions don’t count.
    4. After the last neutrino is detected, the accumulator will have an endvalue S.
    5. For a correct flighttime the number of expected events is higher during high values of the dPEW, many additions of high values to the accumulator take place; during low values of the dPEW fewer additions of low values will occur.
    6. For incorrect flighttimes, the many events that caused the addition of a high value at the correct flighttime will now add a lesser total value; also, fewer events with a higher value will be added, eventually resulting in an endvalue that is less than S.
    7. The correct flighttime can be established via an iteration procedure with various flighttimes; the one resulting in the highest accumulator endvalue is selected.

    Concluding, some remarks on the data analysis in the paper
    1. At the point in time a neutrino is detected, the PDF has a certain value.
    The value of interest, namely the current amplitude of the proton extraction
    waveform (PEW) is lost due to the summing of the PEWs.
    2. The events are grouped in 150 ns bins as is suggested by fig. 11.
    This is apparently done to enable applying the maximum likelihood procedure.
    By grouping, part of the precious time information is lost.
    3. The paper assumes an (unspecified) relation between the number of events in a bin and the summed amplitudes of the corresponding PEWs associated with these events, so that applying the maximum likelihood analysis will reveal the correct flighttime.
    Fig 4 shows that the PEW has quite an irregular form; moreover, the kicker magnet signal is not synchronised with the PS turn extraction mechanism, otherwise the shape of the PDFs would resemble the PEW.
    This means that for individual events in one bin the corresponding PEW amplitudes are different, using the same PDF value for all events also means that available information is lost.
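    To make the scheme concrete, here is a toy version in Python. Everything is synthetic: the waveform shape, the event count and the 300 ns flighttime are all invented, just to show that the accumulator picks out the right delay.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic PEW: 10.5 us long, 1 ns samples, with fine structure so that the
# correlation peak is unambiguous.
t = np.arange(10500)
pew = 1.0 + 0.5 * np.sin(2 * np.pi * t / 5.0) + 0.3 * np.sin(2 * np.pi * t / 7.3)

# Draw event times from the waveform treated as a density, then shift them all
# by an invented "true" flighttime of 300 ns.
true_flighttime = 300
events = rng.choice(t, size=5000, p=pew / pew.sum()) + true_flighttime

def accumulator(delay):
    """Endvalue: sum of delayed-PEW amplitudes sampled at the event times."""
    idx = events - delay
    ok = (idx >= 0) & (idx < len(t))
    return pew[idx[ok]].sum()

# Iterate over trial flighttimes and keep the one with the highest endvalue.
trials = np.arange(200, 401)
best = trials[np.argmax([accumulator(d) for d in trials])]
print("recovered flighttime:", best, "ns")
```

    In other words, the scheme is a cross-correlation between the event times and the waveform, maximised over the trial delay.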

  • Bert Morrien

    Hi Aidan,
    I forwarded my analysis idea to the project team and they will consider it.
    In the mean time I will take this a step further by preparing 3 computer programs.
    1. Synthesis of 16000 proton extraction waveforms (PEW), using fig. 4 as a template.
    Amplitude resolution seems not too important, 8 bits should be enough.
    Because the analysis relies strongly on the good correlation of PEW and event distribution, it is important that the 200 MHz SPS radiofrequency is resolved, so a time resolution of 1 ns is chosen. One PEW needs 10 kB, totalling 160 MB for 16000 PEWs.
    2. Generation of the events. It is clear that one event should be derived from each PEW by taking one sample of it, while respecting the mentioned correlation. Currently I’ve got no good idea how to do that.
    3. The analysis program itself. Seems not to be complicated.
    Once completed, the result should emerge quickly.

  • Jordi

    Hi Aidan,
    The result you found of 1 part in 1e9 was in the calculation of the deviation of the neutrino trajectory along the shallow path inside the Earth’s gravitational field, is it?
    Does this account for the speed? I imagine no mechanism is known that can explain the difference in the speed of a neutrino through matter when compared to the vacuum.
    The speed of light through matter is slower than in vacuum because light interacts with matter, and at each absorption/emission step the speed must be reduced, even if the mechanism is fast, around the femtosecond scale.
    That’s why I thought the only explanation for the increased speed of the neutrino through matter would be that something that comes with matter is speeding up the neutrino.
    The only model I can compare, at a reasonable distance, is the speed of sound. The denser the matter the sound travels through, the higher the speed when compared with air (vacuum does not apply here, as the mechanism is different, but a similarity may give us a clue to continue this way of reasoning).
    Hence whether it’s the matter by itself or the gravitational field of the Earth can only be told if the next neutrino beam can ever be aimed through a deeper level inside the Earth, ultimately the core. A kind of seismic wave experiment, like an earthquake location network, but with neutrinos.




  • Bert Morrien

    Event generation.
    1. Group PEWs so that group[D] contains X times D PEWs; 0 < D < maximum proton density of the PEW
    2. For all PEWs in group[D] select at random a sample having density D and add the corresponding time to the group of events. This guarantees the required correlation between PEW and event distribution.

  • Hi Bert. First of all, apologies for a slower reply than usual! I’ve had a very busy few days and reading your analysis and the analysis in the paper took quite a lot of time to digest. Thanks for the extended explanation, and good for you for contacting OPERA directly!

    I think that your proposal and the analysis used by OPERA are actually very similar. If we take a close look at the paper under “Data analysis” we see the following:

    For each neutrino interaction measured in the OPERA detector the analysis procedure used the corresponding proton extraction waveform. These were summed up and properly normalised in order to build a PDF w(t).

    This suggests to me that the analysis does not lose any timing information in the summing process. The timing information on an event by event basis is considered before the PDFs are summed.

    The timing information is not lost due to a choice of binning either. (If this was the case, then the uncertainty on a per event basis would be about 75ns, which seems too coarse for the precision they give. It would also be impossible to bin in time intervals of 50ns in figure 12.) In order to make a plot the collaboration had to choose a binning, but that doesn’t mean that the data were binned for the maximum likelihood analysis. Generally, unless otherwise specified, a maximum likelihood analysis is unbinned.

    There are still a few points I’m not clear about in your suggestion, and it’s mainly just ambiguity of nomenclature. (What is standard language in your field of research is probably jargon to me and vice versa!) When you say “endvalue” I interpret that as being the upper range of a time interval, but it could also mean the final spectrum in the accumulator after the final neutrino is detected. If I understand the sentence “For a correct flighttime the number of expected events is higher during high values of the dPEW, many additions of high values to the accumulator take place; during low values of the dPEW fewer additions of low values will occur.” I can just replace the words “high values” with “peaks” and “low values” with “troughs” (or even “regions which are not peaks”), is that right? I want to make sure that “high values” corresponds to the number of counts or the height of the PEW at a given point, and not the value along the x-axis of the PEW.

    For the model I would propose the following Monte Carlo method, if you can afford the computing time. (If I find time I’ll run something similar myself to see how small variations lead to a different result.) The first extraction PEW gives us the time evolution of the proton spill, essentially a PDF of the proton density. For each proton there will be associated multiplicities to produce a π meson or a K meson. (These can sum up to more than 1, so they’re not actually probabilities.) So if there are on average 2 π mesons produced per proton then the multiplicative factor would be 2. The π mesons and K mesons have characteristic lifetimes of 2.60e-8s and 1.24e-8s, after which they will decay to other particles. The probability of decay falls exponentially with a boosted characteristic lifetime γτ. That means the flight length will have an exponential decay with mean path length βγτ. The distribution of β is not well known, but it will rise sharply as β→1. The variation of such a function would be a key systematic uncertainty for such a model. From this information we can get the PDF for the point of production of a neutrino for a given proton spill.

    π mesons decay to μν more than 99% of the time and K mesons decay to μν 64% of the time. The direction of the neutrino will vary slightly, according to the kinematics of the decay. In the rest frame of the mesons the decay is isotropic. The transformed angle goes as cosθ’=(cosθ+β)/(1+βcosθ), where θ’ is the angle as seen in the lab and θ is the angle as seen in the meson rest frame. When β→1 this looks like a sharply focused beam, so we can ignore small variations in the angle of the neutrinos.

    We then assign probabilities for the interaction of a neutrino with the detector at Gran Sasso, and for the interaction of a neutrino with the rock before the detector, as well as efficiencies of reconstruction. These are extremely difficult to predict, but they don’t change in time so we can use the OPERA dataset to get the product of the efficiencies and probabilities. We find 8525 external events and 7586 internal events, which gives relative products of probabilities and efficiencies of 0.53 and 0.47. We can then add an extra delay to external events, distributed according to a Gaussian with a mean and sigma which can be varied. We should then add timing resolution uncertainties (both at CERN and at Gran Sasso, with different characteristic means and sigmas.) Once we have all that we should be able to predict when we expect a neutrino to arrive and thereby get a PDF for the neutrinos, with associated resolution effects.

The remaining piece is then the maximum likelihood, which is straightforward. For each neutrino we know how long it took to arrive, and in the case of internal events, where it arrived. We form a PDF for the arrival time of the neutrino with respect to both proton extractions as a function of the change in time δt (the time difference with respect to c). Then we minimise the negative log likelihood over all events to find the optimal value of δt. It should look like figure 8, but with our own pet values of δt. Then this needs to be performed with all the variations discussed in order to obtain the systematic uncertainties.
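A toy version of that final fit, with a made-up Gaussian “spill” PDF standing in for the real summed waveform (the PDF, the event sample, and the scan grid are all placeholder assumptions):

```python
import math
import random

def neg_log_likelihood(arrival_times, pdf, dt):
    """Sum of -log pdf(t - dt) over all events; pdf is the normalised
    arrival-time PDF built from the proton extraction waveforms."""
    total = 0.0
    for t in arrival_times:
        p = pdf(t - dt)
        total += -math.log(max(p, 1e-300))  # guard against log(0)
    return total

def scan_dt(arrival_times, pdf, dt_grid):
    """Return the shift dt on the grid that minimises the NLL."""
    return min(dt_grid, key=lambda dt: neg_log_likelihood(arrival_times, pdf, dt))

# Toy check: a Gaussian "spill" PDF and events shifted by 60 (arbitrary units).
pdf = lambda t: math.exp(-0.5 * (t / 100.0) ** 2) / (100.0 * math.sqrt(2.0 * math.pi))
random.seed(1)
events = [random.gauss(60.0, 100.0) for _ in range(5000)]
best = scan_dt(events, pdf, list(range(0, 121, 5)))
```

The scan should land near the true shift of 60; repeating it with the systematic variations discussed above is what turns the single fit into an uncertainty estimate.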

    Good luck with the studies!

• Hi Jordi. Yes, I got the value of 1 part in 1e9 by a simple ratio of radii. The distance from the deepest point in the neutrinos’ path to the center of the Earth is about 6356km (take 11km from the radius of the Earth at the point between CERN and Gran Sasso). The Schwarzschild radius of the Earth is about 9mm. This gives a ratio of about 1e-9, so when we expand the term for gravitational time dilation we get about the same value for the change in the passage of time. It would be great to see the results of seismic activity from the other side of the world, if sound waves traveled very close to the center of the Earth.
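For anyone who wants to check the arithmetic, here is the back-of-the-envelope calculation (standard constants; the 6356 km figure is the rough path depth quoted above):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 299792458.0     # speed of light, m/s
M_EARTH = 5.972e24  # mass of the Earth, kg
R_PATH = 6.356e6    # m, rough distance from the deepest point of the
                    # path to the centre of the Earth

r_s = 2.0 * G * M_EARTH / C**2  # Schwarzschild radius of the Earth, about 9 mm
ratio = r_s / R_PATH            # leading term of the time-dilation expansion
```

The ratio comes out at roughly 1.4e-9, consistent with the “1 part in 1e9” estimate.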

    The analogy of different speeds in different media is better suited to light than to sound for a very simple reason. Neither neutrinos nor light can speed up in a medium because they already travel at (or very near to) the speed of light! If they’re going to change their speed, they’ll slow down. Sound has the luxury of speeding up, and as you say it does this in very dense media.

  • Bert Morrien

Hi Aidan, no need to apologise, I immensely enjoy our exchanges.
    It’s not the first time I can’t make myself clear.
    Another try:

    There is one important relation:
the neutrino density at the target correlates with the probability of neutrino detection.
This density is virtually equal to the proton density, and this was recorded as a set of waveforms, each a set of density samples, so that the sample time can be used as an index to select a specific density.
    The neutrino detection events were also recorded as a set of time-values.
    All times are assumed to have the same reference, e.g. the kicker magnet signal.
    If the correct TOF is known, the associated density can be selected from the
    recorded densities using the detection time minus the TOF as an index.
    My proposition:
For each detection the associated density is added to a running sum (not the whole waveform, only the associated density value); note that this is only valid if the TOF is known.
    Before the first detection occurs, this sum is set to zero.
    After the last detection the sum is assumed to be S.
For a correct TOF a majority of high densities and a minority of low ones will be selected, because of the mentioned correlation, and this results in S=S_correct.
For a wrong TOF a wrong density is selected; consequently roughly one half high and one half low densities are summed, and thus S must be less than S_correct.
    The correct TOF can be found via iteration.
    That’s it.
    It seems to me that this simple setup could be verified very quickly.
The difference between S_correct and other S values depends on the variation of the PEW, which is
considerable, and even more so because the kicker magnet signal is not synchronised with the bunches of protons in the PS ring. (If it were synchronised, the PDFs would also display a sawtooth pattern.)
The PEW also contains a strong 200 MHz component. Enough variation, I think.
Maybe S_correct is typically equal to the integral of the square of the PEW,
while other S values could be the integral of the square of the average value of the PEW, but I am not really sure.
In this case events are weighted with the associated density; maybe there is a density**x that works better. Food for mathematicians.
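Bert’s summing scheme is easy to prototype. Here is a toy version with a made-up waveform and event sample (purely illustrative, not the OPERA data; `score` is the sum S and `best_tof` the iteration over trial TOFs):

```python
import random

def score(tof, event_times, waveform, dt=1.0):
    """Sum the spill density at (detection time - trial TOF) over all events.
    waveform[i] is the density at time i*dt after extraction."""
    total = 0.0
    for t in event_times:
        i = int(round((t - tof) / dt))
        if 0 <= i < len(waveform):
            total += waveform[i]
    return total

def best_tof(event_times, waveform, tof_grid):
    """The trial TOF that maximises the summed density (Bert's S_correct)."""
    return max(tof_grid, key=lambda tof: score(tof, event_times, waveform))

# Toy data: a flat waveform with one high-density bump, and events generated
# at (production index + true TOF), so the correct TOF should score highest.
random.seed(2)
waveform = [1.0] * 200
for i in range(50, 60):
    waveform[i] = 10.0
TRUE_TOF = 40.0
indices = random.choices(range(200), weights=waveform, k=2000)
events = [i + TRUE_TOF for i in indices]
```

Running `best_tof(events, waveform, …)` over a grid of trial TOFs recovers the true value, because only the correct shift lines the events up with the high-density samples.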

  • Brian O’Brien

    A quote from Wikipedia in regards to SN 1987A

    Approximately three hours before the visible light from SN 1987A reached the Earth, a burst of neutrinos was observed at three separate neutrino observatories.

    Is this relevant?

  • Jordi

    Hi Aidan,
    thanks for the interesting discussion.
Sorry for the misunderstanding, I thought neutrinos showed higher speeds through the earth’s crust than in vacuum; that’s why the sound model seemed to fit better.
About the seismic waves, this was a reference. I thought that, perhaps in the future, neutrinos emitted from one place on earth could be detected by a network of neutrino detectors over the globe, which might detect changes in the neutrino properties as they travel through different layers of the earth and the gravitational field.
    Go on, you are doing a great job with this research.


  • Bert Morrien

You are forcing me to be very clear; in trying to do that I not only improve my own understanding, it also gives me more confidence in my own little analysis setup.

I scrutinised the “Data analysis”.
1. A specific event Es has some relation R with only a small part of one associated probability waveform, namely the part that coincides with the event time of Es minus the TOF.
2. Summing the probability waveforms associated with all other events introduces an extra relation with Es, namely one with a set of probability waveforms that have nothing to do with Es.
2a. This extra relation can be called contamination, and the relation in 1. is almost completely overwhelmed in this way.
2c. What is true for Es is also true for all other events.
2d. There is no evidence that this contamination is somehow canceled or that no timing bias is introduced.

    3. Now my analysis setup.
3a. Only the small part of 1. is considered, and this is true for all events.
3b. No contamination is introduced, hence also no timing bias.
3c. Noise in the data is suppressed by the summing operation.
3d. Possible timing bias in the raw data is not addressed. The only assumption I make is that this is accounted for in the available probability waveforms and the event data.

    About the model and the simulation
You went to great lengths explaining that there may be some bears on the road. Thanks for that.
In my view, those bears translate to extra noise and maybe even timing bias.
This is also addressed in the paper.
    My simulation may not be realistic, but that was not my intention.
    It is only meant for debugging the analysis program and seeing how it behaves and for tinkering with some parameters. You know: the proof of the pudding.
I could try nonlinear weighting, interpolating between density samples, testing the sensitivity to noise in timing and probability density, the sensitivity to uneven positioning of events, etc.
    You see, there is plenty to explore.

    As for computer time, I think it’s hardly a problem for my high end PC.
    It might be a problem for my wife and (grand)children, but that is an entire other story.

    Concluding, I must thank you for your patience with me.
    Albertus M. Morriën

  • Bert Morrien

    One last thing: With first even T mean the real first one of 3 years ago, last event is the most recent one. My analysis comprises thus all events.

  • Bert Morrien

    Sorry, that should read:
    One last thing: With first event I mean the real first one of 3 years ago, last event is the most recent one. My analysis comprises thus all events.

  • Bert Morrien

    Now I think it can be proved that the data analysis in the paper is wrong.

There is one important relation R:
The neutrino density at the target correlates with the probability
of neutrino detection (event).
However, a specific event E has relation R with only a small part of one
associated neutrino density probability waveform, namely the part that
coincides with the event time of E minus the TOF.
The remainder of the waveform is irrelevant, because you cannot extract
any useful information from it.
    Conclusion 1: Only one sample of one PEW is relevant for an event.

Summing the probability waveforms associated with all other events
introduces an extra relation with E, namely one with a set of probability
waveforms that have nothing to do with E.
    Conclusion 1 has the consequence that this is a contamination and the
    relation R is almost completely overwhelmed by it.
    What is true for E is also true for all other events.
    Conclusion 2: It is improbable that this contamination is canceled out
    in any way.

Conclusion 3: It cannot be proved that this contamination does not
introduce timing bias.

    So now I am pretty sure of
    Conclusion 4: The analysis in the report is wrong.

Conclusion 5: The 60 ns anomaly is not real. Trust Einstein!

  • Bert Morrien

    It’s a classic error.
    For each event one sample of the PEW is relevant.
    For all other parts of the PEW Richard Feynman’s quote applies:
    “We don’t have the experiments and thus we do not
    know which results to calculate”.

  • Bert Morrien

    Some notes about my data analysis setup

1. Because of the strong 200 MHz sine wave component in the PEW, as shown
in fig. 4 of the report, correlation probably fails completely if a wrong
TOF differs by only 2.5 ns from the correct one; this is favorable.
However, the same sine wave has about the same maximum value every 5 ns,
which may cause a problem in the sense that S_correct may become less
prominent, even to the point that it becomes ambiguous.
2. The highest peaks in the PEW waveform are adjacent to the lowest; this is
favorable, but this occurs only 5 to 6 times in a typical PEW.
In case finding S_correct is difficult, events only associated with the 200 MHz
sine wave could be left out of the analysis; if in that case a prominent S_correct
is found, the iteration could be repeated over a very narrow iteration range
with the complete set of events for improved precision.

Note that leaving out events effectively emulates a shorter proton burst, but
at the cost of a greater uncertainty.
    Nevertheless the outcome may have a higher quality.

3. Although noise is suppressed due to the summing, a less prominent S_correct
can be expected.
4. The mechanism has no preference for bias; it tries to align the neutrino
probability distribution with the event distribution, which is exactly
the goal of the data analysis.
    5. In the proposition above, each event adds a value that is equal to the
    associated probability P.
It is also possible to add a function of P; in the explanation f(P)=P, another
choice could be f(P)=P**2, etc.; further study is needed.

Provided that no irrelevant PEW data is used anymore, other methods might work better.

    I take some rest and then see what happens.

  • Bert Morrien

I’m a bit restless. Here is the proof of the wrong analysis again; I can’t say it shorter.

    OPERA Collaboration violated Feynman’s rule, neutrino velocity is not higher than c
    (Read first http://static.arxiv.org/pdf/1109.4897.pdf)
    Richard Feynman said: “We don’t have the experiments and thus we do not know which results to calculate”
    What is the experiment?
It comes down to obtaining a neutrino detection probability associated with a neutrino detection (event);
the neutrino starts its journey at time T1, location L, having a probability of being detected at
a distance D from L. This experiment is repeated many times and the data is stored and later analysed;
the analysis consists of calculating the velocity of the neutrino, which is basically D/(T2−T1), where T2 is the detection time.
For almost all of the obtained probabilities no event occurred; for these probabilities there was no experiment,
and they must be discarded.

    In reality, the probabilities were stored in PEW’s and the PEW’s without an associated event were discarded.
    So far, so good.
    Of the set of probabilities in a remaining PEW, only one has an associated event and the remainder must be discarded, but this was not done, making the data analysis invalid.
    Hence the supposed higher-than-c velocity of the neutrino was not demonstrated.

  • Hi Bert. Thanks for the comments! I’ve been a bit absent in the past few days (I had a friend visiting, so I had very little time to ponder this analysis and could not give it the attention it deserves.) After reading your comments I think I understand the situation much better now.

I suppose that neutrinos arrive at a time TOF+δt where the δt term takes into account all the noise and uncertainty. For a given event the best we can do is estimate the most probable time the neutrino was produced, using the PEW. This would be okay if the PEW were smooth (ideally if it were flat), but it’s not, or if δt=0, but it isn’t. The “best” TOF for a given neutrino could be shaped by the PEW distribution, and averaged over a large number of neutrinos this would lead to a biased result. In the simplest case, if the PEW were a step function then there would be a preferred TOF at the point of the step, which could lead to bias.

    Keep us posted, I’d love to hear what you find!

  • Hi Brian. Great question! The supernova would have certainly generated a lot of neutrinos and a lot of light.

    The light from the supernova gets slowed down as it leaves the supernova because it has to pass through a medium. (It passes through the gas that makes up the supernova.) The neutrinos couple so weakly to the gas that they hardly get slowed down at all. If we know the density of the gas and how much there is between us and where the light was produced then we can work out what kind of delay to expect for the light.

    A lot of the light will be produced after the neutrinos are released. This is because particles will bump into each other and absorb each others light before it has a chance to escape into space. This is analogous to heating up water in a pan. The water at the bottom of the pan gets heated first and it’s slightly hotter than the water at the top of the pan. The heat is transferred by water molecules bumping into each other and exchanging energy. We don’t feel the effect of the heat at the top of the pan until it has had time to make its way, via particle collisions, to the surface.

After the experts performed their calculations they expected to see neutrinos a few hours before seeing the light. (I’m no expert in astrophysics, so I’ll have to trust our colleagues to get their calculations right!) This doesn’t provide evidence for faster than light neutrinos, and in fact it strongly suggests that neutrinos travel at very close to the speed of light. If we apply the same change in speed for the neutrinos (1 part in 1e5) then we would expect the neutrinos to arrive a few years before the light, not a few hours.
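The “few years” figure follows from a one-line estimate (the 168,000 light-year figure is the commonly quoted approximate distance to SN 1987A):

```python
# Light from SN 1987A takes roughly 168,000 years to reach us.
DISTANCE_LY = 168_000.0
FRACTIONAL_SPEEDUP = 1e-5  # an OPERA-sized effect, (v - c) / c

# To first order, a particle faster than light by this fraction arrives
# earlier by (travel time) * (fractional speed-up).
early_years = DISTANCE_LY * FRACTIONAL_SPEEDUP
```

That comes out to well over a year of head start, not the few hours actually observed.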

    From what I understand, the neutrinos which were detected from the supernova were electron neutrinos, and OPERA looks at muon neutrinos. It could be the case that different flavors of neutrino travel at different speeds through space. (Or to be more exact, different mass eigenstates travel at different speeds, and the flavors map onto the mass eigenstates favoring one state over another.) If the different types of neutrino travel at different speeds, then by placing different neutrino observatories in the same region on the Earth, and patiently gathering data (waiting for another supernova!) we should see the difference in neutrino speed. As far as I know, nobody was looking at high intensity muon neutrino sources a few years ago, so nobody would have seen any faster than light neutrinos. That’s a shame, but maybe something we can fix in the next generation of neutrino experiments!

  • Bert Morrien

    Hi Aidan,
I also exchange thoughts with John Costella.
    Part of my last message to him:
    Quote ”
    Your remark that I would have to analyse their methodology explicitly to criticise it,
is right to the point, and I thank you for that.

I only said that I scrutinised the report and that they did not sufficiently remove invalid parts of the obtained PEW’s. I agree that’s quite meager.
Not being part of the scientific world, I am a 70-year-old amateur with extensive experience in electronic design and SW engineering, holding 8 patents, so I have to apologise for giving you the impression of jumping to conclusions. It’s all in my head and I must ventilate it.

After 40+ years of readership of Scientific American I am well aware that researchers are used to pruning their experimental data extensively before drawing conclusions, which makes it quite unlikely, also for me, that the team made an error; nevertheless I am quite sure they did.

I will try to convince you that the information provided in the paper is sufficient to justify dismissal of their data analysis.
    ” Endquote
    I also mentioned this blog.
    Although you seem to understand my objections to the data analysis, you are not yet quite convinced, probably for the same reasons as those mentioned above.

    I want to proceed as follows.
    I divide my reasoning in several parts.
Each part and its formulation is either accepted by the two of you or not.
    If not, I will try to convince you.
    If I can’t convince you, I withdraw; if at any time more substance to my arguments could be added, we could proceed.
If a part is accepted, we proceed to the next step.
    After accepting of the last part, we have a convincing proof that the data analysis is invalid.
    This was the first part.

    Since I spent a bit too much time on this subject the last 14 days, I must first take a little break.

  • Bert Morrien

    Hi Aidan,
    OK, let’s start trying to topple the data analysis.
    Step 1 is the easiest I think.


    Paper: http://static.arxiv.org/pdf/1109.4897.pdf
    P,T is the probability of detection of a neutrino at time T.
    E is the time a detection occurs.
    PEW’s are sets of probabilities of 10,500 ns length spaced apart not less
    than 150 ms with the property that the probabilities are ordered for T and
    spaced apart about 1 ns (refer to the paper).

    The probabilities are derived from proton extraction waveforms
    (fig. 4 in the paper depicts a typical PEW)
TOF is the time of flight; the TOF of interest is unknown because the goal
of the experiment is to establish it.
The T values of the probabilities and the T values of associated events
differ by the TOF.
Time uncertainties will be addressed after step 1.
    Proposition 1
    Given probability P,T and event E, then P is only valid at E=T.
Rationale: only the combination P,T and E forms an experiment.
If one is missing, Feynman’s quote applies:
“We don’t have the experiments and thus we do not know which results to
calculate”.
    Proposition 2
The coarse value of TOF is known and allows the discarding of all PEWs
without associated events.
    This was done (refer to the paper).

    Proposition 3
The PEWs with associated events contain invalid probabilities which cannot
be identified, because the TOF is not known; hence they cannot be removed
before starting the data analysis.

    STEP 1 END


  • Bert Morrien

    Hi Aidan,
    Continuation of attempt to topple the data analysis.
    Step 2 does not look difficult either, but let’s see.


    Proposition 4
It looks like the team forgot that the invalid probabilities should also be
addressed in the chapter “Data selection” of the paper, because this was not
mentioned there.

Now the next quote from the chapter “Data analysis” of the paper will be considered:
    For each neutrino interaction measured in the OPERA detector the analysis
    procedure used the corresponding proton extraction waveform.
    These were summed up and properly normalised in order to build a PDF w(t).

    Proposition 5
    Invalid probabilities were introduced in the analysis procedure.
    This is clear from the sentence “For each neutrino interaction measured in
    the OPERA detector the analysis procedure used the corresponding proton
    extraction waveform”

    Proposition 6
The summing of the PEWs does not remove the invalid probabilities.
Nevertheless, “these were summed up”.

    Proposition 7
The summing of the PEWs removes in essence all probabilities, and this is
absurd. This can be explained as follows.
Consider a specific event. Its time T corresponds with one probability in
the associated PEW.
If we have 10**4 events, the probability value will be incremented with
10**4 values for T in the other PEWs, which are almost certainly invalid in
the first place.
    The same reasoning applies also for all other events.

Proposition 8
Whatever “properly normalised” means, it does not remove the invalid
probabilities.
    Proposition 9
    Time uncertainties will not remove the invalid probabilities.

Proposition 10
Only if “properly normalised” and the remainder of the data analysis mean
magic is the data analysis valid.


    If we have to discuss the data analysis that was done, I think I must
    But again, let’s see.

  • Bert Morrien

    The last END QUOTE should read END STEP 2

    I hope we can continue dealing with the data analysis in the same way.

  • Bert Morrien

Hi Aidan,

John did not understand proposition 3, and he was right.
Therefore the following changes.

    In Definitions
    “P,T is the probability of detection of a neutrino at time T.”
    must be replaced by
    “P,Ts is the probability of detection of a neutrino at time T=Ts+TOF.”

    In Definitions
    “PEW’s are sets of probabilities, i.e. P,Ts tuples, of 10,500 ns length
    spaced apart not less
    than 150 ms with the property that the probabilities are ordered for T and
    spaced apart about 1 ns (refer to the paper).”
    must be replaced by
    “PEW’s are sets of probabilities of 10,500 ns length spaced apart not less
    than 150 ms with the property that the probabilities are ordered for time
    spaced apart about 1 ns (refer to the paper).”

    Proposition 3a.
    “The PEWs with associated events contain invalid probabilities”
This is because almost all probabilities are not associated with an event.
    Proposition 3b.
“which cannot be identified because the TOF is not known”
Proposition 1: Given probability P,T and event E, then P is only valid at E=T.
TOF is the value that is to be measured, so it is not known.
Only P,Ts is known, and since Ts = T − TOF, you cannot match a probability with
an event.
    Proposition 3c.
    “hence they cannot be removed”
If you cannot identify a valid probability then you surely cannot identify
an invalid one.
    Proposition 3d.
    “before starting the data analysis”
    We are talking about data removal, which should take place before data
    analysis is commenced.
    Add the following to Proposition 3.
    “Data removal should take place before data analysis is commenced.”

I keep a draft of the proof up-to-date at
I don’t keep a changelog, because these mails and Aidan’s blog serve as a record.
Of course everything is backed up.

I also sent mail to CERN about the seminar
“Theoretical assessment of the Opera report and its possible implications”.
This is to draw your attention to the fact that there is serious evidence that the data
analysis in
“Measurement of the neutrino velocity with the OPERA detector in the CNGS
beam” is wrong. See the draft document.
This document is currently being scrutinised by two qualified scientists.

    Maybe it is a bit premature; on the other hand, the seminar
    “Theoretical assessment of the Opera report and its possible implications”
    might be a bit premature as well.

    Maybe the seminar could also touch the following question.

Why is it that none of these scientists noticed the missing
probability data or stumbled over the rumble in the PDF used by
the data analysis?

    My motivation for sending this mail:

Maybe if I were a neutrino, I could penetrate the wall around OPERA;
however, being an amateur, they would not notice me.
But if a heavyweight core crosses my path, then you will see a bright flash!

I also had contact with Jos Engelen. I hope to motivate this former CERN director to give our project a little support.
I mentioned that if my view is correct, the same 60 ns anomaly should emerge when they repeat the analysis with any subset of invalid PEWs. I don’t know if that’s feasible, but that would
be a convincing indication that something is very wrong.
I am also curious about the MINOS analysis, because they might be victims of the same trap.

  • Bert Morrien

    Hi Aidan,

John still had problems with prop. 3; I did some explaining and invited him to rephrase it.

    Please, can we continue this project via e-mail? It’s too complicated for me and I don’t see any added value.

    Thanks, Bert

  • Bert Morrien

    I mentioned that if my view is correct, the same 60 ns anomaly should emerge when they repeat the analysis with any subset of invalid PEWs.
    It’s only speculation, but if the valid probabilities are completely absent, maybe the result IS the bias, which can be attributed to the invalid probabilities.
Then the invalid probabilities can be removed by subtracting the bias from the published value.
    Food for mathematicians.

I noticed the difficulty for John of accepting the consequences of Feynman’s quote, which comes down to the proposition that using invalid data is not justified.
At a certain point it must be clear that invalid data is discarded.
In our little project, the proof of an invalid result would fail if the data analysis somehow removes the invalid probabilities, which I do not believe without a proof.
I don’t think there is such a proof, given that people hesitate to take a stand in this matter.

• Pingback: Behind the Scenes at CERN | Cogito

  • Hi Bert. Sure, you can E-mail me at [email protected]. That redirects to my gmail account.

  • Bert Morrien


By now it is clear that I was not only very arrogant, but wrong, and my arguments futile.

The culprit was my wrong interpretation of the effect of summing up the PEW’s.
The effect of summing up the PEW’s to construct the PDF does indeed bury
the timing information hidden in the PEW’s under a “lot of noise”.
However, the noise is random while the timing information is not.
Hence, after adding 10,000 PEW’s we have amplified the timing information by 10,000,
while half of the noise is added and the other half is subtracted,
so that the noise only grows like the square root of the number of waveforms, making “lot of noise” untrue.
As a technician I have been familiar with this principle for a long time; how it is possible
that I stubbornly ignored my own knowledge in the last weeks is a riddle to me.

I owe you and others an apology and want to do something to make up for it.
By now I think I am able to explain the experiment in simple terms,
so that everybody interested could understand the essence of it.
When finished, I will verify that it can be understood by
interested people in my social network and by a scientist familiar with
this matter; eventually I’ll try to get it published in one way or another.
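The signal-averaging principle described here is easy to demonstrate numerically. This is a toy sketch: the sine “timing structure”, noise level, and waveform count are all made up for illustration:

```python
import math
import random

random.seed(3)
N_WAVEFORMS, LENGTH = 4000, 200

# A common timing structure shared by every waveform (the "signal"):
signal = [math.sin(2.0 * math.pi * i / 20.0) for i in range(LENGTH)]

# Each simulated PEW is the same structure plus independent random noise.
stacked = [0.0] * LENGTH
for _ in range(N_WAVEFORMS):
    for i in range(LENGTH):
        stacked[i] += signal[i] + random.gauss(0.0, 5.0)

# The structure is amplified by N while the noise grows only like sqrt(N),
# so the average converges to the underlying signal.
residual = [stacked[i] / N_WAVEFORMS - signal[i] for i in range(LENGTH)]
rms_residual = math.sqrt(sum(r * r for r in residual) / LENGTH)
```

Even though each individual waveform is dominated by noise (sigma 5 against a signal of amplitude 1), the residual of the average is tiny, roughly 5/sqrt(4000).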


  • Bert Morrien

    Hi Aidan,
    A question about possible bias.
The team performed lots of tests before the TOF was established.
After that, they checked for consistency of the TOF.
Did they ever check for bias after the TOF was established?
I cannot find anything of the sort in the report.

    A possible check could be done as follows.
    For each event, a small region in the corresponding PEW can be identified that must contain the extremely noisy time information corresponding to this event;
    this region is replaced by noise, so that the time information is erased, assuming the TOF is correct.
    Then the PDF is rebuilt and the maximum likelihood analysis is performed.
Could the result reveal something about bias?

We can consider 2 situations:
    1. The TOF is not correct, the time information was not removed and the result would be the same incorrect TOF.

2. The time information was removed; then, what would be the result? I can’t work this out.
The event distribution is then matched with an average of the PEW’s.

    Does this make sense?

  • Hi Bert! There’s no need for an apology. If we got fired every time we needed to be convinced of something then there would be a lot fewer scientists and a lot more unfalsifiable ideas around. The great thing about science is that when there are different interpretations it becomes a constructive discussion and the final arbiter is always the data. In this case we had a paper that could have been written better, vastly different previous experiences (and with them, prejudices) and the medium we interacted with wasn’t the best for a clear discussion. I’ve enjoyed this exchange and I hope you did too! The last thing I want is for people to feel uncomfortable talking on the blogs.

For some of the bias, OPERA used simulations. They compared the simulations with and without a different TOF and looked for bias in the results. This led to an uncertainty of 0.3ns, which is much smaller than their overall uncertainty. The systematic uncertainties in the measurement are summarized in table 1. It’s common practice to estimate a systematic uncertainty every time the data are manipulated in some way. For example, if a piece of equipment is used to measure the timing, the uncertainty in the precision and accuracy of the measuring device is estimated and added to the overall uncertainty in quadrature. The lines in table 1 seem quite exhaustive to me, so if you can find another source of uncertainty or bias then let me know!

Determining an overall bias that affects all events is pretty tricky. The timing and location information are calibrated using state of the art GPS technology (we physicists are not GPS experts!) which has a precision of 1ps, 1000 times more precise than required by OPERA. We will have to rely on people who are more experienced with the GPS technology to comment on the credibility of the GPS measurements.

    This leaves two major points of discussion, the measurement of the baseline and the synchronicity of the clocks at CERN and Gran Sasso. The baseline would have to be incorrect by about 20m to account for the deviation in the TOF. Given that both sites are located using four satellites it’s probably safe to assume that the uncertainty on the positions of CERN and Gran Sasso above ground are too small to account for this. What remains then is to determine the positions below ground. I can’t remember the details of how this measurement was performed. However, an uncertainty of 10m at each site between the surface and the caverns is rather large and would probably be noticed!

A synchronicity issue would be rather difficult to pin down exactly. Measuring a time interval at a given position is easy, but synchronizing two clocks which are large distances from each other is an extremely difficult problem. It is generally a solved problem for GPS systems (otherwise satellites would move out of orbit rather quickly) so we would need a good reason to think that the synchronicity breaks down below the surface. To be honest I’m still not 100% confident that the synchronicity is correct, and it would take a very careful measurement to persuade me otherwise. (The way to do this would be to send a signal from CERN to Gran Sasso and back again. Call the time taken for the round trip, measured at CERN, 2δt, and the time the signal was sent T, assuming that the trip takes the same amount of time in both directions. Since we know the time the signal was sent from CERN, we can then calibrate the clock at Gran Sasso so that the signal arrived at time T+δt. If the clock at Gran Sasso disagrees with this choice of time origin by 60ns then this could account for the difference in the TOF between neutrinos and light. Initially I thought this would have to be done by firing a light source from CERN to Gran Sasso, but a high precision fiber optic cable would be good enough.) On the other hand, the GPS experts may have already solved this problem long ago, in which case they would be in a better position to comment on it than I would.
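The round-trip calibration described in the parentheses can be written down as a tiny helper. This is a sketch of two-way time transfer under the equal-delay assumption; the function name and the numbers are illustrative:

```python
def sync_offset(t_sent, t_received_back, t_remote_stamp):
    """Two-way time transfer: assuming the signal takes the same time in
    each direction, the one-way delay is half the round trip, and the
    remote clock's offset is its timestamp for the signal's arrival minus
    the expected arrival time t_sent + delay."""
    delay = (t_received_back - t_sent) / 2.0
    return t_remote_stamp - (t_sent + delay)
```

For instance, a round trip of 2 seconds implies the signal arrived at the far end 1 second after it was sent; if the remote clock stamped it 60 ns later than that, `sync_offset` returns that 60 ns clock offset.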

    In response to your proposed study of bias, what exactly do you mean by “noise” in this context? The PEW for a given neutrino is not smooth, but that does not mean that it’s any more or less noisy than a smooth PEW. In that case, how would you “add noise” to the PDF?

    Let’s assume that the neutrinos travel at exactly the speed of light. Then the result would tell us that there is a systematic uncertainty of 60ns which affects every event. Where would this bias come from? If we can work that out then we could have an explanation for the result which would not require faster than light travel.

    We must remember that OPERA have spent 6 months scrutinizing the results in an attempt to find an error and they have not found any error so far.

  • Bert Morrien

    Hi Aidan,
    A little history and some questions.
    Over the past week or so I had the privilege of exchanging some thoughts with two eminent, friendly and patient physicists.
    The way it went can best be described as chaotic; I couldn’t help it, I’m not a scientist but a retired technician with ample experience in hardware and software engineering.
    After some futile attempts to show that the analysis was wrong, I still had a few questions.
    My first question was: Is the summing operation valid?
    Eventually I realised that under certain circumstances, it is valid.
    Another question was: Is the use of a window to select parts of the PEW valid?
    The unambiguous answer was: No, because bias would be the result. The only window that was allowed is the whole PEW.
    Eventually, after about 50 e-mail exchanges, I withdrew, having mixed feelings.

    Now, the question that was never asked is:
    If a window can cause bias, why do you think using a window does not cause bias if the whole PEW is selected?
    Another question is: During the Maximum Likelihood Analysis (MLA), the PDF is accessed, probably by selecting a part of it; why isn’t that a window? The answer would probably be: Yes, but not really, because we also access all the other parts.
    Now, this would be a good answer if there were an event for each part that is selected.
    But it is not a good answer, because there is not an event for each part that is selected; we have, say, 10,000 parts in the PDF and only 8,000 or 16,000 events, some of them associated with the same part in the PDF.
    The unavoidable conclusion is: the events themselves impose a window that can cause bias if they are not evenly spread;
    only after a long time are they.
    E.g. if 200 ns of the leading edge is underpopulated, then the MLA would have a tendency to deviate towards the trailing edge.
    Because of summing the PEWs, the shape of the PDF is probably too dull to prevent that.
    The question that was actually addressed was: Did the experiment last long enough?
    The answer: Yes, prolonging the experiment would hardly improve the result.
    Having no access to the data, this must be an educated guess.

    Can anybody understand why the crucial question is: Is the event distribution sufficiently populated to be sure no bias is introduced?

    (To be continued, next time I want to question the value of the summing operation)

  • Hi Bert! Some good questions here.

    In response to the first point: “Is the use of a window to select parts of the PEW valid?” Generally, no. This is the kind of thing that leads to a lot of skepticism and bias in the results (the bias that’s known is fine. The bias that’s unknown can cause problems, but could probably be determined using a study where the window is shifted around.) In any case, the ML method should be able to deal with the full range. (Formally the PDF for a given neutrino should cover the full range of time from -infinity to the point of neutrino detection. A window can be imposed using a step function. In any case, if the PDF doesn’t go to -infinity, whether or not there’s a step function in there, it’s not a complete PDF. In reality, in order to make the computation possible, no PDF can ever extend to -infinity without some very clever trick.) The safest option is to include the full PEW and be agnostic about which proton the neutrino came from.
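
    As a toy illustration of how a badly chosen window can bias a timing measurement (made-up numbers: a flat emission PDF and a simple mean-based estimator, not OPERA’s maximum likelihood fit):

```python
import random
import statistics

random.seed(42)

# Toy model: proton emission times uniform over a 10,500 ns extraction,
# and every neutrino arrival delayed by a true offset of 60 ns.
TRUE_DELAY_NS = 60.0
events = [random.uniform(0.0, 10500.0) + TRUE_DELAY_NS for _ in range(16000)]

# Naive estimator: mean arrival time minus the mean of the emission PDF.
delay_full = statistics.mean(events) - 5250.0

# Clip the leading 500 ns with a window, but keep comparing against the
# *unwindowed* PDF mean -- this mismatch is exactly what introduces bias.
windowed = [t for t in events if t >= 500.0]
delay_windowed = statistics.mean(windowed) - 5250.0

print(f"full sample:   {delay_full:6.1f} ns")
print(f"clipped edge:  {delay_windowed:6.1f} ns  (biased high)")
```

    With the full sample the estimator recovers something close to the true 60 ns; clipping the leading edge drags it a couple of hundred ns high. A shifted-window study of the kind mentioned above would expose a bias like this immediately.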

    “Because of summing the PEWs, the shape of the PDF is probably too dull to prevent that.” The PEWs are not summed in the analysis. The paper reads “For each neutrino interaction measured in the OPERA detector the analysis procedure used the corresponding proton extraction waveform. These were summed up and properly normalised in order to build a PDF w(t).” The second sentence in this quote is ambiguous and it sounds like the PEWs were summed. However, the first sentence is not ambiguous and explicitly states that for a given neutrino, only one PEW is used, and this is used to form a PDF.

    To answer this question: “Is the event distribution sufficiently populated to be sure no bias is introduced?” in principle yes, this should be taken into account. When there are few events in a bin the contents need to be treated with a Poisson uncertainty (then as you know, in the limit of a large number of events this tends to the good old Gaussian uncertainty.) We can’t really tell if this is the case using the plots in the paper or the talk though, since the resolution is so low. The error bars at the tails do look asymmetric, but that could be an artifact of the graphics. (This is the 21st century! Why can’t everyone use vector graphics for their plots?) Anyway, when Poisson uncertainties are used the tails should get an unbiased weight so that the low statistics are taken into account. If the uncertainties are treated as Gaussian then there could be a serious problem. It might cancel out between the two tails, or it might not. (If there is a bias in the “inward” or “outward” direction this would be noticed. If there is a bias in the “leftward” or “rightward” direction at both ends and of the same kind of size, then of course the result will be biased.) Given that these are neutrino physicists who work with low statistics studies a lot, I’d be surprised if the error bars weren’t Poisson. I suppose it’s possible that normal fluctuations could affect the tails and shift the whole distribution along by 60ns, but even if that is the case it wouldn’t mean the analysis method is biased. It could be biased for other reasons, but not because of bad luck with the data set.
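
    To see how different the two treatments are at low counts, here is a small stdlib-only sketch (`poisson_interval` is my own toy helper, not OPERA’s code) comparing a likelihood-based ~68% Poisson interval with symmetric sqrt(N) bars:

```python
import math

def poisson_interval(n, delta=0.5):
    """Likelihood-based ~1-sigma interval for a Poisson mean, given n counts.

    Finds the values of mu where the negative log-likelihood rises by
    delta = 0.5 above its minimum at mu = n (a toy helper for illustration).
    """
    def dnll(mu):  # -ln L(mu) + ln L(n), constants dropped
        return mu - n + (n * math.log(n / mu) if n > 0 else 0.0)

    def solve(a, b):  # bisection for dnll(mu) == delta on [a, b]
        for _ in range(200):
            m = 0.5 * (a + b)
            if (dnll(a) - delta) * (dnll(m) - delta) <= 0.0:
                b = m
            else:
                a = m
        return 0.5 * (a + b)

    lo = 0.0 if n == 0 else solve(1e-9, n)
    hi = solve(max(n, 1e-9), n + 10.0 * math.sqrt(n + 1.0) + 10.0)
    return lo, hi

# For a bin with 3 events the Gaussian bars are symmetric (+/- sqrt(3) ~ 1.73),
# but the Poisson interval is visibly asymmetric: roughly -1.4 / +2.1.
lo, hi = poisson_interval(3)
print(f"-{3 - lo:.2f} / +{hi - 3:.2f}")
```

    The asymmetry only matters in sparsely populated bins; in well-filled bins the two treatments agree, which is why the tails of a distribution are where the choice shows up.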

  • Bert Morrien

    “Because of summing the PEWs, the shape of the PDF is probably too dull to prevent that.”
    I am well aware that only PEWs for which there is an event were summed.
    Those were summed, and this removes the essential irregularities from the PDF.
    The terrible fit of the event distribution to the PDF can be seen in fig. 11, even at that bad resolution and even after 16,000 events.
    For me this is evidence that it will take a long time before the event distribution looks similar to the PDF, and only then will the analysis give a correct result. Before that, I wouldn’t bet my life on it. We’re talking about only 60 ns, or 0.57% of a 10,500 ns PDF.

  • Bert Morrien

    The event distribution is Poisson, or shot noise as we called it in the 70s.
    But I don’t think the PDF is Poisson; I think it’s Gaussian.
    Did the neutrino physicists ever undertake an experiment like this?

  • Hi Bert. I did a quick search for a neutrino experiment I know is still active and relatively high statistics, MINOS. If we take a look at some of their recent plots (link) we can see that they’re using error propagation properly (Poisson for low statistics samples. It looks Gaussian for the PDFs they use, but it’s a bit harder to tell.)

    I’d disagree that the fit looks awful in figure 11; I’d say it looks excellent. OPERA provide the chi squared per degree of freedom for both fits and they come out at around 1, which is what we expect if the PDF is a good model for the distribution. (Values much smaller than one indicate the error bars are too big and not consistent with statistical fluctuations. Values much larger than one indicate poor agreement between the PDF and the distribution.) As OPERA gather more data the fit will look better and they could see an improvement in their precision by up to 32%, but the chi squared distribution is going to stay roughly the same. The “goodness” of the fit will probably not vary much with the addition of more data.
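
    For what it’s worth, the chi-squared per degree of freedom check is easy to sketch (toy numbers of my own invention, not the values behind figure 11):

```python
import math

def chi2_per_ndf(observed, expected, errors, n_fit_params):
    """Pearson-style chi^2 divided by the number of degrees of freedom."""
    chi2 = sum(((o - e) / s) ** 2
               for o, e, s in zip(observed, expected, errors))
    ndf = len(observed) - n_fit_params
    return chi2 / ndf

# Five toy bins scattered around an expectation of 10 with sqrt(N) errors;
# residuals of about one sigma give chi^2/ndf close to 1, i.e. a good fit.
obs = [13, 7, 12, 6, 10]
exp = [10.0] * 5
err = [math.sqrt(e) for e in exp]
print(round(chi2_per_ndf(obs, exp, err, n_fit_params=1), 2))  # prints 0.95
```

    Doubling the statistics shrinks the error bars and the residuals together, which is why more data improves the precision of the fitted parameter without changing chi²/ndf much.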

    The fact that the difference in time is 60ns is irrelevant to the width of the PEW or the window used (as long as the window is significantly larger than this interval) as OPERA are trying to measure a single value in the TOF distribution. (If they measured 600ns would you want to keep more of the PEW? If they measured 0ns would they have taken almost none of the PEW?) In any case, this 60ns difference occurs over the whole range of the PEWs. Any sufficiently large choice of window is a valid one. I think that at this point you have a well defined suggestion for OPERA to cross check their result. If you want to take it further then by all means suggest a study to them. They can vary the window size (using blinded data) and pick the points with the best precision, checking to make sure that there is no bias in the TOF measurement by plotting the obtained value of TOF against the window parameters. If there are no serious biases and they improve their blinded precision then they can apply the same method to the unblinded data, and you could claim credit for improving their result!

  • Bert Morrien

    Hi Aidan,
    Forwarding questions to OPERA won’t work, because even if they did not throw away my question immediately, they would not understand me. Think about Robinson Crusoe and Friday.

    The exchanges with the scientists were not a waste of time, because they forced me to formulate my points in a comprehensible way. I think now I can indicate a critical error in OPERA’s analysis method in a convincing way.

    The basic experiment in
    “Measurement of the neutrino velocity with the OPERA detector in the CNGS beam”
    was: “Fire a bunch of protons and produce an event”

    What did OPERA do:
    They repeatedly fired a bunch of protons, but only a very tiny fraction of the firings produced an event.

    OPERA must neglect all firings that did not produce an event.
    Why is that?
    For these firings, they did not do the basic experiment and thus they do not know which result to calculate.

    The firings were grouped in big parts, each associated with a proton extraction.
    Then the big parts were subdivided in about 10,000 small parts.
    Opera neglected the big parts that did not produce an event; that’s good.
    Opera did not neglect the small parts that did not produce an event; that’s wrong.


  • Hi Bert. This is part of the nature of working with neutrinos. I’m not sure how many protons would interact with the target to produce mesons, but I’d imagine it’s an appreciable fraction. When a proton does interact it will produce a large shower of mesons, half of which will be charged. Of those that are charged, about 10%/90% will be kaons/pions, and 63%/99% of these will produce a muon neutrino heading in the direction of Gran Sasso. Taking all that into account, OPERA have an awful lot of neutrinos! Even so, out of 10^20 protons, only 16,000 produced a neutrino which gets detected. In order to be sure which proton produced a given neutrino, OPERA would have to store information about the three-momenta of roughly 10^11 protons (assuming there are a similar number of protons per spill as there are protons per bunch in the LHC) and timing information precise to 1ns. With the current technology that’s simply not possible. OPERA performed the experiment properly. Do you really expect OPERA to perform the analysis proton by proton to get this measurement? That can’t be done. If they had the freedom to adjust the spill size and focus all their efforts on the TOF measurement they may have come up with a different arrangement (more, shorter, denser spills, for example) but the fact is that they didn’t, because they’re most interested in how many neutrinos mix, and they have to fit in with the physics program of the LHC.
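
    The decay-chain arithmetic above can be written out explicitly (using the rough percentages quoted in this comment; the real branching fractions and kaon/pion ratios depend on the beam):

```python
# Back-of-envelope: muon neutrinos per charged meson, using the rough
# numbers quoted above (10%/90% kaons/pions, with 63%/99% of each species
# decaying to a muon plus a muon neutrino).
frac_kaon, frac_pion = 0.10, 0.90
br_kaon_to_mu_nu, br_pion_to_mu_nu = 0.63, 0.99

nu_per_meson = frac_kaon * br_kaon_to_mu_nu + frac_pion * br_pion_to_mu_nu
print(f"{nu_per_meson:.3f} muon neutrinos per charged meson")  # 0.954

# ...and yet the detected sample is tiny: 16,000 events from ~1e20 protons.
print(f"detection fraction ~ {16_000 / 1e20:.1e} per proton")
```

    The point of the numbers: nearly every charged meson yields a muon neutrino, so the scarcity is entirely on the detection side, not the production side.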

    Also, just to clarify the comment on the efficiency, the main problem is with the detection of the neutrino. If we could see every neutrino that made it to Gran Sasso you would have no problem at all with their distribution as it would be very well populated. The distribution of neutrinos Gran Sasso sees is a small sample of all neutrinos produced and every point in every PEW can probably be associated with several (unseen) neutrinos.

    Your analysis proposal is essentially the same as OPERA’s, as both require scanning across the TOF distribution to find the point of best agreement. They differ in the details and in the figure of merit, but they’re both basically the same thing. Yours might feel more natural, but that doesn’t mean that it’s fundamentally different to another approach.

    On another note, let’s suppose that the method used produces a result that gives a δt of 60ns, just because of statistical fluctuations. If that’s the case then no method will give a “more correct” result than the one OPERA have provided. They have a tried and tested method to find an unbiased estimator of δt given all the data, and they have performed several cross checks on the method. If the data are biased because of statistical fluctuations then no fit strategy will recover from that. If the data are biased for some other reason (eg geodesy being off by 20m) then no fit strategy will be able to recover from that either.

  • Bert Morrien

    Hi Aidan,

    Below you see the first part of “A strange experiment”,
    which is rather similar to
    “Measurement of the neutrino velocity with the OPERA detector in the CNGS beam”.
    There is one difference.
    In OPERA all proton spills are different.
    In “A strange experiment” they are equal, but I will try to modify “A strange experiment” into “A very strange experiment” by replacing the swing with another contraption.
    I asked a friend of mine, who is a graphic artist, to think about how to make a comic strip of it and/or a video.
    I will check that he understands the whole story.
    Believe me, if he understands, everybody will.
    BTW, I succeeded in explaining my previous entry to 2 of my acquaintances who don’t have a clue about statistics.

    A strange experiment.

    The attributes:
    A camcorder with a built-in clock
    A set of SD cards that are good for 1 hour recording each.
    A time clock which runs late in comparison to the camcorder clock.
    A swing with a man on it.
    The camcorder can see the man, but not what he is doing.
    The man tries to use the time clock to produce a timestamp, but, being on a swing, he succeeds only about 10 times a day, and then only if the swing is near its lowest point.

    The objective is to establish the precise time difference between the two clocks and we call that time_delay.

    The basic experiment is:
    Hold the swing in its highest position and let go and catch it at the other highest position;
    while swinging, let the man produce a timestamp.
    Let’s call the exact time the man produces a timestamp start_time.
    If we assume the man produces a timestamp at exactly the lowest point of the swing
    then start_time is correct, however, we can’t see what the man is doing so we are not quite sure, so the real start time is

    start_time +/- time_unsure

    Now we can say

    time_delay = stop_time – start_time +/- time_unsure

    If we are able to make time_unsure zero then we have

    time_delay = stop_time – start_time

    which gives the solution.
    Let’s see how to get rid of time_unsure.

    Well, let’s try to perform the basic experiment once.
    Start the recorder, perform a swing… Wow, it didn’t succeed. I must remember that so far the video can be discarded, because the experiment failed, i.e. it was in fact not done at all because it did not produce a timestamp.

    Repeat trying the basic experiment until the card is full.
    If still no timestamp was produced the video recording is useless, so we erase it and start again.
    If a timestamp was produced, we exchange the card for an empty one.

    At some time we think that we have collected enough data and we stop.
    We thank the man, who is not feeling very well at all and start thinking about how to proceed.


  • Bert Morrien

    Probably not such a strange experiment

    1. A dart game
    2. A camcorder looking at the throwing
    3. A camcorder looking at the bullseye
    4. Two counters: n_throws counts the throws and n_hits counts the hits.

    The objective: determine the ratio between the number of throws and the number of hits, where
    1. stimulus: throwing the dart; datum: video_throw
    2. response: the dart in the bullseye; datum: video_hit

    We take every precaution to avoid contamination; the experiment takes place in an Italian tunnel, so that stray darts are kept out of the experiments.

    The basic experiment:
    1. start both camcorders
    2. Throw a dart and hit the bullseye
    3. stop both camcorders

    The main experiment
    We perform the basic experiment twice.

    The analysis.
    1. clear both counters
    2. We examine the videos and increment the counters for each valid datum
    2a. video_throw, scene_1: valid throw, increment n_throws
    2b. video_throw, scene_2: valid throw, increment n_throws
    2c. video_hit, scene_1: valid hit, increment n_hits
    2d. video_hit, scene_2: no valid hit, so we must decrement n_throws, because the throw counted in 2b was not valid
    3. n_hits / n_throws = 1; we are happy

    We want to verify this result and repeat the main experiment
    The analysis.
    1. clear both counters
    2a. video_throw, scene_1: no throw,
    2b. video_throw, scene_2: valid throw, increment n_throws
    2c. video_hit, scene_1: valid hit, increment n_hits
    2d. video_hit, scene_2: valid hit, increment n_hits
    3. n_hits / n_throws = 2
    We don’t understand, can’t really believe it, because this would mean 1+1!=2 and we are
    convinced that 1+1=2.
    So we repeat the main experiments many times more.
    Eventually we publish the result and ask for comment.

    Bert says: sdiua? sdj! einhj!! IUYIUY!!!!!
    We have a good laugh
    Bert says: sdiua? . rysd? asdh! asasd!!
    We try seriously to understand and answer “We trust our methods 376 1223 238”
    Bert says: sdiua? sdj? error better way IA!
    We are a bit annoyed but answer 12376 122398123 1238
    A seminar to explore the consequences of 1+1!=2 is organised.
    Bert says: data removal? asaas IA
    We ask Bert to keep his mouth shut.

    Now Bert thinks really hard and eventually says:
    If 2a is wrong, then you are not allowed to use 2c.
    The good analysis would be
    2a. video_throw, scene_1: no throw,
    2b. video_throw, scene_2: valid throw, increment n_throws
    2c. video_hit, scene_1: valid hit, don’t use this result, because there is no corresponding
    throw *)
    2d. video_hit, scene_2: valid hit, increment n_hits
    3. n_hits / n_throws = 1

    Bert’s wording of a 70 year old quote.
    Hits are data, invalid hits must be removed, this was done
    Throws are also data, invalid throws must be removed, this was not done PERIOD

    Now it is time to discuss how to get rid of all invalid data and to mention it in the right place in the report, instead of doing that somewhere in the middle of a sentence elsewhere; and then we could talk about data analysis.

  • David ConnerShover

    I will not even begin to claim any high academic achievements (I never had formal education past high school), but, after very close perusal of the paper, every reasonable consideration seems to have been made in reference to timing.

    On the first rough reading of the press release, which didn’t give too many details, 5 seconds with a calculator bore out what appeared to be an 18m forward displacement of the neutrino stream from the expected arrival of light at c in vacuum at OPERA from CERN. I cannot believe that a simple conceptual or mathematical error (like the distance measured along the surface of the earth against the chord (straight line path)) could account for this discrepancy.

    My first thought after perusing the paper, where every pain appears to have been taken in reference to the timing and the nearly exact distance (within 20cm) from the target to the OPERA array: after tearing apart almost every measurement, timing and distance for a discrepancy, and with the exception of maybe a few ns in the time stamp FPGA (which I’m sure had gone through much simulation testing before actual service), I can’t see much wrong.

    The paper devoted much to the mechanics of the experiment but deliberately left any theories as to this roughly 60ns discrepancy out of the paper. This is, in my opinion, good science.

    As another experiment along the same lines, would it be feasible to run this experiment across longer distances? Say, somewhere not far from the diameter of the earth? Through the earth? And see if similar results appear? Or does the current beam collimation ability not give enough of a sample on the receiving end to provide accurate results? Another possibility would be to run this experiment to, say, the nearest celestial body (through mostly vacuum).
    Granted, at this time, setting up a detector like OPERA on the moon is not feasible, but it’s just a thought. Running this experiment in both of these cases (with different distances and transit media), the results, I think, will go a long way toward narrowing down gravitational theory. And, just as likely, generate many more questions than answers.

    I can think of a few theories off the top of my head that would likely have sound scientific backing to account for this. can you maybe lead me in the general direction of these discussions? I would be most thankful

  • Hi David, thanks for the comment! To answer your question about a longer distance experiment (in neutrino physics, we call the distance the “baseline”) we need to understand why the OPERA baseline was chosen. The main purpose of the OPERA experiment is to measure neutrino oscillations from muon neutrinos to tau neutrinos, and this process is sensitive to the baseline. Calculations show that the optimal baseline for making this measurement is about 732km, and that’s what motivated the baseline of OPERA. This presents a few challenges for OPERA, because they need to synchronize their clocks over a large distance (which isn’t easy), and because the neutrino beam gets wider as the neutrinos pass from CERN to Gran Sasso. In fact, by the time the neutrinos reach Gran Sasso, the beam is 2km wide! The width grows linearly with the baseline, so extrapolating to the diameter of the Earth (about 12,700km) we could expect a beam roughly 35km wide. Making a detector that would be as sensitive as OPERA’s would then require around 300 times the material (as the area presented to the beam has to scale with the width of the beam squared.) Making a longer baseline experiment would require a huge investment in resources, and it would most likely not be sensitive to any significant amount of mixing, and that’s why a larger scale experiment is not currently being planned.
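
    The beam-geometry scaling here is simple enough to check on the back of an envelope (a sketch using the 2 km / 732 km figures above; the exact divergence of the real beam will differ):

```python
# The beam width grows linearly with the baseline, so the detector area (and
# hence mass, at a fixed depth) must grow with the baseline squared to keep
# the same sensitivity. Reference: ~2 km wide after the 732 km CNGS baseline.
REF_BASELINE_KM, REF_WIDTH_KM = 732.0, 2.0

def beam_width_km(baseline_km):
    return REF_WIDTH_KM * baseline_km / REF_BASELINE_KM

def area_factor(baseline_km):
    return (baseline_km / REF_BASELINE_KM) ** 2

for label, km in [("Earth radius (~6371 km)", 6371.0),
                  ("Earth diameter (~12742 km)", 12742.0)]:
    print(f"{label}: beam ~{beam_width_km(km):.0f} km wide, "
          f"~{area_factor(km):.0f}x the detector area")
```

    So a through-Earth shot would need a detector a few hundred times larger in cross-section, before even accounting for the neutrinos lost along the way.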

    The time of flight measurement is a nice “bonus” measurement that OPERA can make with their dataset. Physicists are always trying to get as much as they possibly can out of their experiments, and there are often some nice surprises. Unfortunately this comes at the expense of sub-optimal design for these surprises, and in the case of OPERA this means that the baseline is “only” 730km long, and not longer. Don’t despair though, the MINOS experiment in Minnesota has a similar baseline and they have actually made a similar measurement with their data. (They reported a much larger uncertainty, so their result was consistent with a 0ns lead.) Hopefully the new running period that is planned for OPERA will answer some of the questions.

    I’m afraid I don’t know a great deal about the theoretical implications of the result. In my experience there are usually many theories that attempt to explain an observed effect, but they nearly always predict a second effect which is not seen. Constructing a self consistent theory is remarkably difficult and I would be surprised (and delighted!) to find a paper written before the OPERA result was announced which explains the result without contradicting other observed effects. I know that there have been several papers on the arXiv on the subject (http://arxiv.org/find/all/1/all:+OPERA) but I haven’t had time to read through them.

  • David ConnerShover

    Yes, the 2km wide beam was mentioned in the paper. The reason I was asking was whether anything might be in the works with different baselines (distances) through different media (solid rock vs near vacuum); with the same or similar timing accuracy this would yield comparative data that may bear out or disprove many theories to account for this roughly 60ns early arrival of neutrinos vs a photon of light.
    My personal theory falls along the lines of a reverse refraction of a neutrino stream through rock vs light in vacuum. I suspect that a neutrino’s mass interacts more strongly with solid matter along its travel path, relative to vacuum, than previously thought. Further testing using different baselines and transit media densities would bear this out. It would also not involve a violation of Einstein’s Theory of Relativity.
    Considering they have different behaviors (an uncharged neutrino generally passes through solid matter unimpeded, with very little interaction except by predictable chance with matter, vs most light, which is stopped near the surface of the presently used media); hence the size of OPERA.
    I can understand that the size of an OPERA-like detector would have to be several orders larger (more like 100 times bigger with the present beam collimation for a through-planet shot, if a neutrino stream follows the same inverse square law as darn near everything else). One of the questions I have is: can the proton extraction beam be collimated tighter? Or can a much larger proton extraction be produced so as to not require such a much larger detector than OPERA on the other side of the planet? With the current GPS technology, I cannot see a significant difference in timing accuracy using a 7200Km vs a 742Km baseline.

  • David ConnerShover

    Sorry about that last comment. It is my understanding now, after being directed to that search page you mentioned in your last post, that it would not really be possible to obtain a tighter beam collimation to much more of a degree than it already is. I can also say that both of us were truly off on the size of a neutrino detector needed for a much longer distance: the square of the distance AND the loss of neutrinos to impacts in transit. Sorry that I’m drifting into theory, but until more similar experiments are run, i.e. more data, I cannot do much else. One other note: the published results are a statistical mean, with a lot of outliers (noise?). Unfortunately, most of my experience lies in mathematics, and does not necessarily include the theories and methods presented by others oft mentioned in most theoretical papers. Aside from a few names, Einstein, Schwarzschild, Lorentz (whose mathematics extend FAR beyond theoretical physics and well into practical everyday application) and a few others, a lot of the references bewilder me, and would probably require many years of study to untangle. The mathematics are not so difficult to understand, at least to a layman such as myself, but the names… oh well. Thank you for pointing me in the right direction!

  • Bert Morrien

    In an experiment there is always a stimulus and a response.
    Using a response for which there is no corresponding stimulus is invalid, because there was no experiment.
    Using a stimulus for which there is no corresponding response is invalid as well, for the same reason.
    The latter is the case in the current analysis of the OPERA Collaboration.

    Only a part of the PEW contains start time information of the proton (stimulus) that later resulted in a neutrino detection (response).
    The remaining parts of the PEW contain start time information of protons for which there was no neutrino detection.
    The current analysis allows the remaining parts to determine the shape of the PDF; it cannot be ruled out that this results in bias, because of the irrelevant start time information in the PEWs.

    A number of physicists pointed out that these remaining parts are required for constructing the PDF to enable the maximum likelihood analysis and they dismissed the idea that this was invalid.
    This seems the mainstream view and I am wondering what to think about that.
    It explains why the analysis is taken for granted.

    See also https://sites.google.com/site/bertmorrien/


  • Bert Morrien

    Hi Aidan,

    The newest outcome of the experiment also included the result of an alternative analysis.
    This result was compatible with the earlier finding, and so was the result of a new experiment with much shorter pulses.
    This means OPERA’s current analysis must be valid.
    It also means that OPERA knew exactly what they were doing.
    Consequently, the PDF is still valid, despite the lack of PEW parts with a corresponding event, because with enough events the event distribution resembles the shape of the PDF well enough to trust the outcome of a maximum likelihood analysis.
    It is regrettable that this point never became clear to me before.

    The lesson learned is that declaring the PDF and OPERA’s analysis invalid is a good example of narrow-minded reasoning; a humble apology is in order here.



  • Jordi

    Hi again,
    My concern about detecting the neutrinos over different distances across the planet’s core was not the distance, which may indeed improve accuracy, but the structure of the space-time fabric inside the earth.
    1.- Einstein’s theory predicts a space compression due to intense gravity fields.
    2.- neutrinos barely interact with matter compared to light, perhaps their velocity depends on the intrinsic properties of the space-time. Hence neutrinos could be used as probes of what happens inside and around massive bodies.
    3.- if the density of space-time increases as we approach the earth’s core, and neutrinos are affected by the density of space-time, their speed would change accordingly with respect to that found in the shallow crust measurements used as a reference.


