Posts Tagged ‘Statistics’

The IceCube Moon Shadow

Monday, June 2nd, 2014

In a previous post, Marcos Santander wrote about a paper he and other IceCubers were working on, searching for the shadow of the Moon in cosmic rays raining down on Earth. Now that paper has been published!

The shadow of the Moon as observed with the 59-string configuration of IceCube.

The idea of the Moon shadow is simple: to make sure that our detector is pointed the way we think it’s pointed, we look for a known source. The Moon makes a very nice known source, because it blocks cosmic rays from reaching the Earth, and so we see a deficit of cosmic ray air showers (and thus the muons they produce) from the direction of the Moon. By seeing the deficit where we expect it, we know that we can trust directions within the detector, or as the paper puts it, “this measurement validates the directional reconstruction capabilities of IceCube.”

It’s always funny applying the language of modern statistical significance to discussions like this, because it makes them sound rather absurd (at least in the frequentist school of statistics). We talk about the probability that a null (boring) hypothesis could randomly produce the same signal, so smaller probabilities are more significant, and we express those probabilities in terms of the area under a “normal” or “Gaussian” distribution, measured in units of that Gaussian’s width, sigma. A 2-sigma result is farther out in the tail of the Gaussian, and less likely (so more significant) than a 1-sigma result.

We’ve arrived at a convention in particle physics that when your data reach 3-sigma significance, you can call it “evidence,” and when they reach 5 sigma, you can call it “discovery.” That’s purely convention, and it’s useful, although scientists should know the limits of the terminology.

That leads to absurd sounding lines like “IC22 has seen evidence of the Moon, while IC40 and IC59 have discovered it.” This is, technically, correct. What we’re really discovering here, though, is not that the Moon exists but that the IceCube detector works the way we expect it to.

Another thing this paper demonstrates is that it takes a long time to get a paper through the publication process. Now that the whole process is completed, we can celebrate. I’ve been following this analysis since I started working on it for my master’s thesis, then handed it off to other IceCubers when I switched to neutrino oscillations. Do any of you have stories of long review processes? Does anyone have a favorite other experiment that has looked at the Moon shadow?


Whenever we come across a new result, one of the first things we ask is “How many sigma is it?!” It’s a strange question, and one that deserves a good answer. What is a sigma? How do sigmas get (mis)used? How many sigmas is enough?

The name “sigma” refers to the symbol for the standard deviation, σ. When someone says “It’s a one sigma result!” what they really mean is “If you drew a graph and measured a curve that was one standard deviation away from the underlying model, then this result would sit on that curve.” Or to use a simple analogy: the height distribution for adult men in the USA has a mean of 178 cm and a standard deviation of 8 cm. If a man measured 170 cm tall, he would be one standard deviation from the norm, and we could say that he’s a one sigma effect. As you can probably guess, saying something is a one sigma effect is not very impressive. We need to know a bit more about sigmas before we can say anything meaningful.
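(For the curious, that arithmetic is just a z-score. Here’s a two-line sketch in Python, using the numbers from the height example above:)

```python
# The "how many sigma?" number is just the distance from the mean,
# measured in standard deviations (numbers from the height example).
mean_height, sigma = 178.0, 8.0   # cm
height = 170.0                    # cm
z = (height - mean_height) / sigma
print(f"{height:.0f} cm is a {abs(z):.1f} sigma deviation")  # -> 1.0 sigma
```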

The term sigma is usually used for the Gaussian (or normal) distribution, and the normal distribution looks like this:

The normal distribution

The area under the curve tells us the population in that region. We can color in the region that is more than one sigma away from the mean on the high side like this:

The normal distribution with the one sigma high tail shaded

This accounts for about one sixth of the total, so the probability of getting a one sigma fluctuation up is about 16%. If we include the downward fluctuations (on the low side of the peak) as well, then this becomes about 32%.

If we color in a few more sigmas, we can see that the probability of getting a two, three, four, or five sigma effect above the underlying distribution is 2%, 0.1%, 0.003%, and 0.00003%, respectively. A five sigma result is much more than five times as impressive as a one sigma result!
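If you want to check these numbers yourself, the one-sided tail probabilities come straight from the Gaussian error function. A quick sketch in Python:

```python
import math

# One-sided Gaussian tail probability for an n-sigma upward
# fluctuation: P(X > n sigma) = erfc(n / sqrt(2)) / 2.
def upper_tail(n_sigma):
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

for n in range(1, 6):
    print(f"{n} sigma: {upper_tail(n):.5%}")
# prints roughly 15.9%, 2.3%, 0.13%, 0.003%, 0.00003%
```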

The normal distribution with each sigma band shown in a different color. Within one sigma is green, two sigma is yellow, three sigma is... well can you see past the second sigma?

When confronted with a result that is (for example) three sigma above what we expect, we have to accept one of two conclusions:

  1. the distribution shows a fluctuation that has roughly a one in 740 chance of happening
  2. there is some effect that is not accounted for in the model (e.g. a new particle exists, perhaps a massive scalar boson!)

Unfortunately it’s not as simple as that, since we have to ask ourselves “What is the probability of getting a one sigma effect somewhere in the distribution?” rather than “What is the probability of getting a one sigma effect for a single data point?”. Let’s say we have a spectrum with 100 data points. The probability that every single one of those data points will be within the one sigma band (upward and downward fluctuations) is 68% to the power 100, or \(2\times 10^{-17}\), a tiny number! In fact, we should be expecting one sigma effects in every plot we see! By comparison, the probability that every point falls within the three sigma band is 76%, and for five sigma it’s so close to 100% it’s not even worth writing out.
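Here’s a minimal sketch of that “somewhere in the distribution” arithmetic, assuming 100 independent data points:

```python
import math

# The chance that all n independent data points stay inside
# the +/- k sigma band around the model.
def prob_all_inside(n_points, k_sigma):
    p_single = math.erf(k_sigma / math.sqrt(2))  # one point inside the band
    return p_single ** n_points

print(prob_all_inside(100, 1))  # ~2.7e-17: some 1 sigma excursion is certain
print(prob_all_inside(100, 3))  # ~0.76: a 3 sigma excursion is genuinely rare
```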

A typical distribution with a one sigma band drawn on it looks like the plot below. There are plenty of one and two sigma deviations. So whenever you hear someone say “It’s an X sigma effect!” ask them how many data points there are, and ask them what the probability of seeing an X sigma effect somewhere in the distribution is. Three sigma is unlikely for 100 data points. Five sigma is pretty much unheard of for that many data points!

A typical distribution of simulated data with a one sigma band drawn.
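You can cook up a toy version of such a plot in a few lines of Python. This sketch assumes a flat expectation of 100 events per data point (so sigma = √100 = 10, roughly Gaussian):

```python
import random

# Draw 100 data points around a flat expectation and count how many
# fluctuate past 1, 2, and 3 sigma.
random.seed(1)
expected, sigma, n_points = 100.0, 10.0, 100
data = [random.gauss(expected, sigma) for _ in range(n_points)]

for k in (1, 2, 3):
    outside = sum(1 for x in data if abs(x - expected) > k * sigma)
    print(f"points beyond {k} sigma: {outside}")
# Typically ~32 points beyond 1 sigma and ~5 beyond 2 sigma: deviations
# of that size are business as usual in a 100-point spectrum.
```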

So far we’ve only looked at statistical effects, and found the probability of getting an X sigma deviation due to fluctuations. Let’s consider what happens with systematic uncertainties. Suppose we have a spectrum that looks like this:

A sample distribution with a suspicious peak.

It seems like we have a two-to-three sigma effect at the fourth data point. But if we look more closely, we can see that the fifth data point looks a little low. We can draw one of three conclusions here:

  1. the distribution shows a fluctuation that has a one in 50 chance of happening (when we take all the data points into account)
  2. there is some effect that is not accounted for in the model
  3. the model is correct, but something is causing events from one data point to “migrate” to another data point

In many cases the third conclusion will be correct. There are all kinds of non-trivial effects which can change the shape of the data points, push events around from one data point to another, and create false peaks where, really, there is nothing to discover. In fact, I generated the distribution randomly and then manually moved 20 events from the 5th data point to the 4th data point. The correct distribution looks like this:

The sample distribution, corrected.
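For the curious, the fake peak took only a couple of lines to manufacture. A sketch of the trick (the bin contents here are invented):

```python
import random

# Manufacture a fake peak by "migrating" events between neighboring bins,
# as described above.
random.seed(7)
spectrum = [random.gauss(100, 10) for _ in range(10)]  # flat-ish spectrum
spectrum[3] += 20  # the 4th data point gains 20 events: a suspicious peak
spectrum[4] -= 20  # the 5th point loses them: the telltale dip next door
```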

So when we throw sigmas around in conversation, we should also ask people what the shape of the data looks like. If there is a suspicious downward fluctuation in the vicinity of an upward fluctuation, be careful! Similarly, if someone points to an upward fluctuation while ignoring a similarly sized downward fluctuation, be careful! Fluctuations happen all the time, because of both statistical and systematic effects. Take X sigma with a pinch of salt. Ask for more details and look at the whole spectrum available. Ask for the probability that the effect is due to the underlying model.

Most of the time it’s a matter of “A sigma here, a sigma there, it all balances out in the end.” It’s only when the sigmas continue to pile up as we add more data that we should start to take things seriously. Right now I’d say we’re at the point where a potential Higgs discovery could go either way. There’s a good chance that there is a Higgs at 125 GeV, but there’s also a reasonable chance that it’s just a fluctuation. We’ve seen so many bumps and false alarms over the years that another one would not be a big surprise. Keep watching those sigmas! The magic number is five.


A Grumpy Note on Statistics

Tuesday, March 13th, 2012

Last week’s press release from Fermilab about the latest Higgs search results, describing the statistical significance of the excess events, said:

Physicists claim evidence of a new particle only if the probability that the data could be due to a statistical fluctuation is less than 1 in 740, or three sigmas. A discovery is claimed only if that probability is less than 1 in 3.5 million, or five sigmas.

This actually contains a rather common error — not in how we present scientific results, but in how we explain them to the public. Here’s the issue:

Wrong: “the probability that the data could be due to a statistical fluctuation”
Right: “the probability that, were there no Higgs at all, a statistical fluctuation that could explain our data would occur”

Obviously the first sentence fragment is easier to read — sorry![1] — but, really, what’s the difference? Well, if the only goal is to give a qualitative idea of the statistical power of the measurement, it likely doesn’t matter at all. But technically it’s not the same, and in unusual cases things could be quite different. My edited (“right”) sentence fragment is only a statement about what could happen in a particular model of reality (in this case, the Standard Model without the Higgs boson). The mistaken fragment implies that we know the likelihood of different possible models actually being true, based on our measurement. But there’s no way to make such a statement based on only one measurement; we’d need to include some of our prior knowledge of which models are likely to be right.[2]

Why is that? Well, consider the difference between two measurements, one of which observed the top quark with 5 sigma significance and the other of which found that neutrinos go faster than light with 5 sigma significance. If “5 sigma significance” really meant “the probability that the data could be due to a statistical fluctuation,” then we would logically find both analyses equally believable if they were done equally carefully. But that’s not how those two measurements were received, because the real interpretation of “5 sigma” is as the likelihood that we would get a measurement like this if the conclusion were false. We were expecting the top quark, so it’s a lot more believable that the excess is associated with the top quark than with an incredibly unlikely fluctuation. But we have many reasons to believe neutrinos can’t go faster than light, so we would sooner believe that an incredibly unlikely fluctuation had happened than that the measurement was correct.[3]
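To make the role of prior knowledge concrete, here’s a toy Bayesian calculation (emphatically not how we present results in papers; see footnote [2]). Every number is invented purely for illustration: the p-value is the one-sided 5 sigma tail probability, and the 50% detection probability is a guess.

```python
# Toy Bayes-rule illustration of why two equally careful "5 sigma" results
# can deserve very different levels of belief.
p_value = 2.9e-7       # P(data at least this extreme | effect is not real)
p_data_if_true = 0.5   # assumed P(data | effect is real), invented

def posterior(prior):
    """P(effect is real | data), by Bayes' rule."""
    numerator = prior * p_data_if_true
    return numerator / (numerator + (1 - prior) * p_value)

print(posterior(0.5))    # an expected effect (top quark): ~0.9999994
print(posterior(1e-9))   # a wildly implausible one (FTL neutrinos): ~0.002
```

Same significance, very different conclusions, entirely because of the prior.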

Isn’t it bad that we’d let our prior beliefs bias whether we think measurements are right or not? No, not as long as we don’t let them bias the results we present. It’s perfectly fair to say, as OPERA did, that they were compelled to publish their results but thought they were likely wrong. Ultimately, the scientific community does reach conclusions about which “reality” is more correct on a particular question — but one measurement usually can’t do it alone.

———————————

[1] For what it’s worth, I actually spent a while thinking and chatting about how to make the second sentence fragment simpler, while preserving the essential difference between the two. In this quest for simplicity, I’ve left off any mention of gaussian distributions, the fact that we really give the chance of a statistical fluctuation as large or larger than our excess, the phrase “null hypothesis,” and doubtless other things as well. I can only hope I’ve hit that sweet spot where experts think I’ve oversimplified to the point of incorrectness, while non-expert readers still think it’s completely unreadable. 😉

[2] The consensus among experimental particle physicists is that it’s not wise to include prior knowledge explicitly in the statistical conclusions of our papers. Not everyone agrees; the debate is between Frequentist and Bayesian statistics, and a detailed discussion is beyond the scope of both this blog entry and my own knowledge. A wider discussion of the issues in this entry, from a Bayesian perspective, can be found in this preprint by G. D’Agostini. I certainly don’t agree with all of the preprint, but I do owe it a certain amount of thanks for help in clarifying my thinking.

[3] A systematic mistake in the result, or in the calculation of uncertainties, would be an even likelier suspect.


Update: Section added to include Lepton-Photon 2011 results on Higgs boson exclusion (01 Sept 2011)

Expect bold claims at this week’s SUSY 2011 (#SUSY11 on Twitter, maybe) Conference at Fermilab, in Batavia, Illinois. No, I do not have any secret information about some analysis that undoubtedly proves Supersymmetry’s existence, though it would be pretty cool if such an analysis did exist. I say this because I have just come back from a short summer school/pre-conference that gave a very thorough introduction to the mathematical framework behind a theory supposing that there exists a new and very powerful relationship between the particles that make up matter, like electrons & quarks (fermions), and the particles that mediate the forces in our universe, like photons & gluons (bosons). This theory is called “Supersymmetry”, or “SUSY” for short, and it might explain many of the shortcomings of our current description of how Nature works.

At this summer school, appropriately called PreSUSY 2011, we were also shown the amount of data that the Large Hadron Collider is expected to collect by the end of this year and by the end of 2012. This is where the game changed. Back in June 2011, CERN announced that it had collected 1 fb-1 (1 inverse femtobarn) worth of data – the equivalent of 70,000 billion proton-proton collisions – a whole six months ahead of schedule. Yes, the Large Hadron Collider generated a year’s worth of data in half a year’s time. What is more impressive is that the ATLAS and CMS experiments may each end up collecting upwards of 5 fb-1 before the end of this year, a benchmark that many people had called a “highly optimistic goal” even for 2012. I cannot emphasize how crazy & surreal it is to be seriously discussing the possibility of having 10 fb-1, or even 15 fb-1, by the end of 2012.

Figure 1: Up-to-date record of the total number of proton collisions delivered to each of the Large Hadron Collider detector experiments. (Image: CERN)

What this means is that by the end of this year, not next year, we will definitely know whether or not the higgs boson, as predicted by the Standard Model, exists. It also means that by next year, experimentalists will be able to directly rule out the most basic versions of Supersymmetry, the same versions that were already ruled out indirectly by previous, high-precision measurements of known (electroweak) physics. Were we to find Supersymmetry at the LHC now, and not when the LHC reaches its design specifications (expected in 2014), then many physicists would be at a loss trying to reconcile why one set of measurements rules out SUSY while another set supports its existence.

What we can expect this week, aside from the usual higgs boson and SUSY exclusion plots, is a set of updated predictions as to where we expect to be this time next year. Now that the LHC has given us more data than we had anticipated, we can truly explore the unknown, so trust me when I say that the death of SUSY has been greatly exaggerated.

More on Higgs Boson Exclusion (Added 01 Sept 2011)

This morning a new BBC article came out on the possibility of the higgs being found by Christmas. So why not add some plots from August’s Lepton-Photon 2011 Conference that back this up? These plots were taken from Vivek Sharma’s Higgs Searches at CMS talk.

If there is no Standard Model higgs boson, then the Compact Muon Solenoid detector, one of the two general-purpose LHC detectors, should be able to exclude the boson singlehandedly at a 95% confidence level. ATLAS, the second of the two general-purpose detectors, is similarly capable of such an exclusion.

Figure A: The CMS Collaboration projected sensitivity to excluding the higgs boson with 5 fb-1 at √s = 7 TeV; the black line gives combined (total) sensitivity.

Things get less clear if there is a higgs boson, because physical & statistical fluctuations add to our uncertainty. If CMS does collect 5 fb-1 before the winter shutdown, then it is capable of claiming at least a 3σ (three-sigma) discovery for a higgs boson with a mass anywhere between mH ≈ 120 GeV/c2 and mH ≈ 550 GeV/c2. For a number of (statistical/systematic) reasons, the range might shrink or expand with 5 fb-1 worth of data, but only by a few GeV/c2. In statistics, “σ” (sigma) is the Greek letter that represents a standard deviation; a “3σ result” implies that there is only about a 0.3% chance of it being a fluke. The threshold for discovery is set at 5σ, or about a 0.00006% chance of being a random fluke.
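As a sanity check, those fluke probabilities follow from the two-sided Gaussian tail; a two-line sketch:

```python
import math

# Two-sided Gaussian tail: P(|X| > n sigma) = erfc(n / sqrt(2)).
for n in (3, 5):
    print(f"{n} sigma: {math.erfc(n / math.sqrt(2)):.7%}")
# 3 sigma: ~0.27% (quoted as 0.3%); 5 sigma: ~0.0000573% (quoted as 0.00006%)
```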

Figure B: The CMS Collaboration projected sensitivity to discovering the higgs boson with 1 (black), 2 (brown?), 5 (blue), and 10 (pink) fb-1 at √s = 7 TeV.

By itself, however, the CMS detector cannot cover the entire mass range. By combining results, a joint ATLAS-CMS analysis can deliver the full 3σ sensitivity, and even a 5σ discovery down to 128 GeV/c2. The 114 GeV/c2 benchmark that physicists like to throw around is the lower bound on the higgs boson mass set by CERN’s LEP collider, which shut down in 2000 to make room for the LHC.

Figure C: The projected sensitivity of a joint ATLAS-CMS analysis for SM higgs exclusion & discovery for various benchmark data sets.

However, there are two caveats in all of this. The smaller one is that these results depend on another 2.5 fb-1 being delivered before the upcoming winter shutdown; if there are any more major halts in data collection, then the mark will be missed. The second, and more serious, caveat is that this whole time I have been talking about the Standard Model higgs boson, which comes with a pretty rigid set of assumptions. If there is new physics, then all these discovery/exclusion bets are off. 🙂

Nature’s Little Secrets

On my way to PreSUSY, a good colleague of mine & I decided to stop by Fermilab to visit a friend and explore the little secret nooks that make Fermilab, in my opinion, one of the most beautiful places in the world (keep in mind, I really love the Musée d’Orsay). What makes Fermilab such a gorgeous place is that it doubles as a federally sanctioned nature preserve! From bison to butterflies, the lab protects endangered or near-endangered habitats while simultaneously reaching back to the dawn of the Universe. Here is a little photographic tour of some of Nature’s best kept secrets. All the photos can be enlarged by clicking on them. Enjoy!

Figure 2: The main entrance to the Enrico Fermi National Accelerator Laboratory, U.S. Dept. of Energy Laboratory Designation: FNAL, nicknamed Fermilab. The three-way arch that does not connect evenly at the top is called Broken Symmetry and appropriately represents a huge triumph of Theoretical (Solid State & High Energy) Physics: Spontaneous Symmetry Breaking. Wilson Hall, nicknamed “The High-Rise”, can be seen in the background. (Image: Mine)

Figure 3: Wilson Hall, named after FNAL’s first director and Manhattan Project Scientist Robert Wilson, is where half of Fermilab’s magic happens. Aside from housing all the theorists & being attached to the Tevatron Control Room, it also houses a second control room for the CMS Detector called the Remote Operations Center. Yes, the CMS Detector can be fully controlled from Fermilab. The photo was taken from the center of the Tevatron ring. (Image: Mine)

Figure 4: A wetlands preserve located at the center of the Tevatron accelerator ring. The preserve has been so successful at restoring local fish populations that people with an Illinois fishing license (see FAQ) are actually allowed to fish. From what I have been told, the fish are exceptionally delicious the closer you get to the Main Ring. I wonder if it has anything to do with all that background neutrino rad… never mind. 🙂
Disclaimer: The previous line was a joke; the radiation levels at Fermilab are well within safety limits! (Image: Mine)

Figure 5: The Feynman Computing Center (left) and BZero (right), a.k.a. the CDF Detector Collision Hall. The Computing Center, named after the late Prof. Richard Feynman, cannot justly be compared to any other data center, except maybe CERN’s computing center. Really, there is so much experimental computing research, so much custom-built electronics, and such huge processing power that there are no benchmarks that allow it to be compared. Places like Fermilab and CERN set the benchmarks. The Collider Detector at Fermilab, or CDF for short, is one of two general-purpose detectors at Fermilab that collect and analyze the decay products of proton & anti-proton collisions. Magic really does happen in that collision hall. (Image: Mine)

Figure 6: The DZero Detector Collision Hall (blue building, back), the Tevatron cooling river (center), and the Collision Hall access road (foreground). Like CDF (Figure 5), DZero is one of two general-purpose detectors at Fermilab that collect and analyze the decay products of proton & anti-proton collisions. There is no question that the Tevatron generates a lot of heat. It was determined long ago that, by taking advantage of the area’s annual rainfall and temperature, the operating costs of running the collider could be drastically cut by using a naturally replenishable source of water to cool the collider. If there were ever a reason to invest in a renewable energy source, this would be it. The access road doubles as a running/biking track for employees and site visitors. If you run, one question often asked by other scientists is whether you are a proton or an anti-proton: the anti-protons travel clockwise in the Main Ring, so you are called an anti-proton if you bike/run with the anti-protons; the protons travel counter-clockwise. FYI: I am an anti-proton. (Image: Mine)

Figure 7: The Barn (red barn, right) and the American bison pen (fence, foreground). Fermilab was built on prairie land, so I find it entirely appropriate that the laboratory does all it can to preserve an important part of America’s history, i.e., forging the Great American Frontier. Such a legacy of expanding into the unknown drives Fermilab’s mantra of being an “Ongoing Pioneer of Exploring the Frontier of Discovery.” (Image: Mine)

Figure 8: American bison (Bison bison) in the far background (click to enlarge). At the time of the photo, a few calves had just recently been born. (Image: Mine)

Happy Colliding.

– richard (@bravelittlemuon)