Haim Harari once said that “neutrino physics is largely an art of learning a great deal by observing nothing.” These words are as true today as when Harari stated them, but today we finally have the tools to start seeing *something*, if we’re patient enough to wait for these neutrino sources to reveal themselves to our telescopes watching the whole sky. So, if you’re patient enough to read this long post, you’ll come away with a decent idea of how we try to pinpoint these sources.

#### The neutrino sky

So far, only two extraterrestrial objects have been observed using neutrinos as a detection channel: the Sun (relatively “easy” to spot, since we are very close to it), and the supernova SN1987A (the brightest and closest in almost 400 years.)

Since 1987, new and more powerful neutrino telescopes have come online, and the data being taken may reveal new point sources in the sky that are not as “evident” as a nearby supernova, or the Sun itself.

The detection technique is, in principle, quite simple. You take all the neutrino events that have been recorded by one telescope, and you plot them in a sky map to see how they are distributed. If some clustering appears around a certain direction, that may indicate that a distant astrophysical object is waving to us in neutrinos.

The key issue is, how do we know for sure that such clustering is not there by chance? How certain are we that this neutrino “bump” in the sky is a real signal, instead of a temporary pattern that will fade away as soon as we take more data?

**The method**

Let me explain the method using a fictional detector. Let’s say that, after a lot of building and data taking, we finally obtain what we want: a set of 1000 neutrino events coming from almost every direction in the sky. Our goal is to determine whether these neutrinos are distributed randomly across the sky, or whether there’s clustering around some preferential directions, which would indicate the presence of sources. Our fake neutrino sky is very small, say 10 x 10 degrees in size, which gives a density of 10 events per square degree.

From our knowledge of the detector (from simulations mostly) we have determined that we cannot get the direction of the incoming neutrino to a precision better than, say, 1 degree. This means that if we were to draw a circle in the map with a radius of 1 degree, all the events within the circle could actually be coming from a single point inside it, but there’s no way to tell that by simply looking at the map because of our resolution. The shape and size of this “circle” are characteristic of the instrument, and are properties of what is called the “point spread function”, or PSF, of the apparatus. For purposes of clarity, let’s assume that the PSF of our detector is a square of 2 x 2 degrees (a rather unphysical shape, but that’s OK for this explanation.)

Here’s where some statistical tools enter the game. Since we have a 2×2-degree PSF, and there are 1000 events, we expect that *on average* there will be 40 neutrino events inside our square no matter where we put it on the map. Of course, there will be times when the number is lower and times when it is higher than 40, but how far from 40 events do we need to get to say that there’s actually something significant in a region of the sky? Luckily for us, this question was answered more than 150 years ago by the French mathematician Siméon Denis Poisson. Using the Poisson probability distribution we see that, if we expect 40 events on average, the probability of getting at least 50 events in the square is about 7%, and the probability of getting at least 60 is just 0.2%.
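These tail probabilities are easy to check yourself. Here’s a small Python sketch using only the standard library; the `poisson_tail` helper is just for illustration, not part of any real analysis code:

```python
from math import exp

def poisson_tail(k, lam):
    """P(X >= k) for X ~ Poisson(lam), via the complement of the CDF."""
    term = exp(-lam)          # P(X = 0)
    cdf = 0.0
    for i in range(k):        # accumulate P(X <= k - 1)
        cdf += term
        term *= lam / (i + 1)  # P(X = i) -> P(X = i + 1)
    return 1.0 - cdf

mean = 40  # 1000 events / (100 deg^2) * (4 deg^2 PSF square)
print(poisson_tail(50, mean))  # ~0.07, the ~7% quoted above
print(poisson_tail(60, mean))  # ~0.002, the ~0.2% quoted above
```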

Now, 0.2% may seem extremely rare in everyday life (I mean, it’s one chance in 500!) but for particle physics it’s really nothing exceptional. The rules are that if you see something that would happen by chance with a probability of about 1 in a thousand (what is called a 3-sigma significance), that indicates only *evidence* that there’s something interesting going on in that part of the sky. A real *discovery* asks for a 5-sigma significance, which is something that would happen by chance with a probability of only about 1 in 3.5 million! In our case we would need to see ~77 events inside the square to claim a discovery.
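If you’re curious where a number like ~77 comes from, here is a sketch that turns the 3- and 5-sigma thresholds into one-sided chance probabilities and then finds the smallest event count whose Poisson tail drops below the discovery threshold (the `poisson_tail` helper is made up for this illustration):

```python
from math import exp
from statistics import NormalDist

def poisson_tail(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    term, cdf = exp(-lam), 0.0
    for i in range(k):
        cdf += term
        term *= lam / (i + 1)
    return 1.0 - cdf

# one-sided chance probabilities for the two thresholds
p3 = 1 - NormalDist().cdf(3)   # ~1.3e-3, "evidence"
p5 = 1 - NormalDist().cdf(5)   # ~2.9e-7, "discovery"

# smallest count whose Poisson(40) tail is below the 5-sigma probability
n = 40
while poisson_tail(n, 40) >= p5:
    n += 1
print(n)  # lands in the mid-70s, close to the ~77 quoted above
```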

“Great,” you say, clearly relieved, “now we just have to move this square around and see if we can get 77 events inside it.” Well, actually, we’re not quite there yet. There’s still a subtle problem: since every random map will have “hotter” and “colder” places, and we’re looking for the hottest place in the entire map, we have to account for what we call the “trial factor.”

**Trial factors**

To describe this, let’s fast-forward to a look at real data, from a real detector. I extracted the following plots from a very illustrative and entertaining talk given by my fellow IceCuber Chad Finley, so the credit goes to him. In the image below you see a skymap with 5114 events observed by IceCube in 2007-08, when the detector had 22 strings installed (~1/4 of the planned total.) The color map that you see is the result of scanning through the map with the real PSF of the IceCube detector, with the color indicating the statistical significance of the events in any particular part of the sky. The scale is in -log_{10} p, with p being the probability as I explained above (not exactly, since it involves many more details, but let’s say it is.) A -log_{10} p of 3 (dark blue) means that the probability of having such a configuration by pure coincidence is 10^{-3} (or 0.1%.)

Almost immediately, you’ll notice that there’s a “hot spot” at right ascension 153° and declination 11°. Again we ask: what are the chances that we see such a spot in the sky given our dataset *and* the fact that we scanned the entire sky looking for it? To answer this question we use another technique: we take the same data and scramble the coordinates, generating a random map, and we keep doing this until we have 10000 of these random maps. For each random map we record the highest significance (the highest -log_{10} p) that the map reached *out of pure luck* (since we know they’re totally random), and we put these significances in a histogram like the one shown below.

In the histogram of the 10000 scrambled maps, we see that a -log_{10} p of 7 or above appears in ~10 random maps, indicating that such a value happens by chance with a probability of 0.1% (10 out of 10000 maps.) This drastically reduces our significance: at the beginning we thought that a spot with a -log_{10} p of 7 had a 5-sigma significance, and now it gets scaled down to 3 sigma after *accounting for the trial factor* (which is what this procedure is called.)
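The scrambling procedure is easy to imitate with a toy Monte Carlo. The sketch below uses the fictional detector from earlier in the post (1000 events on a 10×10-degree sky, a 2×2-degree square PSF), not real IceCube data, and only 1000 scrambled maps to keep it quick; all numbers and names are illustrative:

```python
import random

random.seed(1)
N_EVENTS, N_MAPS, SIZE = 1000, 1000, 10  # toy values, not IceCube's

def max_window_count(xs, ys):
    """Bin events into 1-degree cells, then return the hottest 2x2-degree window."""
    grid = [[0] * SIZE for _ in range(SIZE)]
    for x, y in zip(xs, ys):
        grid[min(int(x), SIZE - 1)][min(int(y), SIZE - 1)] += 1
    best = 0
    for i in range(SIZE - 1):
        for j in range(SIZE - 1):
            c = grid[i][j] + grid[i + 1][j] + grid[i][j + 1] + grid[i + 1][j + 1]
            best = max(best, c)
    return best

# scrambled maps: same number of events, directions fully randomized
maxima = []
for _ in range(N_MAPS):
    xs = [random.uniform(0, SIZE) for _ in range(N_EVENTS)]
    ys = [random.uniform(0, SIZE) for _ in range(N_EVENTS)]
    maxima.append(max_window_count(xs, ys))

observed = 60  # a count with a pre-trial chance probability of ~0.2%
post_trial_p = sum(m >= observed for m in maxima) / N_MAPS
print(post_trial_p)  # much larger than 0.002 once every window is searched
```

The point of the exercise: a count that looks like a 1-in-500 fluke at a fixed position shows up somewhere on the map in a sizable fraction of purely random skies, which is exactly the trial factor at work.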

Going back to the real map with the “hot spot”, you see that the -log_{10} p is around 6 (6.18 when looking at the numbers.) What is the significance after accounting for the trial factor? We go to the histogram and count the number of maps that had a -log_{10} p higher than 6.18, and the answer is 67. So the “real” significance is around 2.2 sigma (corresponding to 67/10000), well below the 3-sigma threshold for evidence of a point source. You can find the complete paper describing these results here.

**Final remarks**

There are new results for point-source searches from IceCube available already, but my idea here was not to show the latest results but rather to describe the procedure used to look for sources (don’t worry, we haven’t seen anything yet anyway.) I hope that over the next few years, as we finish the construction of the detector this coming winter and gather more data, we will be able to announce the discovery of neutrino point sources. I also hope, on a more personal and probably selfish level, that this happens before I finish my PhD! 🙂