Laura Gladstone | University of Wisconsin, Madison | USA

A Physicist and Historian Walk Into a Coffee Shop

Saturday, July 26th, 2014

It’s Saturday, so I’m at the coffee shop working on my thesis again. It’s become a tradition over the last year that I meet a writer friend each week, we catch up, have something to drink, and sit down for a few hours of good-quality writing time.

The work desk at the coffee shop: laptop, steamed pork bun, and rosebud latte.

We’ve gotten to know the coffee shop really well over the course of this year. It’s pretty new in the neighborhood, but dark and hidden enough that business is slow, and we don’t feel bad keeping a table for several hours. We have our favorite menu items, but we’ve tried most everything by now. Some mornings, the owner’s family comes in, and the kids watch cartoons at another table.

I work on my thesis mostly, or sometimes I’ll work on analysis that spills over from the week, or I’ll check on some scheduled jobs running on the computing cluster.

My friend Jason writes short stories, works on revising his novel (magical realism in ancient Egypt in the reign of Rameses XI), or drafts posts for his blog about the puzzles of the British constitution. We trade tips on how to organize notes and citations, and how to stay motivated. So I’ve been hearing a lot about the cultural difference between academic work in the humanities and the sciences. One of the big differences is the level of citation that’s expected.

As a particle physicist, when I write a paper it’s very clear which experiment I’m writing about. I only write about one experiment at a time, and I typically focus on a very small topic. Because of that, I’ve learned that the standard for making new claims is that you usually make one new claim per paper, and it’s highlighted in the abstract, introduction, and conclusion with a clear phrase like “the new contribution of this work is…” It’s easy to separate which work you claim as your own and which work is from others, because anything outside “the new contribution of this work” belongs to others. A single citation for each external experiment should suffice.

For academic work in history, the standard is very different: the writing itself is much closer to the original research. As a start, you'll need a citation for each quote, going to sources that are as primary as you can get your hands on. The stranger idea for me is that you also need a citation for every analytical idea that someone else has come up with, and that a statement without a citation is implicitly claimed as original work. This shows up in the difference between Jason's posts about modern constitutional issues and historical ones: the historical ones have huge source lists, while the modern ones are content with a few hyperlinks.

In both cases, things that are “common knowledge” don’t need to be cited, like the fact that TeV cosmic rays exist (they do) or the year that Elizabeth I ascended the throne (1558).

There’s a difference in the number of citations between modern physics research and history research. Is that because of the timing (historical versus modern) or the subject matter? Do they have different amounts of common knowledge? For modern topics in physics and in history, the sources are available online, so a hyperlink is a perfect reference, even in a formal post. By that standard, all Quantum Diaries posts should be fine with the hyperlink citation model. But even in those cases, Jason puts footnoted citations to modern articles in the JSTOR database, and uses more citations overall.

Another cool aspect of our coffee shop is that the music is sometimes ridiculous, and it interrupts my thoughts if I get stuck in some esoteric bog. There’s an oddly large sample of German covers of 30s and 40s showtunes. You haven’t lived until you’ve heard “The Lady is a Tramp” in German while calculating oscillation probabilities. I’m kidding. Mostly.

Jason has shown me a different way of handling citations, and I’ve taught him some of the basics of HTML, so now his citations can appear as hyperlinks to the references list!

As habits go, I’m proud of this social coffee shop habit. I default to getting stuff done, even if I’m feeling slightly off or uninspired.  The social reward of hanging out makes up for the slight activation energy of getting off my couch, and once I’m out of the house, it’s always easier to focus.  I miss prime Farmers’ Market time, but I could go before we meet. The friendship has been a wonderful supportive certainty over the last year, plus I get some perspective on my field compared to others.

Welcome to Thesisland

Tuesday, July 22nd, 2014

When I joined Quantum Diaries, I did so with trepidation: while it was an exciting opportunity, I was worried that all I could write about was the process of writing a thesis and looking for postdoc jobs. I ended up telling the site admin exactly that: I only had time to work on a thesis and job hunt. I thought I was turning down the offer. But the reply I got was along the lines of “It’s great to know what topics you’ll write about! When can we expect a post?”. So, despite the fact that this is a very different topic from any recent QD posts, I’m starting a series about the process of writing a physics PhD thesis. Welcome.

The main thesis editing desk: laptop, external monitor, keyboard, mouse; coffee, water; notes; and lots of encouragement.

There are as many approaches to writing a PhD thesis as there are PhDs, but they can be broadly described along a spectrum.

On one end is the “constant documentation” approach: spend some fixed fraction of your time on documenting every project you work on. In this approach, the writing phase is completely integrated with the research work, and it’s easy to remember the things you’re writing about. There is a big disadvantage: it’s really easy to write too much, to spend too much time writing and not enough doing, or otherwise un-balance your time. If you keep a constant fraction of your schedule dedicated to writing, and that fraction is (in retrospect) too big, you’ve lost a lot of time. But you have documented everything, which everyone who comes after will be grateful for. If they ever see your work.

The other end of the spectrum is the “write like hell” approach (that is, write as fast as you can), where all the research is completed and approved before writing starts. This has the advantage that if you (and your committee) decide you’ve written enough, you immediately get a PhD! The disadvantage is that if you have to write about old projects, you’ll probably have forgotten a lot. So this approach typically leads to shorter theses.

These two extremes were first described to me (see the effect of thesis writing? It’s making my blog voice go all weird and passive) by two professors who were in grad school together and still work together. Each took one approach, and they both did fine, but the “constant documentation” thesis was at least twice (or was it three times?) as long as the “write like hell” thesis.

Somewhere between those extremes is the funny phenomenon of the “staple thesis”: a thesis primarily composed of all the papers you wrote in grad school, stapled together. A few of my friends have done this, but it’s not common in my research group because our collaboration is so large. I’ll discuss that in more detail later.

I’m going for something in the middle: as soon as I saw a light at the end of the tunnel, I wanted to start writing, so I downloaded the UW LaTeX template for PhD theses and started filling it in. It’s been about 14 months since then, with huge variations in the writing/research balance. To strike a balance between the two approaches, I’ve found it helpful to keep at least some notes about all the physics I do, but nothing too polished: it’s always easier to start from some notes, however minimal, than to start from nothing.

When I started writing, there were lots of topics available that needed some discussion: history and theory, my detector, all the calibration work I did for my master’s project–I could have gone full-time writing at that point and had plenty to do. But my main research project wasn’t done yet. So for me, it’s not just a matter of balancing “doing” with “documenting”; it’s also a question of balancing old documentation with current documentation. I’ve almost, *almost* finished writing the parts that don’t depend on my work from the last year or so. In the meantime, I’m still finishing the last bits of analysis work.

It’s all a very long process. How many readers are looking towards writing a thesis later on? How many have gone through this and found a method that served them well? If it was fast and relatively low-stress, would you tell me about it?

Why We Need an Event Viewer

Monday, June 30th, 2014

There’s a software tool I use almost every day, for almost any work situation. It’s good for designing event selections, for brainstorming about systematic errors, and for mesmerizing kids at outreach events. It’s good anytime you want to build intuition about the detector. It’s our event viewer. In this post, I explain a bit about how I use our event viewer, and also share the perspective of code architect Steve Jackson, who put the code together.

Steamshovel event viewer showing the event Mr. Snuffleupagus

The IceCube detector is buried in the glacier under the South Pole. The signals can only be read out electronically; there’s no way to reach the detector modules after the ice freezes around them. In designing the detector, we carefully considered what readout we would need to describe what happens in the ice, and now we’re at the stage of interpreting that data. A signal from one detector module might tell us the time, amplitude, and duration of light arriving at that detector, and we put those together into a picture of the detector. From five thousand points of light (or darkness), we have to answer: where did this particle come from? Does the random detector noise act the way we think it acts? Is the disruption from dust in the ice the same in all directions? All these questions are answerable, but the answers take some teasing out.

To help build our intuition, we use event viewer software to make animated views of interesting events. It’s one of our most useful tools as physicist-programmers. Like all bits of our software, it’s written within the collaboration, based on lots of open-source software, and unique to our experiment. It’s called “steamshovel,” a joke on the idea that you use it to dig through ice (actually, dig through IceCube data – but that’s the joke).

Meet Steve Jackson and Steamshovel

IceCube data from the event Mr. Snuffleupagus

Steve Jackson’s job on IceCube was originally maintaining the central software, a very broad job description. His background is in software including visualizations, and he’s worked as The Software Guy in several different physics contexts, including medical, nuclear, and astrophysics. After becoming acquainted with IceCube software needs, he narrowed his focus to building an upgraded version of the event viewer from scratch.

The idea of the new viewer, Steamshovel, was to write a general core in the programming language C++, and then higher-level functionality in Python. This splits the problem of drawing physics in the detector into two smaller problems: how to translate physics into easily describable shapes, like spheres and lines, and how to draw those spheres and lines in the most useful way. Separating these two levels makes the code easier to maintain, makes the core easier to update, and makes it easier for other people to add new physics ideas, but it doesn’t make it easier to write in the first place. (I’ll add: that’s why we hire a professional!) Steve says the process took about as long as he could have expected, considering Hofstadter’s Law, and he’s happy with the final product.

A Layer of Indirection 

As Steve told me, “Every problem in computer science can be addressed by adding a layer of indirection: some sort of intermediate layer where you abstract the relevant concepts into a higher level.” The extra level here is the set of lines and spheres that get passed from the Python code to the C++ code. By separating the defining from the drawing, this intermediate level makes it simpler to define new kinds of objects to draw.

A solid backbone, written with OpenGL in C++, empowers the average grad student to write software visualization “artists” as Python classes. These artists can connect novel physics ideas, written in Python, to the C++ backbone, without the grad student having to get into the details of OpenGL, or, hopefully, any C++.
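
To make that division of labor concrete, here is a minimal sketch of what an artist in this style could look like. The class and method names below are invented for illustration and are not the actual Steamshovel API; the point is only that the Python side decides what spheres to draw, while a separate rendering backbone decides how to draw them.

    # Hypothetical sketch of the artist pattern described above. The names are
    # made up for illustration; this is NOT the real Steamshovel interface.
    from collections import namedtuple

    Hit = namedtuple("Hit", ["position", "time", "charge"])        # one module readout
    Sphere = namedtuple("Sphere", ["position", "radius", "color"])  # drawing primitive

    class HitArtist:
        """Turn detector hits into spheres: early hits red, late hits blue,
        radius scaled by charge. No OpenGL appears anywhere in this class;
        the rendering backbone would consume the returned primitives."""

        def draw(self, hits):
            t0 = min(h.time for h in hits)
            t_span = (max(h.time for h in hits) - t0) or 1.0
            primitives = []
            for h in hits:
                frac = (h.time - t0) / t_span        # 0 = earliest, 1 = latest
                color = (1.0 - frac, 0.0, frac)      # red -> blue with time
                radius = 5.0 * h.charge ** 0.5       # illustrative scaling
                primitives.append(Sphere(h.position, radius, color))
            return primitives

    # Example: three hits along a string, ready to hand to the renderer.
    hits = [Hit((0.0, 0.0, z), t, q)
            for z, t, q in [(0.0, 100.0, 1.2), (17.0, 140.0, 0.8), (34.0, 180.0, 2.5)]]
    print(len(HitArtist().draw(hits)))   # -> 3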

Here’s a test of that simplicity: as part of our week-long, whirlwind introduction to IceCube software, we taught new students how to write new Steamshovel artists. With just a week of software training, they were able to produce working artists, a testament to the usability of the Steamshovel backbone.

This separation also lets the backbone include important design details that might not occur to the average grad student, but make the final product more elegant. One such detail is that the user can specify zoom levels much more easily, so graphics are not limited to the size of your computer screen. Making high-resolution graphics suitable for publication is possible and easy. Using these new views, we’ve made magazine covers, t-shirts, even temporary tattoos.

Many Platforms, Many People

IceCube is in an interesting situation in that we support (and have users running) our software on many different UNIX-like operating systems: Mac, Ubuntu, Red Hat, Fedora, Scientific Linux, even FreeBSD. But we don’t test our software on Windows, which is the standard platform for many complex visualization packages: yet another good reason to use the simpler OpenGL. “For cross-platform 3D graphics,” Steve says, “OpenGL is the low-level drawing API.”

As visualization software goes, the IceCube case is relatively simple. You can describe all the interesting things with lines and spheres, like dots for detector modules, lines and cylinders for the cables connecting them or for particle tracks, and spheres of configurable color and size for hits within the detector. There’s relatively little motion beyond appearing, disappearing, and changing sizes. The light source never moves. I would add that this is nothing – nothing! – like Pixar. These simplifications mean that the more elaborate software packages Steve had the option to use were unnecessarily complex, full of options that he would never use, and the simple, open-source OpenGL was perfectly sufficient.

The process of writing Steamshovel wasn’t just a one-man job (even though I only talked to one person for this post). Steve solicited, and received, ideas for features from all over the collaboration. I personally remember that when he started working here, he took the diligent and kind step of sitting and talking to several of us while we used the old event viewer, just to see what the workflow was like, the good parts and the bad. One particularly collaborative sub-project started when one IceCube grad student, Jakob, had the clever idea of displaying Monte Carlo true Cherenkov cones. We know where the simulated light emissions are, and how the light travels through the ice – could we display the light cone arriving at the detector modules and see whether a particular hit occurred at the same time? Putting together the code to make this happen involved several people (mainly Jakob and Steve), and wouldn’t have been possible coding in isolation.

Visual Cortex Processing

The moment that best captured the purpose of a good event viewer, Steve says, was when he animated an event for the first time. Specifically, he made the observed phototube pulses disappear as the charge died away, letting him see what happens on a phototube after the first signal. Animating the signal pulses made the afterpulsing “blindingly obvious.”

We know, on an intellectual level, that phototubes display afterpulsing, and it’s especially strong and likely after a strong signal pulse. But there’s a difference between knowing, intellectually, that a certain fraction of pulses will produce afterpulses and seeing those afterpulses displayed. We process information very differently if we can see it directly than if we have to construct a model in our heads based on interpreting numbers, or even graphs. An animation connects more deeply to our intuition and natural instinctive processes.

As Steve put it: “It brings to sharp relief something you only knew about in sort of a complex, long thought out way. The cool thing about visualization is that you can get things onto a screen that your brain will notice pre-cognitively; you don’t even have to consciously think to distinguish between a red square and a blue square. So even if you know that two things are different, from having looked carefully through the math, if you see those things in a picture, the difference jumps out without you even having to think about it. Your visual cortex does the work for you. [...] That was one of the coolest moments for me, when these people who understood the physics in a deep way nonetheless were able to get new insights on it just by seeing the data displayed in a new way. “

And that’s why we need event viewers.

IceCube DeepCore and Atmospheric Neutrino Mixing

Tuesday, June 3rd, 2014

Today at the Neutrino2014 conference in Boston, the IceCube collaboration showed an analysis looking for standard atmospheric neutrino oscillations in the 20-30 GeV region. Although IceCube has seen oscillations before, and reported them in a poster at the last Neutrino conference, in 2012, this plenary talk showed the first analysis where the IceCube error bands are becoming competitive with other oscillation experiments.

Neutrino oscillation is a phenomenon where neutrinos change from one flavor to another as they travel; it’s a purely quantum phenomenon. It has been observed in several contexts, including particle accelerators, nuclear reactors, cosmic rays hitting the atmosphere, and neutrinos traveling from our Sun. It is the first widely accepted phenomenon in particle physics that requires an extension to the Standard Model, whose capstone was the observation of the Higgs boson at CERN. Neutrinos and neutrino oscillations represent the next stage of particle physics, beyond the Higgs.

Of the parameters used to describe neutrino oscillations, most have been previously measured. The mixing angles that describe oscillations are the most recent focus of measurement. Just two years ago, the last of the neutrino mixing angles was measured by the Daya Bay experiment. Among the mixing angles, the atmospheric angle accessible to IceCube remains the least constrained by experimental measurements.

IceCube, because of its size, is in a unique position to measure the atmospheric mixing angle. Considering neutrinos that traverse the diameter of the Earth, the oscillation effect is the strongest in the energy region from 20 to 30 GeV, and an experiment that can contain a 20 GeV neutrino interaction must be very large. The Super Kamiokande experiment in Japan, for example, also measures atmospheric oscillations, but because of its small size relative to IceCube, Super Kamiokande can’t resolve energies above a few GeV. At any higher energies, the detector is simply saturated. Other experiments can measure the same mixing angle using accelerator beamlines, like the MINOS experiment that sends neutrinos from Fermilab to Minnesota. Corroborating these observations from several experimental methods and separate experiments proves the strength of the oscillation framework.
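
A quick back-of-the-envelope check makes the 20–30 GeV number plausible. In the standard two-flavor approximation, the muon neutrino survival probability is P = 1 − sin²(2θ) sin²(1.27 Δm² L / E), and for neutrinos crossing the full diameter of the Earth the first oscillation maximum lands near 25 GeV. The numbers below are rounded, typical published values (maximal mixing and Δm² ≈ 2.4 × 10⁻³ eV²), used only for illustration, not IceCube’s fit results.

    import math

    def survival_prob(energy_gev, baseline_km=12742.0,
                      dm2_ev2=2.4e-3, sin2_2theta=1.0):
        """Two-flavor nu_mu survival probability:
        P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV]).
        Default baseline is the Earth's diameter (straight up-going events);
        the oscillation parameters are rounded illustrative values."""
        phase = 1.27 * dm2_ev2 * baseline_km / energy_gev
        return 1.0 - sin2_2theta * math.sin(phase) ** 2

    # Energy where the oscillation phase first reaches pi/2 (maximum disappearance):
    e_max = 1.27 * 2.4e-3 * 12742.0 / (math.pi / 2.0)
    print(f"first oscillation maximum near {e_max:.0f} GeV")   # ~25 GeV

    for e in (10, 25, 50, 100):
        print(f"{e:>4} GeV: P(survival) = {survival_prob(e):.2f}")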

The sheer size of IceCube means that neutrinos have many chances to interact and be observed within the detector, giving IceCube a statistical advantage over other oscillation experiments. Even after selecting only the best reconstructed events, the remaining experimental sample still has over five thousand events from three years of data. Previous atmospheric oscillation experiments based their analyses on hundreds of events or fewer, relying instead on a precise understanding of systematic effects.

The IceCube collaboration is composed of more than 250 scientists from about 40 institutions around the world, mostly from the United States and Europe. The current results are possible because of decades of planning and construction, dedicated detector operations, and precise calibrations from all over the IceCube collaboration.

IceCube has several major talks at the Neutrino conference this year, the first time that the collaboration has had such a prominent presence. In addition to the new oscillations result, Gary Hill spoke in the opening session about the high energy astrophysical neutrinos observed over the last few years. Darren Grant spoke about the proposed PINGU infill array, which was officially encouraged in the recent P5 report. IceCube contributed nine posters on far-ranging topics from calibration and reconstruction methods to a neutrino-GRB correlation search. The conference-inspired display at the MIT museum is about half IceCube material, including an 8-foot tall LED model of the detector. One of three public museum talks on Saturday will be from (yours truly) Laura Gladstone about the basics of IceCube science and life at the South Pole.

One new aspect of the new oscillation analysis is that it uses an energy reconstruction designed for the low end of the energy range available to IceCube, in the tens-of-GeV range. In this range, only a handful of hits are visible for each event, and reconstructing directional information can be tricky. “We took a simple but very clever idea from the ANTARES Collaboration, and rehashed it to tackle one of our biggest uncertainties: the optical properties of the ice. It turned out to work surprisingly well,” says IceCuber Juan Pablo Yanez Garza, who brought the new reconstruction to IceCube, and presented the result in Boston.  By considering only the detector hits that arrive without scattering, the reconstruction algorithm is more robust against systematic errors in the understanding of the glacial ice in which IceCube is built. 
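
One way to picture “using only the unscattered light” is as a cut on time residuals: compare each hit’s arrival time with the time you would expect for light that traveled straight from the track without scattering, and keep only hits that land in a narrow window around that expectation. The sketch below is an illustration of that idea under assumed inputs, not the actual IceCube reconstruction; the window values and data layout are placeholders.

    def select_direct_hits(hits, expected_times, window_ns=(-15.0, 75.0)):
        """Keep hits whose time residual (observed minus expected unscattered
        arrival time) falls inside a narrow window. Scattered photons arrive
        late and fall outside the window. The window edges here are
        illustrative placeholders, not the analysis cut.

        hits: list of (module_id, observed_time_ns) pairs
        expected_times: dict mapping module_id -> expected unscattered arrival time
        """
        direct = []
        for module_id, t_obs in hits:
            t_res = t_obs - expected_times[module_id]
            if window_ns[0] <= t_res <= window_ns[1]:
                direct.append((module_id, t_obs))
        return direct

    # Toy example: the third hit arrives 200 ns late (heavily scattered) and is dropped.
    hits = [(1, 1005.0), (2, 1052.0), (3, 1310.0)]
    expected = {1: 1000.0, 2: 1040.0, 3: 1110.0}
    print(select_direct_hits(hits, expected))   # -> [(1, 1005.0), (2, 1052.0)]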

The IceCube Moon Shadow

Monday, June 2nd, 2014

In a previous post, Marcos Santander wrote about a paper he and other IceCubers were working on looking for the shadow of the Moon in  cosmic rays raining down on Earth. Now that paper has been published!

The shadow of the Moon as was observed with the 59-string configuration of IceCube.

The idea of the Moon shadow is simple: to make sure that our detector is pointed the way we think it’s pointed, we look for a known source. The Moon makes a very nice known source, because it blocks cosmic rays from reaching the Earth, and so we see a deficit of cosmic ray air showers (and thus the muons they produce) from the direction of the Moon. By seeing the deficit where we expect it, we know that we can trust directions within the detector, or as the paper puts it, “this measurement validates the directional reconstruction capabilities of IceCube.”

It’s always funny adding the language of modern statistical significance to discussions like this, because it makes them sound rather absurd (at least using the frequentist school of statistics). We talk about the random probability that a null (boring) hypothesis could produce the same signal, so smaller probabilities are more significant, and we talk about those probabilities in terms of the area under a “normal” or “gaussian” distribution, measured in the width sigma of that gaussian. A 2-sigma result is farther out in the tail of the gaussian, and less likely (so more significant) than a 1-sigma result.

We’ve arrived at a convention in particle physics that when your data reach 3-sigma significance, you can call it “evidence,” and when they reach 5 sigma, you can call it “discovery.” That’s purely convention, and it’s useful, although scientists should know the limits of the terminology.
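
For concreteness, here is what those conventions correspond to as Gaussian tail probabilities, computed with the complementary error function. I’m assuming the usual one-sided convention here, which is the common choice in particle physics.

    import math

    def one_sided_p(n_sigma):
        """Area in the upper tail of a standard Gaussian beyond n_sigma."""
        return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

    for n in (1, 2, 3, 5):
        print(f"{n} sigma -> p = {one_sided_p(n):.1e}")

    # 3 sigma ("evidence")  -> p ~ 1.3e-03
    # 5 sigma ("discovery") -> p ~ 2.9e-07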

That leads to absurd sounding lines like “IC22 has seen evidence of the Moon, while IC40 and IC59 have discovered it.” This is, technically, correct. What we’re really discovering here, though, is not that the Moon exists but that the IceCube detector works the way we expect it to.

Another consideration demonstrated by this paper is that it takes a long time to get a paper through the publication process. Now that the whole process is completed, we can celebrate. I’ve been following this analysis since I started working on it for my master’s thesis, then handed it off to other IceCubers while I switched to neutrino oscillations. Do any of you have stories of long review processes?  Does anyone have a favorite other experiment that has looked at the Moon shadow?

The story we expected for neutrino astronomy

Thursday, May 1st, 2014

Since IceCube was proposed, people have been claiming that you can get a new view of astrophysics by using particles instead of light, and we were pretty sure what the journey would look like. It hasn’t gone quite in the order we expected, but we’re getting that new view of astrophysics, and also, a few years later, filling in the steps we expected to fill first. When we find bits of scientific evidence in a different order than we expected, does that change how excited we get about them?

The sunrise at the South Pole over the IceCube laboratory, the central building on top of the IceCube Neutrino Observatory.

We’ve been expanding astronomy since it started. First, astronomers used telescopes to resolve visible light better. Later, they expanded to different regions of the light spectrum like x-rays and gamma rays. Then, it was a small step to expand from gamma rays, which are easier to think of as particles than as waves, to particles like the atomic nuclei that make up cosmic rays. Neutrinos are another kind of particle we can use for astronomy, and they have unique advantages and challenges.

The hard part about using neutrinos as a messenger between the stars and us is that neutrinos very rarely interact with matter. This means that if thousands pass through our detector, we might only see a few. There are some ways around this, and the biggest trick IceCube uses is to look in a very large volume. If we look for more neutrinos at a time, we have more of a chance of seeing the few that interact. The other trick is that we concentrate on high energies, where the neutrinos have a higher chance of interacting in our detector.

The great thing about using neutrinos as a messenger is that they hardly ever interact, so almost nothing can stop them from arriving at our door. If we see a neutrino in IceCube, it came to us directly from something interesting. We know that its direction wasn’t deflected in any magnetic fields, and it wasn’t dimmed by dust clouds or even asteroid clouds. Every (rare) time we see a high-energy neutrino, it tells us something about the stars, explosions, or black holes that created it.

That’s the story that people like Francis Halzen used to get funding for IceCube originally, and around Madison we still get to hear him tell this story, with his inimitable accent, when he speaks at museums or banquets.

Comparing neutrino astronomy to other new 20th century advances in astronomy, we expected the development of the field to follow a certain story.

We expected that first we would see a “diffuse” signal. This would be part of a large sample including a lot of background events, but some component would only be explained by including astrophysical sources. In IceCube, one of the best ways of reducing background noise is to look for events traveling up through the Earth, since only neutrinos can pass through the Earth. We could also look at high energies, since backgrounds like atmospheric neutrinos fall off exponentially with energy. So we thought the first diffuse astrophysics signal would come from the high-energy tail of an upgoing sample.

After that, we expected to resolve the diffuse sample into some clusters, and after a few of the clusters remained consistent, to declare them astrophysical sources.

What we did instead was to skip to the end of this story. We found astrophysical neutrinos first, and then a diffuse upgoing signal only two years after that (just this past spring). The exciting part about finding this recent diffuse signal isn’t that it’s the first detection of astrophysics, or even the strongest. It’s exciting because it follows the story we thought neutrino astronomy was going to follow.

The first detection was exciting too. That used a different kind of analysis: we identified only a few events (28 in two years) that were extremely likely to be from astrophysical sources. These were so special that each one got a name, using the theme of the Muppets, from Sesame Street and the Muppet Show. One is named Bert, one Ernie, one Mr. Snuffleupagus, one Oscar the Grouch. If we keep analyzing our data this way and eventually get enough events, we can expand to the Muppet Babies cartoons and various Muppet movies, even including things like Labyrinth that used Jim Henson’s talents but not the Muppets specifically. I’m personally a big fan of the Muppet naming scheme, partly because it draws from a canon recent enough that it includes several women and many kinds of diversity. When naming events is our biggest problem, it will be a great day for neutrino astrophysics. For formal publications, we usually say “HESE” for “High Energy Starting Event,” instead of “muppets.”

The two bedrock assumptions of the muppet analysis were that (1) we’re the most interested in the highest energy events, and (2) the events must have started within the detector; they must be “contained.” That containment requirement means that they must have been neutrinos and not cosmic rays, since cosmic ray showers contain lots of stuff besides neutrinos that arrives at the same time. We can assume at the highest energies that no cosmic ray could make it through the outer layers of our detector without leaving a trace (unpacked: cosmic rays must leave a trace), but at lower energies some cosmic ray muons can sneak through. For the first muppet analysis, we get around this by just looking at the highest energies.

This is backwards from what we expected in two ways: first, the sample we get is mostly from neutrinos coming from above the detector, and second, there are almost no background events in our sample, so we don’t have to include directional clustering to know that we’ve seen astrophysics.

The sample is mostly downgoing because the highest energy neutrinos are blocked by the Earth. Higher energy neutrinos are more likely to interact than low-energy neutrinos; it’s the opposite of our momentum-based intuition from faster cars slamming through walls without stopping. It’s a popular trivium that neutrinos can pass through lightyears of lead without interacting, but that’s only true at low energy scales like the neutrinos from nuclear reactors. At IceCube astrophysics scales, it takes only our tiny planet to stop a neutrino. So the muppet events we do see are mostly ones that don’t pass through the Earth.

Since the muppets sample has almost no background events (at the very most, 10 of the 28, but we don’t know which 10), we don’t need to do a clustering analysis. Traditionally, we thought this was the most promising way to find neutrino point sources, and the background would be neutrinos from interactions in the Earth’s atmosphere. But at PeV energies, there aren’t enough atmospheric neutrinos to explain what we saw, so each event in the new analysis is potentially as interesting as a cluster would be in the old analysis.

We haven’t yet seen clusters using the old techniques, and when we do, it will probably be celebrated by a small party, an email around our collaboration, some nights out for the people involved, and a PhD for someone (or a few someones). But it won’t be the same cover-of-Science-Magazine celebration (that was Mr. Snuffleupagus on the cover) and press coverage that we had for the first discovery. It will be a quiet victory, as it was for the recent diffuse result.

While it doesn’t have to follow the script we expect it to, science can still sometimes choose to follow a familiar plotline. And we are comforted by the familiarity.
