Marcos Santander | IceCube | USA


Always check your sources

Friday, January 8th, 2010

Haim Harari once said, once and for all, that “neutrino physics is largely an art of learning a great deal by observing nothing.” These words are as true today as when Harari stated them, but today we finally have the tools to start seeing something, provided we’re patient enough to wait for these neutrino sources to reveal themselves to our telescopes watching the whole sky. So, if you’re patient enough to read this long post, you’ll come away with a decent idea of how we try to pinpoint these sources.

The neutrino sky

So far, only two extraterrestrial objects have been observed using neutrinos as a detection channel: the Sun (relatively “easy” to spot since we are very close), and the supernova SN1987A (the brightest and closest in almost 400 years).

The Sun, as seen in neutrinos by the Super-Kamiokande telescope in Japan after 500 days of data-taking.

Since 1987, new and more powerful neutrino telescopes have come online, and the data being taken may reveal new point sources in the sky that are not as “evident” as a nearby supernova, or the Sun itself.

The detection technique is, in principle, quite simple. You take all the neutrino events that have been recorded by one telescope, and you plot them in a sky map to see how they are distributed. If some clustering appears around a certain direction, that may indicate that a distant astrophysical object is waving to us in neutrinos.

The key issue is, how do we know for sure that such clustering is not there by chance? How certain are we that this neutrino “bump” in the sky is a real signal, instead of a temporary pattern that will fade away as soon as we take more data?

The method

Let me explain the method using a fictional detector. Let’s say that, after a lot of building and data-taking, we finally obtain what we want: a set of 1000 neutrino events coming from almost every direction in the sky. Our goal is to determine whether these neutrinos are distributed randomly across the sky, or whether there is clustering around some preferred directions, which would indicate the presence of sources. Our fake neutrino sky is very small: say, 10 x 10 degrees, which gives a density of 10 events per square degree.


A fictitious random sky map showing 1000 neutrino events in a 10 by 10 degree window. Our "search box" is 2 by 2 degrees, and hence should contain, on average, 40 events.

From our knowledge of the detector (mostly from simulations) we have determined that we cannot reconstruct the direction of an incoming neutrino to a precision better than, say, 1 degree. This means that if we were to draw a circle on the map with a radius of 1 degree, all the events within the circle could actually be coming from a single point inside it, but there’s no way to tell that by simply looking at the map, because of our resolution. The shape and size of this “circle” are characteristic of the instrument, and are properties of what is called the “point spread function”, or PSF, of the apparatus. For clarity, let’s assume that the PSF of our detector is a square of 2 x 2 degrees (a rather unphysical shape, but that’s OK for this explanation).

Here’s where some statistical tools enter the game. Since we have a 2 x 2 degree PSF, and there are 1000 events, we expect on average 40 neutrino events inside our square no matter where we put it on the map. Of course, sometimes the number will be lower and sometimes higher than 40, but how far from 40 events do we need to get before we can say that there’s actually something significant in that region of the sky? Luckily for us, this question was answered more than 150 years ago by the French mathematician Siméon Denis Poisson. Using the Poisson probability distribution we see that, if we expect 40 events on average, the probability of getting at least 50 events in the square is about 7%, and the probability of getting at least 60 is just 0.2%.

Now, 0.2% may seem extremely rare in everyday life (I mean, it’s just one chance in 500!), but in particle physics it’s really nothing exceptional. The rules are that if you see something that would happen by chance with a probability of about 1 in 1000 (what is called a 3-sigma significance), that counts only as evidence that there’s something interesting going on in that part of the sky. A real discovery requires a 5-sigma significance, something that happens by chance with a probability of only about 1 in 3.5 million! In our case we would need to see ~77 events inside the square to claim a discovery.
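These numbers are easy to check for yourself. Here is a quick sketch in plain Python (standard library only; the numbers describe the fictional detector above, not any real analysis):

```python
import math

def poisson_tail(k, lam):
    """P(X >= k) for X ~ Poisson(lam): 1 minus the sum of P(X = 0..k-1)."""
    term = math.exp(-lam)       # P(X = 0)
    below = 0.0
    for i in range(k):
        below += term
        term *= lam / (i + 1)   # P(X = i+1) from P(X = i)
    return 1.0 - below

MEAN = 40  # 1000 events over 100 square degrees, seen through a 2x2 degree box

print(poisson_tail(50, MEAN))  # chance of at least 50 events: about 7%
print(poisson_tail(60, MEAN))  # chance of at least 60 events: about 0.2%

# Smallest count whose chance probability drops below the one-sided
# 5-sigma level, i.e. the ~77-event "discovery" threshold quoted above.
k = MEAN
while poisson_tail(k, MEAN) > 2.87e-7:
    k += 1
print(k)
```

The loop just walks up the tail of the distribution until the chance probability crosses the 5-sigma line.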

Great, you say, clearly relieved: now we just have to move this square around and see if we can get 77 events inside it. Well, actually, we’re not quite there yet. There’s still a subtle problem: since every random map will have “hotter” and “colder” places, and we’re looking for the hottest place in the entire map, we have to account for what we call the “trial factor.”

Trial factors

To describe this, let’s fast-forward and look at real data from a real detector. I extracted the following plots from a very illustrative and entertaining talk given by my fellow IceCuber Chad Finley, so the credit goes to him. The image below shows a skymap with 5114 events observed by IceCube in 2007-08, when the detector had 22 strings installed (~1/4 of the planned total). The color map is the result of scanning across the map with the real PSF of the IceCube detector, with the color indicating the statistical significance of the events in each part of the sky. The scale is in -log10 p, with p being the probability I explained above (not exactly, since it involves many more details, but let’s say it is). A -log10 p of 3 (dark blue) means that the probability of having such a configuration by pure coincidence is 10^-3 (or 0.1%).

Significance skymap of the 5114 events observed by IceCube with 22 strings in 2007-08.

Almost immediately, you’ll notice that there’s a “hot spot” at right ascension 153° and declination 11°. Again we ask: what are the chances of seeing such a spot given our dataset and the fact that we scanned the entire sky looking for it? To answer this question we use another technique: we take the same data and scramble the coordinates, generating a random map, and we keep doing this until we have 10000 such random maps. For each random map we record the highest significance (the highest -log10 p) that it reached out of pure luck (since we know they’re totally random), and we put these significances in a histogram like the one shown below.

Histogram of the highest significance found in each of the 10000 scrambled maps.

The histogram contains the 10000 scrambled maps, and we see that a -log10 p of 7 or above shows up in ~10 random maps, indicating that such a value happens by chance with a probability of 0.1% (10 out of 10000 maps). This drastically reduces our significance: at first we thought that a spot with a -log10 p of 7 had a 5-sigma significance, but it gets scaled down to 3 sigma after accounting for the trial factor (which is what this procedure is called).
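The trial factor is easy to demonstrate with the fictional 10 x 10 degree detector from before. The sketch below (a toy illustration, not IceCube’s actual analysis chain) scatters 1000 events at random, scans a 2 x 2 degree box across the map, and keeps the hottest spot; even though every box averages 40 events, the maximum over all positions is systematically higher:

```python
import random

random.seed(1)

def hottest_box(n_events=1000, sky=10, box=2):
    """Drop events uniformly into 1-degree cells, then return the largest
    count found in any box x box degree window stepped across the map."""
    grid = [[0] * sky for _ in range(sky)]
    for _ in range(n_events):
        grid[random.randrange(sky)][random.randrange(sky)] += 1
    best = 0
    for i in range(sky - box + 1):
        for j in range(sky - box + 1):
            count = sum(grid[i + di][j + dj]
                        for di in range(box) for dj in range(box))
            best = max(best, count)
    return best

# The average box holds 40 events, but the hottest box in a purely random
# map typically holds well over 40: the price of searching the whole sky.
maxima = [hottest_box() for _ in range(300)]
print(sum(maxima) / len(maxima))
```

Scrambling real data plays the same role: it tells you how often random skies alone produce a spot as hot as the one you found.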

Going back to the real map with the “hot spot”, the -log10 p is around 6 (6.18 when looking at the numbers). What is the significance after accounting for the trial factor? We go to the histogram and count the number of maps that had a -log10 p higher than 6.18: the answer is 67. So the “real” significance is around 2.2 sigma (corresponding to 67/10000), well below the 3-sigma threshold for evidence of a point source. You can find the complete paper describing these results here

Final remarks

There are new results for point-source searches from IceCube available already, but my idea was not to show the latest results but rather to describe the procedure used to look for sources (don’t worry, we haven’t seen anything yet anyway). I hope that over the next few years, as we finish the construction of the detector this coming winter and gather more data, we will be able to announce the discovery of neutrino point sources. I also hope, on a more personal and probably selfish level, that this happens before I finish my PhD! 🙂


Relax, the year is over!

Thursday, December 31st, 2009

The year is officially over in more than half the world as I write these words. As every year comes to an end, it’s time to celebrate and to put many traditions into practice. Among these traditions, there are two that I wanted to share with you.

The first one is the UW-Madison Holiday Physics Colloquium. Every Friday throughout the year, a guest speaker gives a colloquium on his or her field of research, but the very last Friday before the end of classes is usually reserved for the 3rd-year grad students. The students put together an incredible collection of skits and spoof videos, combined with live performances. The usual topics are: grad school is terribly hard, TAing stinks, I can’t find a good topic for my thesis, and so on. The faculty always participate in the videos and performances, and we all have a good laugh. Pizza is available for the entire department, as well as plenty of beer (this is Wisconsin, after all!)

Here’s a video I found on YouTube of a past Holiday Colloquium. They are very good, but they get even better if you know the professors and students involved.

The second tradition was unknown to me: it’s a real Race Around the World! And it doesn’t even take 80 days, just a couple of hours. Apparently, this race is held every year at the South Pole Station in Antarctica. The participants spend some time building all kinds of weird, funny-looking vehicles and crazy costumes for the 4 km-long race around the South Pole marker. It is certainly a well-deserved moment of celebration after all the hard work they’ve put in there while being away from their families.

There were also a couple of serious runners racing against the clock.

Here are some pictures that I took from an appendix to the latest IceCube weekly report. Enjoy, and Happy New Year!



Eight letters, starts with neu…

Saturday, December 12th, 2009

Last week I received a link to a crossword featuring the names of past and present neutrino experiments. Give it a try!

Crossword

Solution

The author is Phil Rodrigues, from Oxford University.


Toy IceCube

Tuesday, November 10th, 2009

I wanted to share with you this entertaining video about IceCube (you know, the neutrino observatory that I talked about in a previous post).

The author of the video is Casey O’Hara, a high school teacher from California who in a matter of days will be travelling to the South Pole to join the IceCube people stationed there. Mr. O’Hara is involved in PolarTREC, an NSF-funded program that provides polar research experiences for teachers; you can get more details about his trip at this link.

Although the toy polar explorer seemed happy while drilling the ice and installing the string with phototubes, I’ve heard from the people who actually do this stuff that it’s a bit more difficult than it seems 😉

As proof, take a look at this video of the deployment of a string into its 2.5 km-deep hole in the ice.


My previous life in physics

Wednesday, October 7th, 2009

Before joining the IceCube collaboration, I spent several years working on another astroparticle physics experiment: the Pierre Auger Observatory, located in the western part of my home country, Argentina.

The first interesting thing about Auger is the name itself. In a community where everything is named using (sometimes outrageous) acronyms, an experiment named after a person is like a breath of fresh air.

The observatory, devoted to the detection of ultra-high-energy cosmic rays, was named after the French physicist Pierre Victor Auger, who was among the first to detect what we now call “extensive air showers” during the first decades of the 20th century.

Pierre Auger (c. 1960)

Auger, while working with data from cosmic-ray detectors he had installed in the Alps, noticed that detectors placed far apart would sometimes record a simultaneous burst of particles crossing them. After some serious thinking, he came to the conclusion that these bursts were caused not by several individual cosmic rays entering the Earth’s atmosphere, but by a single high-energy one interacting with the atmosphere at high altitude and producing a “shower” of secondary particles that can be detected at ground level across an extended area.

His daughter, who visited the observatory during the groundbreaking ceremony, still remembered the day her father figured this out: Auger was apparently so excited about the idea that he walked nervously around the house.

The energy range of the particles that the observatory detects is well above anything that can be achieved in man-made accelerators. Even the LHC running at full blast will produce collisions with energies hundreds of times lower than those the observatory detects on a daily basis. Determining the nature and location of these natural astrophysical accelerators is one of the major goals of the collaboration, and I’ll tell you more about that in a future post.

To detect these air showers the observatory uses two complementary techniques: an array of 1600 tanks spread across an area of 3000 km^2 (yes, an area larger than that of some countries) that detects the secondary particles at ground level, and 4 sets of special telescopes that surround the tank array and, on clear moonless nights, detect the faint UV light produced by these secondary particles as they speed through the atmosphere.

For the second technique, knowledge of the air transparency is extremely important, since it has a direct impact on the determination of the total energy of the primary particle (the cosmic ray entering at the top of the atmosphere). To account for this effect, the observatory runs an extensive atmospheric monitoring network, and this is where my humble contribution comes in.


The LIDAR telescope during the alignment procedure. The telescope sits on top of a container whose cover is fully open at the time of the picture. The bluish light at the top of the telescope structure is the laser hitting a diffuser used during the early stages of the procedure. Right above the telescope is the constellation of the Southern Cross; in the background a storm approaches (with some lightning in the clouds).

During my early days in Auger, doing research as an undergrad engineering student, I got to install, operate, and play around with the LIDARs of the observatory. A LIDAR is basically a light-based radar: it uses a short, high-intensity laser pulse to probe the atmosphere for particles in suspension (like dust, smoke, or even the molecules that air is made of). The light emitted by the laser is back-scattered by dust and molecules in the atmosphere and collected by an array of telescopes aligned with the laser beam. The more light you receive back at the telescope, the more dust and molecules there are in the light path, which implies a dirtier, darker atmosphere.
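The basic falloff that the analysis has to model can be sketched with the single-scattering lidar equation. The backscatter and extinction coefficients below are made-up illustrative values, not Auger’s:

```python
import math

def lidar_return(r_km, beta=1.0, alpha=0.05):
    """Toy single-scattering lidar equation: received power falls as the
    backscatter beta over r^2, attenuated twice (out and back) by the
    extinction coefficient alpha (per km)."""
    return beta / r_km ** 2 * math.exp(-2 * alpha * r_km)

ranges = [1, 2, 5, 10]  # distance along the beam, in km
clean = [lidar_return(r, alpha=0.05) for r in ranges]
hazy = [lidar_return(r, alpha=0.30) for r in ranges]

# A dirtier atmosphere (larger alpha) attenuates the far signal much more
# strongly, which is what lets the scan distinguish clean from hazy nights.
for r, c, h in zip(ranges, clean, hazy):
    print(r, c, h)
```

In the real analysis the extinction varies with height and direction, which is exactly why the scanning platform described below is needed.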

This may seem like a straightforward analysis, but several aspects of the scattering and absorption of light that you don’t control complicate things.

In order to measure these atmospheric parameters in different directions, the LIDAR telescopes and the laser are mounted on a fully steerable platform that performs a routine scan of the sky every time the LIDARs operate.

The 4 LIDAR telescopes that the observatory runs operate 16 nights per month, whenever the moon-induced brightness of the night sky is low enough for the air-fluorescence telescopes to detect the faint UV light, and they have been running for more than 5 years. As a result, the atmospheric monitoring dataset of the observatory is by far the biggest in this part of the world, and it could be of great interest to scientists studying the atmosphere.

This is usually the case with many of these big experiments: whenever you build something as big as this observatory, you have to solve problems or measure things nobody has measured before, so you may end up expanding the knowledge not only of your own field, but also of other, at first glance completely unrelated, fields.


A taste of the data taken with the LIDAR telescopes. On the upper right is a single atmospheric profile showing the light intensity recorded at the telescope as a function of time (and hence of distance from the telescope itself). The black line indicates the expected absorption of light in a clean atmosphere; the bump in reflectivity comes from the laser beam hitting a cloud at a height of around 3.5 km. When several of these profiles obtained in different directions are put together you get the color plot, a cross section of the atmosphere above the LIDAR telescope showing clouds with interesting structures. Watching these structures develop in real time during a night-long data-taking shift was very entertaining.


Still alive…

Friday, September 11th, 2009

I haven’t shown up in a good while, and there’s a reason for this: tomorrow I’m taking my first shot at the qualifying exams here at Madison. Most American schools require their PhD students to pass this exam to continue towards their degree; here you get 4 chances during the first 2 years, although this varies from school to school.

I’m quite sure I’m not going to pass on this first try, so I don’t have high hopes for tomorrow. Anyway, you never know; maybe the physics fairy will be in a good mood and willing to illuminate me. We’ll see.

I hope that after tomorrow I’ll be able to get back to writing a bit more about my work here.

PS: There’s a PhD Comics strip for every occasion 🙂

(c) PhD Comics (http://phdcomics.com)


Now I get it!

Tuesday, August 25th, 2009

For a considerable fraction of the students starting grad school in the US, September is synonymous with the dreaded qualifying exams. These tests are meant to check our mastery of undergraduate-level physics, so we can move on and take graduate-level courses with confidence in what we’re doing. Of course, for many students (and for me as well, depending on the day and my level of exhaustion) they mean plain torture.

Getting ready for an exam like this involves solving literally hundreds of problems covering a wide spectrum of topics in physics: classical and quantum mechanics, relativity, thermodynamics and statistical mechanics, electricity and magnetism, etc.

While doing all these practice tests I wondered about how, sometimes, you have no idea how to tackle a problem and then, suddenly, something magical happens: you understand.

Montgomery Burns’ 1000 monkeys writing on 1000 typewriters.

This rather trivial, everyday event has always intrigued me. Nothing has changed; you’re staring at the paper just as you were two seconds ago, but in your head there is a big neon sign saying: “Hey, I know what this is about!”

I wonder what the brain is actually trying to do in those cases; maybe it’s just dumping all the memories related to problems like the one you’re reading, trying to make something fit. This is probably not very elegant, it would probably make several neurobiologists turn red, and it reminds me a bit of the idea that an infinite number of monkeys hitting keys on typewriters should eventually produce the complete works of Shakespeare, but it is indeed a mysterious phenomenon.

On a side note, this also reminds me of a great satire on the work of people in academia: “Gulliver’s Travels” by Jonathan Swift. On one of his travels, Captain Gulliver visits an Academy where he is shown a machine that can produce all the knowledge of the world. It’s basically a grid of rotating cubes with words on them; some assistants rotate the cubes and then take note of the resulting combination of words, thereby creating “new knowledge.” Here’s a sketch of the machine, from the first edition of Gulliver’s Travels.

Ironically, this great book, loaded with tons of dark humor, ended up being considered a book for kids.

No matter what the process involved is, understanding something is one of the great pleasures of physics and, I think, of science in general. Training ourselves to solve these problems should help us develop our instinct for how to solve other, not-so-trivial questions that we may encounter during our careers. If we are very lucky, we’ll be able to provide answers or hints to those as well. In a way, it’s very similar to a detective’s job, as Zoe mentioned some time ago.

This pleasure, the pleasure of finding things out, as Feynman put it, is the driving force for many of us in this field. If the thing you have figured out is very important, you can certainly understand Archimedes’ reaction, running naked down the streets of Syracuse screaming “Eureka!” (“I found it!”) when he understood what we now call Archimedes’ principle. And probably also, more recently, you could understand Kary Mullis pulling his Honda Civic to the side of a Californian road to think through the implications of the idea that earned him the 1993 Nobel Prize in Chemistry, and that made him exclaim something that is, in a way, close to Archimedes’ Eureka, but I’ll leave it to you to find out.


cout << “I hate programming!” << endl;

Tuesday, August 18th, 2009

Computers are nice, until you get to know them. Working in physics involves (as far as I can tell) spending a reasonable amount of time bathing our eyes in the cold light of a computer screen.

The days when all the data was recorded by hand in a small logbook next to the experiment itself are long gone. Today’s huge experiments generate huge amounts of data, and require an infrastructure able to record that information and store it for subsequent analysis. As you may have guessed, this is where computers enter the game.

Even though we have tools that make our work easier, there is no commercial package that will single-handedly crunch all the data and spit out a ready-to-be-published paper with all the results; that’s actually our job. Due to the uniqueness of the devices that we use (most detectors are “one of a kind”), it is very difficult to reuse software meant for a previous experiment to analyse our new data. That’s why, in most cases, physicists have to design, write, and run their own software. And that’s why we spend hours and hours in front of the screen trying to make the computer do what we want it to do, as opposed to letting it do what it wants to do.

How do we make a computer work for us? Easy: we use a programming language. We learn how to write plain-text instructions in a specific language that are then transformed into something the computer can understand and, hopefully, run.

An analogy is the HTML code in which this very page is written. If you look through the menus of your internet browser, you’ll see an option that lets you view the source code of this page. This seemingly cryptic text tells the browser how to nicely format the webpage you’re looking at. But HTML is only used to format web pages; if we want to do something else, we need some other language. Most of the time I use C++, which I think is the lingua franca of software development in high energy physics.

If you’re working in a big collaboration, it’s very hard for a single individual to put together a complete analysis suite for the data. Collaborations usually have a group of people (most of the time a mix of software engineers and physicists) who dedicate their time to writing the software that will be used by their colleagues. So you end up with a huge software package that does thousands of marvelous things and, with a little bit of work on your part, lets you extract the values you’re looking for from the raw data.

The IceCube software bootcamp

Luckily for me, IceCube organizes a “software bootcamp” every year, where all the gadgets available in the official analysis and simulation packages are presented to newcomers. The software development team comes to Madison and, over this week-long workshop, shows us how to use all the tools they have made available to the collaboration; we end up knowing not only a little more about programming, but also about what’s in (and what’s not in) the data.

I had been working with the IceCube software even before the bootcamp, and I had a reasonable amount of experience from my work in Auger, so the transition was relatively smooth for me. I still have a lot to learn about programming, but I consider myself reasonably experienced in the subject, which doesn’t mean that I enjoy it. Let me give you an example of why that’s the case.

When you write code, and your code is supposed to do a lot of complicated things, there’s a good chance that the first time you try to compile it, the compiler (a program that converts your plain-text code into machine-readable instructions) won’t like what you wrote. Most of the time it’s just a typo, or the wrong syntax for a function, or something like that. But even when the compiler finishes without errors, that doesn’t mean you’re getting any closer to your results. Many times a program will just crash for a variety of reasons, and you’ll have to go through the code looking for the error in a slow and tedious process known as “debugging”. Sometimes the program will run perfectly, but the results will be total nonsense, so it’s debugging time again!

This debugging process takes a good amount of time, anything from a minute to a couple of days, and I’m going through it right now. So now you understand why physicists have such a problematic relationship with computers, and why today I hate programming, although that could change to love tomorrow if I happen to find the bug in my code… 🙂


The coolest telescope ever

Friday, August 7th, 2009

UNESCO and the International Astronomical Union (IAU) have declared 2009 the International Year of Astronomy, as a way to celebrate the 400th anniversary of Galileo Galilei’s first astronomical observations through a telescope.

Since Galileo’s days, telescopes have kept evolving, and now we are able to detect even the smallest amounts of energy emitted by distant astronomical sources, in a spectrum band that spans over 20 orders of magnitude in wavelength, from the most energetic gamma rays to the longest radio waves.

Astroparticle physics

A new window to the Universe has recently been opened. We are now starting to look at the skies using not just electromagnetic radiation, but subatomic particles as the carriers of valuable astrophysical information. The name of this new field is astroparticle physics, and it’s the result of applying our current knowledge of the basic properties of elementary particles to the study of violent phenomena taking place in distant astrophysical objects.



The desperate remedy

Thursday, July 30th, 2009

Neutrinos, they are very small.
They have no charge and have no mass
And do not interact at all.
The earth is just a silly ball
To them, through which they simply pass,
Like dustmaids through a drafty hall
Or photons through a sheet of glass.
They snub the most exquisite gas,
Ignore the most substantial wall,
Cold-shoulder steel and sounding brass,
Insult the stallion in his stall,
And scorning barriers of class,
Infiltrate you and me! Like tall
And painless guillotines, they fall
Down through our heads into the grass.
At night, they enter at Nepal
And pierce the lover and his lass
From underneath the bed—you call
It wonderful; I call it crass.

“Cosmic Gall”, from Telephone Poles and Other Poems, John Updike, 1960.

Neutrinos, as Updike elegantly put it, are among the most elusive particles in nature. They not only go through walls, planets, and humans, but also through the very instruments built to detect them, only occasionally leaving a trace behind to evidence their passage.

That’s a lesson that everybody working with this particle (even some of my fellow co-bloggers) has learned the hard way. Detecting neutrinos always involves putting a huge amount of matter in their way to force a couple of them to “show up,” to reveal their existence. But why should we care about building such huge detectors if the neutrino is so indifferent to our efforts?

Well, the fact that its interaction with matter is so weak makes it a perfect probe of the places where some of the most violent astrophysical phenomena in the universe take place. After its production, the neutrino leaves the scene quietly, passing through huge amounts of matter almost without being attenuated, bringing us the news, but only if we have cared enough to put a detector in its way. Since they have no electrical charge, neutrinos travel cosmological distances without being deflected by magnetic fields, so at the time of their detection they still point back to their source. They also have huge decay times (if they decay at all), so they can travel for a long while without “breaking apart.” A neutrino is, in a sense, “our man in Havana.”

I’m tempted to say that neutrinos have such a weird personality because their father didn’t love them from the moment they were born, and called them just “a desperate remedy.” 🙂

Coming soon: the description of a full-fledged neutrino telescope. By the way, since Updike wrote his poem it has been shown that neutrinos have a tiny but non-zero mass.
