Let there be beam!

Wednesday, October 15th, 2014

It’s been a little while since I’ve posted anything, but I wanted to write a bit about some of the testbeam efforts at CERN right now. In the middle of July this year, the Proton Synchrotron, or PS, the second ring of boosters/colliders used to get protons up to speed to collide in the LHC, saw its first beam since the shutdown at the end of Run I of the LHC. In addition to providing beam to experiments like CLOUD, the beam can also be used to create secondary particles of up to 15 GeV/c momentum, which are then used for studies of future detector technology. Such a beam is called a testbeam, and all I can say is WOOT, BEAM! I must say that being able to take accelerator data is amazing!

The next big milestone is the testbeams from the SPS, which started on the 6th of October. This is the last ring before the LHC. If you’re unfamiliar with the process used to get protons up to the energies of the LHC, a great video can be found at the bottom of the page.

Just to be clear, test beams aren’t limited to CERN. Keep your eyes out for a post by my friend Rebecca Carney in the near future.

I was lucky enough to be part of the test beam effort of LHCb, which was testing new technology both for the VELO and for the upgrade of the TT station, called the Upstream Tracker, or UT. I worked mainly with the UT group, testing a sensor technology which will be used in the 2019 upgraded detector. I won’t go too much into the technology of the upgrade right now, but if you are interested in the nitty-gritty of it all, I will instead point you to the Technical Design Report itself.

I just wanted to take a bit to talk about my experience with the test beam in July, starting with walking into the experimental area itself. The first sight you see upon entering the building is a picture reminding you that you are entering a radiation zone.


The Entrance!!

Then, as you enter, you see a large wall of radioactive concrete.


Don’t lick those!

This is where the beam is dumped. Following along here, you get to the control room, which is where all the data taking stuff is set up outside the experimental area itself. Lots of people are always working in the control room, focused and making sure to take as much data as possible. I didn’t take their picture since they were working so hard.

Then there’s the experimental area itself.


The Setup! To find the hardhat, look for the orange and green racks, then follow them towards the top right of the picture.

Ah, beautiful. :)

There are actually four setups here, but I think only three were being used at this time (click on the picture for a larger view). We occupied the area where the guy with the hardhat is.

Now the idea behind a tracker testbeam is pretty straightforward. A charged particle flies by, and several very sensitive detector planes record where it passed. These planes together form what’s called a “telescope.” The setup is completed when you add a detector to be tested, either in the middle of the telescope or at one end.

Cartoon of a test beam setup. The blue indicates the “telescope”, the orange is the detector under test, and the red is the trajectory of a charged particle.

From timing information and from signals from these detectors, a trajectory of the particle can be determined. Now, you compare the position which your telescope gives you to the position you record in the detector you want to test, and voila, you have a way to understand the resolution and abilities of your tested detector. After that, the game is statistics. Ideally, you want to be in the middle of the telescope, so you have the information on where the charged particle passed on either side of your detector as this information gives the best resolution, but it can work if you’re on one side or the other, too.
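
If you like to see things in code, here’s a toy version of that procedure in Python (all numbers invented, and real reconstruction also deals with alignment, multiple scattering and timing, which this sketch ignores): fit a straight line through the telescope hits, extrapolate it to the device under test, and look at the residual.

```python
import numpy as np

# Toy telescope: z positions (mm) of six sensing planes and the x
# coordinate (mm) each plane measured for one particle (invented data).
telescope_z = np.array([0.0, 50.0, 100.0, 300.0, 350.0, 400.0])
telescope_x = np.array([1.02, 1.11, 1.19, 1.58, 1.70, 1.78])

# The device under test (DUT) sits in the middle of the telescope.
dut_z = 200.0
dut_x_measured = 1.41  # invented DUT measurement

# Least-squares fit of a straight line x(z) = slope * z + intercept
# through the telescope hits.
slope, intercept = np.polyfit(telescope_z, telescope_x, deg=1)

# Extrapolate the fitted track to the DUT plane and take the residual.
x_track_at_dut = slope * dut_z + intercept
residual_um = (dut_x_measured - x_track_at_dut) * 1000.0
print(f"track at DUT: {x_track_at_dut:.3f} mm, residual: {residual_um:+.0f} um")

# Over many tracks, the width of the residual distribution (after
# subtracting the telescope's own pointing error) estimates the
# position resolution of the detector under test.
```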

This is the setup which we have been using for the testbeam at the PS.  We’ll be using a similar setup for the testbeam at the SPS next week! I’ll try to write a follow up post on that when we finish!

And finally, here is the promised video.

Top quark still raising questions

Wednesday, October 15th, 2014

This article appeared in symmetry on Oct. 15, 2014.

Why are scientists still interested in the heaviest fundamental particle nearly 20 years after its discovery? Photo: Reidar Hahn, Fermilab

“What happens to a quark deferred?” the poet Langston Hughes may have asked, had he been a physicist. If scientists lost interest in a particle after its discovery, much of what it could show us about the universe would remain hidden. A niche of scientists, therefore, stay dedicated to intimately understanding its properties.

Case in point: Top 2014, an annual workshop on top quark physics, recently convened in Cannes, France, to address the latest questions and scientific results surrounding the heavyweight particle discovered in 1995 (early top quark event pictured above).

Top and Higgs: a dynamic duo?
A major question addressed at the workshop, held from September 29 to October 3, was whether top quarks have a special connection with Higgs bosons. The two particles, weighing in at about 173 and 125 billion electronvolts, respectively, dwarf other fundamental particles (the bottom quark, for example, has a mass of about 4 billion electronvolts and a whole proton sits at just below 1 billion electronvolts).

Prevailing theory dictates that particles gain mass through interactions with the Higgs field, so why do top quarks interact so much more with the Higgs than do any other known particles?

Direct measurements of top-Higgs interactions depend on recording collisions that produce the two side-by-side. This hasn’t happened yet at high enough rates to be seen; these events theoretically require higher energies than the Tevatron or even the LHC’s initial run could supply. But scientists are hopeful for results from the next run at the LHC.

“We are already seeing a few tantalizing hints,” says Martijn Mulders, staff scientist at CERN. “After a year of data-taking at the higher energy, we expect to see a clear signal.” No one knows for sure until it happens, though, so Mulders and the rest of the top quark community are waiting anxiously.

A sensitive probe to new physics

Top and antitop quark production at colliders, measured very precisely, has started to reveal some deviations from expected values. But in the last year, theorists have responded by calculating an unprecedented layer of mathematical corrections, which refine the expectation and promise to realign the slightly rogue numbers.

Precision is an important, ongoing effort. If researchers aren’t able to reconcile such deviations, the logical conclusion is that the difference represents something they don’t know about — new particles, new interactions, new physics beyond the Standard Model.

The challenge of extremely precise measurements can also drive the formation of new research alliances. Earlier this year, the first Fermilab-CERN joint announcement of collaborative results set a world standard for the mass of the top quark.

Such accuracy hones methods applied to other questions in physics, too, the same way that research on W bosons, discovered in 1983, led to the methods Mulders began using to measure the top quark mass in 2005. In fact, top quark production is now so well controlled that it has become a tool itself to study detectors.

Forward-backward synergy

With the upcoming restart in 2015, the LHC will produce millions of top quarks, giving researchers troves of data to further physics. But scientists will still need to factor in the background noise and data-skewing inherent in the instruments themselves, called systematic uncertainty.

“The CDF and DZero experiments at the Tevatron are mature,” says Andreas Jung, senior postdoc at Fermilab. “It’s shut down, so the understanding of the detectors is very good, and thus the control of systematic uncertainties is also very good.”

Jung has been combing through the old data with his colleagues and publishing new results, even though the Tevatron hasn’t collided particles since 2011. The two labs combined their respective strengths to produce their joint results, but scientists still have much to learn about the top quark, and a new arsenal of tools to accomplish it.

“DZero published a paper in Nature in 2004 about the measurement of the top quark mass that was based on 22 events,” Mulders says. “And now we are working with millions of events. It’s incredible to see how things have evolved over the years.”

Troy Rummler


Good Management is Science

Friday, October 10th, 2014

Management done properly satisfies Sir Karl Popper’s (1902 – 1994) demarcation criterion for science, i.e. using models that make falsifiable or at least testable predictions. That was brought home to me by a book[1] by Douglas Hubbard on risk management, in which he advocated observationally constrained (falsifiable or testable) models for risk analysis evaluated through Monte Carlo calculations. Hmm, observationally constrained models and Monte Carlo calculations: sounds like a recipe for science.
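
To make that concrete, here is a minimal sketch of a Hubbard-style Monte Carlo risk model in Python, with every distribution and number invented for illustration. The point is that the model’s output is a testable prediction:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of Monte Carlo trials

# Invented toy model: a project risks a schedule slip (20% chance)
# whose cost, if it occurs, is lognormally distributed, on top of a
# normally distributed baseline cost overrun.
slip_occurs = rng.random(n) < 0.20
slip_cost = rng.lognormal(mean=np.log(50_000), sigma=0.5, size=n)
baseline_overrun = rng.normal(loc=10_000, scale=5_000, size=n)

total_loss = baseline_overrun + slip_occurs * slip_cost

# The model's falsifiable output: a 90% prediction interval for losses.
lo, hi = np.percentile(total_loss, [5, 95])
print(f"90% of simulated outcomes lie between {lo:,.0f} and {hi:,.0f}")

# If actual losses routinely land outside this interval, the model is
# wrong and must be revised -- Popper's criterion, applied to management.
```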

Let us take a step back. The essence of science is modeling how the universe works and checking the assumptions of the model and its predictions against observations. The predictions must be testable. According to Hubbard, the essence of risk management is modeling processes and checking the assumptions of the model and its predictions against observations. The predictions must be testable. What we are seeing here is a common paradigm for knowledge in which modeling and testing against observation play a key role.

The knowledge paradigm is the same in project management. A project plan, with its resource loaded schedules and other paraphernalia, is a model for how the project is expected to proceed. To monitor a project you check the plan (model) against actuals (a fancy euphemism for observations, where observations may or may not correspond to reality). Again, it reduces back to observationally constrained models and testable predictions.

The foundations of science and good management practices are tied even closer together. Consider the PDCA cycle for process management that is present, either implicitly or explicitly, in essentially all the ISO standards related to management. It was originated by Walter Shewhart (1891 – 1967), an American physicist, engineer and statistician, and popularized by W. Edwards Deming (1900 – 1993), an American engineer, statistician, professor, author, lecturer and management consultant. Engineers are into everything. The actual idea of the cycle is based on the ideas of Francis Bacon (1561 – 1626) but could equally well be based on the work of Roger Bacon[2] (1214 – 1294). Hence, it should probably be called the Double Bacon Cycle (no, that sounds too much like a breakfast food).

But what is this cycle? For science, it is: plan an experiment to test a model, do the experiment, check the model results against the observed results, and act to change the model in response to the new information from the check stage or devise more precise tests if the predictions and observations agree. For process management, replace experiment with production process. As a result, you have a model for how the production process should work, and doing the process allows you to test the model. The check stage is where you see if the process performed as expected, and the act stage allows you to improve the process if the model and actuals do not agree. The key point is the check step. It is necessary if you are to improve the process; otherwise you do not know what is going wrong or, indeed, even if something is going wrong. It is only possible if the plan makes predictions that are falsifiable or at least testable. Popper would be pleased.

There is another interesting aspect of the ISO 9001 standard. It is based on the idea of processes. A process is defined as an activity that converts inputs into outputs. Well, that sounds rather vague, but the vagueness is an asset, kind of like degrees of freedom in an effective field theory. Define them as you like, but if you choose them incorrectly you will be sorry. The real advantage of effective field theory and the flexible definition of process is that you can study a system at any scale you like. In effective field theory, you study processes that operate at the scale of the atom, the scale of the nucleus or the scale of the nucleon and tie them together with a few parameters. Similarly with processes: you can study the whole organization as a process or drill down and look at sub-processes at any scale you like; for CERN or TRIUMF that would be down to the last magnet. It would not be useful to go further and study accelerator operations at the nucleon scale. At a given scale, different processes are tied together by their inputs and outputs, and these are also used to tie processes at different scales.

As a theoretical physicist who has gone over to the dark side and into administration, I find it amusing to see the techniques and approaches from science being borrowed for use in administration, even Monte Carlo calculations. The use of similar techniques in science and administration goes back to the same underlying idea: all true knowledge is obtained through observation and its use to build better testable models, whether in science or other walks of life.

[1] The Failure of Risk Management: Why It’s Broken and How to Fix It by Douglas W. Hubbard (Apr 27, 2009)

[2] Roger Bacon described a repeating cycle of observation, hypothesis, and experimentation.


Physics Laboratory: Back to Basics

Friday, October 10th, 2014

Dark matter – it’s essential to our universe, it’s mysterious, and it brings to mind cool things like space, stars and galaxies. I have been fascinated by it since I was a child, and I feel very lucky to be a part of the search for it. But that’s not actually what I’m going to be talking about today.

I am a graduate student just starting my second year in the High Energy Physics group at UCL, London. Ironically, as a dark matter physicist working in the LUX (Large Underground Xenon detector) and LZ (LUX-ZEPLIN) collaborations, I’m actually dealing with very low energy physics.
When people ask what I do, I find myself saying different things, to differing responses:

  1. “I’m doing a PhD in physics” – reaction: person slowly backs away
  2. “I’m doing a PhD in particle physics” – reaction: some interest, mention of the LHC, person mildly impressed
  3. “I’m doing a PhD in astro-particle physics” – reaction: mild confusion but still interested, probably still mention the Large Hadron Collider
  4. “I’m looking for dark matter!” – reaction: awe, excitement, lots of questions

This obviously isn’t true in all cases, but it has been the general pattern. Admittedly, I enjoy that people are impressed, but sometimes I struggle to find a way to explain to people not in physics what I actually do day to day. Often I just say, “it’s a lot of computer programming; I analyse data from a detector to help towards finding a dark matter signal”, but that still induces a panicked look in a lot of people.

Last week, however, I came across a group of people who didn’t ask anything about what I do, and I found myself going right back to basics in terms of the physics I think about daily. Term has just started, and that means one thing: undergraduates. The frequent noise they make as they stampede past my office, going the wrong way to labs, makes me wonder if the main reason for sending them away for so long is to give the researchers the chance to do their work in peace.

Nonetheless, somehow I found myself in the undergraduate lab on Friday. I had to ask myself why on earth I had chosen to demonstrate – I am, almost by definition, terrible in a lab. I am clumsy and awkward, and even the simplest equipment feels unwieldy in my hands. During my own undergrad, my overall practical mark always brought my average mark down for the year. My masters project was, thank god, entirely computational. But thanks to a moment of madness (and the prospect of earning a little cash, as London living on a PhD stipend is hard), I have signed up to be a lab demonstrator for the new first year physicists.

Things started off awkwardly as I was told to brief them on the experiment and realised I had not a great deal to say.  I got more into the swing of things as time went by, but I still felt like I’d been thrown in the deep end. I told the students I was a second year PhD student; one of them got the wrong end of the stick and asked if I knew a student who was a second year undergrad here. I told him I was postgraduate and he looked quite embarrassed, whilst I couldn’t help but laugh at the thought of the chaos that would ensue if a second year demonstrated the first year labs.


The oscilloscope: the nemesis of physics undergrads in labs everywhere

None of them asked what my PhD was in. They weren’t interested – somehow I had become a faceless authority who told them what to do and had no other purpose. I am not surprised – they are brand new to university, and more importantly, they were pretty distracted by the new experience of the laboratory. That’s not to say they particularly enjoyed it; they seemed to have very little enthusiasm for the experiment. It was a very simple task: measuring the speed of sound in air using a frequency generator, an oscilloscope and a ruler. For someone now accustomed to dealing with data from a high-tech dark matter detector, it was bizarre! I do find the more advanced physics I learn, the worse I become at the basics, and I had to go aside for a moment with a pen and paper to reconcile the theory in my head – it was embarrassing, to say the least!

Their frustration at the task was evident – there were frequent complaints over the length of time they were writing for, over the experimental ‘aims’ and ‘objectives’, over the fact they needed to introduce their diagrams before drawing them, etc. Eyes were rolling at me. I was going to have to really try to drill it in that this was indeed an important exercise. The panic I could sense from them was a horrible reminder of how I used to feel in my own labs. It’s hard to understand at that point that this isn’t just some form of torture; you are actually learning some very valuable and transferable skills about how to conduct a real experiment. Some examples:

  1. Learn to write EVERYTHING down; you might end up in court over something, and some tiny detail might save you.
  2. Get your errors right. You cannot claim a discovery without an uncertainty; that’s just physics. It’s difficult to grasp, but you can never fully prove a hypothesis, only provide solid evidence towards it. (A quick worked example follows this list.)
  3. Understand the health and safety risks – they seem pointless and stupid when the only real risk seems to be tripping over your bags, but speaking as someone who has worked down a mine with pressurised gases, high voltages and radioactive sources, they are extremely important and may be the difference between life and death.
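
To illustrate point 2, here is a minimal sketch, with invented readings, of the kind of error propagation the students meet in the speed-of-sound experiment, using v = fλ:

```python
import math

# Invented readings from a speed-of-sound measurement, v = f * lambda.
f, df = 1000.0, 5.0        # frequency and its uncertainty (Hz)
lam, dlam = 0.343, 0.005   # wavelength and its uncertainty (m)

v = f * lam

# For a product, relative uncertainties add in quadrature.
dv = v * math.sqrt((df / f) ** 2 + (dlam / lam) ** 2)

print(f"v = {v:.0f} +/- {dv:.0f} m/s")  # v = 343 +/- 5 m/s
# Without the "+/- 5" the number is useless: you cannot say whether
# your result agrees with the accepted ~343 m/s at room temperature.
```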

In the end, I think my group did well. They got the right number for the speed of sound and their lab books weren’t a complete disaster. A few actually thanked me on their way out. 

It was a bit of a relief to get back to my laptop where I actually feel like I know what I am doing, but the experience was a stark reminder of where I was 5 years ago and how much I have learned. Choosing physics for university means you will have to struggle to understand things, work hard and exhaust yourself, but in all honesty it was completely worth it, at least for me. Measuring the speed of sound in air is just the beginning. One day, some of those students might be measuring the quarks inside a proton, or a distant black hole, or the quantum mechanical properties of a semiconductor.

I’m back in the labs this afternoon, and I am actually quite looking forward to seeing how they cope this week, when we study that essential pillar of physics, conservation of momentum. I just hope they don’t start throwing steel ball-bearings at each other. Wish me luck.


Liveblog: New ATLAS Higgs Results

Tuesday, October 7th, 2014

In a short while, starting at 11:00 CEST / 10:00 BST, ATLAS will announce some new Higgs results:

“New Higgs physics results from the ATLAS experiment using the full Run-1 LHC dataset, corresponding to an integrated luminosity of approximately 25 fb-1, of proton-proton collisions at 7 TeV and 8 TeV, will be presented.” [seminar link]

I don’t expect anything earth-shattering, because ATLAS already has preliminary analyses for all the major Higgs channels. They have also submitted final publications for LHC Run I on Higgs decaying to two photons, two b quarks, two Z bosons – so it’s reasonable to guess that Higgs decaying to taus or W’s is going to be covered today.

(Parenthetically, CMS has already published final results for all of the major Higgs decays, because we are faster, stronger, smarter, better looking, and more fun at parties.)

I know folks on ATLAS who are working on things that might be shown today, and they promise they have some new tricks, so I’m hoping things will be fairly interesting. But again, nothing earth-shattering.

I’ll update this very page during the seminar. You should also be able to watch it on the Webcast Service.

10:55 I have a front row seat in the CERN Council Chamber, which is smaller than the main auditorium that you might be more familiar with. Looks like it will be very, very full.

11:00 Here we go! (Now’s a good time to click the webcast, if you plan to.)

11:03 Yes, it turns out it will be taus and W’s.

11:06 As an entree, look how fabulously successful the Standard Model, including the Higgs, has been:

11:10 Good overview right now of overall Higgs production and decay and the framework we use to understand it. Have any questions I can answer during the seminar? Put them in the comments or write something at me on Twitter.

11:18 We’re learning about the already-released results for Higgs to photons and ZZ first.

11:24 Higgs to bb, the channel I worked on for CMS during Run I. These ATLAS results are quite new and have a lot of nice improvements from their preliminary analysis. Very pretty plot of improved Higgs mass resolution when corrections are made for muons produced inside b-jets.

11:30 Now to Higgs to tau tau, a new result!

11:35 Developments since preliminary analysis include detailed validation of techniques for estimating from data how isolated the taus should be from other things in the detector.

11:36 I hope that doesn’t sound too boring, but this stuff’s important. It’s what we do all day, not just counting sigmas.

11:37 4.5 sigma evidence (only 3.5 expected) for the Higgs coupling to the tau lepton!

11:39 Their signal is a bit bigger than the SM predicts, but still very consistent with it. And now on to WW, also new.

11:41 In other news, the Nobel Prize in Physics will be announced in 4 minutes: It’s very unlikely to be for anything in this talk.

11:44 Fixed last comment: “likely” –> “unlikely”. Heh.

11:48 When the W’s decay to a lepton and an invisible neutrino, you can’t measure a “Higgs peak” like we do when it decays to photons or Z’s. So you have to do very careful work to make sure that a misunderstanding of your background (i.e. non-Higgs processes) doesn’t produce what looks like a Higgs signal.

11:50 Background-subtracted result does show a clear Higgs excess over the SM backgrounds. This will be a pretty strong result.

11:51 6.1 sigma for H –> WW –> lvlv. 3.2 sigma for VBF production mechanism. Very consistent with the SM again.

11:52 Lots of very nice, detailed work here. But the universe has no surprises for us today.

11:54 We can still look forward to the final ATLAS combination of all Higgs channels, but we know it’s going to look an awful lot like the Standard Model. Congratulations to my ATLAS colleagues on their hard work.

11:56 By the way, you can read the slides on the seminar link.

12:02 The most significant result here might actually be the single-channel observation of the Vector Boson Fusion production mechanism. The Higgs boson really is behaving the way the Standard Model says it should! Signing off here; time for lunch.


This Fermilab press release came out on Oct. 6, 2014.

With construction completed, the NOvA experiment has begun its probe into the mysteries of ghostly particles that may hold the key to understanding the universe. Image: Fermilab/Sandbox Studio

It’s the most powerful accelerator-based neutrino experiment ever built in the United States, and the longest-distance one in the world. It’s called NOvA, and after nearly five years of construction, scientists are now using the two massive detectors – placed 500 miles apart – to study one of nature’s most elusive subatomic particles.

Scientists believe that a better understanding of neutrinos, one of the most abundant and difficult-to-study particles, may lead to a clearer picture of the origins of matter and the inner workings of the universe. Using the world’s most powerful beam of neutrinos, generated at the U.S. Department of Energy’s Fermi National Accelerator Laboratory near Chicago, the NOvA experiment can precisely record the telltale traces of those rare instances when one of these ghostly particles interacts with matter.

Construction on NOvA’s two massive neutrino detectors began in 2009. In September, the Department of Energy officially proclaimed construction of the experiment completed, on schedule and under budget.

“Congratulations to the NOvA collaboration for successfully completing the construction phase of this important and exciting experiment,” said James Siegrist, DOE associate director of science for high energy physics. “With every neutrino interaction recorded, we learn more about these particles and their role in shaping our universe.”

NOvA’s particle detectors were both constructed in the path of the neutrino beam sent from Fermilab in Batavia, Illinois, to northern Minnesota. The 300-ton near detector, installed underground at the laboratory, observes the neutrinos as they embark on their near-light-speed journey through the Earth, with no tunnel needed. The 14,000-ton far detector – constructed in Ash River, Minnesota, near the Canadian border – spots those neutrinos after their 500-mile trip and allows scientists to analyze how they change over that long distance.

For the next six years, Fermilab will send tens of thousands of billions of neutrinos every second in a beam aimed at both detectors, and scientists expect to catch only a few each day in the far detector, so rarely do neutrinos interact with matter.

From this data, scientists hope to learn more about how and why neutrinos change between one type and another. The three types, called flavors, are the muon, electron and tau neutrino. Over longer distances, neutrinos can flip between these flavors. NOvA is specifically designed to study muon neutrinos changing into electron neutrinos. Unraveling this mystery may help scientists understand why the universe is composed of matter and why that matter was not annihilated by antimatter after the big bang.

Scientists will also probe the still-unknown masses of the three types of neutrinos in an attempt to determine which is the heaviest.

“Neutrino research is one of the cornerstones of Fermilab’s future and an important part of the worldwide particle physics program,” said Fermilab Director Nigel Lockyer. “We’re proud of the NOvA team for completing the construction of this world-class experiment, and we’re looking forward to seeing the first results in 2015.”

The far detector in Minnesota is believed to be the largest free-standing plastic structure in the world, at 200 feet long, 50 feet high and 50 feet wide. Both detectors are constructed from PVC and filled with a scintillating liquid that gives off light when a neutrino interacts with it. Fiber optic cables transmit that light to a data acquisition system, which creates 3-D pictures of those interactions for scientists to analyze.

The NOvA far detector in Ash River saw its first long-distance neutrinos in November 2013. The far detector is operated by the University of Minnesota under an agreement with Fermilab, and students at the university were employed to manufacture the component parts of both detectors.

“Building the NOvA detectors was a wide-ranging effort that involved hundreds of people in several countries,” said Gary Feldman, co-spokesperson of the NOvA experiment. “To see the construction completed and the operations phase beginning is a victory for all of us and a testament to the hard work of the entire collaboration.”

The NOvA collaboration comprises 208 scientists from 38 institutions in the United States, Brazil, the Czech Republic, Greece, India, Russia and the United Kingdom. The experiment receives funding from the U.S. Department of Energy, the National Science Foundation and other funding agencies.

For more information, visit the experiment’s website: http://www-nova.fnal.gov.

Note: NOvA stands for NuMI Off-Axis Electron Neutrino Appearance. NuMI is itself an acronym, standing for Neutrinos from the Main Injector, Fermilab’s flagship accelerator.

Fermilab is America’s premier national laboratory for particle physics and accelerator research. A U.S. Department of Energy Office of Science laboratory, Fermilab is located near Chicago, Illinois, and operated under contract by the Fermi Research Alliance, LLC. Visit Fermilab’s website at www.fnal.gov and follow us on Twitter at @FermilabToday.

The DOE Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


Teaming up on top and Higgs

Monday, October 6th, 2014

While the LHC experiments are surely turning their attention towards the 2015 run of the collider, at an energy nearly double that of the previous run, we’re also busy trying to finalize and publish measurements using the data that we already have in the can.  Some measurements just take longer than others, and some it took us a while to get to.  And while I don’t like tooting my own horn too much here at the US LHC blog, I wanted to discuss a new result from CMS that I have been working on with a student, Dan Knowlton, here at the University of Nebraska-Lincoln, along with collaborators from a number of other institutions.  It’s been in the works for so long that I’m thrilled to get it out to the public!

(This is one of many CMS results that were shown for the first time last week at the TOP 2014 conference.  If you look through the conference presentations, you’ll find that the top quark, which has been around for about twenty years now, has continued to be a very interesting topic of study, with implications for searches for new physics and even for the fate of the universe.  One result that’s particularly interesting is a new average of CMS top-quark mass measurements, which is now the most accurate measurement of that quantity in the world.)

The LHC experiments have studied the Higgs boson through many different Higgs decay modes, and many different production mechanisms also.  Here is a plot of the expected cross sections for different Higgs production mechanisms as a function of Higgs mass; of course we know now that the Higgs has a mass of 125 GeV:

The most common production mechanism has a Higgs being produced with nothing else, but it can also be produced in association with other particles.  In our new result, we search for a Higgs production mechanism that is so much more rare that it doesn’t even appear on the above plot!  The mechanism is the production of a Higgs boson in association with a single top quark, and in the standard model, the cross section is expected to be 0.018 pb, about an order of magnitude below the cross section for Higgs production in association with a top-antitop pair.  Why even bother to look for such a thing, given how rare it is?
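
To put that rarity in numbers, here’s a quick back-of-the-envelope yield estimate (my own illustrative numbers, not taken from the analysis):

```python
# Back-of-the-envelope yield (illustrative numbers, not the analysis).
xsec_tH = 0.018        # SM tH cross section at 8 TeV, in pb
lumi = 20.0            # approximate size of the 8 TeV dataset, in fb^-1

n_produced = xsec_tH * 1000.0 * lumi   # 1 pb = 1000 fb; N = sigma * L
print(f"roughly {n_produced:.0f} tH events produced")   # ~360

# Branching fractions, acceptance, and b-tagging efficiency whittle
# this down to a handful of selected events, sitting under a ttbar
# background ~1000 times larger -- hence the difficulty.
```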

The answer lies in the reason for why this process is so rare.  There are actually two ways for this particular final state to be produced. Here are the Feynman diagrams for them:

[Feynman diagrams: tHq production with the Higgs radiated off the virtual W, and with the Higgs radiated off the final-state top quark]

In one case, the Higgs is radiated off the virtual W, while in the other it comes off the real final-state top quark.  Now, this is quantum mechanics: if you have two different ways to connect an initial and final state, you have to add the two amplitudes together before you square them to get a probability for the process.  It just so happens that these two amplitudes largely destructively interfere, and thus the production cross section is quite small.  There isn’t anything deep at work (e.g. no symmetries that suppress this process), it’s just how it comes out.

At least, that’s how it comes out in the standard model.  We assume certain values for the coupling factors of the Higgs to the top and W particles that appear in the diagrams above.  Other measurements of Higgs properties certainly suggest that the coupling factors do have the expected values, but there is room within the constraints for deviations.  It’s even possible that one of the two coupling values has the exact opposite sign from what we expect.  In that case, the destructive interference between the two amplitudes would become constructive, and the cross section would be almost a factor of 13 larger than expected!
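
You can see the effect with a toy calculation (amplitude sizes invented, chosen only so the enhancement comes out near the quoted factor of 13):

```python
# Toy interference arithmetic; amplitude sizes invented so the
# flipped-sign case comes out ~13x larger, as quoted above.
a_W   = 1.000   # amplitude: Higgs radiated off the virtual W
a_top = 0.566   # amplitude: Higgs radiated off the top quark

rate_sm      = (a_W - a_top) ** 2   # destructive interference (SM signs)
rate_flipped = (a_W + a_top) ** 2   # constructive (opposite-sign coupling)

print(f"enhancement: {rate_flipped / rate_sm:.1f}x")   # ~13x
# Quantum mechanics at work: amplitudes add before squaring, so a sign
# flip in one coupling changes the rate by far more than a factor of 2.
```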

The new result from CMS is a search for this anomalous production of the Higgs in association with a single top quark.  CMS already has a result for a search in which the Higgs decays to pair of photons; this new result describes a search in which the Higgs decays to bottom quarks.  That is a much more common Higgs decay mode, so there ought to be more events to see, but at the same time the backgrounds are much higher.  The production of a top-antitop pair along with an extra jet of hadrons that is mis-identified as arising from a bottom quark looks very much like the targeted Higgs production mechanism.  The top-antitop cross section is about 1000 times bigger than that of the anomalous production mechanism that we are looking for, and thus even a tiny bottom mis-identification rate leads to a huge number of background events.  A lot of the work in the data analysis goes into figuring out how to distinguish the (putative) signal events from the dominant background, and then verifying that the estimations of the background rates are correct.

The analysis is so challenging that we predicted that even by throwing everything we had at it, the best we could expect to do was to exclude the anomalous Higgs production process at a level of about five times the predicted rate for it.  When we looked at the data, we found that we could exclude it at about seven times the anomalous rate, roughly in line with what we expected.  In short, we do not see an anomalous rate for anomalous Higgs production!  But we are able to set a fairly tight limit, at around 1.8 pb.

What do I like about this measurement?  First, it’s a very different way to try to measure the properties of the Higgs boson.  The measurements we have are very impressive given the amount of data that we have so far, but they are not very constraining, and there is enough wiggle room for some strange stuff to be going on.  This is one of the few ways to probe the Higgs couplings through the interference of two processes, rather than just through the rate for one dominant process.  All of these Higgs properties measurements are going to be much more accurate in next year’s data run, when we expect to integrate more data and all of the production rates will be larger due to the increase in beam energy.  (For this anomalous production process, the cross section will increase by about a factor of four.)  In this particular case, we should be able to exclude anomalous Higgs couplings through this measurement…or, if nature surprises us, we will actually observe them!  There is a lot of fun ahead for Higgs physics (and top physics) at the LHC.

I’ve also really enjoyed working with my CMS colleagues on this project.  Any measurement coming out of the experiment is truly the work of thousands of people who have built and operated the detector, gotten the data recorded and processed, developed and refined the reconstruction algorithms, and defined the baselines for how we identify all kinds of particles that are produced in the proton collisions.  But the final stages of any measurement are carried out by smaller groups of people, and in this case we worked with colleagues from the Catholic University of Louvain in Belgium, the Karlsruhe Institute of Technology in Germany, the University of Malaya in Malaysia, and the University of Kansas (in Kansas).  We relied on the efforts of a strong group of graduate students with the assistance of harried senior physicists like myself, and the whole team did a great job of supporting each other and stepping up to solve problems as they arose.  These team efforts are one of the things that I’m proud of in particle physics, and that make our scientists so successful in the wider world.


Why pure research?

Thursday, October 2nd, 2014

With my first post on Quantum Diaries I will not address a technical topic; instead, I would like to talk about the act (or art) of “studying” itself. In particular, why do we care about fundamental research, pure knowledge without any practical purpose or immediate application?

A. Flexner in 1939 authored a contribution to Harper’s Magazine (issue 179) titled “The Usefulness of Useless Knowledge”. He opens the discussion with an interesting question: “Is it not a curious fact that in a world steeped in irrational hatreds which threaten civilization itself, men and women – old and young – detach themselves wholly or partly from the angry current of daily life to devote themselves to the cultivation of beauty, to the extension of knowledge […] ?”

Nowadays this question is still with us, and the need for a satisfactory answer is probably even stronger.

From a pragmatic point of view, we can argue that there are many important applications and spin-offs of theoretical investigations into the deep structure of Nature that did not arise immediately after the scientific discoveries. This is, for example, the case of QED and antimatter, the theories for which date back to the 1920s and are nowadays exploited in hospitals for imaging purposes (like in PET, positron emission tomography). The most important discoveries affecting our everyday life, from electricity to the energy bound in the atom, came from completely pure and theoretical studies: electricity and magnetism, summarized in Maxwell’s equations, and quantum mechanics are shining examples.

It may seem that it is just a matter of time: “Wait long enough, and something useful will eventually pop out of these abstract studies!” True. But that would not be the most important answer. To me it is: “Pure research is important because it generates knowledge and education”. It is our own contribution to the understanding of Nature, a short but important step in a marvelous challenge set up by the human mind.

Personally, I find that research into the as-yet-unknown aspects of Nature responds to some partly conscious and partly unconscious desires. Intellectual achievements provide a genuine ‘spiritual’ satisfaction, peculiar to the art of studying. For the sake of truth, I must say that there are also a lot of dark sides: frustration, stress, graduate-depression effects, geographical and economic instability and so on. But leaving all these troubles aside for a while, I think I am pretty lucky in doing this job.


Books, the source of my knowledge

During economically difficult times, it is also legitimate to ask “Why spend a lot of money on expensive experiments like the Large Hadron Collider?” or “Why fund abstract research in labs and universities instead of investing in more socially useful studies?”

We could answer by stressing again the fact that many of the best innovations came from the fuzziest studies. But in my mind the ultimate answer, once and for all, lies in the power of generating culture, and education through its diffusion. Everything occurs within our possibilities and limitations. A willingness to learn, a passion for teaching, blackboards, books and (super)computers: these are our tools.

Citing again Flexner’s paper: “The mere fact that spiritual and intellectual freedoms bring satisfaction to an individual soul bent upon its own purification and elevation is all the justification that they need. […] A poem, a symphony, a painting, a mathematical truth, a new scientific fact, all bear in themselves all the justification that universities, colleges and institutes of research need or require.”

Last but not least, it is remarkable to think about how many people from different parts of the world may have met and collaborated while questing together after knowledge. This may seem a drop in the ocean, but research daily contributes to generating a culture of peace and cooperation among people with different cultural backgrounds. And that is for sure one of the more important practical spin-offs.


This article appeared in Fermilab Today on Sept. 30, 2014.

Illinois Mathematics and Science Academy students Nerione Agrawal (left) and Paul Nebres (right) work on the Muon g-2 experiment through the Student Inquiry and Research program. Muon g-2 scientist Brendan Kiburg (center) co-mentors the students. Photo: Fermilab

As an eighth grader, Paul Nebres took part in a 2012 field trip to Fermilab. He learned about the laboratory’s exciting scientific experiments, said hello to a few bison and went home inspired.

Now a junior at the Illinois Mathematics and Science Academy (IMSA) in Aurora, Nebres is back at Fermilab, this time actively contributing to its scientific program. He’s been working on the Muon g-2 project since the summer, writing software that will help shape the magnetic field that guides muons around a 150-foot-circumference muon storage ring.

Nebres is one of 13 IMSA students at Fermilab. The high school students are part of the academy’s Student Inquiry and Research program, or SIR. Every Wednesday over the course of a school year, the students use these weekly Inquiry Days to work at the laboratory, putting their skills to work and learning new ones that advance their understanding in the STEM fields.

The program is a win for both the laboratory and the students, who work on DZero, MicroBooNE, MINERvA and electrical engineering projects, in addition to Muon g-2.

“You can throw challenging problems at these students, problems you really want solved, and then they contribute to an important part of the experiment,” said Muon g-2 scientist Brendan Kiburg, who co-mentors a group of four SIR students with scientists Brendan Casey and Tammy Walton. “Students can build on various aspects of the projects over time toward a science result and accumulate quite a nice portfolio.”

This year roughly 250 IMSA students are in the broader SIR program, conducting independent research projects at Argonne National Laboratory, the University of Chicago and other Chicago-area institutions.

IMSA junior Nerione Agrawal, who started in the SIR program this month, uses her background in computing and engineering to simulate the potential materials that will be used to build Muon g-2 detectors.

“I’d been to Fermilab a couple of times before attending IMSA, and when I found out that you could do an SIR at Fermilab, I decided I wanted to do it,” she said. “I’ve really enjoyed it so far. I’ve learned so much in three weeks alone.”

The opportunities for students at the laboratory extend beyond their particular projects.

“We had the summer undergraduate lecture series, so apart from doing background for the experiment, I learned what else is going on around Fermilab, too,” Nebres said. “I didn’t expect the amount of collaboration that goes on around here to be at the level that it is.”

In April, every SIR student will create a poster on his or her project and give a short talk at the annual IMSAloquium.

Kiburg encourages other researchers at the lab to advance their projects while nurturing young talent through SIR.

“This is an opportunity to let a creative person take the reins of a project, steward it to completion or to a point that you could pick up where they leave off and finish it,” he said. “There’s a real deliverable outcome. It’s inspiring.”

Leah Hesla


Dark matter is a tough thing to study. There is no getting around it: any strategy we can come up with to look for these invisible mystery particles must hinge on the sneaky little creatures interacting in some way with ordinary Standard Model particles. Otherwise we haven’t got even the slightest chance of seeing them.

One of the most popular classes of dark matter candidates is the Weakly Interacting Massive Particles (WIMPs), so called because they do not interact electromagnetically, only weakly, with ordinary matter.  In direct detection we look for WIMPs that interact by scattering off of a Standard Model particle. In contrast, indirect detection looks for interactions that consist of a dark matter particle (either a WIMP or a non-WIMP — it doesn’t matter)  annihilating with another dark matter particle or decaying on its own into Standard Model particles.  These Standard Model end products we have a good chance of detecting if we can just get our backgrounds low enough. In my last post, “Dark Skies: A Very Brief Guide to Indirect Detection,” I gave a more detailed look at the kinds of annihilation and decay products that we might expect from such a process and spoke briefly about some of the considerations that must go into a search for particles from these annihilation and decay processes. Today I will highlight three of the indirect detection experiments currently attacking the dark matter problem.

***

AMS-02

The Alpha Magnetic Spectrometer (AMS-02) is a large indirect detection experiment situated on the International Space Station. I am especially excited to talk to you about this experiment because just a couple of days ago AMS-02 released a very interesting result. Although I include a link to the press release and the relevant papers below, I intend to give away the punchline by summarizing here everything I know about the AMS-02 experiment and their result from this past week.

Fig. 1: (Left) A 3D rendering of the AMS-02 detector, from the AMS-02 group website located at www.ams02.org.  (Right) A schematic of the AMS-02 detector showing all of its various subsystems [1].

First, let’s talk about the design of the experiment (Fig. 1). As the infamous Shrek once said, ogres are like onions. Well, most big particle physics experiments are like onions too. They consist of many layers of detectors interspersed with different kinds of shielding – the quintessential example being big collider experiments like ATLAS and CMS at the Large Hadron Collider. AMS-02 has just as many layers and almost as much complexity to it as something like ATLAS. In no particular order, these layers are:

  • A donut-shaped permanent magnet surrounding most of the AMS-02 detector. Any particles traversing the hole in the donut will be deflected by the magnet, and particles of different charges are deflected in different directions. The magnet therefore is an effective way to separate positrons from electrons, positive ions from negative ones, and antimatter from ordinary matter.
  • Veto or anticoincidence counters (ACCs) that lie just inside the magnet. The ACCs tag any particle that enters the detector through the side rather than through the hole in the magnet, allowing AMS-02 to reject particle events that do not have well-understood trajectories.
  • A Transition Radiation Detector (TRD) consisting of twenty layers of material that provide discrimination between extremely high-energy leptons (positrons, electrons) and hadrons (protons, etc.) traveling near the speed of light. Each time an electron or positron passes through the TRD, it produces a shower of X-rays as it crosses the interface between layers, but a proton or other hadron does not.
  • The Time of Flight (ToF) system, which consists of four layers of scintillation counters, two above and two below the detector, that measure the time it takes for a particle to traverse the detector and also serve as a trigger system, indicating to the other subsystems that a particle has entered the detector.
  • The tracker, which consists of eight layers of double-sided silicon sensors that record the path of any particle that enters the detector through the magnet. By the time a particle has reached the tracker, the magnet has already done half the work by separating positively charged from negatively charged particles. By looking at the trajectory of each particle inside the tracker, AMS-02 can not only determine which particles are positive and negative but also the atomic number Z of any nucleus that enters the detector, hence the “Spectrometer” part of “Alpha Magnetic Spectrometer.” (A back-of-the-envelope sketch of this idea follows the list.)
  • A Ring Imaging Cherenkov detector (RICH) which measures particle velocities by looking at the shape of the Cherenkov radiation that is produced by incident particles.
  • An electromagnetic calorimeter (ECAL) consisting of a dense piece of material inside which incident particles produce secondary showers of particles. By looking at these particle showers, the ECAL helps to discriminate between electromagnetically showering particles (electrons, positrons, gammas) and hadrons (e.g. protons) that, if they have the same trajectory through the magnet, might otherwise be impossible to tell apart.
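
As a rough sketch of what the magnet-plus-tracker combination measures (idealized: uniform field, no scattering, invented numbers):

```python
# Idealized spectrometer arithmetic: uniform field, no scattering.
# A particle of momentum p and charge Ze bends with radius r = p/(ZeB),
# so p[GeV/c] / Z ~= 0.3 * B[tesla] * r[m] (0.3 absorbs unit factors).
B = 0.15      # assumed field strength in tesla (illustrative only)
r = 22.2      # bending radius in meters, reconstructed from the hits

rigidity = 0.3 * B * r   # momentum per unit charge, in GV
print(f"rigidity ~ {rigidity:.1f} GV")   # ~1 GV

# The bend direction gives the sign of the charge (matter vs
# antimatter); combining rigidity with velocity (ToF, RICH) and energy
# (ECAL) pins down the particle's mass and atomic number Z.
```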

Although it sounds complicated, the combined power of all these various subdetectors allows AMS-02 to do a spectacular job of characterizing many different types of particles.  Here, the particles relevant to dark matter detection are antimatter particles such as positrons, antiprotons, and antideuterons. In the absence of exotic processes, we expect the spectra of these particles to be smoothly decreasing, isotropic, and generally well-behaved. Any bumps or features in, say, the positron or antiproton spectra would indicate some new process at work –like possibly WIMP annihilations [2].


Fig. 2: The first positron fraction measurement produced by the AMS-02 collaboration, released in April 2013 [3].

Back in April 2013, AMS-02 released its first measurement of the fraction of positrons as compared to electrons in cosmic rays (Fig. 2). Clearly the curve in Fig. 2 is not decreasing – there is some other source of positrons at work here. There was some small speculation among the scientific community that the rise in positron fraction at higher energies could be attributed to dark matter annihilations, but the general consensus was that this shape is caused by a more ordinary astrophysical source such as pulsars. So in general, how do you tell what kind of positron source could cause this shape of curve? The answer is this: if the rise in positron fraction is due to dark matter annihilations, you can expect to see a very sharp dropoff at higher energies. A less exotic astrophysical source would result in a smooth flattening of this curve at higher energies [3].


Fig. 3: An updated positron fraction measurement from two years’ worth of data released by the AMS-02 collaboration on September 18, 2014 [4].

On September 18, 2014, AMS-02 released a followup to its 2013 result covering a larger range of energies in order to further investigate this positron excess (Fig. 3). The positron fraction curve does in fact begin to drop off at higher energies. Is it a smoking-gun signal of WIMP annihilations? Not yet – there are not enough statistics at high energies to differentiate between a smooth turnover and an abrupt drop in positron fraction. For now, the AMS-02 team plans to investigate the positron flux at even higher energies to determine the nature of this turnover and to do a similar measurement with the antiproton fraction as compared to regular protons.
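
To see why statistics matter here, consider a toy comparison (functional forms and numbers invented purely for illustration) between a positron fraction that flattens smoothly and one with a sharp kinematic cutoff:

```python
import numpy as np

energies = np.logspace(0, 3, 10)   # 1 GeV to 1 TeV, illustrative grid

def pulsar_like(E):
    """Toy positron fraction that rises then flattens smoothly."""
    return 0.05 + 0.11 * E / (E + 100.0)

def wimp_like(E, m_wimp=400.0):
    """Same toy rise, but cut off sharply near the WIMP mass."""
    return np.where(E < m_wimp, 0.05 + 0.11 * E / (E + 100.0), 0.05)

for E in energies:
    print(f"{E:7.1f} GeV  pulsar-like {pulsar_like(E):.3f}"
          f"  wimp-like {float(wimp_like(E)):.3f}")

# Below a few hundred GeV the two toys are identical; only events at
# the highest energies -- exactly where counts are scarce -- can tell
# a gentle rollover from an abrupt drop.
```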

For a webcast of the official press release, you can go here. Or, to read about the AMS-02 results in more detail, check out the references [1] and [4] at the bottom of this article.

***

Fermi-LAT


Fig. 4: A view of the gamma-ray sky from the Fermi-LAT instrument, from http://fermi.gsfc.nasa.gov/ssc/. The bright line is the galactic plane.

The Fermi Large Area Telescope (LAT) is another indirect detection experiment that has seen hints of something interesting. In this particular experiment, the signal of interest comes from gamma rays with energies ranging from tens of MeV to more than 300 GeV. The science goals are to study and catalog localized gamma-ray sources, to investigate the diffuse gamma-ray background in our part of the universe, and to search for gamma rays resulting from new physics processes, such as dark matter annihilations.

Because the Earth’s atmosphere is not transparent to gamma rays, our best chance to study them lies out in space. The Fermi-LAT is a very large space-based observatory that detects gammas through a mechanism called pair conversion, in which a high-energy photon, rather than being reflected or refracted upon entering a medium, converts into an electron-positron pair. Inside the LAT, this conversion takes place inside a tracker module in one of several thin yet very dense layers of tungsten. There are sixteen of these conversion layers in total, interleaved with eighteen layers of silicon strip detectors that record the x- and y-positions of any tracks produced by the electron-positron pair. Beneath the tracker modules is a calorimeter consisting of a set of CsI modules that absorb the full energy of the electrons and positrons and therefore give a good measure of the energy of the original gamma ray. Finally, the entire detector is covered by an anticoincidence detector (ACD) consisting of plastic scintillator tiles that scintillate when charged particles pass through them but not when gamma rays do, thereby providing a way to discriminate the signal of interest from cosmic-ray backgrounds (Fig. 5).


Fig. 5: (Left) A 3D rendering of the Fermi spacecraft deployed above the earth, from http://fermi.gsfc.nasa.gov/. (Right) A design schematic of the Fermi-LAT instrument, also from http://fermi.gsfc.nasa.gov/.

One of the nice things about the Fermi telescope is that it not only has a wide field of view and continually scans across the entire sky, but it can also be pointed at specific locations. Let’s consider for a moment some of the places we could point the Fermi-LAT if we are hoping to detect a dark matter signal [6].

  • The probability of dark matter annihilations taking place is highest in regions of high density like our galactic center, but there is a huge gamma-ray background there from many different astrophysical sources. If we look further out into our galactic halo, there will be less background, but also fewer signal events – and still a diffuse gamma-ray background to contend with. However, a very narrow peak in the gamma spectrum that is present across the entire sky and not associated with any one particular localized astrophysical source would be very suggestive of a dark matter signal – exactly the kind of smoking gun we are looking for.
  • We could also look at other galaxies. Certain kinds of galaxies called dwarf spheroidals are particularly promising for a number of reasons. First of all, the Milky Way has several close dwarf neighbors, so there are plenty to choose from. Second, dwarf galaxies are very dim. They have few stars, very little gas, and few gamma-ray sources like pulsars or supernova remnants. Should a gamma signal be seen across several of these dwarf galaxies, it would be very hard to explain by any means other than dark matter annihilation.
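
To see what such a search looks like in practice, here is a toy bump hunt: Poisson counts in energy bins over a smooth power-law background, with an injected narrow line near 130 GeV. Every number is invented for illustration; real line searches use full likelihood fits.

```python
import numpy as np

# Toy "bump hunt" over a smooth gamma-ray spectrum (invented numbers).
rng = np.random.default_rng(0)
edges = np.logspace(np.log10(50.0), np.log10(300.0), 41)   # energy bin edges in GeV
centers = np.sqrt(edges[:-1] * edges[1:])

bkg_expected = 3e5 * centers**-2.5 * np.diff(edges)            # smooth power-law background
line = 8.0 * np.exp(-0.5 * ((centers - 130.0) / 10.0)**2)      # narrow line, ~10 GeV resolution
counts = rng.poisson(bkg_expected + line)

# Naive counting significance in a window around the candidate line
window = (centers > 110.0) & (centers < 150.0)
excess = counts[window].sum() - bkg_expected[window].sum()
sigma = excess / np.sqrt(bkg_expected[window].sum())
print(f"excess of {excess:.0f} counts, local significance ~{sigma:.1f} sigma")
# A real analysis must also account for the "look-elsewhere" effect:
# scanning many energies makes chance bumps more likely.
```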

In spring 2012, two papers came out one after the other suggesting that a sharp gamma peak had indeed been found near the galactic center, which you can see in Fig. 6 [7, 8]. What is the cause of this feature? Was it some kind of instrumental effect? A statistical fluctuation? Was it the dark matter smoking gun? The Fermi-LAT team officially commented on these papers later that November, reporting a feature that was much less statistically significant, located closer to 135 GeV, and also present in a control sample of gamma rays produced by cosmic rays in the Earth’s atmosphere – a hint that the line might be instrumental in origin [9].

Fig. 6: The gamma-ray spectrum produced from three years of Fermi-LAT data, as reported in [7]. The black points show the number of counts observed in each energy bin, the green markers represent the best-fit background-only model, the red markers represent the best-fit background + WIMP annihilation model, and the blue dots represent the best-fit WIMP annihilation component with the background subtracted off. The bottom panel shows the residual.

This gamma line has been an active target of study since 2012. In 2013, the Fermi-LAT group released a further investigation of the feature using over 3.7 years of data. A bump in the spectrum at about 133 GeV was again observed, consistent with the 2012 papers, but with decreased statistical significance, in part because the feature was narrower than the energy resolution of the LAT and because a similar (yet smaller) feature was seen in the Earth’s “limb”, the outermost edge of the atmosphere [10]. The hypothesis that this bump in the gamma-ray spectrum is a WIMP signal has all but fallen out of favor within the scientific community.

In the meantime, Fermi-LAT has also been looking for gamma rays from nearby dwarf galaxies. A survey of 25 dwarf galaxies near the Milky Way yielded no statistically significant signal [11]. For the next few years, Fermi will continue its search for dark matter as well as continuing to catalog and investigate other astrophysical gamma-ray sources.

***

IceCube

Fig. 7: Members of the IceCube collaboration. Image from http://news.ls.wisc.edu/.

Last but certainly not least, I wanted to discuss one of my particular favorite experiments. IceCube is really cool for many reasons, not the least of which is that it is situated at the South Pole! Like LUX (my home experiment), IceCube consists of a large array of light sensors (photomultiplier tubes) that look for flashes of light indicating particle interactions within a large passive target. Unlike LUX, however, the target medium in IceCube is the Antarctic ice itself, which sounds absolutely fantastical until you consider that deep ice is extremely clear and uniform, because the pressure prevents bubble formation, and that it is also very dark, so any flashes of light inside it stand out.

In IceCube, neutrinos are the main particles of interest. They are the ninjas of the particle physics world – neutrinos interact only very rarely with other particles and are actually rather difficult to detect. However, when your entire detector consists of a giant chunk of ice 2.5 kilometers deep, that’s a lot of material, resulting in a not-insignificant probability that a passing neutrino will interact with an atom inside your detector. A neutrino interacting with the ice will produce a shower of secondary charged particles, which in turn produce Cherenkov radiation that can be detected. Neutrinos are pretty awesome in their own right, and there is a wealth of interesting neutrino research currently taking place. They can also tell us a lot about a variety of astrophysical entities such as gamma-ray bursts and supernovae. And most importantly for this article, neutrinos can be produced in dark matter annihilations.
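
Since what the modules actually record is Cherenkov light, it’s worth a quick calculation of the emission angle from the standard relation cos θ = 1/(nβ); the refractive index n ≈ 1.31 used here is a ballpark figure for deep ice, not an official IceCube value:

```python
import math

# Cherenkov angle for a relativistic (beta ~ 1) charged particle in ice,
# from cos(theta) = 1 / (n * beta).
n_ice = 1.31   # approximate phase refractive index of deep Antarctic ice
beta = 1.0
theta = math.degrees(math.acos(1.0 / (n_ice * beta)))
print(f"Cherenkov angle in ice: about {theta:.0f} degrees")  # ~40 degrees
```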

Unfortunately for the dark matter search, muons and neutrinos produced by cosmic-ray interactions in the atmosphere are a major source of background in the detector. Muons can travel long distances in most materials – energetic ones penetrate kilometers of ice, which is why they reach the detector at all – but they don’t travel nearly as far as neutrinos, which can pass through the entire planet. A good way to discriminate between cosmic-ray muons and neutrinos is therefore to eliminate any downward-traveling particles. Any upward-traveling particle track must come from a neutrino, because only a neutrino can traverse the entire diameter of the Earth without getting stopped somewhere in the ground. To put it more succinctly: IceCube makes use of the entire Earth as a shield against backgrounds! Atmospheric neutrinos are more difficult to distinguish from the astrophysical neutrinos relevant to dark matter searches, but are nevertheless an active target of study and are becoming increasingly well understood.
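
Here is a minimal sketch of that up-going/down-going cut, applied to made-up track directions; the small margin below the horizon is an arbitrary illustrative choice, not an IceCube analysis cut:

```python
import numpy as np

# "Earth as a shield": keep only up-going tracks. cos(zenith) = +1 means a
# track coming straight down; -1 means coming up through the Earth.
rng = np.random.default_rng(1)
cos_zenith = rng.uniform(-1.0, 1.0, size=100_000)  # stand-in reconstructed directions

# Require tracks clearly below the horizon; the margin helps reject
# mis-reconstructed down-going muons.
neutrino_candidates = cos_zenith < -0.1
print(f"kept {neutrino_candidates.mean():.1%} of tracks as neutrino candidates")
```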

Now that we’ve talked about the rationale for building gigantic detectors out of ice and the kinds of signals and backgrounds to expect in them, let’s talk about the actual design of the experiment. IceCube consists of 86 strings of 60 digital optical modules each, every module holding a photomultiplier tube and a readout board, deployed between 1.5 and 2.5 kilometers deep in the Antarctic ice. How do you get the modules down there? Only with the help of very powerful hot-water drills. The drilling of these holes and the construction of IceCube is exciting enough that it probably warrants its own article.
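
Those layout numbers lend themselves to some quick back-of-the-envelope arithmetic:

```python
# Back-of-the-envelope numbers implied by the layout described above.
strings = 86
doms_per_string = 60
print(strings * doms_per_string, "optical modules in total")   # 5160

shallowest_km, deepest_km = 1.5, 2.5
print(f"instrumented depth span: {deepest_km - shallowest_km:.1f} km of ice")
```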

Fig. 8: A schematic of the IceCube experiment.  Note the Eiffel Tower included for scale.  Image from [12].

Alongside the strings that make up the bulk of the detector, IceCube also contains a couple of other subdetectors. There is a detector called IceTop on the surface of the ice that is used to help veto events that are atmospheric in origin. There is another detector called DeepCore, consisting of additional strings whose optical modules are packed much more tightly together than on the regular strings, for the purpose of increasing the sensitivity to low-energy events. Other special extensions designed to look for very high and very low energy events are also planned for the near future.

With regards to the quest for dark matter, the IceCube strategy is to focus on two different WIMP annihilation channels: χχ → W+W- (or τ+τ- for WIMPs lighter than the W boson) and χχ → b b-bar. In each case, the annihilation products decay into secondary particles, including some neutrinos. By examining neutrinos both from the sun and from other galaxies and galaxy clusters, IceCube has produced very competitive limits on the cross section for dark matter annihilation via these and other similar channels [13, 14].

Fig. 9: IceCube’s 2012 limit on spin-dependent dark matter–proton interactions. The black curves correspond to different neutrino models. Image from [15].

For more information, there is a wonderful in-depth review of the IceCube detector design and status in the Annual Review of Nuclear and Particle Science.

***

So there you have it. These are some of the big projects keeping an eye out for WIMPs in the sky. At least some of these experiments have seen hints of something promising, so over the next couple of years maybe we’ll finally get that five-sigma discovery we want so badly to see.

References:

[1] AMS Collaboration. “High statistics measurement of the positron fraction in primary cosmic rays of 0.5-500 GeV with the Alpha Magnetic Spectrometer on the International Space Station.” Phys. Rev. Lett. 113 (2014) 121101.

[2] AMS Collaboration. “First result from the Alpha Magnetic Spectrometer on the International Space Station: Precision measurement of the positron fraction in primary cosmic rays of 0.5-350 GeV.” Phys. Rev. Lett. 110 (2013) 141102.

[3] Serpico, Pasquale D. “Astrophysical models for the origin of the positron ‘excess’.”  Astroparticle Physics, Vol. 39, pg. 2-11 (2011). ArXiv e-print 1108.4827.

[4] AMS Collaboration. “Electron and positron fluxes in primary cosmic rays measured with the Alpha Magnetic Spectrometer on the International Space Station.” Phys. Rev. Lett. 113 (2014) 121102.

[5] Fermi-LAT Collaboration.  “The large area telescope on the Fermi Gamma-Ray Space Telescope mission.” The Astrophysical Journal, Vol. 697, Issue 2, pp. 1071-1102 (2009). ArXiv e-print 0902.1089.

[6] Bloom, Elliott. “The search for dark matter with Fermi.” Conference presentation – Dark Matter 2014, Westwood, CA. http://www.pa.ucla.edu/sites/default/files/webform/ElliottBloom_UCLADMMeeting_022614_FinalkAsPlacedOnDM2014Website.pdf.

[7] Bringmann, Torsten, et al. “Fermi LAT search for internal bremsstrahlung signatures from dark matter annihilation.” JCAP 1207 (2012) 054. ArXiv e-print 1203.1312.

[8] Weniger, Christoph. “A tentative gamma-ray line from dark matter annihilation at the Fermi Large Area Telescope.” JCAP 1208 (2012) 007. ArXiv e-print 1204.2797.

[9] Albert, Andrea. “Search for gamma-ray spectral lines in the Milky Way diffuse with the Fermi Large Area Telescope.” Conference presentation – The Fermi Symposium 2012. http://fermi.gsfc.nasa.gov/science/mtgs/symposia/2012/program/fri/AAlbert.pdf.

[10] Fermi-LAT Collaboration. “Search for gamma-ray spectral lines with the Fermi Large Area Telescope and dark matter implications.” Phys. Rev. D 88 (2013) 082002. ArXiv e-print 1305.5597.

[11] Fermi-LAT Collaboration. “Dark matter constraints from observations of 25 Milky Way satellite galaxies with the Fermi Large Area Telescope.” Phys. Rev. D 89 (2014) 042001. ArXiv e-print 1310.0828.

[12] IceCube Collaboration. “Measurement of South Pole ice transparency with the IceCube LED calibration system.” ArXiv e-print 1301.5361.

[13] IceCube Collaboration. “Search for dark matter annihilations in the Sun with the 79-string IceCube detector.” Phys. Rev. Lett. 110, 131302 (2013). ArXiv e-print 1212.4097v2.

[14] IceCube Collaboration. “IceCube search for dark matter annihilation in nearby galaxies and galaxy clusters.” Phys. Rev. D 88 (2013) 122001. ArXiv e-print 1307.3473v2.

[15] Arguelles, Carlos A., and Joachim Kopp. “Sterile neutrinos and indirect dark matter searches in IceCube.” JCAP 1207 (2012) 016. ArXiv e-print 1202.3431.
