
The Higgs Boson: A Natural Disaster!

Kyle Cranmer
Saturday, February 1st, 2014

The discovery of the Higgs boson was a triumph for particle physics: it completes the tremendously successful Standard Model of particle physics. Of course, we know there are other phenomena — like dark matter, the dominance of matter over anti-matter, the mass of neutrinos, etc. — that aren’t explained by the Standard Model. However, the Higgs itself is the source of one of the deepest mysteries of particle physics: the fine-tuning problem.

The fine-tuning problem is related to the slippery concept of naturalness, and has driven the bulk of theoretical work for the last several decades.  Unfortunately, it is notoriously difficult to explain.  I took on this topic recently for a public lecture and came up with an analogy that I would like to share.

Why we take our theory seriously

Before discussing the fine-tuning problem, we need a few prerequisites.  The first thing to know is that the Standard Model (and most other theories we are testing) is based on a conceptual framework called Relativistic Quantum Field Theory (QFT).  As you might guess from the name, it’s based on the pillars of relativity, quantum mechanics, and field theory.  The key point here is that relativistic quantum field theory goes beyond the initial formulation of quantum mechanics.  To illustrate this difference, let’s consider a property of the electron and muon called the “g-factor”, which relates a particle’s magnetic moment and spin [more].  In standard quantum mechanics, the prediction is that g=2; however, with relativistic quantum field theory we expect corrections.  Those corrections are shown pictorially in the Feynman diagrams below.
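For reference, the g-factor mentioned here is defined (in the usual convention) by the relation between a particle’s magnetic moment and its spin,

$$
\vec{\mu} \;=\; g\,\frac{q}{2m}\,\vec{S},
$$

with g = 2 being the textbook quantum-mechanics value quoted above; the diagrams below supply the small quantum corrections.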

[Figure: Feynman diagrams of the quantum corrections to g-2]

It turns out that this correction is small — about one part in a thousand.  But we can calculate it to an exquisite accuracy (about ten digits).  Moreover, we can measure it to a comparable accuracy.  The current result for the muon is

g = 2.0023318416 ± 0.000000001

This is a real tour de force for relativistic quantum field theory and represents one of the most stringent tests of any theory in the history of science [more].  To put it into perspective, it’s slightly better than hitting a hole in one from New York to China (that distance is about 10,000 km = 1 billion cm).
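For those who want to check the back-of-the-envelope arithmetic behind that comparison (a rough sketch, treating the quoted uncertainty as the size of the target):

$$
\frac{\delta g}{g} \approx \frac{1\times 10^{-9}}{2.002\ldots} \approx 5\times 10^{-10},
\qquad
5\times 10^{-10} \times 10^{9}\ \mathrm{cm} \approx 0.5\ \mathrm{cm},
$$

so over the roughly 10,000 km from New York to China the comparable target is about half a centimetre across, comfortably smaller than a golf hole.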

It is because of tests like these that we take the predictions of this conceptual framework very seriously.

[Figure: illustration of the precision of the g-2 prediction and measurement]

The Higgs, fine tuning, and an analogy

It turns out that all quantities that we can predict receive similar quantum corrections, even the mass of the Higgs boson.    In the Standard Model, there is a free parameter that can be thought of as an initial estimate for the Higgs mass, let’s call it M₀.  There will also be corrections, let’s call them ΔM (where Δ is pronounced “delta” and it indicates “change to”).   The physical mass that we observe is this initial estimate plus the corrections.  [For the aficionados: usually physicists talk about the mass squared instead of the mass, but that does not change the essential message].
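Written out schematically (a sketch with order-one factors suppressed, and using the aficionado’s mass-squared version), the relation and the dominant top-quark contribution to the correction look like

$$
m_H^2 \;=\; M_0^2 \;+\; \Delta M^2,
\qquad
\Delta M^2 \;\sim\; -\frac{3\,y_t^2}{8\pi^2}\,\Lambda^2 \;+\; \cdots
$$

where $y_t \approx 1$ is the top quark’s coupling to the Higgs and $\Lambda$ is the “cutoff” energy scale discussed a few paragraphs below.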

The funny thing about the mass of the Higgs is that the corrections are not small.  In fact, the naive size of the corrections is enormously larger than the 126 GeV mass that we observe!

Confused?  Now is a good time to bring in the analogy.  Let’s think about the budget of a large country like the U.S.  We will think of positive contributions to the Higgs mass as income (taxes) and negative contributions to the Higgs mass as spending.  The physical Higgs mass that we measure corresponds to the budget surplus.

Now imagine that there is no coordination between the raising of taxes and government spending (maybe that’s not so hard to imagine). Wouldn’t you be surprised if a large economy of trillions of dollars had a budget balanced to better than a penny?  Wouldn’t it be unnatural to expect such a fine tuning between income and spending if they are just independent quantities?

This is exactly the case we find ourselves in with the Standard Model… and we don’t like it.  With the discovery of the Higgs, the Standard Model is now complete.  It is also the first theory we have had that can be extrapolated to very high energies (we say that it is renormalizable). But it has a severe fine tuning problem and does not seem natural.

[Figure: the federal budget analogy]

[Figure: table mapping the budget analogy onto the Higgs mass corrections]

The analogy can be fleshed out a bit more.  It turns out that the size of the corrections to the Higgs mass is related to something we call the cutoff, which is the  energy scale where the theory is no longer a valid approximation because some other phenomena become important.  For example, in a grand unified theory the strong force and the electroweak force would unify at approximately 10¹⁶ GeV (10 quadrillion GeV), and we would expect the corrections to be of a similar size.  Another common energy scale for the cutoff is the Planck Scale — 10¹⁹ GeV — where the quantum effects of gravity become important.  In the analogy, the cutoff energy corresponds to the fiscal year.  As time goes on, the budget grows and the chance of balancing the budget so precisely seems more and more unnatural.
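To put rough numbers on how delicate that cancellation has to be, here is a minimal back-of-the-envelope sketch (order-one factors from the actual loop diagrams are ignored; the 126 GeV mass and the two cutoff values are the ones quoted above):

```python
# Rough estimate of the fine tuning: the corrections enter at the level of the
# mass squared, so the "income" and "spending" in the analogy must cancel to
# about (m_H / cutoff)**2 if the naive corrections are of order the cutoff.
m_higgs = 126.0  # GeV, the observed Higgs mass

cutoffs_gev = {
    "grand unification scale": 1e16,  # GeV
    "Planck scale": 1e19,             # GeV
}

for name, cutoff in cutoffs_gev.items():
    tuning = (m_higgs / cutoff) ** 2
    print(f"{name}: budget balanced to about 1 part in {1 / tuning:.0e}")
```

With the grand-unification cutoff this gives a cancellation to roughly one part in 10²⁷, and with the Planck-scale cutoff roughly one part in 10³³ — the sense in which the budget is balanced “to better than a penny.”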

Going even further, I can’t resist pointing out that the analogy even offers a nice way to think about one of the most enigmatic concepts in quantum field theory called renormalization.  We often use this term to describe how fundamental constants aren’t really constant.  For example, the  charge of an electron depends on the energy you use to probe the electron.  In the analogy, renormalization is like adjusting for inflation.  We know that a dollar today isn’t comparable to a dollar fifty years ago.

Breaking down the budget

The first thing one wants to do before attempting to balance the budget is to find out where the money is going.  In the U.S. the big budget items are the military and social programs like Social Security and Medicare.  In the case of the Higgs, the biggest corrections come from the top quark (the top-quark diagrams shown below).  Of course the big budget items get most of the attention, and so it is with physics as well.  Most of the thinking that goes into solving the fine-tuning problem is related to the top quark.

[Figure: breakdown of the corrections to the Higgs mass, dominated by the top quark]

Searching for a principle to balance the budget

Maybe it’s not a miraculous coincidence that the budget is balanced so well.  Maybe there is some underlying principle.  Maybe someone came to Washington DC and passed a law to balance the budget that says that for every dollar of spending there must be a dollar of revenue.  This is an excellent analogy for supersymmetry.  In supersymmetry, there is an underlying principle — a symmetry — that relates two types of particles (fermions and bosons).  These two types of particles give corrections to the Higgs mass with opposite signs.  If this symmetry were perfect, the budget would be perfectly balanced, and it would not be unnatural for the Higgs to be 126 GeV.
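Schematically (again a sketch, with order-one factors dropped), every fermion loop is accompanied by a boson loop from its superpartner with the opposite sign,

$$
\Delta M^2 \;\sim\; \frac{y^2}{8\pi^2}\left(-\Lambda^2\right)_{\text{fermions}}
\;+\; \frac{y^2}{8\pi^2}\left(+\Lambda^2\right)_{\text{bosons}}
\;+\; \mathcal{O}\!\left((m_B^2 - m_F^2)\ln\Lambda\right),
$$

so if the superpartners had exactly the same masses and couplings, the dangerous $\Lambda^2$ pieces would cancel identically; the splitting between partner masses is what sets the “wiggle room” discussed below.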

That is one of the reasons that supersymmetry is so highly motivated, and there is an enormous effort to search for signs of supersymmetry in the LHC data.  Unfortunately, we haven’t seen any evidence for supersymmetry thus far. In the analogy, that is a bit like saying that if there is some sort of law to balance the budget, it allows for some wiggle room between spending and taxes.  If the law allows for too much wiggle room, then it may still be a law, but it isn’t explaining why the budget is balanced as well as it is.  The current state of the LHC experiments indicates that the budget is balanced about 10-100 times better than the wiggle room allows — which is better than we would expect, but not so much better that it seems unnatural.  However, if we don’t see supersymmetry in the next run of the LHC, the situation will be worse. And if we were to build a 100 TeV collider and not see evidence of supersymmetry, then the level of fine tuning would be high enough that most physicists probably would consider the situation unnatural and abandon supersymmetry as the solution to the fine-tuning problem.

[Figure: cancellation of the corrections to the Higgs mass between particles and their superpartners]

Since the fine-tuning problem was first recognized, there have been essentially two proposed solutions.  One of them is supersymmetry, which I discussed above.  The second is often referred to as strong dynamics or compositeness.  The idea there is that maybe the Higgs is not a fundamental particle, but instead a composite of some more fundamental particles.  My colleague Jamison Galloway and I tried to think through the analogy in that situation. In that case, one must start to think of different kinds of currencies… say the dollar for the Higgs boson and some other currency, like bitcoin, for the more fundamental particles.  You would imagine that as time goes on (energy increases) there is a transition from one currency to another.   At early times the budget is described entirely in terms of dollars, but at later times the budget is described in terms of bitcoin.  That transition can be very complicated, but if it happened at a time when the total budget in dollars wasn’t too large, then a well-balanced budget wouldn’t seem too unnatural.  Trying to explain the rest of the compositeness story took us from a simple analogy to the basis for a series of sci-fi fantasy books, and I will spare you from that.

There are a number of examples where this aesthetic notion of naturalness has been a good guide, which is partially why physicists hold it so dear.  However, another avenue of thinking is that maybe the theory is unnatural; maybe it is random chance that the budget is balanced so well.  That thinking is bolstered by the idea that there may be a huge number of universes that are part of a larger complex we call the multiverse. In most of these universes the budget wouldn’t be balanced, and the Higgs mass would be very different.  In fact, most universes would not form atoms, would not form stars, and would not support life.  Of course, we are here to observe our universe, and the conditions necessary to support life select very special universes out of the larger multiverse.  Maybe it is this requirement that explains why our universe seems so finely tuned.  This reasoning is called the anthropic principle, and it is one of the most controversial topics in theoretical physics. Many consider it to be giving up on a more fundamental theory that would explain why nature is as it is.  The very fact that we are resorting to this type of reasoning is evidence that the fine-tuning problem is a big deal. I discuss this at the end of the public lecture (starting around the 30 min mark) with another analogy for the multiverse, but maybe I will leave that for another post.

Nota bene: After developing this analogy I learned about a similar one from Tommaso Dorigo. Both use the idea of money, but the budget analogy goes a bit further.

No cream, no sugar

Ken Bloom
Monday, January 6th, 2014

My first visit to CERN was in 1997, when I was wrapping up my thesis work. I had applied for, and then was offered, a CERN fellowship, and I was weighing whether to accept it. So I took a trip to Geneva to get a look at the place and make a decision. I stayed on the outskirts of Sergy with my friend David Saltzberg (yes, that David Saltzberg) who was himself a CERN fellow, and he and other colleagues helped set up appointments for me with various CERN physicists.

Several times each day, I would use my map to find the building with the right number on it, and arrive for my next appointment. Invariably, I would show up and be greeted with, “Oh good, you’re here. Let’s go get a coffee!”

I don’t drink coffee. At this point, I can’t remember why I never got started; I guess I just wasn’t so interested, and may also have had concerns about addictive stimulants. So I spent that week watching other people drink coffee. I learned that CERN depends on large volumes of coffee for its operation. It plays the same role as liquid helium does for the LHC, allowing the physicists to operate at high energies and accelerate the science. (I don’t drink liquid helium either, but that’s a story for another time.)

Coffee is everywhere. In Restaurant 1, there are three fancy coffee machines that can make a variety of brews. (Which ones? You’re asking the wrong person.) At breakfast time, the line for the machines stretches across the width of the cafeteria, blocking the cooler that has the orange juice, much to my consternation. Outside the serving area, there are three more machines where one can buy a coffee with a jeton (token) that can be purchased at a small vending machine. (I don’t know how much they cost.) After lunch, the lines for these machines clog the walkway to the place where you deposit your used trays.

Coffee goes beyond the restaurants. Many buildings (including out-of-the-way Building 8, where my office is) have small coffee areas that are staffed by baristas (I suppose) at peak times when people who aren’t me want coffee. Building 40, the large headquarters for the CMS and ATLAS experiments, has a big coffee kiosk, where one can also get sandwiches and small pizzas, good when you want to avoid crazy Restaurant 1 lunchtimes and coffee runs. People line up for coffee here during meeting breaks, which usually puts us even further behind schedule.

Being a non-drinker of coffee can lead to some social discomfort. When two CERN people want to discuss something, they often do it over coffee. When someone invites me for a chat over coffee, I gamely say yes. But when we meet up I have to explain that I don’t actually drink coffee, and then sit patiently while they go to get a cup. I do worry that the other person feels uncomfortable about me watching them drink coffee. I could get a bottle of water for myself — even carbonated water, when I feel like living on the edge — but I rarely do. My wife (who does drink coffee, but tolerates me) gave me a few jetons to carry around with me, so I can at least make the friendly gesture of buying the other person’s coffee, but usually my offer is declined, perhaps because the person knows that he or she can’t really repay the favor.

So, if you see a person in conversation in the Restaurant 1 coffee area, not drinking anything but nervously twiddling his thumbs instead, come over and say hello. I can give you a jeton if you need one.

Will the car start?

Ken Bloom
Saturday, November 9th, 2013

While my family and I are spending a year at CERN, our Subaru Outback is sitting in the garage in Lincoln, under a plastic cover and hooked up to a trickle charger. We think that we hooked it all up right before going, but it’s hard to know for sure. Will the car start again when we get home? We don’t know.

CMS is in a similar situation. The detector was operating just fine when the LHC run ended at the start of 2013, but now we aren’t using it like we did for the previous three years. It’s basically under a tarp in the garage. When proton collisions resume in 2015, the detector will have to be in perfect working order again. So will this car start after not being driven for two years?

Fortunately, we can actually take this car out for a drive. This past week, CMS performed an exercise known as the Global Run in November, or GRIN. (I know, the acronym. You are wondering, if it didn’t go well, would we call it FROWN instead? That too has an N for November.) The main goal of GRIN was to make sure that all of the components of CMS could still operate in concert. In fact, many pieces of CMS have been run during the past nine months, but independently of one another. Actually making everything run together is a huge integration task; it doesn’t just happen automatically. All of the readouts have to be properly synchronized so that the data from the entire detector makes sense. In addition, GRIN was a chance to test out some operational changes that the experiment wants to make for the 2015 run. It may sound like it is a while away, but anything new should really be tested out as soon as possible.

On Friday afternoon, I ran into some of the leaders of the detector run coordination team, and they told me that GRIN had gone very well. At the start, not every CMS subsystem was ready to join in, but by the end of the week, the entire detector was running together, for the first time since the end of collisions. Various problems were overcome along the way — including several detector experts getting trapped in a stuck elevator. But they believe that CMS is in a good position to be ready to go in 2015.

As a member of CMS, that was really encouraging news. Now, if only the run coordinators could tell me where I left the Subaru keys!

2013 Nobel Prize — Made in America?

Ken Bloom
Tuesday, October 8th, 2013

You’re looking at the title and thinking, “Now that’s not true! Francois Englert is Belgian, and Peter Higgs is from the UK. And CERN, where the Higgs discovery was made, is a European lab, not in the US.”

That is all true, but on behalf of the US LHC blog, let’s take a few minutes to review the role of the United States in the Higgs observation that made this prize possible. To be sure, the US was part of an international effort on this, with essential contributions from thousands of people at hundreds of institutes from all over the world, and the Nobel Prize is a validation of the great work of all of them. (Not to mention the work of Higgs, Englert and many other contributing theorists!) But at the same time, I do want to combat the notion that this was somehow a non-US discovery (as some have implied). For many more details, see this link.

US collaborators, about 2000 strong, are a major contingent within both of the biggest LHC experiments, ATLAS and CMS. I’m a member of CMS, where people from US institutions are about one third of the membership of the collaboration. This makes the US physicists the largest single national contingent on the experiment — by no means a majority, but because of our size we have a critical role to play in the construction and operation of the experiment, and the data analysis that follows. American physicists are represented throughout the management structure (including Joe Incandela, the current CMS spokesperson) and deep in the trenches.

While the detectors were painstakingly assembled at CERN, many of the parts were designed, prototyped and fabricated in the US. On CMS, for instance, there has been US involvement in every major piece of the instrument: charged particle tracking, energy measurements, muon detection, and the big solenoid magnet that gives the experiment its name. Along with the construction responsibilities come maintenance and operational responsibilities too; we expect to carry these for the lifetime of the experiment.

The data that these amazing instruments record must then be processed, stored, and analyzed. This requires powerful computers, and the expertise to operate them efficiently. The US is a strong contributor here too. On CMS, about 40% of the data processing is handled at facilities in the US. And then there is the last step in the chain, the data analysis itself that leads to the measurements that allow us to claim a discovery. This is harder to quantify, but I can’t think of a single piece of the Higgs search analysis that didn’t have some US involvement.

Again, this is not to say that the US is the only player here — just to point out that thanks to the long history that the United States has in supporting this science, the US too can share some of the glory of today’s announcement.

Another day at the office

Ken Bloom
Tuesday, October 8th, 2013

I suppose that my grandchildren might ask me, “Where were you when the Nobel Prize for the Higgs boson was announced?” I was at CERN, where the boson was discovered, thus giving the observational support required for the prize. And was I in the atrium of Building 40, where CERN Director General Rolf Heuer and hundreds of physicists had gathered to watch the broadcast of the announcement? Well no; I was in a small, stuffy conference room with about twenty other people.

We were in the midst of a meeting where we were hammering out the possible architecture of the submission system that physicists will be using to submit computing jobs for analyzing the data in the next LHC run and beyond. Not at all glamorous, I know. But that’s my point: the work that is needed to make big scientific discoveries, be it the Higgs or whatever might come next (we hope!), is usually not the least bit glamorous. It’s a slog, where you have to work with a lot of other people to figure out all the difficult little details. And you really have to do this day after day, to make the science work. And there are many aspects of making science work — building advanced scientific instruments, harnessing the power of computers, coming up with clever ways to look at the data (and not making mistakes while at it), and working with colleagues to build confidence in a measurement. Each one of them takes time, effort and patience.

So in the end, today was just another day at the office — where we did the same things we’ve been doing for years to make this Nobel Prize possible, and are laying the groundwork for the next one.

CERN’s universe is ours!

Ken Bloom
Sunday, September 29th, 2013

This past weekend, CERN held its first open days for the public in about five years. This was a big, big deal. I haven’t heard any final statistics, but the lab was expecting about 50,000 visitors on each of the two days. (Some rain on Sunday might have held down attendance.) Thus, the open days were a huge operation — roads were shut down, and Transports Publics Genevois was running special shuttle buses amongst the Meyrin and Prévessin sites and the access points on the LHC ring. The tunnels were open to people who had reserved tickets in advance — a rare opportunity, and one that is only possible during a long shutdown such as the one currently underway.

A better CERN user than me would have volunteered for the open days. Instead, I took my kids to see the activities. We thought that the event went really well. I was bracing for it to be a mob scene, but in the end the Meyrin site was busy but not overrun. (Because the children are too small, we couldn’t go to any of the underground areas.) There were many eager orange-shirted volunteers at our service, as we visited open areas around the campus. We got to see a number of demonstrations, such as the effects of liquid-nitrogen temperatures on different materials. There were hands-on activities for kids, such as assembling your own LHC and trying to use a scientific approach to guessing what was inside a closed box. Pieces of particle detectors and LHC magnets were on display for all to see.

But I have to say, what really got my kids excited was the Transport and Handling exhibit, which featured CERN’s heavy lifting equipment. They rode a scissors lift that took them to a height of several stories, and got to operate a giant crane. Such a thing would never, ever happen in the US, which has a very different culture of legal liability.

I hope that all of the visitors had a great time too! I anticipate that the next open days won’t be until the next long shutdown, which is some years away, but they will be well worth the trip.

Aces high

Ken Bloom
Thursday, September 19th, 2013

Much as I love living in Lincoln, Nebraska, having a long residence at CERN has some advantages. For instance, we do get much better traffic of seminar and colloquium speakers here. (I know, you were thinking about chocolate.) Today’s colloquium in particular really got me thinking about how we do, or don’t, understand particle physics today.

The speaker was George Zweig of MIT. Zweig has been to CERN before — almost fifty years ago, when he was a postdoctoral fellow. (This was his first return visit since then.) He had just gotten his PhD at Caltech under Richard Feynman, and was busy trying to understand the “zoo” of hadronic particles that were being discovered in the 1960’s. (Side note: Zweig pointed out today that at the time there were 26 known hadronic particles…19 of which are no longer believed to exist.) Zweig developed a theory that explained the observations of the time by positing a set of hadronic constituents that he called “aces”. (He thought there might be four of them, hence the name.) Some particles were made of two aces (and thus called “deuces”) and others were made of three (and called “trays”). This theory successfully explained why some expected particle decays didn’t actually happen in nature, and gave an explanation for differences in masses between various sets of particles.

Now, reading this far along, you might think that this sounds like the theory of quarks. Yes and no — it was Murray Gell-Mann who first proposed quarks, and had similar successful predictions in his model. But there was a critical difference between the two theories. Zweig’s aces were meant to be true physical particles — concrete quarks, as he referred to them. Gell-Mann’s quarks, by contrast, were merely mathematical constructs whose physical reality was not required for the success of the theory. At the time, Gell-Mann’s thinking held sway; I’m no expert on the history of this period in theoretical particle physics. But my understanding was that the Gell-Mann approach was more in line with the theoretical fashions of the day, and besides, if you could have a successful theory that didn’t have to introduce some new particles that were themselves sketchy (their electric charges had to be fractions of the electron charge, and they apparently couldn’t be observed anyway), why would you?

Of course, we now know that Zweig’s interpretation is more correct; this was even becoming apparent a few short years later, when deep-inelastic scattering experiments at SLAC in the late 1960’s discovered that nucleons had smaller constituents, but at that time it was controversial to actually associate those with the quarks (or aces). For whatever reason, Zweig left the field of particle physics and went on to a successful career as a faculty member at MIT, doing work in neurobiology that involved understanding the mechanisms of hearing.

I find it a fascinating tale of how science actually gets done. How might it apply to our science today? A theory like the standard model of particle physics has been so well tested by experiment that it is taken to be true without controversy. But theories of physics beyond the standard model, the sort of theories that we’re now trying to test at the LHC, are much less constrained. And, to be sure, some are more popular than others, because they are believed to have a certain inherent beauty to them, or because they fit well with patterns that we think we observe. I’m no theorist, but I’m sure that some theories are currently more fashionable than others. But in the absence of experimental data, we can’t know that they are right. Perhaps there are some voices that are not being heard as well as they need to be. Fifty years from now, will we identify another George Zweig?

Inspired by the Higgs, a step forward in open access

Kyle Cranmer
Thursday, September 12th, 2013

The discovery of the Higgs boson is a major step forward in our understanding of nature at the most fundamental levels. In addition to being the last piece of the standard model, it is also at the core of the fine tuning problem — one of the deepest mysteries in particle physics. So it is only natural that our scientific methodology rise to the occasion to provide the most powerful and complete analysis of this breakthrough discovery.

This week the ATLAS collaboration has taken an important step forward by making the likelihood function for three key measurements about the Higgs available to the world digitally. Furthermore, this data is being shared in a way that represents a template for how particle physics operates in the fast-evolving world of open access to data. These steps are a culmination of decades of work, so allow me to elaborate.

[Figure: four interactions that can produce a Higgs boson at the LHC]

[Figure: Higgs production and decay measured by ATLAS]

First of all, what are the three key measurements, and why are they important? The three results were presented by ATLAS in this recent paper.  Essentially, they are measurements for how often the Higgs is produced at the LHC through different types of interactions (shown above) and how often it decays into three different force carrying particles (photons, W, and Z bosons).  In this plot, the black + sign at (1,1) represents the standard model prediction and the three sets of contours represent the measurements performed by ATLAS.  These measurements are fundamental tests of the standard model and any deviation could be a sign of new physics like supersymmetry!

Ok, so what is the likelihood function, and why is it useful?  Here maybe it is best to give a little bit of history.  In 2000, the first in a series of workshops was held at CERN where physicists gathered to discuss the details of our statistical procedures that lead to the final results of our experiments.  Perhaps surprisingly, there is no unique statistical procedure, and there is a lot of debate about the merits of different approaches.  After a long discussion panel, Massimo Corradi cut to the point:

It seems to me that there is a general consensus that what is really meaningful for an experiment is likelihood, and almost everybody would agree on the prescription that experiments should give their likelihood function for these kinds of results. Does everybody agree on this statement, to publish likelihoods?

And as Louis Lyons, chairing the session, put it…

Any disagreement? Carried unanimously.  That’s actually quite an achievement for this workshop.

So there you have it: the likelihood function is the essential piece of information needed for communicating scientific results.
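For readers meeting the term for the first time: the likelihood is simply the probability of the data that was actually observed, viewed as a function of the theory’s parameters (call them $\mu$),

$$
L(\mu) \;=\; P(\mathrm{data} \mid \mu),
$$

so publishing $L(\mu)$ lets anyone evaluate how well their favourite parameter values describe what the experiment saw, without needing access to the raw data.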

So what happened next?  Well… for years, despite unanimous support, experiments still did not publish their likelihood functions.  Part of the reason is that we lacked the underlying technology to communicate these likelihood functions efficiently.  In the run-up to the LHC we developed some technology (associated with RooFit and RooStats) for sharing very complicated likelihood functions internally.  This would be the ideal way to share our likelihood functions, but we aren’t quite there yet.  In January 2013, we had a conference devoted to the topic of publishing likelihood functions, which culminated in a paper “On the presentation of LHC Higgs results”.  This paper, written by theorists and experimentalists, singled out the likelihood associated with the plot above as the most useful way of communicating information about the Higgs properties.

[Figure: an overlay of the original ATLAS result (filled contours) and the contours reproduced from the official ATLAS likelihood functions]

The reason that these specific Higgs plots are so useful is that more specific tests of the standard model can be derived from them.  For instance, one might want to consider beyond-the-standard-model theories where the Higgs interacts with all the matter particles (fermions) or all the force-carrying particles (vector bosons) differently than in the standard model.  To do that, it is useful to group together all of the information in a particular way and take a special 2-d slice through the 6-d parameter space described by the three 2-d plots above.  The overlay above shows the result of this test (where the axes are called κ_V and κ_F for the vector bosons and fermions, respectively).  What is special about this plot is that it overlays the original ATLAS result (filled contours) and the contours reproduced from the official ATLAS likelihood functions.  While my student Sven Kreiss made the comparison as part of a test, anyone can now reproduce this plot from the official ATLAS likelihood functions.  More importantly, the same procedure that was used to make this plot can be used to test other specific theories — and there are a lot of alternative ways to reinterpret these Higgs results.

Great! So where can you find these likelihood functions, and what does this have to do with open access?  I think this part is very exciting.  CERN is now famous for being the birthplace of the World Wide Web and for having a forward-looking vision for open access to our published papers. The sheer volume and complexity of the LHC data make the conversation about open access to the raw data quite complicated.  However, having access to our published results is much less controversial.  While it is not done consistently, there are several examples of experiments putting information that goes into tables and figures on HepData (a repository for particle physics measurements).  Recently, our literature system INSPIRE started to integrate with HepData so that the data are directly associated with the original publication (here is an example).  What is important is that this data is discoverable and citable.  If someone uses this data, we want to know exactly what is being used, and the collaborations that produced the data deserve some credit.  INSPIRE is now issuing a Digital Object Identifier (DOI) for this data, which is a persistent and trackable link to the data.

So now for the fun part, you can go over to the INSPIRE record for the recent Higgs paper (http://inspirehep.net/record/1241574) and you will see this:

[Figure: the INSPIRE record for the recent ATLAS Higgs paper]

If you click on the HepData tab at the top, it will take you to a list of data associated with this paper.   Each of the three entries has a DOI associated with it (and lists all the ATLAS authors).  For example, the H→γγ result’s DOI is 10.7484/INSPIREHEP.DATA.A78C.HK44, and this is what should be cited for any result that uses this likelihood.    (Note: to get to the actual data, you click on the Files tab.)  INSPIRE is now working so that your author profile will not only include all of your papers, but also the data sets that you are associated with (and you can also see the data associated with your ORCID ID).
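Because the DOI above resolves like any other DOI, one can also fetch a citation for it programmatically. Here is a minimal sketch; the use of the `requests` package and the assumption that the registrar answers DOI content negotiation with BibTeX for this particular record are mine, so the fallback is simply the landing-page link.

```python
# A sketch of resolving the data DOI quoted above (assumptions noted in the text).
import requests

doi = "10.7484/INSPIREHEP.DATA.A78C.HK44"  # the H->gamma gamma likelihood record

# Ask the DOI resolver for citation metadata via content negotiation,
# a mechanism supported by the major DOI registrars.
resp = requests.get(f"https://doi.org/{doi}",
                    headers={"Accept": "application/x-bibtex"},
                    allow_redirects=True, timeout=30)

if resp.ok and "bibtex" in resp.headers.get("Content-Type", ""):
    print(resp.text)  # a ready-to-paste BibTeX entry for the data set
else:
    print(f"Cite or visit: https://doi.org/{doi}")
```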

[Figure: the INSPIRE record for the H→γγ likelihood function]

Now it’s time for me and my co-authors to update our paper “On the presentation of LHC Higgs results” to cite this data.  And next week, Salvatore Mele, head of Open Access at CERN, will give a keynote presentation to the DataCite conference entitled “A short history of the Higgs Boson. From a tenth of a billionth of a second after the Big Bang, through the discovery at CERN, to a DataCite DOI”.

I truly hope that this becomes standard practice for the LHC.  It is a real milestone for the information architecture associated to the field of high energy physics and a step forward in the global analysis of the Higgs boson discovered at the LHC!

Update (Sept. 17): The new version of our paper is out that has citations to the likelihoods.

Update (Sept. 18): The data record now has a citation tab as well, so you can distinguish citations to the data and citations to the paper.

Prioritizing the future

Ken Bloom
Monday, September 9th, 2013

As I’ve discussed a number of times, the United States particle physics community has spent the last nine months trying to understand what the exciting research and discovery opportunities are for the next ten to twenty years, and what sort of facilities might be required to exploit them. But what comes next? How do we decide which of these avenues of research are the most attractive, and, perhaps most importantly, can be achieved given that we work within finite budgets, need the right enabling technologies to be available at the right times, and must be planned in partnership with researchers around the world?

In the United States, this is the job of the Particle Physics Project Prioritization Panel, or P5. What is this big mouthful? First, it is a sub-panel of the High Energy Physics Advisory Panel, or HEPAP. HEPAP is the official body that can advise the Department of Energy and the National Science Foundation (the primary funders of particle physics in the US, and also the sponsors of the US LHC blog) on programmatic direction of the field in the US. As an official Federal Advisory Committee, HEPAP operates in full public view, but it is allowed to appoint sub-panels that are under the control of and report to HEPAP but have more flexibility to deliberate in private. This particular sub-panel, P5, was first envisioned in a report of a previous HEPAP sub-panel in 2001 that looked at, among other things, the long-term planning process for the field. The original idea was that P5 would meet quite regularly and continually review the long-term roadmap for the field and adjust it according to current conditions and scientific knowledge. However, in reality P5s have been short-lived and have been re-formed every few years. The last P5 report dates from 2008, and obviously a lot has changed since then — in particular, we now know from the LHC that there is a Higgs boson that looks like the one predicted in the standard model, and there have been some important advances in our understanding of neutrino mixing. Thus the time is ripe to take another look at the plan.

And so it is that a new P5 was formed last week, tasked with coming up with a new strategic plan for the field “that can be executed over a 10 year timescale, in the context of a 20-year global vision for the field.” P5 is supposed to be taking into account the latest developments in the field, and use the Snowmass studies as inputs. The sub-panel is to consider what investments are needed to fulfill the scientific goals, what mix of small, medium and large experiments is appropriate, and how international partnerships can fit into the picture. Along the way, they are also being asked to provide a discussion of the scientific questions of the field that is accessible to non-specialists (along the lines of this lovely report from 2004) and articulate the value of particle-physics research to other sciences and society. Oh, and the sub-panel is supposed to have a final report by May 1. No problem at all, right?

Since HEPAP’s recommendations will drive the plan for the field, it is very important that this panel does a good job! Fortunately, there are two good things going for it. First, the membership of the panel looks really great — talented and knowledgeable scientists who are representative of the demographics of the field and include representatives from outside the US. Second, they are being asked to make their recommendations in the context of fairly optimistic budget projections. Let us only hope that these come to pass!

Watch this space for more about the P5 process over the coming eight months.

Where the Future Lies – 30 Years in the Making

James Faulkner
Monday, September 9th, 2013

Last week, the CMS and ATLAS experiments hosted a party for all collaborators of the respective groups to celebrate receiving the 2013 EPS High Energy Particle Physics Prize. This was an opportunity to celebrate the past 30 years of hard work in the planning and execution of an international effort to discover the next level of high-energy physics. During the gathering, it was pointed out that now is the time to start planning and executing the next generation of particle colliders and high-energy physics searches. This is quite true, given that we must not fall into the mindset of, “If we can discover the Higgs particle, then why build a better detector?” It is very much a reality that the LHC still offers much to be discovered, as we have yet to reach its full potential. But if we wait until we have exhausted the potential of the LHC before planning for the next experiments, it would create a gap in progress for future generations and ourselves.

Another great moment from the evening was when the awards were displayed — symbolically communicating that we all received this award and we can all share in the moment. Individual and personal achievement will always be a highlight in one’s own life, but this moment of gratitude and humility for a job well done through an international collaboration was certainly inspiring.

A physicists’ party could be imagined as a bunch of nerds reciting equations to the beat of techno music, but that’s not quite what happened. Working in an international collaboration usually means we have irregular work hours, with swarms of emails sent from all over the world at any hour of the day (or night). We usually try to maintain a normal workday as well as a life outside of work, but you can still spot at least a few glowing computer screens as work carries on into the dead of night. Having a dedicated party for both collaborations, where we were able to unwind, talk about sports or vacations, and dance, highlights how worthwhile our experiences are. The work is difficult, but how many careers offer the chance to literally travel across the world on a regular basis and work on the largest particle physics experiment ever?
