
A quick ski through history

Ken Bloom
Sunday, March 23rd, 2014

This past week about 175 lucky particle physicists gathered in La Thuile, a mountain town in the Italian Alps, for one of the annual Rencontres de Moriond conferences. This is one of the highlights of the particle-physics calendar, perhaps the most important gathering of particle physicists between the summer-time Lepton-Photon and ICHEP conferences for the presentation of new results. The major experimental collaborations of the world have been wrapping up a flurry of activity in preparation for the high-profile meetings taking place over the next few weeks. The atmosphere on the LHC experiments has been a bit less intense this year than last year, as the flashiest results from the 2010-12 data sample have already been released, but there was still a push to complete as many measurements as possible for presentation at this conference in particular.

I’ve only been to a Moriond conference once, but it was quite an experience. The conference is held at a ski resort to encourage camaraderie and scientific exchanges outside the conference room, and that leads to an action-packed week. Each morning of the week opens with about three hours of scientific presentations. The mid-morning finish allows for an almost-full day of skiing for those who choose to go (and as you might imagine, many do). This is a great opportunity to spend leisure time with colleagues, meet new people, and discuss what was learned that morning. After the lifts have closed, everyone returns to the hotel for another three hours of presentations, followed by a group dinner to continue the conversation. Everyone who has the chance to go realizes that they are very lucky to be there, but at the same time it is a rather exhausting experience! Or, as Henry Frisch, my undergraduate mentor and a regular Moriond attendee, once told me, “There are three things going on at Moriond — the physics, the skiing, and the food — and you can only do two out of the three.” (I skipped lunch on most days.)

As friends were getting ready to head south from CERN through the Mont Blanc tunnel to Italy (and as I was getting ready for my first visit to the United States in more than seven months, for the annual external review of the US LHC operations programs), I realized that it has in fact been ten years since the Moriond conference I went to. Thankfully, the conference organizers have maintained the conference website from 2004, allowing me to relive my presentation from that time. It is a relief to observe that our understanding of particle physics has advanced quite a bit since then! At that Moriond, the Tevatron was just starting to kick into gear for its “Run 2,” and during the previous year we had re-established the signal for the top quark that had first been observed in the mid-1990s. We were just starting to explore the properties of the top quark, but we were hampered by the size of the data sample at that point. It is amusing to look back and see that we were trying to measure the mass of the top quark with a mere six dilepton decay events! Over the coming years, the Tevatron would produce hundreds more such events, and the CDF and D0 experiments would complete the first thorough explorations of the top quark, demonstrating that its properties are totally in line with the predictions of the standard model. And since then, the LHC has done the Tevatron one better, thanks to both an increase in the top-quark production rate at the higher LHC energy and the larger LHC collision rate. The CMS top-quark sample now boasts about 70,000 dilepton candidate events, and the CMS measurement of the top-quark mass is now the best in the world.

Top-quark physics is one of the topics I’m most familiar with, so it is easy for me to mark progress there, but of course it has been a remarkable decade of advances for particle physics, with the discovery of the Higgs boson, a more thorough understanding of neutrino masses and mixing, and constraints on the properties of dark matter. Next year, the LHC will resume operations in its own “Run 2”, with an even higher collision energy and higher collision rates than we had in 2012. It is a change almost as great as the one we experienced in moving from the Tevatron to the first run of the LHC. I cannot wait to see how the LHC will advance our knowledge of particle physics, possibly through the discovery of new particles that will help explain the puzzles presented by the Higgs boson. You can be sure that there will be a lot of excited chatter on the chair lifts and around the dinner tables at the 2016 Moriond conferences!


Dear Google: Hire us!

Ken Bloom
Monday, March 3rd, 2014

In case you haven’t figured it out already from reading the US LHC blog or any of the others at Quantum Diaries, people who do research in particle physics feel passionate about their work. There is so much to be passionate about! There are challenging intellectual issues, tricky technical problems, and cutting-edge instrumentation to work with — all in pursuit of understanding the nature of the universe at its most fundamental level. Your work can lead to global attention and even to Nobel Prizes. It’s a lot of effort put in over long days and nights, but there is also a lot of satisfaction to be gained from our accomplishments.

That being said, a fundamental truth about our field is that not everyone doing particle-physics research will be doing that for their entire career. There are fewer permanent jobs in the field than there are people who are qualified to hold them. It is certainly easy to do the math for university jobs in particular — each professor may supervise a large number of PhD students over his or her career, but only one of them can ultimately inherit that position. Most of our researchers will end up working in other fields, quite likely in the for-profit sector, and as a field we need to make sure that they are well prepared for jobs in that part of the world.

I’ve always believed that we do a good job of this, but my belief was reinforced by a recent column by Tom Friedman in The New York Times. It was based on an interview with the Google staff member who oversees hiring for the company. The essay describes the attributes that Google looks for in new employees, and I couldn’t help but think that people who work on large experimental particle-physics projects such as those at the LHC have all of those attributes. Google is not just looking for technical skills — it goes without saying that they want those, and particle physicists have them, along with great experience in digesting large amounts of computerized data. Google is also looking for social and personality traits that are just as important for success in particle physics.

(Side note: I don’t support all of what Friedman writes in his essay; he is somewhat dismissive of the utility of a college education, and as a university professor I think that we are doing better than he suggests. But I will focus on some of his other points here. I also recognize that it is perhaps too easy for me to write about careers outside the field when I personally hold a permanent job in particle physics, but believe me that it just as easily could have wound up differently for me.)

For example, just reading from the Friedman column, one thing Google looks for is what is referred to as “emergent leadership”. This is not leadership in the form of holding a position with a particular title, but knowing when a group needs you to step forward to lead on something, and also when to step back and let someone else lead. While the big particle-physics collaborations appear to be massive organizations, much of the day-to-day work, such as the development of a physics measurement, is done in smaller groups that function very organically. When they function well, people do step up to take on the most critical tasks, especially when they see that they are particularly positioned to do them. Everyone figures out how to interact in such a way that the job gets done. Another facet of this is ownership: everyone who is working together on a project feels personally responsible for it and will do what is right for the group, if not the entire experiment — even if it means putting aside your own ideas and efforts when someone else clearly has the better idea.

And related to that in turn is what is referred to in the column as “intellectual humility.” We are all very aggressive in making our arguments based on the facts that we have in hand. We look at the data and we draw conclusions, and we develop and promote research techniques that appear to be effective. But when presented with new information that demonstrates that the previous arguments are invalid, we happily drop what we had been pursuing and move on to the next thing. That’s how all of science works, really; all of your theories are only as good as the evidence that supports them, and are worthless in the face of contradictory evidence. Google wants people who take this kind of approach to their work.

I don’t think you have to be Google to be looking for the same qualities in your co-workers. If you are an employer who wants to have staff members who are smart, technically skilled, passionate about what they do, able to incorporate disparate pieces of information and generate new ideas, ready to take charge when they need to, feel responsible for the entire enterprise, and able to say they are wrong when they are wrong — you should be hiring particle physicists.


B Decays Get More Interesting

Adam Davis
Friday, February 28th, 2014

While flavor physics often offers a multitude of witty jokes (read: bad puns), I think I’ll skip one just this time and let the analysis speak for itself. Just recently, at the Lake Louise Winter Institute, a new result was released for the analysis looking for \( b\to s\gamma\) transitions. This is a flavor-changing neutral current, which cannot occur at tree level in the standard model. Therefore, the lowest-order diagram by which this decay can proceed is the one-loop penguin shown below.

[Figure: one-loop penguin diagram representing the transition \(b \to s \gamma\).]

From quantum mechanics, photons can have either left-handed or right-handed circular polarization. In the standard model, the photon in the decay \(b\to s\gamma\) is primarily left-handed, due to spin and angular momentum conservation. However, models beyond the standard model, including some minimal supersymmetric models (MSSM), predict a larger-than-standard-model right-handed component to the photon polarization. So even though the decay rates observed for \(b\to s\gamma\) agree with those predicted by the standard model, the photon polarization itself is sensitive to new-physics scenarios.

As it turns out, the decays \(B^\pm \to K^\pm \pi^\mp \pi^\pm \gamma \) are well suited to exploring the photon polarization after playing a few tricks. The easiest way to understand why is to consider a picture.

[Figure: definition of the angle \(\theta\) in the analysis of \(B^\pm\to K^\pm \pi^\mp \pi^\pm \gamma\), from the Lake Louise conference talk.]

In the picture above, we consider the rest frame of a possible resonance which decays into \(K^\pm \pi^\mp \pi^\pm\). It is then possible to form the triple product \(p_\gamma\cdot(p_{\pi,slow}\times p_{\pi,fast})\), which effectively defines the angle \(\theta\) shown in the picture.

Now for the trick: Photon polarization is odd under parity transformation, and so is the triple product defined above. Defining the decay rate as a function of this angle, we find:

\(\frac{d\Gamma}{d\cos\theta}\propto \sum_{i=0,2,4} a_i \cos^i\theta + \lambda \sum_{j=1,3} a_j \cos^j\theta\)

This is an expansion up to fourth order in \(\cos\theta\) (the analysis fits the corresponding Legendre moments). The odd moments are the ones that carry photon-polarization effects, and \(\lambda\) is the photon polarization itself. Therefore, by looking at the decay rate as a function of this angle, we can directly access the photon polarization. Another way to access the same information is to take the asymmetry between the decay rate for events where the photon is above the \(K\pi\pi\) plane and those where it is below; this is also proportional to the photon polarization, and it allows for a direct statistical calculation, as sketched below. We will call this the up-down asymmetry, or \(A_{ud}\). For more information, a useful theory paper is found here.
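To see why, here is the quick worked integral (my own shorthand; the careful definitions are in the theory paper linked above): integrating the expansion above separately over the two hemispheres, the even powers of \(\cos\theta\) cancel in the difference, and what survives is directly proportional to \(\lambda\):

\[
A_{ud} \equiv \frac{\int_0^1 \frac{d\Gamma}{d\cos\theta}\,d\cos\theta \,-\, \int_{-1}^0 \frac{d\Gamma}{d\cos\theta}\,d\cos\theta}{\int_{-1}^1 \frac{d\Gamma}{d\cos\theta}\,d\cos\theta}
= \lambda\,\frac{a_1 + a_3/2}{2\left(a_0 + a_2/3 + a_4/5\right)} \;\propto\; \lambda .
\]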

Enter LHCb. With the 3 fb\(^{-1}\) collected in 2011 and 2012, containing ~14,000 signal events, the up-down asymmetry was measured.
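As a toy illustration of why counting events above and below the plane gives such a direct statistical handle, here is a minimal sketch (my own, not LHCb code; the sample shape and the value of lambda_eff are made up for illustration):

```python
import numpy as np

# Toy sketch (my own illustration, not LHCb analysis code): estimate the
# up-down asymmetry A_ud = (N_up - N_down) / (N_up + N_down) from per-event
# values of cos(theta) in a single K pi pi mass bin. A small odd component
# lambda_eff stands in for the photon-polarization effect.
rng = np.random.default_rng(seed=2014)

lambda_eff = 0.05   # assumed (hypothetical) size of the odd component
n_events = 14_000   # roughly the size of the LHCb signal sample

# Accept-reject sampling from dGamma/dcos(theta) ~ 1 + lambda_eff*cos(theta):
c = rng.uniform(-1.0, 1.0, size=4 * n_events)
u = rng.uniform(0.0, 1.0 + lambda_eff, size=c.size)
cos_theta = c[u < 1.0 + lambda_eff * c][:n_events]

n_up = np.count_nonzero(cos_theta > 0.0)  # photon on one side of the plane
n_down = cos_theta.size - n_up            # photon on the other side
n = cos_theta.size

a_ud = (n_up - n_down) / n
sigma = 2.0 * np.sqrt(n_up * n_down / n**3)  # binomial counting error
print(f"A_ud = {a_ud:.4f} +/- {sigma:.4f}")  # expect ~ lambda_eff/2 here
```

For a rate shaped like \(1+\lambda_{\mathrm{eff}}\cos\theta\), the toy returns \(A_{ud}\approx\lambda_{\mathrm{eff}}/2\) with the usual counting error; the real measurement, of course, does this with the actual signal sample in bins of the \(K\pi\pi\) mass.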

[Figure: up-down asymmetry for the analysis of \(b\to s\gamma\), from the Lake Louise conference talk.]

In bins of invariant mass of the \(K \pi \pi\) system, we see that the asymmetry is clearly non-zero and varies across the mass range. As seen in the note posted to the arXiv, the shapes of the fitted Legendre moments differ between mass bins as well. This corresponds to a 5.2\(\sigma\) observation of photon polarization in this channel. What this means for new-physics models has not yet been interpreted, though I’m sure the arXiv will be full of explanations within about a week.


The Higgs Boson: A Natural Disaster!

Kyle Cranmer
Saturday, February 1st, 2014

The discovery of the Higgs boson was a triumph for particle physics. It completes the tremendously successful Standard Model of particle physics. Of course, we know there are other phenomena — like dark matter, the dominance of matter over anti-matter, the mass of neutrinos, etc. — that aren’t explained by the Standard Model. However, the Higgs itself is the source of one of the deepest mysteries of particle physics: the fine-tuning problem.

The fine-tuning problem is related to the slippery concept of naturalness, and has driven the bulk of theoretical work for the last several decades.  Unfortunately, it is notoriously difficult to explain.  I took on this topic recently for a public lecture and came up with an analogy that I would like to share.

Why we take our theory seriously

Before discussing the fine tuning, we need a few prerequisites. The first thing to know is that the Standard Model (and most other theories we are testing) is based on a conceptual framework called Relativistic Quantum Field Theory (QFT). As you might guess from the name, it’s based on the pillars of relativity, quantum mechanics, and field theory. The key point here is that relativistic quantum field theory goes beyond the initial formulation of quantum mechanics. To illustrate the difference, let’s consider a property of the electron and muon called the “g-factor”, which relates the particle’s magnetic moment and spin [more]. In standard quantum mechanics, the prediction is that g=2; however, with relativistic quantum field theory we expect corrections. Those corrections are shown pictorially in the Feynman diagrams below.

[Figure: Feynman diagrams for the corrections to the g-factor.]

It turns out that this correction is small — about one part in a thousand.  But we can calculate it to an exquisite accuracy (about ten digits).  Moreover, we can measure it to a comparable accuracy.  The current result for the muon is

g = 2.0023318416 ± 0.000000001

This is a real tour de force for relativistic quantum field theory and represents one of the most stringent tests of any theory in the history of science [more]. To put it into perspective, it’s slightly better than hitting a hole in one from New York to China (that distance is about 10,000 km = 1 billion cm).
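For the curious, the leading correction, first computed by Schwinger, is by itself the “one part in a thousand” mentioned above; the ten-digit precision comes from evaluating many higher-order diagrams like the ones pictured:

\[
g = 2\left(1 + \frac{\alpha}{2\pi} + \cdots\right), \qquad \frac{\alpha}{2\pi} \approx 0.00116 ,
\]

where \(\alpha \approx 1/137\) is the fine-structure constant.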

It is because of tests like these that we take the predictions of this conceptual framework very seriously.

[Figure: putting the precision of the g-factor measurement into perspective.]

The Higgs, fine tuning, and an analogy

It turns out that all quantities that we can predict receive similar quantum corrections, even the mass of the Higgs boson.    In the Standard Model, there is a free parameter that can be thought of as an initial estimate for the Higgs mass, let’s call it M₀.  There will also be corrections, let’s call them ΔM (where Δ is pronounced “delta” and it indicates “change to”).   The physical mass that we observe is this initial estimate plus the corrections.  [For the aficionados: usually physicists talk about the mass squared instead of the mass, but that does not change the essential message].

The funny thing about the mass of the Higgs is that the corrections are not small. In fact, the naive size of the corrections is enormously larger than the 126 GeV mass that we observe!
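Schematically (this is my shorthand with the order-one factors suppressed, not the full calculation), the dominant correction comes from a top-quark loop and grows with the square of the cutoff scale \(\Lambda\), the energy up to which the theory remains valid (more on \(\Lambda\) below):

\[
m_H^2 = M_0^2 + \Delta M^2 , \qquad \Delta M^2 \sim -\frac{3\,y_t^2}{8\pi^2}\,\Lambda^2 + \cdots ,
\]

where \(y_t\) is the top-quark Yukawa coupling.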

Confused?  Now is a good time to bring in the analogy.  Let’s think about the budget of a large country like the U.S.  We will think of positive contributions to the Higgs mass as income (taxes) and negative contributions to the Higgs mass as spending.  The physical Higgs mass that we measure corresponds to the budget surplus.

Now imagine that there is no coordination between the raising of taxes and government spending (maybe that’s not so hard to imagine). Wouldn’t you be surprised if a large economy of trillions of dollars had a budget balanced to better than a penny? Wouldn’t it be unnatural to expect such a fine tuning between income and spending if they are just independent quantities?

This is exactly the case we find ourselves in with the Standard Model… and we don’t like it.  With the discovery of the Higgs, the Standard Model is now complete.  It is also the first theory we have had that can be extrapolated to very high energies (we say that it is renormalizable). But it has a severe fine tuning problem and does not seem natural.

[Figure: the federal-budget analogy.]

[Table: the correspondence between the budget analogy and the Higgs mass.]

The analogy can be fleshed out a bit more.  It turns out that the size of the corrections to the Higgs mass is related to something we call the cutoff, which is the  energy scale where the theory is no longer a valid approximation because some other phenomena become important.  For example, in a grand unified theory the strong force and the electroweak force would unify at approximately 10¹⁶ GeV (10 quadrillion GeV), and we would expect the corrections to be of a similar size.  Another common energy scale for the cutoff is the Planck Scale — 10¹⁹ GeV — where the quantum effects of gravity become important.  In the analogy, the cutoff energy corresponds to the fiscal year.  As time goes on, the budget grows and the chance of balancing the budget so precisely seems more and more unnatural.
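To put rough numbers on how well the budget must balance (a back-of-the-envelope sketch of my own, ignoring the order-one loop factors), the required cancellation goes like \((\Lambda/m_H)^2\):

```python
# Back-of-the-envelope fine tuning: how precisely must the initial estimate
# and the corrections cancel to leave the observed Higgs mass? Roughly one
# part in (cutoff / m_H)^2. Illustrative numbers only; order-one loop
# factors are deliberately ignored.
m_h = 126.0  # observed Higgs mass, in GeV

cutoffs = [("1 TeV (LHC reach)", 1e3),
           ("GUT scale", 1e16),
           ("Planck scale", 1e19)]

for name, cutoff_gev in cutoffs:
    tuning = (cutoff_gev / m_h) ** 2
    print(f"{name:18s} Lambda = {cutoff_gev:.0e} GeV -> balanced to ~1 part in {tuning:.0e}")
```

With a Planck-scale cutoff, the budget has to balance to roughly one part in \(10^{34}\), which is what makes the situation feel so unnatural.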

Going even further, I can’t resist pointing out that the analogy even offers a nice way to think about one of the most enigmatic concepts in quantum field theory, called renormalization. We often use this term to describe how fundamental constants aren’t really constant. For example, the charge of the electron depends on the energy with which you probe it. In the analogy, renormalization is like adjusting for inflation: we know that a dollar today isn’t comparable to a dollar fifty years ago.

Breaking down the budget

The first thing one wants to do before attempting to balance the budget is to find out where the money is going. In the U.S., the big budget items are the military and social programs like Social Security and Medicare. In the case of the Higgs, the biggest corrections come from the top quark (the diagrams shown below). Of course the big budget items get most of the attention, and so it is with physics as well. Most of the thinking that goes into solving the fine-tuning problem is related to the top quark.

[Figure: the budget of corrections to the Higgs mass; the largest come from the top quark.]

Searching for a principle to balance the budget

Maybe it’s not a miraculous coincidence that the budget is balanced so well. Maybe there is some underlying principle. Maybe someone came to Washington DC and passed a balanced-budget law that says that for every dollar of spending there must be a dollar of revenue. This is an excellent analogy for supersymmetry. In supersymmetry, there is an underlying principle — a symmetry — that relates two types of particles (fermions and bosons). These two types of particles give corrections to the Higgs mass with opposite signs. If this symmetry were perfect, the budget would be perfectly balanced, and it would not be unnatural for the Higgs to be 126 GeV.
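In the same schematic shorthand as before (again mine, with order-one factors suppressed), a fermion loop and its boson-partner loop contribute to \(\Delta M^2\) with opposite signs, and an exact supersymmetry forces their couplings to match, so the dangerous pieces cancel:

\[
\Delta M^2 \sim \frac{\Lambda^2}{16\pi^2}\left(\lambda_S - y_f^2\right) + \cdots , \qquad \text{SUSY:}\ \lambda_S = y_f^2 \;\Rightarrow\; \text{the}\ \Lambda^2\ \text{pieces cancel.}
\]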

That is one of the reasons that supersymmetry is so highly motivated, and there is an enormous effort to search for signs of supersymmetry in the LHC data. Unfortunately, we haven’t seen any evidence for supersymmetry thus far. In the analogy, that is a bit like saying that if there is some sort of law to balance the budget, it allows for some wiggle room between spending and taxes. If the law allows too much wiggle room, it may still be a law, but it isn’t explaining why the budget is balanced as well as it is. The current state of the LHC experiments indicates that the budget is balanced about 10-100 times better than the wiggle room allows — better than we would expect, but not so much better that it seems unnatural. However, if we don’t see supersymmetry in the next run of the LHC, the situation will get worse. And if we were to build a 100 TeV collider and still not see evidence of supersymmetry, then the level of fine tuning would be high enough that most physicists would probably consider the situation unnatural and abandon supersymmetry as the solution to the fine-tuning problem.

[Figure: supersymmetry in the budget analogy.]

Since the fine-tuning problem was first recognized, there have been essentially two proposed solutions. One of them is supersymmetry, which I discussed above. The second is often referred to as strong dynamics or compositeness. The idea there is that maybe the Higgs is not a fundamental particle; instead, it’s a composite of some more fundamental particles. My colleague Jamison Galloway and I tried to think through the analogy in that situation. In that case, one must start to think of different kinds of currencies: say, the dollar for the Higgs boson and some other currency, like bitcoin, for the more fundamental particles. You would imagine that as time goes on (energy increases) there is a transition from one currency to the other. At early times the budget is described entirely in terms of dollars, but at later times it is described in terms of bitcoin. That transition can be very complicated, but if it happened at a time when the total budget in dollars wasn’t too large, then a well-balanced budget wouldn’t seem too unnatural. Trying to explain the rest of the compositeness story took us from a simple analogy to the basis for a series of sci-fi fantasy books, and I will spare you from that.

There are a number of examples where this aesthetic notion of naturalness has been a good guide, which is partially why physicists hold it so dear. However, another avenue of thinking is that maybe the theory is unnatural; maybe it is random chance that the budget is balanced so well. That thinking is bolstered by the idea that there may be a huge number of universes that are part of a larger complex we call the multiverse. In most of these universes the budget wouldn’t be balanced, and the Higgs mass would be very different. In fact, most universes would not form atoms, would not form stars, and would not support life. Of course, we are here to observe our universe, and the conditions necessary to support life select very special universes out of the larger multiverse. Maybe it is this requirement that explains why our universe seems so finely tuned. This reasoning is called the anthropic principle, and it is one of the most controversial topics in theoretical physics. Many consider it to be giving up on a more fundamental theory that would explain why nature is as it is. The very fact that we are resorting to this type of reasoning is evidence that the fine-tuning problem is a big deal. I discuss this at the end of the public lecture (starting around the 30 min mark) with another analogy for the multiverse, but maybe I will leave that for another post.

Nota bene: After developing this analogy, I learned about a similar analogy from Tommaso Dorigo. Both use the idea of money, but the budget analogy goes a bit further.


No cream, no sugar

Ken Bloom
Monday, January 6th, 2014

My first visit to CERN was in 1997, when I was wrapping up my thesis work. I had applied for, and then was offered, a CERN fellowship, and I was weighing whether to accept it. So I took a trip to Geneva to get a look at the place and make a decision. I stayed on the outskirts of Sergy with my friend David Saltzberg (yes, that David Saltzberg) who was himself a CERN fellow, and he and other colleagues helped set up appointments for me with various CERN physicists.

Several times each day, I would use my map to find the building with the right number on it, and arrive for my next appointment. Invariably, I would show up and be greeted with, “Oh good, you’re here. Let’s go get a coffee!”

I don’t drink coffee. At this point, I can’t remember why I never got started; I guess I just wasn’t so interested, and may also have had concerns about addictive stimulants. So I spent that week watching other people drink coffee. I learned that CERN depends on large volumes of coffee for its operation. It plays the same role as liquid helium does for the LHC, allowing the physicists to operate at high energies and accelerate the science. (I don’t drink liquid helium either, but that’s a story for another time.)

Coffee is everywhere. In Restaurant 1, there are three fancy coffee machines that can make a variety of brews. (Which ones? You’re asking the wrong person.) At breakfast time, the line for the machines stretches across the width of the cafeteria, blocking the cooler that has the orange juice, much to my consternation. Outside the serving area, there are three more machines where one can buy a coffee with a jeton (token) that can be purchased at a small vending machine. (I don’t know how much they cost.) After lunch, the lines for these machines clog the walkway to the place where you deposit your used trays.

Coffee goes beyond the restaurants. Many buildings (including out-of-the-way Building 8, where my office is) have small coffee areas that are staffed by baristas (I suppose) at peak times, when people who aren’t me want coffee. Building 40, the large headquarters for the CMS and ATLAS experiments, has a big coffee kiosk where one can also get sandwiches and small pizzas, good for when you want to avoid the crazy Restaurant 1 lunchtimes and coffee runs. People line up for coffee here during meeting breaks, which usually puts us even further behind schedule.

Being a non-drinker of coffee can lead to some social discomfort. When two CERN people want to discuss something, they often do it over coffee. When someone invites me for a chat over coffee, I gamely say yes. But when we meet up I have to explain that I don’t actually drink coffee, and then sit patiently while they go to get a cup. I do worry that the other person feels uncomfortable about me watching them drink coffee. I could get a bottle of water for myself — even carbonated water, when I feel like living on the edge — but I rarely do. My wife (who does drink coffee, but tolerates me) gave me a few jetons to carry around with me, so I can at least make the friendly gesture of buying the other person’s coffee, but usually my offer is declined, perhaps because the person knows that he or she can’t really repay the favor.

So, if you see a person in conversation in the Restaurant 1 coffee area, not drinking anything but nervously twiddling his thumbs instead, come over and say hello. I can give you a jeton if you need one.


Will the car start?

Ken Bloom
Saturday, November 9th, 2013

While my family and I are spending a year at CERN, our Subaru Outback is sitting in the garage in Lincoln, under a plastic cover and hooked up to a trickle charger. We think that we hooked it all up right before going, but it’s hard to know for sure. Will the car start again when we get home? We don’t know.

CMS is in a similar situation. The detector was operating just fine when the LHC run ended at the start of 2013, but now we aren’t using it like we did for the previous three years. It’s basically under a tarp in the garage. When proton collisions resume in 2015, the detector will have to be in perfect working order again. So will this car start after not being driven for two years?

Fortunately, we can actually take this car out for a drive. This past week, CMS performed an exercise known as the Global Run in November, or GRIN. (I know, the acronym. You are wondering, if it didn’t go well, would we call it FROWN instead? That too has an N for November.) The main goal of GRIN was to make sure that all of the components of CMS could still operate in concert. In fact, many pieces of CMS have been run during the past nine months, but independently of one another. Actually making everything run together is a huge integration task; it doesn’t just happen automatically. All of the readouts have to be properly synchronized so that the data from the entire detector makes sense. In addition, GRIN was a chance to test out some operational changes that the experiment wants to make for the 2015 run. It may sound like it is a while away, but anything new should really be tested out as soon as possible.

On Friday afternoon, I ran into some of the leaders of the detector run coordination team, and they told me that GRIN had gone very well. At the start, not every CMS subsystem was ready to join in, but by the end of the week, the entire detector was running together, for the first time since the end of collisions. Various problems were overcome along the way — including several detector experts getting trapped in a stuck elevator. But they believe that CMS is in a good position to be ready to go in 2015.

As a member of CMS, that was really encouraging news. Now, if only the run coordinators could tell me where I left the Subaru keys!


2013 Nobel Prize — Made in America?

Ken Bloom
Tuesday, October 8th, 2013

You’re looking at the title and thinking, “Now that’s not true! François Englert is Belgian, and Peter Higgs is from the UK. And CERN, where the Higgs discovery was made, is a European lab, not in the US.”

That is all true, but on behalf of the US LHC blog, let’s take a few minutes to review the role of the United States in the Higgs observation that made this prize possible. To be sure, the US was part of an international effort on this, with essential contributions from thousands of people at hundreds of institutes from all over the world, and the Nobel Prize is a validation of the great work of all of them. (Not to mention the work of Higgs, Englert and many other contributing theorists!) But at the same time, I do want to combat the notion that this was somehow a non-US discovery (as some have implied). For many more details, see this link.

US collaborators, about 2000 strong, are a major contingent within both of the biggest LHC experiments, ATLAS and CMS. I’m a member of CMS, where people from US institutions are about one third of the membership of the collaboration. This makes the US physicists the largest single national contingent on the experiment — by no means a majority, but because of our size we have a critical role to play in the construction and operation of the experiment, and the data analysis that follows. American physicists are represented throughout the management structure (including Joe Incandela, the current CMS spokesperson) and deep in the trenches.

While the detectors were painstakingly assembled at CERN, many of the parts were designed, prototyped and fabricated in the US. On CMS, for instance, there has been US involvement in every major piece of the instrument: charged particle tracking, energy measurements, muon detection, and the big solenoid magnet that gives the experiment its name. Along with the construction responsibilities come maintenance and operational responsibilities too; we expect to carry these for the lifetime of the experiment.

The data that these amazing instruments record must then be processed, stored, and analyzed. This requires powerful computers, and the expertise to operate them efficiently. The US is a strong contributor here too. On CMS, about 40% of the data processing is handled at facilities in the US. And then there is the last step in the chain, the data analysis itself that leads to the measurements that allow us to claim a discovery. This is harder to quantify, but I can’t think of a single piece of the Higgs search analysis that didn’t have some US involvement.

Again, this is not to say that the US is the only player here — just to point out that thanks to the long history that the United States has in supporting this science, the US too can share some of the glory of today’s announcement.


Another day at the office

Ken Bloom
Tuesday, October 8th, 2013

I suppose that my grandchildren might ask me, “Where were you when the Nobel Prize for the Higgs boson was announced?” I was at CERN, where the boson was discovered, thus giving the observational support required for the prize. And was I in the atrium of Building 40, where CERN Director General Rolf Heuer and hundreds of physicists had gathered to watch the broadcast of the announcement? Well no; I was in a small, stuffy conference room with about twenty other people.

We were in the midst of a meeting where we were hammering out the possible architecture of the submission system that physicists will use to submit computing jobs for analyzing the data in the next LHC run and beyond. Not at all glamorous, I know. But that’s my point: the work that is needed to make big scientific discoveries, be it the Higgs or whatever might come next (we hope!), is usually not the least bit glamorous. It’s a slog, where you have to work with a lot of other people to figure out all the difficult little details. And you really have to do this day after day to make the science work. And there are many aspects of making science work — building advanced scientific instruments, harnessing the power of computers, coming up with clever ways to look at the data (and not making mistakes while at it), and working with colleagues to build confidence in a measurement. Each one of them takes time, effort, and patience.

So in the end, today was just another day at the office — where we did the same things we’ve been doing for years to make this Nobel Prize possible, and are laying the groundwork for the next one.


CERN’s universe is ours!

Ken Bloom
Sunday, September 29th, 2013

This past weekend, CERN held its first open days for the public in about five years. This was a big, big deal. I haven’t heard any final statistics, but the lab was expecting about 50,000 visitors on each of the two days. (Some rain on Sunday might have held down attendance.) Thus, the open days were a huge operation — roads were shut down, and Transports Publics Genevois was running special shuttle buses among the Meyrin and Prévessin sites and the access points on the LHC ring. The tunnels were open to people who had reserved tickets in advance — a rare opportunity, and one that is only possible during a long shutdown such as the one currently underway.

A better CERN user than me would have volunteered for the open days. Instead, I took my kids to see the activities. We thought that the event went really well. I was bracing for it to be a mob scene, but in the end the Meyrin site was busy but not overrun. (Because the children are too small, we couldn’t go to any of the underground areas.) There were many eager orange-shirted volunteers at our service, as we visited open areas around the campus. We got to see a number of demonstrations, such as the effects of liquid-nitrogen temperatures on different materials. There were hands-on activities for kids, such as assembling your own LHC and trying to use a scientific approach to guessing what was inside a closed box. Pieces of particle detectors and LHC magnets were on display for all to see.

But I have to say, what really got my kids excited was the Transport and Handling exhibit, which featured CERN’s heavy lifting equipment. They rode a scissors lift that took them to a height of several stories, and got to operate a giant crane. Such a thing would never, ever happen in the US, which has a very different culture of legal liability.

I hope that all of the visitors had a great time too! I anticipate that the next open days won’t be until the next long shutdown, which is some years away, but it will be well worth the trip.


Aces high

Ken Bloom
Thursday, September 19th, 2013

Much as I love living in Lincoln, Nebraska, having a long residence at CERN has some advantages. For instance, we do get much better traffic of seminar and colloquium speakers here. (I know, you were thinking about chocolate.) Today’s colloquium in particular really got me thinking about how we do, or don’t, understand particle physics today.

The speaker was George Zweig of MIT. Zweig has been to CERN before — almost fifty years ago, when he was a postdoctoral fellow. (This was his first return visit since then.) He had just gotten his PhD at Caltech under Richard Feynman, and was busy trying to understand the “zoo” of hadronic particles that were being discovered in the 1960s. (Side note: Zweig pointed out today that at the time there were 26 known hadronic particles…19 of which are no longer believed to exist.) Zweig developed a theory that explained the observations of the time by positing a set of hadronic constituents that he called “aces”. (He thought there might be four of them, hence the name.) Some particles were made of two aces (and thus called “deuces”) and others were made of three (and called “treys”). This theory successfully explained why some expected particle decays didn’t actually happen in nature, and gave an explanation for differences in masses between various sets of particles.

Now, reading this far along, you might think that this sounds like the theory of quarks. Yes and no — it was Murray Gell-Mann who first proposed quarks, and his model made similar successful predictions. But there was a critical difference between the two theories. Zweig’s aces were meant to be true physical particles — concrete quarks, as he referred to them. Gell-Mann’s quarks, by contrast, were merely mathematical constructs whose physical reality was not required for the success of the theory. At the time, Gell-Mann’s thinking held sway. I’m no expert on this period in the history of theoretical particle physics, but my understanding is that the Gell-Mann approach was more in line with the theoretical fashions of the day, and besides, if you could have a successful theory that didn’t have to introduce new particles that were themselves sketchy (their electric charges had to be fractions of the electron charge, and they apparently couldn’t be observed anyway), why would you?

Of course, we now know that Zweig’s interpretation is more correct; this was becoming apparent even a few short years later, when deep-inelastic scattering experiments at SLAC in the late 1960s discovered that nucleons have smaller constituents, though at that time it was controversial to actually associate those constituents with the quarks (or aces). For whatever reason, Zweig left the field of particle physics and went on to a successful career as a faculty member at MIT, doing work in neurobiology that involved understanding the mechanisms of hearing.

I find it a fascinating tale of how science actually gets done. How might it apply to our science today? A theory like the standard model of particle physics has been so well tested by experiment that it is taken to be true without controversy. But theories of physics beyond the standard model, the sort of theories that we’re now trying to test at the LHC, are much less constrained. And, to be sure, some are more popular than others, because they are believed to have some certain inherent beauty to them, or because they fit well with patterns that we think we observe. I’m no theorist, but I’m sure that some theories are currently more fashionable than others. But in the absence of experimental data, we can’t know that they are right. Perhaps there are some voices that are not being heard as well as they need to be. Fifty years from now, will we identify another George Zweig?
