Can 2130 physicists pounding on keyboards turn out Shakespeare plays?

Ken Bloom
Tuesday, April 22nd, 2014

The CMS Collaboration, of which I am a member, has submitted 335 papers to refereed journals since 2009, including 109 such papers in 2013. Each of these papers had about 2130 authors. That means that the author list alone runs 15 printed pages. In some cases, the author list takes up more space than the actual content of the paper!

One might wonder: How do 2130 people write a scientific paper for a journal? Through a confluence of circumstances, I’ve been directly involved in the preparation of several papers over the last few months, so I have been thinking a lot about how this gets done, and thought I might use this opportunity to shed some light on the publication process. What I will not discuss here is why a paper should have 2130 authors and not more (or fewer)—this is a very interesting topic, but for now we will work from the premise that there are 2130 authors who, by signing the paper, take scientific responsibility for the correctness of its contents. How can such a big group organize itself to submit a scientific paper at all, and how can it turn out 109 papers in a year?

Certainly, with this many authors and this many papers, a set of uniform procedures is needed, and some number of people must put in substantial effort to maintain and operate those procedures. Each collaboration does things a bit differently, but all have the same goal in mind: to submit papers that are first of all correct (in the scientific sense of “correct”, as in “not wrong with a high level of confidence”), and that are also timely. Correct takes precedence over timely; it would be quite an embarrassment to produce a paper that was incorrect because the work was done quickly and not carefully. Fortunately, in my many years in particle physics, I can think of very few cases when a correction to a published paper had to be issued, and never have I seen a paper from an experiment I have worked on be retracted. This suggests that the publication procedures are indeed meeting their goals.

But even though being correct trumps everything, having an efficient publication process is still important. It would also be a shame to be scooped by a competitor on an interesting result because your paper was stuck inside your collaboration’s review process. So there is an important balance to be struck between being careful and being efficient.

One thing that would not be efficient would be for every one of the 2130 authors to scrutinize every publishable result in detail. If we were to try to do this, everyone would soon become consumed by reviewing data analyses, rather than working on the other necessary tasks of the experiment, from running the detector to processing the data to designing upgrades of the experiment. And it’s hard to imagine that, say, once 1000 people have examined a result carefully, another thousand would uncover a problem. That being said, everyone needs to understand that even if they decline to take part in the review of a particular paper, they are still responsible for it, in accordance with generally accepted guidelines for scientific authorship.

Instead, the review of each measurement or set of measurements destined for publication in a single paper is delegated by the collaboration to a smaller group of people. Different collaborations have different ways of forming these review committees—some create a new committee for a particular paper that dissolves when that paper is published, while others have standing panels that review multiple analyses within a certain topic area. These committees usually include several people with expertise in that particular area of particle physics or data analysis techniques, but also one or two who serve as interested outsiders who might look at the work in a different way and come up with new questions about it. The reviewers tend to be more senior physicists, but some collaborations have allowed graduate students to be reviewers too. (One good way to learn how to analyze data is to carefully study how other people are doing it!)

The scientists who are performing a particular measurement with the data are typically also responsible for producing a draft of the scientific paper that will be submitted to the journal. The review committee is then responsible for making sure that the paper accurately describes the work and will be understandable to physicists who are not experts on this particular topic. There can also be a fair amount of work at this stage to shape the message of the paper; measurements produce results in the form of numerical values of physical quantities, but scientific papers have to tell stories about the values and how they are measured, and expressing the meaning of a measurement in words can be a challenge.

Once the review committee members think that a paper is of sufficient quality to be submitted to a journal, it is circulated to the entire collaboration for comment. Many collaborations insert a “style review” step at this stage, in which a physicist who has a lot of experience in the matter checks that the paper conforms to the collaboration’s style guidelines. This ensures some level of uniformity in terminology across all of the collaboration’s papers, and it is also a good chance to check that the figures and tables are working as intended.

The circulation of a paper draft to the collaboration is a formal process that has potential scaling issues, given how many people might submit comments and suggestions. On relatively small collaborations such as those at the Tevatron (my Tevatron-era colleagues will find the use of the word “small” here ironic!), it was easy enough to take the comments by email, but the LHC collaborations have a more structured system for collecting and archiving comments. Collaborators are usually given about two weeks to read the draft paper and make comments. How many people send feedback can vary greatly with each paper; hotter topics might attract more attention. Some conscientious collaborators do in fact read every paper draft (as far as I can tell). To encourage participation, some collaborations do make explicit requests to a randomly-chosen set of institutes to scrutinize the paper, while some institutes have their own traditions of paper review. Comments on all aspects of the paper are typically welcome, from questions about the physics or the veracity of the analysis techniques, to suggestions on the organization of the paper and descriptions of data analysis, to matters like the placement of commas.

In any case, given the number of people who read the paper, the length of the comments can often exceed the length of the paper itself. The scientists who wrote the paper draft then have to address all of the comments. Some comments lead to changes in the paper to explain things better, or to additional cross-checks of the analysis to address a point that was raised. Many textual suggestions are implemented, while others are turned down with an explanation of why they are unnecessary or would be harmful to the paper. The analysis review committee then verifies that all significant comments have been properly considered, and checks that the resulting revised paper draft is in good shape for submission.

Different collaborations have different final steps before the paper is actually submitted to a journal. Some have certain leaders of the collaboration, such as the spokespersons and/or physics coordinators, read the draft and make a final set of recommendations that are to be implemented before submission. Others have “publication committees” that organize public final readings of a paper that can lead to changes. At this stage the authors of the original draft very much hope that things go smoothly and that paper submission will be imminent.

And this whole process comes before the scientific tradition of independent, blind peer review! Journals have their own procedures for appointing referees who read the paper and give the journal editors advice on whether a paper should be published, and what changes or checks they might require before recommending publication. The interaction with the journal and its referees can also take quite some time, but almost always it ends with a positive result. The paper has gone through so many levels of scrutiny already that the output is really a high-quality scientific product that describes reproducible results, and that will ultimately stand the test of time.

A paper that describes a measurement in particle physics is the last step of a long journey, from the conception of the experiment, the design and subsequent construction of the apparatus, its operation over the course of years to collect the data sample, the processing of the data, and the subsequent analysis that leads to numerical values of physical quantities and their associated uncertainties. The actual writing of the papers, and the process of validating them and bringing 2130 physicists to agree that each paper tells the right story about the whole journey, is an important step in the creation of scientific knowledge.


The Realineituhedron

Kyle Cranmer
Tuesday, April 1st, 2014

Inspired by the deep insights revealed in the recent work around the Amplituhedron, a new and deeper mathematical principle has revealed itself. While the amplituhedron caused quite a buzz even outside of the world of theoretical particle physics, thus far it is restricted to N=4 supersymmetry. In contrast, this new object is able to represent all known predictions for physical observables. The new object, outlined in a recent paper, is being called “The Realineituhedron”.

The key observation is that at the end of the day, everything we measure can be represented as a real number. The paper outlines a particular way of projecting these observations onto the realineituhedron, in which the “volume” Ω of the object represents the value of the observation.

In fact, the physically observable quantity must be a real number, a feature foreshadowed by the Hermitian postulate of quantum mechanics.

The paper is full of beautiful hand-drawn figures, such as the ones below:

Is it possible that there is some geometrical object that is able to capture the Hermitian nature of these operators? Indeed, is it able to represent all fundamental observables?

This masterful work will take some time to digest — it was only released today! One of the most intriguing ideas is that of “The Master Realineituhedron”, denoted ℝ², in which all realineituhedrons can be embedded.

It would be interesting to see whether this larger space has any interesting role to play in understanding the m = 1 geometry relevant to physics.

 

[This post was originally posted here]



A quick ski through history

Ken Bloom
Sunday, March 23rd, 2014

This past week about 175 lucky particle physicists gathered in La Thuile, a mountain town in the Italian Alps, for one of the annual Rencontres de Moriond conferences. This is one of the highlights of the particle-physics calendar, perhaps the most important gathering of particle physicists between the summer-time Lepton-Photon and ICHEP conferences for the presentation of new results. The major experimental collaborations of the world have been wrapping up a flurry of activity in preparation for the high-profile meetings taking place over the next few weeks. The atmosphere on the LHC experiments has been a bit less intense this year than last year, as the flashiest results from the 2010-12 data sample have already been released, but there was still a push to complete as many measurements as possible for presentation at this conference in particular.

I’ve only been to a Moriond conference once, but it was quite an experience. The conference is held at a ski resort to encourage camaraderie and scientific exchanges outside the conference room, and that leads to an action-packed week. Each morning of the week opens with about three hours of scientific presentations. The mid-morning finish allows for an almost-full day of skiing for those who choose to go (and as you might imagine, many do). This is a great opportunity to spend leisure time with colleagues, meet new people and discuss what was learned that morning. After the lifts have closed, everyone returns to the hotel for another three hours of presentations. This is followed by a group dinner to continue the conversation. Everyone who has the chance to go realizes that they are very lucky to be there, but at the same time it is a rather exhausting experience! Or, as Henry Frisch, my undergraduate mentor and a regular Moriond attendee, once told me, “There are three things going on at Moriond — the physics, the skiing, and the food — and you can only do two out of the three.” (I skipped lunch on most days.)

As friends were getting ready to head south from CERN through the Mont Blanc tunnel to Italy (and as I was getting ready for my first visit to the United States in more than seven months, for the annual external review of the US LHC operations programs), I realized that it has in fact been ten years since the Moriond conference I went to. Thankfully, the conference organizers have maintained the conference website from 2004, allowing me to relive my presentation from that time. It is a relief to observe that our understanding of particle physics has advanced quite a bit since then! At that Moriond, the Tevatron was just starting to kick into gear for its “Run 2,” and during the previous year we had re-established the signal for the top quark that had first been observed in the mid-1990s. We were just starting to explore the properties of the top quark, but we were hampered by the size of the data sample at that point. It is amusing to look back and see that we were trying to measure the mass of the top quark with a mere six dilepton decay events! Over the coming years, the Tevatron would produce hundreds more such events, and the CDF and D0 experiments would complete the first thorough explorations of the top quark, demonstrating that its properties are totally in line with the predictions of the standard model. And since then, the LHC has done the Tevatron one better, thanks to both an increase in the top-quark production rate at the higher LHC energy and the larger LHC collision rate. The CMS top-quark sample now boasts about 70,000 dilepton candidate events, and the CMS measurement of the top-quark mass is now the best in the world.

Top-quark physics is one of the topics I’m most familiar with, so it is easy for me to mark progress there, but of course it has been a remarkable decade of advances for particle physics, with the discovery of the Higgs boson, a more thorough understanding of neutrino masses and mixing, and constraints on the properties of dark matter. Next year, the LHC will resume operations in its own “Run 2”, with an even higher collision energy and higher collision rates than we had in 2012. It is a change almost as great as the one we experienced in moving from the Tevatron to the first run of the LHC. I cannot wait to see how the LHC will be advancing our knowledge of particle physics, possibly through the discovery of new particles that will help explain the puzzles presented by the Higgs boson. You can be sure that there will be a lot of excited chatter on the chair lifts and around the dinner table at the 2016 Moriond conferences!


Dear Google: Hire us!

Ken Bloom
Monday, March 3rd, 2014

In case you haven’t figured it out already from reading the US LHC blog or any of the others at Quantum Diaries, people who do research in particle physics feel passionate about their work. There is so much to be passionate about! There are challenging intellectual issues, tricky technical problems, and cutting-edge instrumentation to work with — all in pursuit of understanding the nature of the universe at its most fundamental level. Your work can lead to global attention and support Nobel Prizes. It’s a lot of effort put in over long days and nights, but there is also a lot of satisfaction to be gained from our accomplishments.

That being said, a fundamental truth about our field is that not everyone doing particle-physics research will be doing that for their entire career. There are fewer permanent jobs in the field than there are people who are qualified to hold them. It is certainly easy to do the math about university jobs in particular — each professor may supervise a large number of PhD students in his or her career, but only one could possibly inherit that job position in the end. Most of our researchers will end up working in other fields, quite likely in the for-profit sector, and as a field we do need to make sure that they are well-prepared for jobs in that part of the world.

I’ve always believed that we do a good job of this, but my belief was reinforced by a recent column by Tom Friedman in The New York Times. It was based on an interview with the Google staff member who oversees hiring for the company. The essay describes the attributes that Google looks for in new employees, and I couldn’t help but think that people who work on the large experimental particle physics projects, such as those at the LHC, have all of those attributes. Google is not just looking for technical skills — it goes without saying that they are, and that particle physicists have those skills along with great experience in digesting large amounts of computerized data. Google is also looking for social and personality traits that are just as important for success in particle physics.

(Side note: I don’t support all of what Friedman writes in his essay; he is somewhat dismissive of the utility of a college education, and as a university professor I think that we are doing better than he suggests. But I will focus on some of his other points here. I also recognize that it is perhaps too easy for me to write about careers outside the field when I personally hold a permanent job in particle physics, but believe me that it just as easily could have wound up differently for me.)

For example, just reading from the Friedman column, one thing Google looks for is what is referred to as “emergent leadership”. This is not leadership in the form of holding a position with a particular title, but seeing when a group needs you to step forward to lead on something when the time is right, and also knowing when to step back and let someone else lead. While the big particle-physics collaborations appear to be massive organizations, much of the day-to-day work, such as the development of a physics measurement, is done in smaller groups that function very organically. When they function well, people do step up to take on the most critical tasks, especially when they see that they are particularly positioned to do them. Everyone figures out how to interact in such a way that the job gets done. Another facet of this is ownership: everyone who is working together on a project feels personally responsible for it and will do what is right for the group, if not the entire experiment — even if that means putting aside your own ideas and efforts when someone else clearly has the better approach.

And related to that in turn is what is referred to in the column as “intellectual humility.” We are all very aggressive in making our arguments based on the facts that we have in hand. We look at the data and we draw conclusions, and we develop and promote research techniques that appear to be effective. But when presented with new information that demonstrates that the previous arguments are invalid, we happily drop what we had been pursuing and move on to the next thing. That’s how all of science works, really; all of your theories are only as good as the evidence that supports them, and are worthless in the face of contradictory evidence. Google wants people who take this kind of approach to their work.

I don’t think you have to be Google to be looking for the same qualities in your co-workers. If you are an employer who wants to have staff members who are smart, technically skilled, passionate about what they do, able to incorporate disparate pieces of information and generate new ideas, ready to take charge when they need to, feel responsible for the entire enterprise, and able to say they are wrong when they are wrong — you should be hiring particle physicists.


B Decays Get More Interesting

Adam Davis
Friday, February 28th, 2014

While flavor physics often offers a multitude of witty jokes (read as bad puns), I think I’ll skip one just this time and let the analysis speak for itself. Just recently, at the Lake Louise Winter Institute, a new result was released for the analysis looking for \(b\to s\gamma\) transitions. This is a flavor changing neutral current, which cannot occur at tree level in the standard model. Therefore, the lowest order diagram by which this decay can proceed is the one loop penguin shown below.

One loop penguin diagram representing the transition \(b \to s \gamma\).

From quantum mechanics, photons can have either left handed or right handed circular polarization. In the standard model, the photon in the decay \(b\to s\gamma\) is primarily left handed, due to spin and angular momentum conservation. However, models beyond the standard model, including some minimal supersymmetric models (MSSM), predict a larger than standard model right handed component to the photon polarization. So even though the decay rates observed for \(b\to s\gamma\) agree with those predicted by the standard model, the photon polarization itself is sensitive to new physics scenarios.

As it turns out, the decays \(B^\pm \to K^\pm \pi^\mp \pi^\pm \gamma \) are well suited to explore the photon polarization after playing a few tricks. In order to understand why, the easiest way is to consider a picture.

Picture defining the angle \(\theta\) in the analysis of \(B^\pm\to K^\pm \pi^\mp \pi^\pm \gamma\), from the Lake Louise conference talk.

In the picture, we consider the rest frame of a possible resonance which decays into \(K^\pm \pi^\mp \pi^\pm\). It is then possible to form the triple product \(p_\gamma\cdot(p_{\pi,slow}\times p_{\pi,fast})\). Effectively, this triple product defines the angle \(\theta\) shown in the picture.

Now for the trick: Photon polarization is odd under parity transformation, and so is the triple product defined above. Defining the decay rate as a function of this angle, we find:

\(\frac{d\Gamma}{d\cos\theta}\propto \sum_{i=0,2,4}a_i \cos^i\theta + \lambda_\gamma\sum_{j=1,3} a_j \cos^j\theta\)

This is an expansion up to fourth order in \(\cos\theta\) (equivalently, it can be written in terms of Legendre polynomials). The odd moments are the ones which contribute to photon polarization effects, and \(\lambda_\gamma\) is the photon polarization. Therefore, by looking at the decay rate as a function of this angle, we can directly access the photon polarization. Another way to access the same information is to take the asymmetry between the decay rate for events where the photon lies on one side of the \(K\pi\pi\) plane (\(\cos\theta > 0\)) and those where it lies on the other (\(\cos\theta < 0\)). This is also proportional to the photon polarization and allows for a direct statistical calculation. We will call this the up-down asymmetry, or \(A_{ud}\). For more information, a useful theory paper is found here.
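To make the last statement concrete, here is a minimal sketch (in Python, purely illustrative and not the LHCb analysis code) of how an up-down asymmetry and its statistical uncertainty could be computed from a set of \(\cos\theta\) values; the event counts and the small built-in asymmetry in the toy example are made up.

    import numpy as np

    def up_down_asymmetry(cos_theta):
        # Count events with the photon on either side of the K pi pi plane
        cos_theta = np.asarray(cos_theta)
        n_up = np.sum(cos_theta > 0)
        n_down = np.sum(cos_theta < 0)
        n_tot = n_up + n_down
        a_ud = (n_up - n_down) / n_tot
        # Simple binomial estimate of the statistical uncertainty on A_ud
        sigma = np.sqrt((1.0 - a_ud**2) / n_tot)
        return a_ud, sigma

    # Toy usage: ~14,000 events with a small built-in "up" excess (made-up numbers)
    rng = np.random.default_rng(0)
    signs = rng.choice([1.0, -1.0], size=14000, p=[0.52, 0.48])
    toy_cos_theta = signs * rng.uniform(0.0, 1.0, size=14000)
    a_ud, sigma = up_down_asymmetry(toy_cos_theta)
    print(f"A_ud = {a_ud:.3f} +/- {sigma:.3f}")

The simple binomial error estimate also shows why a large signal sample matters: with of order ten thousand events, even a few-percent asymmetry can be separated from zero with good significance.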

Enter LHCb. With the 3 fb\(^{-1}\) collected over 2011 and 2012 containing ~14,000 signal events, the up-down asymmetry was measured.

Up-down asymmetry for the analysis of \(b\to s\gamma\), from the Lake Louise conference talk.

In bins of invariant mass of the \(K \pi \pi\) system, we see the asymmetry is clearly non-zero and varies across the mass range. As seen in the note posted to the arXiv, the shapes of the fitted Legendre moments are not the same in the different mass bins, either. This corresponds to a 5.2\(\sigma\) observation of photon polarization in this channel. What this means for new physics models, however, is not interpreted, though I’m sure the arXiv will be full of explanations within about a week.


The Higgs Boson: A Natural Disaster!

Kyle Cranmer
Saturday, February 1st, 2014

The discovery of the Higgs boson was a triumph for particle physics. Its discovery completes the tremendously successful Standard Model of particle physics.  Of course, we know there are other phenomena — like dark matter, the dominance of matter over anti-matter, the mass of neutrinos, etc. – that aren’t explained by the Standard Model.  However, the Higgs itself is the source of one of the deepest mysteries of particle physics: the fine tuning problem.

The fine-tuning problem is related to the slippery concept of naturalness, and has driven the bulk of theoretical work for the last several decades.  Unfortunately, it is notoriously difficult to explain.  I took on this topic recently for a public lecture and came up with an analogy that I would like to share.

Why we take our theory seriously

Before discussing the fine tuning, we need a few prerequisites. The first thing to know is that the Standard Model (and most other theories we are testing) is based on a conceptual framework called Relativistic Quantum Field Theory (QFT). As you might guess from the name, it’s based on the pillars of relativity, quantum mechanics, and field theory. The key point here is that relativistic quantum field theory goes beyond the initial formulation of quantum mechanics. To illustrate this difference, let’s consider a property of the electron and muon called the “g-factor”, which relates the magnetic moment and spin [more]. In standard quantum mechanics, the prediction is that g=2; however, with relativistic quantum field theory we expect corrections. Those corrections are shown pictorially in the Feynman diagrams below.

[Figure: Feynman diagrams of the quantum corrections to g-2]

It turns out that this correction is small — about one part in a thousand.  But we can calculate it to an exquisite accuracy (about ten digits).  Moreover, we can measure it to a comparable accuracy.  The current result for the muon is

g = 2.0023318416 ± 0.000000001

This is a real tour de force for relativistic quantum field theory and represents one of the most stringent tests of any theory in the history of science [more]. To put it into perspective, it’s slightly better than hitting a hole in one from New York to China (that distance is about 10,000 km = 1 billion cm).
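For a feel of where a correction of roughly one part in a thousand comes from, here is a minimal sketch (Python, purely illustrative) of just the leading-order term, the famous Schwinger correction a = α/2π; the actual ten-digit prediction requires many thousands of additional diagrams and is far beyond anything this simple.

    import math

    # Leading-order (one-loop) QED correction to the g-factor: the Schwinger
    # term a = alpha / (2*pi). This is only the first term of the series; the
    # ten-digit prediction quoted above needs many more diagrams.
    alpha = 1.0 / 137.035999          # fine-structure constant (approximate)
    a_schwinger = alpha / (2.0 * math.pi)
    g_leading = 2.0 * (1.0 + a_schwinger)
    print(f"a = {a_schwinger:.7f}, g is roughly {g_leading:.5f}")   # about 2.00232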

It is because of tests like these that we take the predictions of this conceptual framework very seriously.

[Figure: the precision of the g-2 measurement]

The Higgs, fine tuning, and an analogy

It turns out that all quantities that we can predict receive similar quantum corrections, even the mass of the Higgs boson.    In the Standard Model, there is a free parameter that can be thought of as an initial estimate for the Higgs mass, let’s call it M₀.  There will also be corrections, let’s call them ΔM (where Δ is pronounced “delta” and it indicates “change to”).   The physical mass that we observe is this initial estimate plus the corrections.  [For the aficionados: usually physicists talk about the mass squared instead of the mass, but that does not change the essential message].

The funny thing about the mass of the Higgs is that the corrections are not small. In fact, the naive size of the corrections is enormously larger than the 126 GeV mass that we observe!

Confused?  Now is a good time to bring in the analogy.  Let’s think about the budget of a large country like the U.S.  We will think of positive contributions to the Higgs mass as income (taxes) and negative contributions to the Higgs mass as spending.  The physical Higgs mass that we measure corresponds to the budget surplus.

Now imagine that there is no coordination between the raising of taxes and government spending (maybe it’s not that hard). Wouldn’t you be surprised if a large economy of trillions of dollars had a budget balanced to better than a penny? Wouldn’t it be unnatural to expect such a fine tuning between income and spending if they are just independent quantities?

This is exactly the case we find ourselves in with the Standard Model… and we don’t like it.  With the discovery of the Higgs, the Standard Model is now complete.  It is also the first theory we have had that can be extrapolated to very high energies (we say that it is renormalizable). But it has a severe fine tuning problem and does not seem natural.

[Figure: the budget analogy]

[Table: the budget analogy]

The analogy can be fleshed out a bit more.  It turns out that the size of the corrections to the Higgs mass is related to something we call the cutoff, which is the  energy scale where the theory is no longer a valid approximation because some other phenomena become important.  For example, in a grand unified theory the strong force and the electroweak force would unify at approximately 10¹⁶ GeV (10 quadrillion GeV), and we would expect the corrections to be of a similar size.  Another common energy scale for the cutoff is the Planck Scale — 10¹⁹ GeV — where the quantum effects of gravity become important.  In the analogy, the cutoff energy corresponds to the fiscal year.  As time goes on, the budget grows and the chance of balancing the budget so precisely seems more and more unnatural.
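To attach rough numbers to this (a back-of-the-envelope sketch only, assuming the corrections to the mass squared are of order the cutoff squared), one can compare the observed 126 GeV to the two cutoff scales mentioned above:

    # Back-of-the-envelope estimate of the fine tuning: if the corrections to
    # the Higgs mass squared are of order the cutoff squared, how precisely
    # must the "income" and "spending" cancel to leave the observed 126 GeV?
    m_higgs = 126.0                                  # GeV, observed mass
    for name, cutoff in [("GUT scale", 1e16), ("Planck scale", 1e19)]:
        leftover = (m_higgs / cutoff) ** 2           # surplus relative to the budget
        print(f"{name}: cancellation to about one part in {1.0 / leftover:.0e}")

With a GUT-scale cutoff the cancellation has to be good to roughly one part in 10²⁸, and with a Planck-scale cutoff to roughly one part in 10³⁴; that is the sense in which the budget looks unnaturally well balanced.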

Going even further, I can’t resist pointing out that the analogy even offers a nice way to think about one of the most enigmatic concepts in quantum field theory called renormalization.  We often use this term to describe how fundamental constants aren’t really constant.  For example, the  charge of an electron depends on the energy you use to probe the electron.  In the analogy, renormalization is like adjusting for inflation.  We know that a dollar today isn’t comparable to a dollar fifty years ago.
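As a concrete, if rough, illustration of that running, here is a leading-log, one-loop sketch keeping only the electron loop; it understates the true effect (including all charged fermions gives about 1/128 at the Z mass), but it shows what it means for a “constant” to depend on the probing energy.

    import math

    # Rough leading-log illustration of a running coupling: the electromagnetic
    # coupling grows with the energy Q at which you probe the electron.
    # Electron loop only, so the numbers are only indicative.
    alpha_0 = 1.0 / 137.035999     # value measured at low energy
    m_e = 0.000511                 # electron mass in GeV

    def alpha_at(q_gev):
        log = math.log(q_gev**2 / m_e**2)
        return alpha_0 / (1.0 - (alpha_0 / (3.0 * math.pi)) * log)

    print(1.0 / alpha_at(91.2))    # roughly 134.5 with the electron loop alone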

Breaking down the budget

The first thing one wants to understand before attempting to balance the budget is where the money is going. In the U.S. the big budget items are the military and social programs like Social Security and Medicare. In the case of the Higgs, the biggest corrections come from the top quark (the top diagrams in the figure below). Of course the big budget items get most of the attention, and so it is with physics as well. Most of the thinking that goes into solving the fine tuning problem is related to the top quark.

[Figure: the budget of corrections to the Higgs mass]

Searching for a principle to balance the budget

Maybe it’s not a miraculous coincidence that the budget is balanced so well.  Maybe there is some underlying principle.  Maybe someone came to Washington DC and passed a law to balance the budget that says that for every dollar of spending there must be a dollar of revenue.  This is an excellent analogy for supersymmetry.  In supersymmetry, there is an underlying principle — a symmetry — that relates two types of particles (fermions and bosons).  These two types of particles give corrections to the Higgs mass with opposite signs.  If this symmetry was perfect, the budget would be perfectly balanced, and it would not be unnatural for the Higgs to be 126 GeV.
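A toy numerical picture of that balanced-budget law (made-up coefficients, not a real calculation): under an exact symmetry the two kinds of contributions cancel identically, while a slightly broken symmetry leaves a remainder, which is the “wiggle room” discussed next.

    # Toy illustration of the supersymmetric cancellation (numbers are invented):
    # every fermion loop is paired with a boson loop of equal size and opposite
    # sign, so the huge individual contributions cancel if the symmetry is exact.
    cutoff = 1e16                               # GeV, hypothetical cutoff
    fermion_loop = -0.01 * cutoff**2            # toy "spending"
    boson_loop_exact = +0.01 * cutoff**2        # toy "income", exact symmetry
    boson_loop_broken = +0.0099 * cutoff**2     # slightly broken symmetry

    print(fermion_loop + boson_loop_exact)      # zero: the budget is perfectly balanced
    print(fermion_loop + boson_loop_broken)     # large leftover: imperfect cancellation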

That is one of the reasons that supersymmetry is so highly motivated, and there is an enormous effort to search for signs of supersymmetry in the LHC data. Unfortunately, we haven’t seen any evidence for supersymmetry thus far. In the analogy, that is a bit like saying that if there is some sort of law to balance the budget, it allows for some wiggle room between spending and taxes. If the law allows for too much wiggle room between spending and taxes, then it may still be a law, but it isn’t explaining why the budget is balanced as well as it is. The current state of the LHC experiments indicates that the budget is balanced about 10-100 times better than the wiggle room allows – which is better than we would expect, but not so much better that it seems unnatural. However, if we don’t see supersymmetry in the next run of the LHC, the situation will be worse. And if we were to build a 100 TeV collider and not see evidence of supersymmetry, then the level of fine tuning would be high enough that most physicists would probably consider the situation unnatural and abandon supersymmetry as the solution to the fine tuning problem.

[Figure: supersymmetry]

Since the fine tuning problem was first recognized, there have been essentially two proposed solutions. One of them is supersymmetry, which I discussed above. The second is often referred to as strong dynamics or compositeness. The idea there is that maybe the Higgs is not a fundamental particle, but instead is a composite of more fundamental particles. My colleague Jamison Galloway and I tried to think through the analogy in that situation. In that case, one must start to think of different kinds of currencies… say the dollar for the Higgs boson and some other currency, like bitcoin, for the more fundamental particles. You would imagine that as time goes on (energy increases) there is a transition from one currency to another. At early times the budget is described entirely in terms of dollars, but at later times it is described in terms of bitcoin. That transition can be very complicated, but if it happened at a time when the total budget in dollars wasn’t too large, then a well balanced budget wouldn’t seem too unnatural. Trying to explain the rest of the compositeness story took us from a simple analogy to the basis for a series of sci-fi fantasy books, and I will spare you from that.

There are a number of examples where this aesthetic notion of naturalness has been a good guide, which is partially why physicists hold it so dear. However, another line of thinking is that maybe the theory is unnatural, and maybe it is random chance that the budget is balanced so well. That thinking is bolstered by the idea that there may be a huge number of universes that are part of a larger complex we call the multiverse. In most of these universes the budget wouldn’t be balanced, and the Higgs mass would be very different. In fact, most universes would not form atoms, would not form stars, and would not support life. Of course, we are here to observe our universe, and the conditions necessary to support life select very special universes out of the larger multiverse. Maybe it is this requirement that explains why our universe seems so finely tuned. This reasoning is called the anthropic principle, and it is one of the most controversial topics in theoretical physics. Many consider it to be giving up on a more fundamental theory that would explain why nature is as it is. The very fact that we are resorting to this type of reasoning is evidence that the fine tuning problem is a big deal. I discuss this at the end of the public lecture (starting around the 30 min mark) with another analogy for the multiverse, but maybe I will leave that for another post.

Nota bene: After developing this analogy I learned about a similar analogy from Tommaso Dorigo. Both use the idea of money, but the budget analogy goes a bit further.


No cream, no sugar

Ken Bloom
Monday, January 6th, 2014

My first visit to CERN was in 1997, when I was wrapping up my thesis work. I had applied for, and then was offered, a CERN fellowship, and I was weighing whether to accept it. So I took a trip to Geneva to get a look at the place and make a decision. I stayed on the outskirts of Sergy with my friend David Saltzberg (yes, that David Saltzberg) who was himself a CERN fellow, and he and other colleagues helped set up appointments for me with various CERN physicists.

Several times each day, I would use my map to find the building with the right number on it, and arrive for my next appointment. Invariably, I would show up and be greeted with, “Oh good, you’re here. Let’s go get a coffee!”

I don’t drink coffee. At this point, I can’t remember why I never got started; I guess I just wasn’t so interested, and may also have had concerns about addictive stimulants. So I spent that week watching other people drink coffee. I learned that CERN depends on large volumes of coffee for its operation. It plays the same role as liquid helium does for the LHC, allowing the physicists to operate at high energies and accelerate the science. (I don’t drink liquid helium either, but that’s a story for another time.)

Coffee is everywhere. In Restaurant 1, there are three fancy coffee machines that can make a variety of brews. (Which ones? You’re asking the wrong person.) At breakfast time, the line for the machines stretches across the width of the cafeteria, blocking the cooler that has the orange juice, much to my consternation. Outside the serving area, there are three more machines where one can buy a coffee with a jeton (token) that can be purchased at a small vending machine. (I don’t know how much they cost.) After lunch, the lines for these machines clog the walkway to the place where you deposit your used trays.

Coffee goes beyond the restaurants. Many buildings (including out-of-the-way Building 8, where my office is) have small coffee areas that are staffed by baristas (I suppose) at peak times when people who aren’t me want coffee. Building 40, the large headquarters for the CMS and ATLAS experiments, has a big coffee kiosk, where one can also get sandwiches and small pizzas, good for when you want to avoid the crazy Restaurant 1 lunchtimes and coffee runs. People line up for coffee here during meeting breaks, which usually puts us even further behind schedule.

Being a non-drinker of coffee can lead to some social discomfort. When two CERN people want to discuss something, they often do it over coffee. When someone invites me for a chat over coffee, I gamely say yes. But when we meet up I have to explain that I don’t actually drink coffee, and then sit patiently while they go to get a cup. I do worry that the other person feels uncomfortable about me watching them drink coffee. I could get a bottle of water for myself — even carbonated water, when I feel like living on the edge — but I rarely do. My wife (who does drink coffee, but tolerates me) gave me a few jetons to carry around with me, so I can at least make the friendly gesture of buying the other person’s coffee, but usually my offer is declined, perhaps because the person knows that he or she can’t really repay the favor.

So, if you see a person in conversation in the Restaurant 1 coffee area, not drinking anything but nervously twiddling his thumbs instead, come over and say hello. I can give you a jeton if you need one.


Will the car start?

Ken Bloom
Saturday, November 9th, 2013

While my family and I are spending a year at CERN, our Subaru Outback is sitting in the garage in Lincoln, under a plastic cover and hooked up to a trickle charger. We think that we hooked it all up right before going, but it’s hard to know for sure. Will the car start again when we get home? We don’t know.

CMS is in a similar situation. The detector was operating just fine when the LHC run ended at the start of 2013, but now we aren’t using it like we did for the previous three years. It’s basically under a tarp in the garage. When proton collisions resume in 2015, the detector will have to be in perfect working order again. So will this car start after not being driven for two years?

Fortunately, we can actually take this car out for a drive. This past week, CMS performed an exercise known as the Global Run in November, or GRIN. (I know, the acronym. You are wondering, if it didn’t go well, would we call it FROWN instead? That too has an N for November.) The main goal of GRIN was to make sure that all of the components of CMS could still operate in concert. In fact, many pieces of CMS have been run during the past nine months, but independently of one another. Actually making everything run together is a huge integration task; it doesn’t just happen automatically. All of the readouts have to be properly synchronized so that the data from the entire detector makes sense. In addition, GRIN was a chance to test out some operational changes that the experiment wants to make for the 2015 run. It may sound like it is a while away, but anything new should really be tested out as soon as possible.

On Friday afternoon, I ran into some of the leaders of the detector run coordination team, and they told me that GRIN had gone very well. At the start, not every CMS subsystem was ready to join in, but by the end of the week, the entire detector was running together, for the first time since the end of collisions. Various problems were overcome along the way — including several detector experts getting trapped in a stuck elevator. But they believe that CMS is in a good position to be ready to go in 2015.

As a member of CMS, that was really encouraging news. Now, if only the run coordinators could tell me where I left the Subaru keys!


2013 Nobel Prize — Made in America?

Ken Bloom
Tuesday, October 8th, 2013

You’re looking at the title and thinking, “Now that’s not true! Francois Englert is Belgian, and Peter Higgs is from the UK. And CERN, where the Higgs discovery was made, is a European lab, not in the US.”

That is all true, but on behalf of the US LHC blog, let’s take a few minutes to review the role of the United States in the Higgs observation that made this prize possible. To be sure, the US was part of an international effort on this, with essential contributions from thousands of people at hundreds of institutes from all over the world, and the Nobel Prize is a validation of the great work of all of them. (Not to mention the work of Higgs, Englert and many other contributing theorists!) But at the same time, I do want to combat the notion that this was somehow a non-US discovery (as some have implied). For many more details, see this link.

US collaborators, about 2000 strong, are a major contingent within both of the biggest LHC experiments, ATLAS and CMS. I’m a member of CMS, where people from US institutions are about one third of the membership of the collaboration. This makes the US physicists the largest single national contingent on the experiment — by no means a majority, but because of our size we have a critical role to play in the construction and operation of the experiment, and the data analysis that follows. American physicists are represented throughout the management structure (including Joe Incandela, the current CMS spokesperson) and deep in the trenches.

While the detectors were painstakingly assembled at CERN, many of the parts were designed, prototyped and fabricated in the US. On CMS, for instance, there has been US involvement in every major piece of the instrument: charged particle tracking, energy measurements, muon detection, and the big solenoid magnet that gives the experiment its name. Along with the construction responsibilities come maintenance and operational responsibilities too; we expect to carry these for the lifetime of the experiment.

The data that these amazing instruments record must then be processed, stored, and analyzed. This requires powerful computers, and the expertise to operate them efficiently. The US is a strong contributor here too. On CMS, about 40% of the data processing is handled at facilities in the US. And then there is the last step in the chain, the data analysis itself that leads to the measurements that allow us to claim a discovery. This is harder to quantify, but I can’t think of a single piece of the Higgs search analysis that didn’t have some US involvement.

Again, this is not to say that the US is the only player here — just to point out that thanks to the long history that the United States has in supporting this science, the US too can share some of the glory of today’s announcement.


Another day at the office

Ken Bloom
Tuesday, October 8th, 2013

I suppose that my grandchildren might ask me, “Where were you when the Nobel Prize for the Higgs boson was announced?” I was at CERN, where the boson was discovered, thus giving the observational support required for the prize. And was I in the atrium of Building 40, where CERN Director General Rolf Heuer and hundreds of physicists had gathered to watch the broadcast of the announcement? Well no; I was in a small, stuffy conference room with about twenty other people.

We were in the midst of a meeting where we were hammering out the possible architecture of the submission system that physicists will be using to submit computing jobs for analyzing the data in the next LHC run and beyond. Not at all glamorous, I know. But that’s my point: the work that is needed to make big scientific discoveries, be it the Higgs or whatever might come next (we hope!), is usually not the least bit glamorous. It’s a slog, where you have to work with a lot of other people to figure out all the difficult little details. And you really have to do this day after day to make the science work. And there are many aspects of making science work — building advanced scientific instruments, harnessing the power of computers, coming up with clever ways to look at the data (and not making mistakes while at it), and working with colleagues to build confidence in a measurement. Each one of them takes time, effort and patience.

So in the end, today was just another day at the office — where we did the same things we’ve been doing for years to make this Nobel Prize possible, and are laying the groundwork for the next one.
