Ken Bloom | USLHC | USA


ICHEP at a distance

Friday, July 11th, 2014

I didn’t go to ICHEP this year.  In principle I could have, especially given that I have been resident at CERN for the past year, but we’re coming down to the end of our stay here and I didn’t want to squeeze in one more work trip during a week that turned out to be a pretty good opportunity for one last family vacation in Europe.  So this time I just kept track of it from my office, where I plowed through the huge volume of slides shown in the plenary sessions earlier this week.  It was a rather different experience for me from ICHEP 2012, which I attended in person in Melbourne and where we had the first look at the Higgs boson.  (I’d have to say it was also probably the pinnacle of my career as a blogger!)

Seth’s expectations turned out to be correct — there were no earth-shattering announcements at this year’s ICHEP, but still a lot to chew on.  The Standard Model of particle physics stands stronger than ever.  As Pauline wrote earlier today, the particle thought to be the Higgs boson two years ago still seems to be the Higgs boson, to the best of our abilities to characterize it.  The LHC experiments are starting to move beyond measurements of the “expected” properties — the dominant production and decay modes — into searches for unexpected, low-rate behavior.  While there are anomalous results here and there, there’s nothing that looks like more than a fluctuation.  Beyond the Higgs, all sectors of particle physics look much as predicted, and some fluctuations, such as the infamous forward-backward asymmetry of top-antitop production at the Tevatron, appear to have subsided.  Perhaps the only ambiguous result out there is that of the BICEP2 experiment which might have observed gravitational waves, or maybe not.  We’re all hoping that further data from that experiment and others will resolve the question by the end of the year.  (See the nice talk on the subject of particle physics and cosmology by Alan Guth, one of the parents of that field.)

This success of the Standard Model is both good and bad news.  It’s good that we do have a model that has stood up so well to every experimental test that we have thrown at it, in some cases to startling precision.  You want models to have predictive power.  But at the same time, we know that the model is almost surely incomplete.  Even if it can continue to work at higher energy scales than we have yet explored, at the very least we seem to be missing some particles (those that make up the dark matter we know exists from astrophysical measurements) and it also fails to explain some basic observations (the clear dominance of matter over antimatter in the universe).  We have high hopes for the next run of the LHC, which will start in Spring 2015, in which we will have higher beam energies and collision rates, and a greater chance of observing new particles (should they exist).

It was also nice to see the conference focus on the longer-term future of the field.  Since the last ICHEP, every region of the world has completed long-range strategic planning exercises, driven by recent discoveries (including that of the Higgs boson, but also of various neutrino properties) and anchored by realistic funding scenarios for the field.  There were several presentations about these plans during the conference, and a panel discussion featuring leaders of the field from around the world.  It appears that we are having a nice sorting out of which region wants to host which future facility, and when, in such a way that we can carry on our international efforts in a straightforward way.  Time will tell if we can bring all of these plans to fruition.

I’ll admit that I felt a little left out by not attending ICHEP this year.  But here’s the good news: ICHEP 2016 is in Chicago, one of the few places in the world that I can reach on a single plane flight from Lincoln.  I have marked my calendar!


P5 and the fifth dimension that Einstein missed

Tuesday, May 27th, 2014

Among the rain
and lights
I saw the figure 5
in gold
on a red
firetruck
moving
tense
unheeded
to gong clangs
siren howls
and wheels rumbling
through the dark city.

William Carlos Williams, “The Great Figure”, 1921

Ever since the Particle Physics Project Prioritization Panel (P5) report was released on Thursday, May 22, I have been thinking very hard about the number five. Five is in the name of the panel, it is embedded in the science that the report describes, and in my opinion, the panel has figured out how to manipulate a fifth dimension. Please give me a chance to explain.

Having had a chance to read the report, let me say that I personally am very impressed by it and very supportive of the conclusions drawn and the recommendations made. The charge to P5 was to develop “an updated strategic plan for the U.S. that can be executed over a ten-year timescale, in the context of a twenty-year global vision for the field.” Perhaps the key phrase here is “can be executed”: this must be a plan that is workable under funding scenarios that are more limited than we might wish. It requires making some hard decisions about priorities, and these priorities must be set by the scientific questions that we are trying to address through the techniques of particle physics.

Using input from the Snowmass workshop studies that engaged a broad swath of the particle-physics community, P5 has done a nice job of distilling the intellectual breadth of our field into a small number of “science drivers”. How many? Well, five of course:

• Use the Higgs boson as a new tool for discovery
• Pursue the physics associated with neutrino mass
• Identify the new physics of dark matter
• Understand cosmic acceleration: dark energy and inflation
• Explore the unknown: new particles, interactions, and physical principles

I would claim that four of the drivers represent imperatives that are driven by recent and mostly unexpected discoveries — exactly how science should work. (The fifth and last listed is really the eternal question of particle physics.) While the discovery of the Higgs boson two years ago was dramatic and received a tremendous amount of publicity, it was not totally unexpected. The Higgs is part of the standard model, and all indirect evidence was pointing to its existence; now we can use it to look for things that actually are unexpected. The observation of the Higgs was not the end of an era, but the start of a new one. Meanwhile, neutrino masses, dark matter and dark energy are all outside our current theories, and they demand explanation that can only come through further experimentation. We now have the technical abilities to do these experiments. These science drivers are asking exciting, fundamental questions about how the universe came to be, what it is made of and how it all interacts, and they are questions that, finally, can be addressed in our time.

But, how to explore these questions in a realistic funding environment? Is it even possible? The answer from P5 is yes, if we are clever about how we do things. I will focus here on the largest projects that the P5 report addresses, the ones that cost at least $200M to construct; the report also discusses many medium-size and small efforts, and recommends hard choices on which we should continue to pursue and which, despite having merit, simply cannot fit into realistic funding scenarios. The three biggest projects are the LHC and its high-luminosity upgrade, which should be completed about ten years from now; a long-baseline neutrino experiment that would create neutrinos at Fermilab and observe them in South Dakota; and a high-energy electron-positron collider, the International Linear Collider (ILC), which could do precision studies of the Higgs boson but is at least ten years away from realization. They are all interesting projects that each address at least two of the science drivers, but is it possible for the U.S. to take a meaningful role in all three? The answer is yes…if you understand how to use the fifth dimension.

The high-luminosity LHC emerged as “the first high-priority large-category project” in the program recommended by P5, and it is to be executed regardless of budget scenario. (See below about the use of the word “first” here.)  As an LHC experimenter who writes for the U.S. LHC blog, I am of course a bit biased, but I think this is a good choice. The LHC is an accelerator that we have in hand; there is nothing else that could be built in the next ten years that can do anything like it, and we must fully exploit its potential. It can address three of the science drivers — the Higgs, dark matter, and the unknown. U.S. physicists form the largest national contingent in each of the two big multi-purpose experiments, ATLAS and CMS, and the projects depend on U.S. participation and expertise for their success. While we can never make any guarantees of discovery, I personally think that the LHC gives us as good a chance as anything, and that it will be an exciting environment to work in over the coming years.

P5 handled the long-baseline neutrino experiment by presenting some interesting challenges to the U.S. and global particle physics communities. While there is already a plan to build this project, in the form of a proposed experiment called LBNE, it was considered to be inadequate for the importance of the science. The currently proposed LBNE detector in South Dakota would be too small to collect enough data on a timescale that would give interesting and conclusive results. Even the proponents of LBNE recognized these limitations.  So, P5 recommends that the entire project “should be reformulated under the auspices of a new international collaboration, as an internationally coordinated and internationally funded program, with Fermilab as the host,” that will truly meet the scientific demands. It wouldn’t just be a single experiment, but a facility — the Long-Baseline Neutrino Facility (LBNF).

This is a remarkable strategic step. First, it makes the statement that if we are going to do the science, we must do it well. LBNF would be bigger than LBNE, and also much better in terms of its capabilities. It also fully integrates the U.S. program into the international community of particle physics — it would commit the U.S. to hosting a major facility that would draw world-wide collaboration and participation. The U.S. will hold up its end of the efforts to build particle-physics facilities that scientists from all over the world can take part in, just as CERN has successfully done with the LHC. To organize this new facility will take some time, such that the peak costs of building LBNF will be pushed to a time later than the peak costs of upgrading the LHC.

One of the important ideas of special relativity is that the three dimensions of space and one dimension of time are placed on an equal footing. Two events in space-time that have given spatial and time separations in one frame of reference will have different spatial and time separations in a different frame. With LBNF, P5 has postulated a fifth dimension that must be considered: cost. If we were to try to upgrade the LHC and build LBNF at the same time, the cost would be more than we could afford, even with international participation. But by spacing out these two events in time, doing the HL-LHC first and LBNF second, the cost per year of these projects has become smaller; time and cost have been put on a more equal footing. Why didn’t Einstein think of that?
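The scheduling arithmetic behind this idea can be made concrete. Here is a minimal sketch; the dollar figures, dates, and durations are invented for illustration and are not taken from the P5 report:

```python
# Illustrative only: every number below is made up to show the
# scheduling idea, not drawn from the actual P5 cost estimates.

def peak_annual_cost(projects):
    """Given (start_year, duration_years, total_cost) tuples, return the
    maximum combined spending in any single year, assuming each project's
    cost is spread evenly over its duration."""
    spending = {}
    for start, duration, total in projects:
        for year in range(start, start + duration):
            spending[year] = spending.get(year, 0) + total / duration
    return max(spending.values())

# Two hypothetical $200M, five-year projects:
overlapping = [(2015, 5, 200), (2015, 5, 200)]  # built simultaneously
staggered = [(2015, 5, 200), (2020, 5, 200)]    # HL-LHC first, LBNF second

print(peak_annual_cost(overlapping))  # 80.0 ($M per year)
print(peak_annual_cost(staggered))    # 40.0 ($M per year)
```

The total cost is unchanged, but staggering the two projects halves the peak annual spending, which is what a flat budget actually constrains.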

Thus, it is straightforward to set the international LBNF as “the highest-priority large project in its timeframe.” The title of the P5 report is “Building for Discovery”; LBNF will be the major project that the U.S. will build for discoveries in the areas of neutrino masses and exploration of the unknown.

As for the ILC, which Japan has expressed an interest in building, the scientific case for it is strong enough that “the U.S. should engage in modest and appropriate levels of ILC accelerator and detector design” no matter what the funding scenario. How much involvement there will be will depend on the funds available, and on whether the project actually goes forward. We will understand this better within the next few years. If the ILC is built, it will be a complement to the LHC and let us explore the properties of the Higgs and other particles in precise detail. With that, P5 has found a way for the U.S. to participate in all three major projects on the horizon, if we are careful about the timing of the projects and accept reasonable bounds on what we do with each.

These are the headlines from the report, but there is much more to it. The panel emphasizes the importance of maintaining a balance between the funds spent to build new facilities, to operate those facilities, and to do the actual research that leads to scientific discovery at the facilities. In recent years, there have been few building projects in the pipeline, and the fraction of the U.S. particle-physics budget devoted to new projects has languished at around 15%. P5 proposes that this be raised to the 20-25% level and maintained there, so that there will always be a push to create facilities that can address the scientific drivers — building for discovery. The research program is what funds graduate students and postdoctoral researchers, the future leaders of the field, and is where many exciting new physics ideas come from. Research has also been under financial pressure lately, and P5 proposes that it should not receive less than 40% of the budget. In addition, long-standing calls to invest in research and development that could lead to cheaper particle accelerators, more sensitive instrumentation, and revolutionary computational techniques are repeated.
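The proposed balance can be written down as a simple constraint check. In this sketch, only the target fractions (20-25% for projects, at least 40% for research) come from the report; the three-way split and the function itself are my own framing:

```python
# Sketch of the P5 budget-balance guidelines described above.
# The 20-25% and >=40% targets are from the report; treating the budget
# as exactly three categories is a simplifying assumption of mine.

def check_balance(projects_frac, research_frac, operations_frac):
    """Return True if a budget split satisfies the proposed guidelines."""
    total = projects_frac + research_frac + operations_frac
    assert abs(total - 1.0) < 1e-9, "fractions must sum to 1"
    return 0.20 <= projects_frac <= 0.25 and research_frac >= 0.40

print(check_balance(0.15, 0.45, 0.40))  # False: projects stuck at 15%
print(check_balance(0.22, 0.42, 0.36))  # True: within the proposed balance
```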

This strategic vision is laid out in the context of three different funding scenarios. The most constrained scenario imagines flat budgets through 2018, and then annual increases of 2%, which is likely below the rate of inflation and thus would represent effectively shrinking budgets. The program described could be carried out, but it would be very challenging. LBNF could still be built, but it would be delayed. Various other projects would be cancelled, reduced or delayed. The research program would lose some of its capabilities. It would make it difficult for the U.S. to be a full international partner in particle physics, one that would be capable of hosting a large project and thus being a global leader in the field. Can we do better than that? Can we instead have a budget that grows at 3% per year, closer to the rate of inflation? The answer is ultimately up to our elected leaders. But I hope that we will be able to convince them, and you, that the scientific opportunities are exciting, and that the broad-based particle-physics community’s response to them is visionary while also being realistic.
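The difference between these scenarios is ordinary compound growth measured against inflation. A hedged sketch, assuming an inflation rate of about 2.5% (my assumption; the report does not pin the number down):

```python
# Illustrative arithmetic: budget in constant (year-0) dollars after
# `years` of nominal growth, deflated by an assumed inflation rate.

def real_budget(initial, nominal_growth, inflation, years):
    """Purchasing power of the budget after compounding growth and inflation."""
    return initial * ((1 + nominal_growth) / (1 + inflation)) ** years

base = 100.0  # arbitrary units
print(round(real_budget(base, 0.02, 0.025, 10), 1))  # below 100: real decline
print(round(real_budget(base, 0.03, 0.025, 10), 1))  # above 100: modest real growth
```

A percentage point of annual growth sounds small, but compounded over a decade it is the difference between a program that shrinks in real terms and one that grows.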

Finally, I would like to offer some words on the use of logos. Since the last P5 report, in 2008, the U.S. particle physics program has relied on a logo that represented three “frontiers” of scientific exploration:

[Image: the “three frontiers” logo]

It is a fine way to classify the kinds of experiments and projects that we pursue, but I have to say that the community has chafed a bit under this scheme. These frontiers represent different experimental approaches, but a single physics question can be addressed through multiple approaches. (Only the lack of time has kept me from writing a blog post titled “The tyranny of Venn diagrams.”) Indeed, in his summary presentation about the Energy Frontier for the Snowmass workshop, Chip Brock of Michigan State University suggested a logo that represented the interconnectedness of these approaches:

[Image: Chip Brock’s interconnected-rings logo]

“Building for Discovery” brings us a new logo, one that represents the five science drivers as five interlocked crescents:

[Image: the new P5 logo of five interlocked crescents]

I hope that this logo does an even better job of emphasizing the interconnectedness not just of experimental approaches to particle physics, but also of the five (!) scientific questions that will drive research in our field over the next ten to twenty years.

Of course, I’m also sufficiently old that this logo reminded me of something else entirely:

[Image: the American Revolution Bicentennial logo]

Maybe we can celebrate the P5 report as the start of an American revolution in particle physics? But I must admit that with P5, 5 science drivers and 5 dimensions, I still see the figure 5 in gold:

[Image: “I Saw the Figure 5 in Gold”, Charles Demuth, 1928]


Building for Discovery

Thursday, May 22nd, 2014

After years in the making — from the earliest plans in 2011 for an extended Snowmass workshop that started in October 2012 and culminated in August 2013, to the appointment of a HEPAP subpanel in September, to today — we have now received the report of the Particle Physics Project Prioritization Panel, or P5. As has been discussed elsewhere, this is a major report outlining the strategic plan for United States participation in the global enterprise of particle physics for the next two decades.

As I write this, Steve Ritz of UC Santa Cruz, the chair of the panel, is making his presentation on the report, which has the title “Building for Discovery: Strategic Plan for U.S. Particle Physics in the Global Context.” While at CERN, I am watching remotely (or trying to; the system must be heavily loaded, and it sounds like there are technical difficulties in the meeting room). I am restraining myself from live-blogging the presentation, as I want to take the time to read the report carefully before discussing it. (The report will be available in a couple of hours, but the executive summary is ready now.) Anything this important takes some time for proper digestion! If you are reading this, you are already a fan of particle physics, so I invite you to read it also and see what you think. I hope to discuss the matter further in a post next week.

But in any case, a huge thank-you to the hard-working members of P5 who developed this report!


Can 2130 physicists pounding on keyboards turn out Shakespeare plays?

Tuesday, April 22nd, 2014

The CMS Collaboration, of which I am a member, has submitted 335 papers to refereed journals since 2009, including 109 such papers in 2013. Each of these papers had about 2130 authors. That means that the author list alone runs 15 printed pages. In some cases, the author list takes up more space than the actual content of the paper!
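As a quick sanity check of that claim (the authors-per-page density here is my own rough assumption, not a number from any journal):

```python
# Back-of-the-envelope check of the author-list length quoted above.
authors = 2130
authors_per_page = 142  # assumed density of a typeset journal author list
pages = authors / authors_per_page
print(round(pages))  # consistent with the 15 printed pages quoted above
```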

One might wonder: How do 2130 people write a scientific paper for a journal? Through a confluence of circumstances, I’ve been directly involved in the preparation of several papers over the last few months, so I have been thinking a lot about how this gets done, and thought I might use this opportunity to shed some light on the publication process. What I will not discuss here is why a paper should have 2130 authors and not more (or fewer)—this is a very interesting topic, but for now we will work from the premise that there are 2130 authors who, by signing the paper, take scientific responsibility for the correctness of its contents. How can such a big group organize itself to submit a scientific paper at all, and how can it turn out 109 papers in a year?

Certainly, with this many authors and this many papers, some set of uniform procedures is needed, and some number of people must put in substantial effort to maintain and operate the procedures. Each collaboration does things a bit differently, but all have the same goal in mind: to submit papers that are first correct (in the scientific sense of “correct” as in “not wrong with a high level of confidence”), and that are also timely. Correct takes precedence over timely; it would be quite an embarrassment to produce a paper that was incorrect because the work was done quickly and not carefully. Fortunately, in my many years in particle physics, I can think of very few cases when a correction to a published paper had to be issued, and never have I seen a paper from an experiment I have worked on be retracted. This suggests that the publication procedures are indeed meeting their goals.

But even though being correct trumps everything, having an efficient publication process is still important. It would also be a shame to be scooped by a competitor on an interesting result because your paper was stuck inside your collaboration’s review process. So there is an important balance to be struck between being careful and being efficient.

One thing that would not be efficient would be for every one of the 2130 authors to scrutinize every publishable result in detail. If we were to try to do this, everyone would soon become consumed by reviewing data analyses, rather than working on the other necessary tasks of the experiment, from running the detector to processing the data to designing upgrades of the experiment. And it’s hard to imagine that, say, once 1000 people have examined a result carefully, another thousand would uncover a problem. That being said, everyone needs to understand that even if they decline to take part in the review of a particular paper, they are still responsible for it, in accordance with generally accepted guidelines for scientific authorship.

Instead, the review of each measurement or set of measurements destined for publication in a single paper is delegated by the collaboration to a smaller group of people. Different collaborations have different ways of forming these review committees—some create a new committee for a particular paper that dissolves when that paper is published, while others have standing panels that review multiple analyses within a certain topic area. These committees usually include several people with expertise in that particular area of particle physics or data analysis techniques, but also one or two who serve as interested outsiders who might look at the work in a different way and come up with new questions about it. The reviewers tend to be more senior physicists, but some collaborations have allowed graduate students to be reviewers too. (One good way to learn how to analyze data is to carefully study how other people are doing it!)

The scientists who are performing a particular measurement with the data are typically also responsible for producing a draft of the scientific paper that will be submitted to the journal. The review committee is then responsible for making sure that the paper accurately describes the work and will be understandable to physicists who are not experts on this particular topic. There can also be a fair amount of work at this stage to shape the message of the paper; measurements produce results in the form of numerical values of physical quantities, but scientific papers have to tell stories about the values and how they are measured, and expressing the meaning of a measurement in words can be a challenge.

Once the review committee members think that a paper is of sufficient quality to be submitted to a journal, it is circulated to the entire collaboration for comment. Many collaborations insert a “style review” step at this stage, in which a physicist who has a lot of experience in the matter checks that the paper conforms to the collaboration’s style guidelines. This ensures some level of uniformity in terminology across all of the collaboration’s papers, and it is also a good chance to check that the figures and tables are working as intended.

The circulation of a paper draft to the collaboration is a formal process that has potential scaling issues, given how many people might submit comments and suggestions. On relatively small collaborations such as those at the Tevatron (my Tevatron-era colleagues will find the use of the word “small” here ironic!), it was easy enough to take the comments by email, but the LHC collaborations have a more structured system for collecting and archiving comments. Collaborators are usually given about two weeks to read the draft paper and make comments. How many people send feedback can vary greatly with each paper; hotter topics might attract more attention. Some conscientious collaborators do in fact read every paper draft (as far as I can tell). To encourage participation, some collaborations do make explicit requests to a randomly-chosen set of institutes to scrutinize the paper, while some institutes have their own traditions of paper review. Comments on all aspects of the paper are typically welcome, from questions about the physics or the veracity of the analysis techniques, to suggestions on the organization of the paper and descriptions of data analysis, to matters like the placement of commas.

In any case, given the number of people who read the paper, the length of the comments can often exceed the length of the paper itself. The scientists who wrote the paper draft then have to address all of the comments. Some comments lead to changes in the paper to explain things better, or to additional cross-checks of the analysis to address a point that was raised. Many textual suggestions are implemented, while others are turned down with an explanation of why they are unnecessary or would be harmful to the paper. The analysis review committee then verifies that all significant comments have been properly considered, and checks that the resulting revised paper draft is in good shape for submission.

Different collaborations have different final steps before the paper is actually submitted to a journal. Some have certain leaders of the collaboration, such as the spokespersons and/or physics coordinators, read the draft and make a final set of recommendations that are to be implemented before submission. Others have “publication committees” that organize public final readings of a paper that can lead to changes. At this stage the authors of the original draft very much hope that things go smoothly and that paper submission will be imminent.
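The internal stages described above can be summarized as a simple linear pipeline. This is only a toy model; the stage names paraphrase the text, and no collaboration uses exactly this sequence or this code:

```python
# A toy model of the internal publication pipeline sketched in the text.
# Stage names are paraphrases; real collaborations differ in the details.

PIPELINE = [
    "analysis and paper draft",
    "review-committee approval",
    "style review",
    "collaboration-wide comments",
    "comments addressed and verified",
    "final reading by collaboration leaders",
    "journal submission and peer review",
]

def advance(stage):
    """Move a paper to the next stage; no stage may be skipped."""
    i = PIPELINE.index(stage)
    if i + 1 >= len(PIPELINE):
        raise ValueError("paper already submitted")
    return PIPELINE[i + 1]

stage = PIPELINE[0]
while stage != PIPELINE[-1]:
    stage = advance(stage)
print(stage)  # the journal is reached only after every internal stage
```

The point of the linear structure is the one made above: correctness gates timeliness, so a draft cannot jump ahead to submission, however hot the result.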

And this whole process comes before the scientific tradition of independent, blind peer review! Journals have their own procedures for appointing referees who read the paper and give the journal editors advice on whether a paper should be published, and what changes or checks they might require before recommending publication. The interaction with the journal and its referees can also take quite some time, but almost always it ends with a positive result. The paper has gone through so many levels of scrutiny already that the output is really a high-quality scientific product that describes reproducible results, and that will ultimately stand the test of time.

A paper that describes a measurement in particle physics is the last step of a long journey, from the conception of the experiment, the design and subsequent construction of the apparatus, its operation over the course of years to collect the data sample, the processing of the data, and the subsequent analysis that leads to numerical values of physical quantities and their associated uncertainties. The actual writing of the papers, and the process of validating them and bringing 2130 physicists to agree that the paper has told the right story about the whole journey, is an important step in the creation of scientific knowledge.


A quick ski through history

Sunday, March 23rd, 2014

This past week about 175 lucky particle physicists gathered in La Thuile, a mountain town in the Italian Alps, for one of the annual Rencontres de Moriond conferences. This is one of the highlights of the particle-physics calendar, perhaps the most important gathering of particle physicists between the summer-time Lepton-Photon and ICHEP conferences for the presentation of new results. The major experimental collaborations of the world have been wrapping up a flurry of activity in preparation for the high-profile meetings taking place over the next few weeks. The atmosphere on the LHC experiments has been a bit less intense this year than last year, as the flashiest results from the 2010-12 data sample have already been released, but there was still a push to complete as many measurements as possible for presentation at this conference in particular.

I’ve only been to a Moriond conference once, but it was quite an experience. The conference is held at a ski resort to encourage camaraderie and scientific exchanges outside the conference room, and that leads to an action-packed week. Each morning of the week opens with about three hours of scientific presentations. The mid-morning finish allows for an almost-full day of skiing for those who choose to go (and as you might imagine, many do). This is a great opportunity to spend leisure time with colleagues, meet new people and discuss what had been learned that morning. After the lifts have closed, everyone returns to the hotel for another three hours of presentations. This is followed by a group dinner to continue the conversation. Everyone who has the chance to go realizes that they are very lucky to be there, but at the same time it is a rather exhausting experience! Or, as Henry Frisch, my undergraduate mentor and a regular Moriond attendee, once told me, “There are three things going on at Moriond — the physics, the skiing, and the food — and you can only do two out of the three.” (I skipped lunch on most days.)

As friends were getting ready to head south from CERN through the Mont Blanc tunnel to Italy (and as I was getting ready for my first visit to the United States in more than seven months, for the annual external review of the US LHC operations programs), I realized that it has in fact been ten years since the Moriond conference I went to. Thankfully, the conference organizers have maintained the conference website from 2004, allowing me to relive my presentation from that time. It is a relief to observe that our understanding of particle physics has advanced quite a bit since then! At that Moriond, the Tevatron was just starting to kick into gear for its “Run 2,” and during the previous year we had re-established the signal for the top quark that had first been observed in the mid-1990s. We were just starting to explore the properties of the top quark, but we were hampered by the size of the data sample at that point. It is amusing to look back and see that we were trying to measure the mass of the top quark with a mere six dilepton decay events! Over the coming years, the Tevatron would produce hundreds more such events, and the CDF and D0 experiments would complete the first thorough explorations of the top quark, demonstrating that its properties are totally in line with the predictions of the standard model. And since then, the LHC has done the Tevatron one better, thanks to both an increase in the top-quark production rate at the higher LHC energy and the larger LHC collision rate. The CMS top-quark sample now boasts about 70,000 dilepton candidate events, and the CMS measurement of the top-quark mass is now the best in the world.
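The value of that much larger sample is easy to quantify: the statistical component of a measurement’s uncertainty typically scales like 1/√N. A quick sketch using the event counts quoted above (this ignores systematic uncertainties, which matter greatly in a real top-mass measurement):

```python
# Illustrative statistics only: how much the statistical uncertainty
# shrinks when the sample grows from 6 to ~70,000 dilepton events.
import math

def stat_improvement(n_old, n_new):
    """Factor by which a 1/sqrt(N) statistical uncertainty shrinks."""
    return math.sqrt(n_new / n_old)

print(round(stat_improvement(6, 70000)))  # roughly a hundred-fold gain
```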

Top-quark physics is one of the topics I’m most familiar with, so it is easy for me to mark progress there, but of course it has been a remarkable decade of advances for particle physics, with the discovery of the Higgs boson, a more thorough understanding of neutrino masses and mixing, and constraints on the properties of dark matter. Next year, the LHC will resume operations in its own “Run 2”, with an even higher collision energy and higher collision rates than we had in 2012. It is a change almost as great as the one we experienced in moving from the Tevatron to the first run of the LHC. I cannot wait to see how the LHC will be advancing our knowledge of particle physics, possibly through the discovery of new particles that will help explain the puzzles presented by the Higgs boson. You can be sure that there will be a lot of excited chatter on the chair lifts and around the dinner table at the 2016 Moriond conferences!

Dear Google: Hire us!

Monday, March 3rd, 2014

In case you haven’t figured it out already from reading the US LHC blog or any of the others at Quantum Diaries, people who do research in particle physics feel passionate about their work. There is so much to be passionate about! There are challenging intellectual issues, tricky technical problems, and cutting-edge instrumentation to work with — all in pursuit of understanding the nature of the universe at its most fundamental level. Your work can lead to global attention and even help win Nobel Prizes. It’s a lot of effort put in over long days and nights, but there is also a lot of satisfaction to be gained from our accomplishments.

That being said, a fundamental truth about our field is that not everyone doing particle-physics research will do it for their entire career. There are fewer permanent jobs in the field than there are people who are qualified to hold them. It is certainly easy to do the math on university jobs in particular — each professor may supervise a large number of PhD students over his or her career, but only one of them can ultimately inherit that position. Most of our researchers will end up working in other fields, quite likely in the for-profit sector, and as a field we need to make sure that they are well prepared for jobs in that part of the world.

I’ve always believed that we do a good job of this, but my belief was reinforced by a recent column by Tom Friedman in The New York Times, based on an interview with the Google staff member who oversees hiring for the company. The essay describes the attributes that Google looks for in new employees, and I couldn’t help but think that people who work on large experimental particle-physics projects such as those at the LHC have all of those attributes. Google is not just looking for technical skills — it goes without saying that it is, and that particle physicists have those skills along with great experience in digesting large amounts of computerized data. Google is also looking for social and personality traits that are equally important for success in particle physics.

(Side note: I don’t support all of what Friedman writes in his essay; he is somewhat dismissive of the utility of a college education, and as a university professor I think that we are doing better than he suggests. But I will focus on some of his other points here. I also recognize that it is perhaps too easy for me to write about careers outside the field when I personally hold a permanent job in particle physics, but believe me that it just as easily could have wound up differently for me.)

For example, just reading from the Friedman column, one thing Google looks for is what is referred to as “emergent leadership.” This is not leadership in the form of holding a position with a particular title, but the ability to see when a group needs you to step forward to lead on something, and also to step back and let someone else lead when needed. While the big particle-physics collaborations appear to be massive organizations, much of the day-to-day work, such as the development of a physics measurement, is done in smaller groups that function very organically. When they function well, people do step up to take on the most critical tasks, especially when they see that they are particularly well positioned to do them. Everyone figures out how to interact in such a way that the job gets done. Another facet of this is ownership: everyone who is working together on a project feels personally responsible for it and will do what is right for the group, if not the entire experiment — even if it means putting aside your own ideas and efforts when someone else clearly has the better idea.

And related to that in turn is what is referred to in the column as “intellectual humility.” We are all very aggressive in making our arguments based on the facts that we have in hand. We look at the data and we draw conclusions, and we develop and promote research techniques that appear to be effective. But when presented with new information that demonstrates that the previous arguments are invalid, we happily drop what we had been pursuing and move on to the next thing. That’s how all of science works, really; all of your theories are only as good as the evidence that supports them, and are worthless in the face of contradictory evidence. Google wants people who take this kind of approach to their work.

I don’t think you have to be Google to be looking for the same qualities in your co-workers. If you are an employer who wants to have staff members who are smart, technically skilled, passionate about what they do, able to incorporate disparate pieces of information and generate new ideas, ready to take charge when they need to, feel responsible for the entire enterprise, and able to say they are wrong when they are wrong — you should be hiring particle physicists.

No cream, no sugar

Monday, January 6th, 2014

My first visit to CERN was in 1997, when I was wrapping up my thesis work. I had applied for, and then was offered, a CERN fellowship, and I was weighing whether to accept it. So I took a trip to Geneva to get a look at the place and make a decision. I stayed on the outskirts of Sergy with my friend David Saltzberg (yes, that David Saltzberg) who was himself a CERN fellow, and he and other colleagues helped set up appointments for me with various CERN physicists.

Several times each day, I would use my map to find the building with the right number on it, and arrive for my next appointment. Invariably, I would show up and be greeted with, “Oh good, you’re here. Let’s go get a coffee!”

I don’t drink coffee. At this point, I can’t remember why I never got started; I guess I just wasn’t so interested, and may also have had concerns about addictive stimulants. So I spent that week watching other people drink coffee. I learned that CERN depends on large volumes of coffee for its operation. It plays the same role as liquid helium does for the LHC, allowing the physicists to operate at high energies and accelerate the science. (I don’t drink liquid helium either, but that’s a story for another time.)

Coffee is everywhere. In Restaurant 1, there are three fancy coffee machines that can make a variety of brews. (Which ones? You’re asking the wrong person.) At breakfast time, the line for the machines stretches across the width of the cafeteria, blocking the cooler that has the orange juice, much to my consternation. Outside the serving area, there are three more machines where one can buy a coffee with a jeton (token) that can be purchased at a small vending machine. (I don’t know how much they cost.) After lunch, the lines for these machines clog the walkway to the place where you deposit your used trays.

Coffee goes beyond the restaurants. Many buildings (including out-of-the-way Building 8, where my office is) have small coffee areas that are staffed by baristas (I suppose) at peak times, when people who aren’t me want coffee. Building 40, the large headquarters for the CMS and ATLAS experiments, has a big coffee kiosk, where one can also get sandwiches and small pizzas — good when you want to avoid the crazy Restaurant 1 lunchtimes and coffee runs. People line up for coffee here during meeting breaks, which usually puts us even further behind schedule.

Being a non-drinker of coffee can lead to some social discomfort. When two CERN people want to discuss something, they often do it over coffee. When someone invites me for a chat over coffee, I gamely say yes. But when we meet up I have to explain that I don’t actually drink coffee, and then sit patiently while they go to get a cup. I do worry that the other person feels uncomfortable about me watching them drink coffee. I could get a bottle of water for myself — even carbonated water, when I feel like living on the edge — but I rarely do. My wife (who does drink coffee, but tolerates me) gave me a few jetons to carry around with me, so I can at least make the friendly gesture of buying the other person’s coffee, but usually my offer is declined, perhaps because the person knows that he or she can’t really repay the favor.

So, if you see a person in conversation in the Restaurant 1 coffee area, not drinking anything but nervously twiddling his thumbs instead, come over and say hello. I can give you a jeton if you need one.

Will the car start?

Saturday, November 9th, 2013

While my family and I are spending a year at CERN, our Subaru Outback is sitting in the garage in Lincoln, under a plastic cover and hooked up to a trickle charger. We think that we hooked it all up right before going, but it’s hard to know for sure. Will the car start again when we get home? We don’t know.

CMS is in a similar situation. The detector was operating just fine when the LHC run ended at the start of 2013, but now we aren’t using it like we did for the previous three years. It’s basically under a tarp in the garage. When proton collisions resume in 2015, the detector will have to be in perfect working order again. So will this car start after not being driven for two years?

Fortunately, we can actually take this car out for a drive. This past week, CMS performed an exercise known as the Global Run in November, or GRIN. (I know, the acronym. You are wondering, if it didn’t go well, would we call it FROWN instead? That too has an N for November.) The main goal of GRIN was to make sure that all of the components of CMS could still operate in concert. In fact, many pieces of CMS have been run during the past nine months, but independently of one another. Actually making everything run together is a huge integration task; it doesn’t just happen automatically. All of the readouts have to be properly synchronized so that the data from the entire detector makes sense. In addition, GRIN was a chance to test out some operational changes that the experiment wants to make for the 2015 run. It may sound like it is a while away, but anything new should really be tested out as soon as possible.

On Friday afternoon, I ran into some of the leaders of the detector run coordination team, and they told me that GRIN had gone very well. At the start, not every CMS subsystem was ready to join in, but by the end of the week, the entire detector was running together, for the first time since the end of collisions. Various problems were overcome along the way — including several detector experts getting trapped in a stuck elevator. But they believe that CMS is in a good position to be ready to go in 2015.

As a member of CMS, that was really encouraging news. Now, if only the run coordinators could tell me where I left the Subaru keys!

2013 Nobel Prize — Made in America?

Tuesday, October 8th, 2013

You’re looking at the title and thinking, “Now that’s not true! François Englert is Belgian, and Peter Higgs is from the UK. And CERN, where the Higgs discovery was made, is a European lab, not in the US.”

That is all true, but on behalf of the US LHC blog, let’s take a few minutes to review the role of the United States in the Higgs observation that made this prize possible. To be sure, the US was part of an international effort on this, with essential contributions from thousands of people at hundreds of institutes from all over the world, and the Nobel Prize is a validation of the great work of all of them. (Not to mention the work of Higgs, Englert and many other contributing theorists!) But at the same time, I do want to combat the notion that this was somehow a non-US discovery (as some have implied). For many more details, see this link.

US collaborators, about 2000 strong, are a major contingent within both of the biggest LHC experiments, ATLAS and CMS. I’m a member of CMS, where people from US institutions are about one third of the membership of the collaboration. This makes the US physicists the largest single national contingent on the experiment — by no means a majority, but because of our size we have a critical role to play in the construction and operation of the experiment, and the data analysis that follows. American physicists are represented throughout the management structure (including Joe Incandela, the current CMS spokesperson) and deep in the trenches.

While the detectors were painstakingly assembled at CERN, many of the parts were designed, prototyped and fabricated in the US. On CMS, for instance, there has been US involvement in every major piece of the instrument: charged particle tracking, energy measurements, muon detection, and the big solenoid magnet that gives the experiment its name. Along with the construction responsibilities come maintenance and operational responsibilities too; we expect to carry these for the lifetime of the experiment.

The data that these amazing instruments record must then be processed, stored, and analyzed. This requires powerful computers, and the expertise to operate them efficiently. The US is a strong contributor here too. On CMS, about 40% of the data processing is handled at facilities in the US. And then there is the last step in the chain, the data analysis itself that leads to the measurements that allow us to claim a discovery. This is harder to quantify, but I can’t think of a single piece of the Higgs search analysis that didn’t have some US involvement.

Again, this is not to say that the US is the only player here — just to point out that thanks to the long history that the United States has in supporting this science, the US too can share some of the glory of today’s announcement.

Another day at the office

Tuesday, October 8th, 2013

I suppose that my grandchildren might ask me, “Where were you when the Nobel Prize for the Higgs boson was announced?” I was at CERN, where the boson was discovered, thus giving the observational support required for the prize. And was I in the atrium of Building 40, where CERN Director General Rolf Heuer and hundreds of physicists had gathered to watch the broadcast of the announcement? Well no; I was in a small, stuffy conference room with about twenty other people.

We were in the midst of a meeting where we were hammering out the possible architecture of the submission system that physicists will use to submit computing jobs for analyzing the data in the next LHC run and beyond. Not at all glamorous, I know. But that’s my point: the work that is needed to make big scientific discoveries, be it the Higgs or whatever might come next (we hope!), is usually not the least bit glamorous. It’s a slog, where you have to work with a lot of other people to figure out all the difficult little details. And you really have to do this day after day to make the science work. And there are many aspects of making science work — building advanced scientific instruments, harnessing the power of computers, coming up with clever ways to look at the data (and not making mistakes while at it), and working with colleagues to build confidence in a measurement. Each one of them takes time, effort, and patience.

So in the end, today was just another day at the office — where we did the same things we’ve been doing for years to make this Nobel Prize possible, and are laying the groundwork for the next one.
