
Calm before the storm: Preparing for LHC Run2

Emily Thompson
Wednesday, September 17th, 2014

It’s been a relatively quiet summer here at CERN, but now, as the leaves begin changing color and the next data-taking period draws nearer, physicists on the LHC experiments are wrapping up their first-run analyses and turning their attention to what comes next. “Run2”, expected to start in the spring of 2015, will be the biggest achievement yet for particle physics, with the LHC reaching a higher collision energy than has ever been produced in a laboratory.

To someone who was here before the start of Run1, the vibe around CERN feels subtly different. In 2008, while the ambitious first-year physics program of ATLAS and CMS was quite broad in scope, the Higgs prospects were certainly the focus. Debates (and even some bets) about when we would find the Higgs boson – or even whether we would find it at all – cropped up all over CERN, and the buzz of excitement could be felt from meeting rooms to cafeteria lunch tables.

Countless hours were also spent speculating about what it would mean for the field if we *didn’t* find the elusive particle that had evaded discovery for so long, but it was Higgs-centric discussion nonetheless. The LHC was designed to find this missing piece of the Standard Model if it existed, so we knew we would eventually get our answer one way or another.

Slowly but surely, the Higgs boson emerged in Run1 data. (via CERN)

Now, more than two years after the Higgs discovery and armed with a more complete picture of the Standard Model, attention is turning to the new physics that may lie beyond it. The LHC is a discovery machine, and was built with the hope of finding much more than predicted Standard Model processes. Big questions are being asked with more tenacity in the wake of the Higgs discovery: Will we find supersymmetry? Will we understand the nature of dark matter? Is the lack of “naturalness” in the Standard Model a fundamental problem, or just the way things are?

The feeling of preparedness is different this time around as well. In 2008, besides the data collected in preliminary cosmic muon runs used to commission the detector, we could only rely on simulation to prepare the early analyses, which left some skepticism about how much we could trust our pre-run physics and performance expectations. Compounding that, the LHC magnet quench incident after the first week of beam, on September 19, 2008, destroyed over 30 superconducting magnets and delayed collisions until the end of 2009, so no one knew quite what to expect.

Expect the unexpected…unless it’s a cat.

Fast forward to 2014: we have an increased sense of confidence stemming from our Run1 experience, having put our experiments to the test all the way from data acquisition to event reconstruction to physics analysis to publication…done at a speed which surpassed even our own expectations. We know to what extent we can rely on the simulation, and we know how to measure the performance of our detectors.

We also have a better idea of what our current analysis limitations are, and we have been spending this LHC shutdown period working to improve them. Working meeting agendas, usually with the words “Run2 Kick-off” or “Task Force” in the title, have been filled with discussions of how we will handle data in 2015, with what precision we can measure objects in the detector, and what our early analysis priorities should be.

The Run1 dataset was also used as a dress rehearsal for future runs; for example, many searches employed novel techniques to reconstruct highly boosted final states often predicted in new physics scenarios. The aptly-named BOOST conference, held at UCL this past August, highlighted some of the state-of-the-art tools currently being developed by both theorists and experimentalists to extend the discovery reach for new fundamental particles further into the multi-TeV region.

Even prior to Run1, we knew that such new techniques would have to be validated in data in order to convince ourselves they would work, especially in the presence of extreme pileup (i.e., multiple, less interesting interactions in the proton bunches we send around the LHC ring…a side effect of increased luminosity). While the pileup conditions in the 7 and 8 TeV data were only a taste of what we’ll see in Run2 and beyond, Run1 gave us the opportunity to try out these new techniques in data.

One of the first ever boosted hadronic top candidate events recorded in the ATLAS detector, where all three decay products (denoted by red circles) can be found inside a single large jet, denoted by a green circle. (via ATLAS)

Conversations around CERN these days sound similar to those we heard before the start of Run1…what if we discover something new, or what if we don’t, and what will that mean for the field of particle physics? Except this time, the prospect of not finding anything is less exciting. The Standard Model Higgs boson was expected to be in a certain energy range accessible at the LHC, and if it was excluded it would have been a major revelation.

There are plenty of well-motivated theoretical models (such as supersymmetry) that predict new interactions to emerge around the TeV scale, but in principle there may not be anything new to discover at all until the GUT scale. This dearth of any known physics processes spanning a range of orders of magnitude in energy is often referred to as the “electroweak desert.”

Physicists taking first steps out into the electroweak desert will still need their caffeine. (via Dan Piraro)

Particle physics is entering a new era. Was the discovery of the Higgs just the beginning, with something unexpected waiting to be found in the new data? Or will we be left disappointed? Either way, the LHC and its experiments struggled through the growing pains of Run1 to produce one of the greatest discoveries of the 21st century, and if new physics is produced in the collisions of Run2, we’ll be ready to find it.


ICHEP at a distance

Ken Bloom
Friday, July 11th, 2014

I didn’t go to ICHEP this year.  In principle I could have, especially given that I have been resident at CERN for the past year, but we’re coming down to the end of our stay here and I didn’t want to squeeze in one more work trip during a week that turned out to be a pretty good opportunity for one last family vacation in Europe.  So this time I just kept track of it from my office, where I plowed through the huge volume of slides shown in the plenary sessions earlier this week.  It was a rather different experience for me from ICHEP 2012, which I attended in person in Melbourne and where we had the first look at the Higgs boson.  (I’d have to say it was also probably the pinnacle of my career as a blogger!)

Seth’s expectations turned out to be correct — there were no earth-shattering announcements at this year’s ICHEP, but still a lot to chew on.  The Standard Model of particle physics stands stronger than ever.  As Pauline wrote earlier today, the particle thought to be the Higgs boson two years ago still seems to be the Higgs boson, to the best of our abilities to characterize it.  The LHC experiments are starting to move beyond measurements of the “expected” properties — the dominant production and decay modes — into searches for unexpected, low-rate behavior.  While there are anomalous results here and there, there’s nothing that looks like more than a fluctuation.  Beyond the Higgs, all sectors of particle physics look much as predicted, and some fluctuations, such as the infamous forward-backward asymmetry of top-antitop production at the Tevatron, appear to have subsided.  Perhaps the only ambiguous result out there is that of the BICEP2 experiment which might have observed gravitational waves, or maybe not.  We’re all hoping that further data from that experiment and others will resolve the question by the end of the year.  (See the nice talk on the subject of particle physics and cosmology by Alan Guth, one of the parents of that field.)

This success of the Standard Model is both good and bad news.  It’s good that we do have a model that has stood up so well to every experimental test that we have thrown at it, in some cases to startling precision.  You want models to have predictive power.  But at the same time, we know that the model is almost surely incomplete.  Even if it can continue to work at higher energy scales than we have yet explored, at the very least we seem to be missing some particles (those that make up the dark matter we know exists from astrophysical measurements) and it also fails to explain some basic observations (the clear dominance of matter over antimatter in the universe).  We have high hopes for the next run of the LHC, which will start in Spring 2015, in which we will have higher beam energies and collision rates, and a greater chance of observing new particles (should they exist).

It was also nice to see the conference focus on the longer-term future of the field.  Since the last ICHEP, every region of the world has completed long-range strategic planning exercises, driven by recent discoveries (including that of the Higgs boson, but also of various neutrino properties) and anchored by realistic funding scenarios for the field.  There were several presentations about these plans during the conference, and a panel discussion featuring leaders of the field from around the world.  It appears that we are having a nice sorting out of which region wants to host which future facility, and when, in such a way that we can carry on our international efforts in a straightforward way.  Time will tell if we can bring all of these plans to fruition.

I’ll admit that I felt a little left out by not attending ICHEP this year.  But here’s the good news: ICHEP 2016 is in Chicago, one of the few places in the world that I can reach on a single plane flight from Lincoln.  I have marked my calendar!


P5 and the fifth dimension that Einstein missed

Ken Bloom
Tuesday, May 27th, 2014

Among the rain
and lights
I saw the figure 5
in gold
on a red
firetruck
moving
tense
unheeded
to gong clangs
siren howls
and wheels rumbling
through the dark city.

William Carlos Williams, “The Great Figure”, 1921

Ever since the Particle Physics Project Prioritization Panel (P5) report was released on Thursday, May 22, I have been thinking very hard about the number five. Five is in the name of the panel, it is embedded in the science that the report describes, and in my opinion, the panel has figured out how to manipulate a fifth dimension. Please give me a chance to explain.

Having had a chance to read the report, let me say that I personally am very impressed by it and very supportive of the conclusions drawn and the recommendations made. The charge to P5 was to develop “an updated strategic plan for the U.S. that can be executed over a ten-year timescale, in the context of a twenty-year global vision for the field.” Perhaps the key phrase here is “can be executed”: this must be a plan that is workable under funding scenarios that are more limited than we might wish. It requires making some hard decisions about priorities, and these priorities must be set by the scientific questions that we are trying to address through the techniques of particle physics.

Using input from the Snowmass workshop studies that engaged a broad swath of the particle-physics community, P5 has done a nice job of distilling the intellectual breadth of our field into a small number of “science drivers”. How many? Well, five of course:

• Use the Higgs boson as a new tool for discovery
• Pursue the physics associated with neutrino mass
• Identify the new physics of dark matter
• Understand cosmic acceleration: dark energy and inflation
• Explore the unknown: new particles, interactions, and physical principles

I would claim that four of the drivers represent imperatives that are driven by recent and mostly unexpected discoveries — exactly how science should work. (The fifth and last listed is really the eternal question of particle physics.) While the discovery of the Higgs boson two years ago was dramatic and received a tremendous amount of publicity, it was not totally unexpected. The Higgs is part of the standard model, and all indirect evidence was pointing to its existence; now we can use it to look for things that actually are unexpected. The observation of the Higgs was not the end of an era, but the start of a new one. Meanwhile, neutrino masses, dark matter and dark energy are all outside our current theories, and they demand explanation that can only come through further experimentation. We now have the technical abilities to do these experiments. These science drivers are asking exciting, fundamental questions about how the universe came to be, what it is made of and how it all interacts, and they are questions that, finally, can be addressed in our time.

But, how to explore these questions in a realistic funding environment? Is it even possible? The answer from P5 is yes, if we are clever about how we do things. I will focus here on the largest projects that the P5 report addresses, the ones that cost at least $200M to construct; the report also discusses many medium-size and small efforts, and recommends hard choices on which we should continue to pursue and which, despite having merit, simply cannot fit into realistic funding scenarios. The three biggest projects are the LHC and its high-luminosity upgrade, which should be completed about ten years from now; a long-baseline neutrino experiment that would create neutrinos at Fermilab and observe them in South Dakota; and a high-energy electron-positron collider, the International Linear Collider (ILC), which could do precision studies of the Higgs boson but is at least ten years away from realization. They are all interesting projects that each address at least two of the science drivers, but is it possible for the U.S. to take a meaningful role in all three? The answer is yes…if you understand how to use the fifth dimension.

The high-luminosity LHC emerged as “the first high-priority large-category project” in the program recommended by P5, and it is to be executed regardless of budget scenario. (See below about the use of the word “first” here.) As an LHC experimenter who writes for the U.S. LHC blog, I am of course a bit biased, but I think this is a good choice. The LHC is an accelerator that we have in hand; there is nothing else that could be built in the next ten years that can do anything like it, and we must fully exploit its potential. It can address three of the science drivers — the Higgs, dark matter, and the unknown. U.S. physicists form the largest national contingent in each of the two big multi-purpose experiments, ATLAS and CMS, and the projects depend on U.S. participation and expertise for their success. While we can never make any guarantees of discovery, I personally think that the LHC gives us as good a chance as anything, and that it will be an exciting environment to work in over the coming years.

P5 handled the long-baseline neutrino experiment by presenting some interesting challenges to the U.S. and global particle physics communities. While there is already a plan to build this project, in the form of a proposed experiment called LBNE, it was considered to be inadequate for the importance of the science. The currently proposed LBNE detector in South Dakota would be too small to collect enough data on a timescale that would give interesting and conclusive results. Even the proponents of LBNE recognized these limitations. So, P5 recommends that the entire project “should be reformulated under the auspices of a new international collaboration, as an internationally coordinated and internationally funded program, with Fermilab as the host,” in a way that will truly meet the scientific demands. It wouldn’t just be a single experiment, but a facility — the Long-Baseline Neutrino Facility (LBNF).

This is a remarkable strategic step. First, it makes the statement that if we are going to do the science, we must do it well. LBNF would be bigger than LBNE, and also much better in terms of its capabilities. It also fully integrates the U.S. program into the international community of particle physics — it would commit the U.S. to hosting a major facility that would draw worldwide collaboration and participation. The U.S. will hold up its end of the efforts to build particle-physics facilities that scientists from all over the world can take part in, just as CERN has successfully done with the LHC. Organizing this new facility will take some time, such that the peak costs of building LBNF will come later than the peak costs of upgrading the LHC.

One of the important ideas of special relativity is that the three dimensions of space and one dimension of time are placed on an equal footing. Two events in space-time that have given spatial and time separations in one frame of reference will have different spatial and time separations in a different frame. With LBNF, P5 has postulated a fifth dimension that must be considered: cost. If we were to try to upgrade the LHC and build LBNF at the same time, the cost would be more than we could afford, even with international participation. But by spacing out these two events in time, doing the HL-LHC first and LBNF second, the cost per year of these projects has become smaller; time and cost have been put on a more equal footing. Why didn’t Einstein think of that?

Thus, it is straightforward to set the international LBNF as “the highest-priority large project in its timeframe.” The title of the P5 report is “Building for Discovery”; LBNF will be the major project that the U.S. will build for discoveries in the areas of neutrino masses and exploration of the unknown.

As for the ILC, which Japan has expressed an interest in building, the scientific case for it is strong enough that “the U.S. should engage in modest and appropriate levels of ILC accelerator and detector design” no matter what the funding scenario. How much involvement there will be will depend on the funds available, and on whether the project actually goes forward. We will understand this better within the next few years. If the ILC is built, it will be a complement to the LHC and let us explore the properties of the Higgs and other particles in precise detail. With that, P5 has found a way for the U.S. to participate in all three major projects on the horizon, if we are careful about the timing of the projects and accept reasonable bounds on what we do with each.

These are the headlines from the report, but there is much more to it. The panel emphasizes the importance of maintaining a balance between the funds spent to build new facilities, to operate those facilities, and to do the actual research that leads to scientific discovery at the facilities. In recent years, there have been few building projects in the pipeline, and the fraction of the U.S. particle-physics budget devoted to new projects has languished at around 15%. P5 proposes that this be raised to the 20-25% level and maintained there, so that there will always be a push to create facilities that can address the scientific drivers — building for discovery. The research program is what funds graduate students and postdoctoral researchers, the future leaders of the field, and is where many exciting new physics ideas come from. Research has also been under financial pressure lately, and P5 proposes that it should not receive less than 40% of the budget. In addition, long-standing calls to invest in research and development that could lead to cheaper particle accelerators, more sensitive instrumentation, and revolutionary computational techniques are repeated.

This strategic vision is laid out in the context of three different funding scenarios. The most constrained scenario imagines flat budgets through 2018, and then annual increases of 2%, which is likely below the rate of inflation and thus would represent effectively shrinking budgets. The program described could be carried out, but it would be very challenging. LBNF could still be built, but it would be delayed. Various other projects would be cancelled, reduced or delayed. The research program would lose some of its capabilities. It would make it difficult for the U.S. to be a full international partner in particle physics, one that would be capable of hosting a large project and thus being a global leader in the field. Can we do better than that? Can we instead have a budget that grows at 3% per year, closer to the rate of inflation? The answer is ultimately up to our elected leaders. But I hope that we will be able to convince them, and you, that the scientific opportunities are exciting, and that the broad-based particle-physics community’s response to them is visionary while also being realistic.

Finally, I would like to offer some words on the use of logos. Since the last P5 report, in 2008, the U.S. particle physics program has relied on a logo that represented three “frontiers” of scientific exploration:

[Image: the three “frontiers” logo]

It is a fine way to classify the kinds of experiments and projects that we pursue, but I have to say that the community has chafed a bit under this scheme. These frontiers represent different experimental approaches, but a single physics question can be addressed through multiple approaches. (Only the lack of time has kept me from writing a blog post titled “The tyranny of Venn diagrams.”) Indeed, in his summary presentation about the Energy Frontier for the Snowmass workshop, Chip Brock of Michigan State University suggested a logo that represented the interconnectedness of these approaches:

[Image: Chip Brock’s interconnected-rings logo]

“Building for Discovery” brings us a new logo, one that represents the five science drivers as five interlocked crescents:

[Image: the P5 logo of five interlocked crescents]

I hope that this logo does an even better job of emphasizing the interconnectedness not just of experimental approaches to particle physics, but also of the five (!) scientific questions that will drive research in our field over the next ten to twenty years.

Of course, I’m also sufficiently old that this logo reminded me of something else entirely:

[Image: the American Revolution Bicentennial logo]

Maybe we can celebrate the P5 report as the start of an American revolution in particle physics? But I must admit that with P5, 5 science drivers and 5 dimensions, I still see the figure 5 in gold:

"I Saw the Figure 5 in Gold", Charles Demuth, 1928

“I Saw the Figure 5 in Gold”, Charles Demuth, 1928


Building for Discovery

Ken Bloom
Thursday, May 22nd, 2014

After years in the making — from the earliest plans in 2011 for an extended Snowmass workshop that started in October 2012 and culminated in August 2013, to the appointment of a HEPAP subpanel in September, to today — we have now received the report of the Particle Physics Project Prioritization Panel, or P5. As has been discussed elsewhere, this is a major report outlining the strategic plan for United States participation in the global enterprise of particle physics for the next two decades.

As I write this, Steve Ritz of UC Santa Cruz, the chair of the panel, is making his presentation on the report, which has the title “Building for Discovery: Strategic Plan for U.S. Particle Physics in the Global Context.” While at CERN, I am watching remotely (or trying to; the system must be heavily loaded, and it sounds like there are technical difficulties in the meeting room). I am restraining myself from live-blogging the presentation, as I want to take the time to read the report carefully before discussing it. (The report will be available in a couple of hours, but the executive summary is ready now.) Anything this important takes some time for proper digestion! If you are reading this, you are already a fan of particle physics, so I invite you to read it also and see what you think. I hope to discuss the matter further in a post next week.

But in any case, a huge thank-you to the hard-working members of P5 who developed this report!


Snowmass, P5, HEPAP, HEP and what it all means to you

Adam Davis
Tuesday, May 20th, 2014

I know that the majority of the posts I’ve written have focused on physics issues and results, specifically those related to LHCb. I’d like to take this opportunity, however, to focus on the development of the field of High Energy Physics (HEP) and beyond.

As some of you know, in 2013 we witnessed an effectively year-long conversation about the state of our field, called Snowmass. This process is meant to gather scientists in the field, young and old alike, and ask them what the pressing issues for the development of our field are. In essence, it’s a “hey, stop working on your analysis for a second and let’s talk about the big issues” meeting. It produced a comprehensive list of questions and a set of working papers about the discussions. If you’re interested, go look at the website. The process was separated into “frontiers,” groups that the US funding agencies put together to divide up the field as they saw fit. I’ll keep my personal views on the “frontiers” language for a different day, and instead share a much more apt interpretation of the frontiers, which came from Jonathan Asaadi of Snowmass Young and Quantum Diaries. He emphasizes that we are coming together to tackle the biggest problems as a team, rather than dividing into groups, illustrated as Voltron in his slide below.

Slide from the presentation of Jonathan Asaadi at the USLUO (now USLUA) 2013 annual meeting in Madison, Wisconsin. The point here is collaboration between frontiers to solve the biggest problems, rather than division into separate groups.

And that’s just what happened. While I willingly admit that I had zero involvement in this process aside from taking the Snowmass Young survey, I still agree with the conclusions which were reached about what the future of our field should look like. Again, I highly encourage you to go look at the outcome.

Usually, this would be the end of the story, but this year, the recommendations from Snowmass were passed to a group called P5 (Particle Physics Project Prioritization Panel). The point of this panel was to review the findings of Snowmass and come up with a larger plan about how the future of HEP will proceed. The big ideas had effectively been gathered, now the hard questions about which projects can pursue these questions effectively are being asked. This specifically focuses on what the game plan will be for HEP over the next 10-20 years, and identifies the distinct physics reach in a variety of budget situations. Their recommendation will be passed to HEPAP (High Energy Physics Advisory Panel), which reviews the findings, then passes its recommendation to the US government and funding agencies. The P5 findings will be presented to HEPAP  on May 22nd, 2014 at 10 AM, EST. I invite you to listen to the presentation live here. The preliminary executive report and white paper can be found after 10 EST on the 22nd of May on the same site, as I understand.

This is a big deal.

There are two main points here. First, 10-20 years is a long time, and any recommendation about the future of the field over such a long period is a hard one to make. P5 has gone through the hard numbers under many different budget scenarios to maximize the science reach that the US is capable of. Looking at the larger political picture, in 2013 the US also entered the Sequester, which cut spending across the board and had wide implications not only for the US but worldwide. This is a testament to the tight budget constraints that we are working in now, and will most certainly face in the future. Even undertaking a process like P5 shows that the HEP community recognizes this point, and understands that without well-defined goals and tough considerations of how to achieve them, we will endanger the future funding of any project in the US or with US involvement.

Without this process, we will endanger future funding of US HEP.

We can take this one step further with a more concrete example. Most HEP work is done through international collaboration, in experiment and theory alike. If any member of such a collaboration does not pull their weight, it puts the entire project in jeopardy. Take, for example, the US ATLAS and CMS programs, with 23% and 33% US involvement, respectively, in both analysis and detector R&D. If these projects were cut drastically over the next years, there would have to be a massive rethinking of the strategies for their upgrades, not to mention a possible lack of manpower. Not only would this delay one of the goals outlined by Snowmass, to use the Higgs as a discovery tool, but it would also call into question the role of the US in the future of HEP. This is a simple example, but it is not outside the realm of possibility.

The second point is how to make sure a situation like this does not happen.

I cannot say that communication of the importance of this process has been stellar. A quick Google search yields no mainstream news articles about the process or its impact. In my opinion, this is a travesty, and that’s the reason why I am writing this post. Symmetry Magazine also, just today, came out with an article about the process. Young members of our community who were not necessarily involved in Snowmass seem to know about Snowmass, but do not really know about P5 or HEPAP. I may be wrong, but I draw this conclusion from a number of conversations I’ve had at CERN with US postdocs and students. Nonetheless, people are quite adamant about making sure that the US continues to play a role in the future of HEP. This is true across HEP, the funding agencies and the members of Congress. (I can say this because I went on a trip with USLUO, FNAL and SLAC representatives to lobby Congress on behalf of HEP in March of this year, and this is the sentiment I came away with.) So the first step is informing the public about what we’re doing and why.

The stuff we do is really cool! We’re all organized around how to solve the biggest issues facing physics! Getting the word out about this is key.

Go talk to your neighbor!

Go talk to your local physicist!

Go talk to your congressperson!

Just talk about physics! Talk about why it excites you and talk about why it’s interesting to explore! Maybe leave out the CLs plots, though. If you didn’t know, there’s also a whole mess of things that HEP is good for besides colliding particles! See this site for a few.

The final step is understanding the process. The biggest worry I have is what happens after HEPAP reviews the P5 recommendations. We, as a community, have to be willing to endure the pains of this process. Good science will be excluded. However, there are not infinite funds, nor was a guarantee of funding ever given. Recognizing this, while focusing on the big problems at hand and thinking about how to work within the means allowed, is *the point* of the conversation. The better question is: will we emerge from the process unified or split? Will we get behind the Snowmass process and answer the questions posed to us, or fight about how to answer them? I certainly hope the answer is that we will unify, as we unified for Snowmass.

An allegorical example comes from a slide of Nima Arkani-Hamed’s presentation at Pheno2014, shown below.

One slide from Nima Arkani-Hamed’s presentation at Pheno2014

The take-home point is this: if we went through the exercise of Snowmass and cannot pull our efforts together behind the wishes of the community, are we going to survive? I would prefer to ask a different question: will we not, as a community, take the opportunity to answer the biggest questions facing physics today?

We’ll see on the 22nd and beyond.

 

*********************************************

Update: May 27, 2014

*********************************************

As posted in the comments, the full report can be found here, the presentation given by Steve Ritz, chair of P5 can be found here, and the full P5 report can be found here.  Additionally, Symmetry Magazine has a very nice piece on the report itself. As they state in the update at the bottom of the page, HEPAP voted to accept the report.


Can 2130 physicists pounding on keyboards turn out Shakespeare plays?

Ken Bloom
Tuesday, April 22nd, 2014

The CMS Collaboration, of which I am a member, has submitted 335 papers to refereed journals since 2009, including 109 such papers in 2013. Each of these papers had about 2130 authors. That means that the author list alone runs 15 printed pages. In some cases, the author list takes up more space than the actual content of the paper!

One might wonder: How do 2130 people write a scientific paper for a journal? Through a confluence of circumstances, I’ve been directly involved in the preparation of several papers over the last few months, so I have been thinking a lot about how this gets done, and thought I might use this opportunity to shed some light on the publication process. What I will not discuss here is why a paper should have 2130 authors and not more (or fewer)—this is a very interesting topic, but for now we will work from the premise that there are 2130 authors who, by signing the paper, take scientific responsibility for the correctness of its contents. How can such a big group organize itself to submit a scientific paper at all, and how can it turn out 109 papers in a year?

Certainly, with this many authors and this many papers, a set of uniform procedures is needed, and some number of people must put in substantial effort to maintain and operate those procedures. Each collaboration does things a bit differently, but all have the same goal in mind: to submit papers that are first correct (in the scientific sense of “correct”, as in “not wrong with a high level of confidence”), and that are also timely. Correct takes precedence over timely; it would be quite an embarrassment to produce a paper that was incorrect because the work was done quickly and not carefully. Fortunately, in my many years in particle physics, I can think of very few cases when a correction to a published paper had to be issued, and never have I seen a paper from an experiment I have worked on be retracted. This suggests that the publication procedures are indeed meeting their goals.

But even though being correct trumps everything, having an efficient publication process is still important. It would also be a shame to be scooped by a competitor on an interesting result because your paper was stuck inside your collaboration’s review process. So there is an important balance to be struck between being careful and being efficient.

One thing that would not be efficient would be for every one of the 2130 authors to scrutinize every publishable result in detail. If we were to try to do this, everyone would soon become consumed by reviewing data analyses, rather than working on the other necessary tasks of the experiment, from running the detector to processing the data to designing upgrades of the experiment. And it’s hard to imagine that, say, once 1000 people have examined a result carefully, another thousand would uncover a problem. That being said, everyone needs to understand that even if they decline to take part in the review of a particular paper, they are still responsible for it, in accordance with generally accepted guidelines for scientific authorship.

Instead, the review of each measurement or set of measurements destined for publication in a single paper is delegated by the collaboration to a smaller group of people. Different collaborations have different ways of forming these review committees—some create a new committee for a particular paper that dissolves when that paper is published, while others have standing panels that review multiple analyses within a certain topic area. These committees usually include several people with expertise in that particular area of particle physics or data analysis techniques, but one or two who serve as interested outsiders who might look at the work in a different way and come up with new questions about it. The reviewers tend to be more senior physicists, but some collaborations have allowed graduate students to be reviewers too. (One good way to learn how to analyze data is to carefully study how other people are doing it!)

The scientists who are performing a particular measurement with the data are typically also responsible for producing a draft of the scientific paper that will be submitted to the journal. The review committee is then responsible for making sure that the paper accurately describes the work and will be understandable to physicists who are not experts on this particular topic. There can also be a fair amount of work at this stage to shape the message of the paper; measurements produce results in the form of numerical values of physical quantities, but scientific papers have to tell stories about the values and how they are measured, and expressing the meaning of a measurement in words can be a challenge.

Once the review committee members think that a paper is of sufficient quality to be submitted to a journal, it is circulated to the entire collaboration for comment. Many collaborations insert a “style review” step at this stage, in which a physicist who has a lot of experience in the matter checks that the paper conforms to the collaboration’s style guidelines. This ensures some level of uniformity in terminology across all of the collaboration’s papers, and it is also a good chance to check that the figures and tables are working as intended.

The circulation of a paper draft to the collaboration is a formal process that has potential scaling issues, given how many people might submit comments and suggestions. On relatively small collaborations such as those at the Tevatron (my Tevatron-era colleagues will find the use of the word “small” here ironic!), it was easy enough to take the comments by email, but the LHC collaborations have a more structured system for collecting and archiving comments. Collaborators are usually given about two weeks to read the draft paper and make comments. How many people send feedback can vary greatly with each paper; hotter topics might attract more attention. Some conscientious collaborators do in fact read every paper draft (as far as I can tell). To encourage participation, some collaborations do make explicit requests to a randomly-chosen set of institutes to scrutinize the paper, while some institutes have their own traditions of paper review. Comments on all aspects of the paper are typically welcome, from questions about the physics or the veracity of the analysis techniques, to suggestions on the organization of the paper and descriptions of data analysis, to matters like the placement of commas.

In any case, given the number of people who read the paper, the length of the comments can often exceed the length of the paper itself. The scientists who wrote the paper draft then have to address all of the comments. Some comments lead to changes in the paper to explain things better, or to additional cross-checks of the analysis to address a point that was raised. Many textual suggestions are implemented, while others are turned down with an explanation of why they are unnecessary or would be harmful to the paper. The analysis review committee then verifies that all significant comments have been properly considered, and checks that the resulting revised paper draft is in good shape for submission.

Different collaborations have different final steps before the paper is actually submitted to a journal. Some have certain leaders of the collaboration, such as the spokespersons and/or physics coordinators, read the draft and make a final set of recommendations that are to be implemented before submission. Others have “publication committees” that organize public final readings of a paper that can lead to changes. At this stage the authors of the original draft very much hope that things go smoothly and that paper submission will be imminent.

And this whole process comes before the scientific tradition of independent, blind peer review! Journals have their own procedures for appointing referees who read the paper and give the journal editors advice on whether a paper should be published, and what changes or checks they might require before recommending publication. The interaction with the journal and its referees can also take quite some time, but almost always it ends with a positive result. The paper has gone through so many levels of scrutiny already that the output is really a high-quality scientific product that describes reproducible results, and that will ultimately stand the test of time.

A paper that describes a measurement in particle physics is the last step of a long journey: from the conception of the experiment, through the design and construction of the apparatus, its operation over the course of years to collect the data sample, and the processing of the data, to the analysis that leads to numerical values of physical quantities and their associated uncertainties. The actual writing of the papers, and the process of validating them and bringing 2130 physicists to agree that the paper tells the right story about the whole journey, is an important step in the creation of scientific knowledge.


The Realineituhedron

Kyle Cranmer
Tuesday, April 1st, 2014

Inspired by the deep insights revealed in the recent work around the Amplituhedron, a new and deeper mathematical principle has revealed itself. While the amplituhedron caused quite a buzz even outside the world of theoretical particle physics, thus far it is restricted to N=4 supersymmetry. In contrast, this new object is able to represent all known predictions for physical observables. The new object, outlined in a recent paper, is being called “The Realineituhedron”.

The key observation is that at the end of the day, everything we measure can be represented as a real number. The paper outlines a particular way of projecting these observations onto the realineituhedron, in which the “volume” Ω of the object represents the value of the observation.

In fact, any physically observable quantity must be a real number, a feature foreshadowed by the Hermitian postulate of quantum mechanics.

The paper is full of beautiful hand-drawn figures, such as the ones below:

Is it possible that there is some geometrical object able to capture the Hermitian nature of these operators – indeed, able to represent all fundamental observables?

This masterful work will take some time to digest — it was only released today! One of the most intriguing ideas is that of “The Master Realineituhedron”, denoted ℝ², in which all realineituhedrons can be embedded.

It would be interesting to see whether this larger space has any interesting role to play in understanding the m = 1 geometry relevant to physics.

 

[This post was originally posted here]



A quick ski through history

Ken Bloom
Sunday, March 23rd, 2014

This past week about 175 lucky particle physicists gathered in La Thuile, a mountain town in the Italian Alps, for one of the annual Rencontres de Moriond conferences. This is one of the highlights of the particle-physics calendar, perhaps the most important gathering of particle physicists between the summer-time Lepton-Photon and ICHEP conferences for the presentation of new results. The major experimental collaborations of the world have been wrapping up a flurry of activity in preparation for the high-profile meetings taking place over the next few weeks. The atmosphere on the LHC experiments has been a bit less intense this year than last year, as the flashiest results from the 2010-12 data sample have already been released, but there was still a push to complete as many measurements as possible for presentation at this conference in particular.

I’ve only been to a Moriond conference once, but it was quite an experience. The conference is held at a ski resort to encourage camaraderie and scientific exchanges outside the conference room, and that leads to an action-packed week. Each morning of the week opens with about three hours of scientific presentations. The mid-morning finish allows for an almost-full day of skiing for those who choose to go (and as you might imagine, many do). This is a great opportunity to spend leisure time with colleagues, meet new people and discuss what had been learned that morning. After the lifts have closed, everyone returns to the hotel for another three hours of presentations, followed by a group dinner to continue the conversation. Everyone who has the chance to go realizes that they are very lucky to be there, but at the same time it is a rather exhausting experience! Or, as Henry Frisch, my undergraduate mentor and a regular Moriond attendee, once told me, “There are three things going on at Moriond — the physics, the skiing, and the food — and you can only do two out of the three.” (I skipped lunch on most days.)

As friends were getting ready to head south from CERN through the Mont Blanc tunnel to Italy (and as I was getting ready for my first visit to the United States in more than seven months, for the annual external review of the US LHC operations programs), I realized that it has in fact been ten years since the Moriond conference I went to. Thankfully, the conference organizers have maintained the conference website from 2004, allowing me to relive my presentation from that time. It is a relief to observe that our understanding of particle physics has advanced quite a bit since then! At that Moriond, the Tevatron was just starting to kick into gear for its “Run 2,” and during the previous year we had re-established the signal for the top quark that had first been observed in the mid-1990s. We were just starting to explore the properties of the top quark, but we were hampered by the size of the data sample at that point. It is amusing to look back and see that we were trying to measure the mass of the top quark with a mere six dilepton decay events! Over the coming years, the Tevatron would produce hundreds more such events, and the CDF and D0 experiments would complete the first thorough explorations of the top quark, demonstrating that its properties are totally in line with the predictions of the standard model. And since then, the LHC has done the Tevatron one better, thanks to both an increase in the top-quark production rate at the higher LHC energy and the larger LHC collision rate. The CMS top-quark sample now boasts about 70,000 dilepton candidate events, and the CMS measurement of the top-quark mass is now the best in the world.

Top-quark physics is one of the topics I’m most familiar with, so it is easy for me to mark progress there, but of course it has been a remarkable decade of advances for particle physics, with the discovery of the Higgs boson, a more thorough understanding of neutrino masses and mixing, and constraints on the properties of dark matter. Next year, the LHC will resume operations in its own “Run 2”, with an even higher collision energy and higher collision rates than we had in 2012. It is a change almost as great as the one we experienced in moving from the Tevatron to the first run of the LHC. I cannot wait to see how the LHC will advance our knowledge of particle physics, possibly through the discovery of new particles that will help explain the puzzles presented by the Higgs boson. You can be sure that there will be a lot of excited chatter on the chair lifts and around the dinner table at the 2016 Moriond conferences!


Dear Google: Hire us!

Ken Bloom
Monday, March 3rd, 2014

In case you haven’t figured it out already from reading the US LHC blog or any of the others at Quantum Diaries, people who do research in particle physics feel passionate about their work. There is so much to be passionate about! There are challenging intellectual issues, tricky technical problems, and cutting-edge instrumentation to work with — all in pursuit of understanding the nature of the universe at its most fundamental level. Your work can lead to global attention and can contribute to Nobel Prizes. It’s a lot of effort put in over long days and nights, but there is also a lot of satisfaction to be gained from our accomplishments.

That being said, a fundamental truth about our field is that not everyone doing particle-physics research will be doing that for their entire career. There are fewer permanent jobs in the field than there are people who are qualified to hold them. It is certainly easy to do the math about university jobs in particular — each professor may supervise a large number of PhD students in his or her career, but only one could possibly inherit that job position in the end. Most of our researchers will end up working in other fields, quite likely in the for-profit sector, and as a field we do need to make sure that they are well-prepared for jobs in that part of the world.

I’ve always believed that we do a good job of this, but my belief was reinforced by a recent column by Tom Friedman in The New York Times. It was based on an interview with the Google staff member who oversees hiring for the company. The essay describes the attributes that Google looks for in new employees, and I couldn’t help but think that people who work on the large experimental particle-physics projects such as those at the LHC have all of those attributes. Google is not just looking for technical skills — it goes without saying that they are, and that particle physicists have those skills and great experience with digesting large amounts of computerized data. Google is also looking for social and personality traits that are likewise important for success in particle physics.

(Side note: I don’t support all of what Friedman writes in his essay; he is somewhat dismissive of the utility of a college education, and as a university professor I think that we are doing better than he suggests. But I will focus on some of his other points here. I also recognize that it is perhaps too easy for me to write about careers outside the field when I personally hold a permanent job in particle physics, but believe me that it just as easily could have wound up differently for me.)

For example, just reading from the Friedman column, one thing Google looks for is what is referred to as “emergent leadership”. This is not leadership in the form of holding a position with a particular title, but seeing when a group needs you to step forward to lead on something when the time is right, and also knowing when to step back and let someone else lead. While the big particle-physics collaborations appear to be massive organizations, much of the day-to-day work, such as the development of a physics measurement, is done in smaller groups that function very organically. When they function well, people do step up to take on the most critical tasks, especially when they see that they are particularly positioned to do them. Everyone figures out how to interact in such a way that the job gets done. Another facet of this is ownership: everyone who is working together on a project feels personally responsible for it and will do what is right for the group, if not the entire experiment — even if it means putting aside your own ideas and efforts when someone else clearly has the better idea.

And related to that in turn is what is referred to in the column as “intellectual humility.” We are all very aggressive in making our arguments based on the facts that we have in hand. We look at the data and we draw conclusions, and we develop and promote research techniques that appear to be effective. But when presented with new information that demonstrates that the previous arguments are invalid, we happily drop what we had been pursuing and move on to the next thing. That’s how all of science works, really; all of your theories are only as good as the evidence that supports them, and are worthless in the face of contradictory evidence. Google wants people who take this kind of approach to their work.

I don’t think you have to be Google to be looking for the same qualities in your co-workers. If you are an employer who wants to have staff members who are smart, technically skilled, passionate about what they do, able to incorporate disparate pieces of information and generate new ideas, ready to take charge when they need to, feel responsible for the entire enterprise, and able to say they are wrong when they are wrong — you should be hiring particle physicists.


B Decays Get More Interesting

Adam Davis
Friday, February 28th, 2014

While flavor physics often offers a multitude of witty jokes (read: bad puns), I think I’ll skip one just this time and let the analysis speak for itself. Just recently, at the Lake Louise Winter Institute, a new result was released for the analysis looking for \( b\to s\gamma\) transitions. This is a flavor-changing neutral current, which cannot occur at tree level in the standard model; the lowest-order diagram by which this decay can proceed is therefore the one-loop penguin shown below.

One-loop penguin diagram representing the transition \(b \to s \gamma\).

From quantum mechanics, photons can have either left-handed or right-handed circular polarization. In the standard model, the photon in the decay \(b\to s\gamma\) is primarily left-handed, due to spin and angular momentum conservation. However, models beyond the standard model, including some minimal supersymmetric models (MSSM), predict a larger-than-standard-model right-handed component to the photon polarization. So even though the decay rates observed for \(b\to s\gamma\) agree with those predicted by the standard model, the photon polarization itself is sensitive to new physics scenarios.

As it turns out, the decays \(B^\pm \to K^\pm \pi^\mp \pi^\pm \gamma \) are well suited to exploring the photon polarization, after playing a few tricks. To understand why, the easiest way is to consider a picture.

Picture defining the angle \(\theta\) in the analysis of \(B^\pm\to K^\pm \pi^\mp \pi^\pm \gamma\), from the Lake Louise conference talk.

We consider the rest frame of a possible resonance which decays into \(K^\pm \pi^\mp \pi^\pm\). In that frame it is possible to form the triple product \(p_\gamma\cdot(p_{\pi,slow}\times p_{\pi,fast})\), which effectively defines the angle \(\theta\) shown in the picture above.
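For concreteness, here is a minimal sketch (in Python, with invented momenta; the function and variable names are mine, and this is not LHCb code) of how one could compute \(\cos\theta\) from this triple product, assuming the three-momenta have already been boosted into the \(K\pi\pi\) rest frame:

```python
import numpy as np

def cos_theta(p_gamma, p_pi_slow, p_pi_fast):
    """Cosine of the angle between the photon direction and the normal to the
    pi-pi plane, built from the parity-odd triple product
    p_gamma . (p_pi_slow x p_pi_fast).  All inputs are 3-vectors in the
    K pi pi rest frame."""
    normal = np.cross(p_pi_slow, p_pi_fast)      # normal to the pi-pi plane
    triple = np.dot(p_gamma, normal)             # parity-odd triple product
    return triple / (np.linalg.norm(p_gamma) * np.linalg.norm(normal))

# Invented momenta (GeV), purely for illustration
p_gamma   = np.array([ 0.1,  0.2,  1.5])
p_pi_slow = np.array([ 0.3, -0.1, -0.4])
p_pi_fast = np.array([-0.5,  0.4, -0.9])
print(cos_theta(p_gamma, p_pi_slow, p_pi_fast))
```

The sign of the triple product (equivalently, of \(\cos\theta\)) tells us whether the photon is above or below the plane defined by the two pions, which is what the up-down asymmetry discussed below is built from.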

Now for the trick: Photon polarization is odd under parity transformation, and so is the triple product defined above. Defining the decay rate as a function of this angle, we find:

\(\frac{d\Gamma}{d\cos\theta}\propto \sum_{i=0,2,4}a_i \cos^i\theta + \lambda_\gamma\sum_{j=1,3} a_j \cos^j\theta\)

This is a fourth-order expansion in \(\cos\theta\) (equivalently, it can be written in terms of Legendre polynomials up to fourth order). The odd terms are the ones that carry the photon polarization effects, and \(\lambda_\gamma\) is the photon polarization itself. Therefore, by looking at the decay rate as a function of this angle, we can directly access the photon polarization. Another way to access the same information is to take the asymmetry between the decay rate for events where the photon is above the \(K\pi\pi\) plane (\(\cos\theta>0\)) and those where it is below (\(\cos\theta<0\)). This asymmetry is also proportional to the photon polarization and allows for a direct statistical calculation. We will call this the up-down asymmetry, or \(A_{ud}\). For more information, a useful theory paper is found here.
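As a quick sketch of why this asymmetry isolates the polarization (a back-of-the-envelope integration of the expansion above; the normalization convention here is mine and may differ from the one used in the paper): integrating the rate separately over \(\cos\theta>0\) and \(\cos\theta<0\), the even powers of \(\cos\theta\) contribute equally to both halves and cancel in the difference, while the odd powers survive only in the difference, giving

\(A_{ud} \equiv \frac{\Gamma(\cos\theta>0)-\Gamma(\cos\theta<0)}{\Gamma(\cos\theta>0)+\Gamma(\cos\theta<0)} = \lambda_\gamma\,\frac{\tfrac{1}{2}a_1+\tfrac{1}{4}a_3}{a_0+\tfrac{1}{3}a_2+\tfrac{1}{5}a_4}\)

so a measurement of \(A_{ud}\) is directly proportional to \(\lambda_\gamma\), up to the ratio of the \(a_j\) coefficients.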

Enter LHCb. With the 3 fb\(^{-1}\) of data collected in 2011 and 2012, containing ~14,000 signal events, the up-down asymmetry was measured.

Up-down asymmetry for the analysis of \(b\to s\gamma\), from the Lake Louise conference talk.

In bins of invariant mass of the \(K \pi \pi\) system, we see that the asymmetry is clearly non-zero and varies across the mass range. As seen in the note posted to the arXiv, the shapes of the fitted Legendre moments also differ between mass bins. This corresponds to a 5.2\(\sigma\) observation of photon polarization in this channel. What this means for new physics models is not interpreted, though I’m sure that the arXiv will be full of explanations given about a week.
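To get a rough feel for the statistics involved, here is a toy calculation (not the LHCb procedure, which fits the full angular distribution; the per-bin event counts below are invented) of a counting-based up-down asymmetry in each mass bin and a naive combined significance against the hypothesis that \(A_{ud}=0\) everywhere:

```python
import numpy as np
from scipy import stats

def up_down_asymmetry(n_up, n_down):
    """Counting estimate of A_ud and its statistical (binomial) uncertainty."""
    n = n_up + n_down
    a = (n_up - n_down) / n
    sigma = np.sqrt((1.0 - a**2) / n)   # binomial error propagated to the asymmetry
    return a, sigma

# Invented (n_up, n_down) signal yields per K pi pi mass bin -- not the LHCb numbers
bins = [(1900, 1700), (1750, 1850), (2100, 1950), (1800, 1800)]

chi2 = 0.0
for n_up, n_down in bins:
    a, sigma = up_down_asymmetry(n_up, n_down)
    chi2 += (a / sigma) ** 2            # test A_ud = 0 in every bin

# Convert the chi2 (one degree of freedom per bin) into a Gaussian significance
p_value = stats.chi2.sf(chi2, df=len(bins))
significance = stats.norm.isf(p_value)
print(f"combined significance ~ {significance:.1f} sigma")
```

Because the asymmetry can take a different sign in different mass bins (as it does in the real measurement), combining the bins this way is more sensible than simply lumping all the events together.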
