Theory and experiment come together — bazinga!

Ken Bloom
Thursday, February 5th, 2015

Regular readers of Quantum Diaries will know that in the world of particle physics, there is a clear divide between the theorists and the experimentalists. While we are all interested in the same big questions — what is the fundamental nature of our world, what is everything made of and how does it interact, how did the universe come to be and how might it end — we have very different approaches and tools. The theorists develop new models of elementary particle interactions, and apply formidable mathematical machinery to develop predictions that experimenters can test. The experimenters develop novel instruments, deploy them on grand scales, and organize large teams of researchers to collect data from particle accelerators and the skies, and then turn those data into measurements that test the theorists’ models. Our work is intertwined, but ultimately lives in different spheres. I admire what theorists do, but I also know that I am much happier being an experimentalist!

But sometimes scientists from the two sides of particle physics come together, and the results can be intriguing. For instance, I recently came across a new paper by two up-and-coming physicists at Caltech. One, S. Cooper, has been a noted prodigy in theoretical pursuits such as string theory. The other, L. Hofstadter, is an experimental particle physicist who has been developing a detector that uses superfluid liquid helium as an active element. Superfluids have many remarkable properties, such as friction-free flow, that can make them very challenging to work with in particle detectors.

Hofstadter’s experience in working with a superfluid in the lab gave him new ideas about how it could be used as a physical model for space-time. There have already been a number of papers that posit a theory of the vacuum as having properties similar to those of a superfluid. But the new paper by Cooper and Hofstadter takes this theory in a different direction, positing that the universe actually lives on the surface of such a superfluid, and that the negative energy density that we observe in the universe could be explained by the surface tension. The authors have difficulty generating any other testable hypotheses from this new theory, but it is inspiring to see how scientists from the two sides of physics can come together to generate promising new ideas.

If you want to learn more about this paper, watch “The Big Bang Theory” tonight, February 5, 2015, on CBS. And Leonard and Sheldon, if you are reading this post — don’t look at the comments. It will only be trouble.

In case you missed the episode, you can watch it here.

Like what you see here? Read more Quantum Diaries on our homepage, subscribe to our RSS feed, follow us on Twitter, or befriend us on Facebook!


A particle detector in your pocket

Kyle Cranmer
Wednesday, February 4th, 2015

Do you love science and technology and sometimes wish you could contribute to a major discovery? I’ve got good news: “there’s an app for that.” With the Crayfis app, you can join a world-wide network of smartphones designed to detect ultra-high energy cosmic rays.

Cosmic rays were discovered by Victor Hess in 1912, for which he received the Nobel Prize in Physics in 1936. They are constantly raining down on us from space: typically they are atomic nuclei that hit the upper atmosphere, leading to huge showers of particles, some of which make it to the Earth’s surface.

Just last year a team of scientists published a result based on data from the Fermi gamma-ray space telescope indicating that lower-energy cosmic rays are associated with supernovae. However, the origin of the most energetic ones remains a mystery.

The highest-energy cosmic rays are amazing: they have about as much kinetic energy as a 60 mph (~100 km/h) baseball packed into a single atomic nucleus! This is much higher energy than what is probed by the LHC, but these kinds of ultra-high energy cosmic rays are very rare. To get a feel for the numbers, the Pierre Auger Observatory, which is about the size of Rhode Island or Luxembourg, observes one of these ultra-high energy cosmic rays roughly every four weeks. What could possibly be responsible for accelerating particles to such high energies?
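
To put a number on that comparison, here is a quick back-of-the-envelope check. The baseball mass and the joule-to-electronvolt conversion are standard values, not taken from the post:

```python
# Back-of-the-envelope check of the "60 mph baseball" comparison.
mass_kg = 0.145          # a regulation baseball is about 145 g
speed_m_per_s = 26.8     # 60 mph in m/s

kinetic_energy_joules = 0.5 * mass_kg * speed_m_per_s**2
electronvolts = kinetic_energy_joules / 1.602e-19

print(f"{kinetic_energy_joules:.1f} J = {electronvolts:.1e} eV")
# ~52 J, i.e. about 3e20 eV -- the scale of the most energetic cosmic rays
# ever observed, all carried by a single atomic nucleus.
```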

Untangling the mystery of these ultra-high energy cosmic rays will require observing many more, which means either a very long-running experiment or a very large area. Current arrays with large, highly efficient devices like Auger cannot grow dramatically larger without becoming much more expensive. This motivates some out-of-the-box thinking.

Smartphones are perfect candidates for a global cosmic ray detector. Phones these days are high-tech gadgets. The camera sensor is a lot like the pixel detectors of ATLAS and CMS, so they are capable of detecting particles from cosmic ray showers (check out the video for a quick demo). In addition, most phones have GPS to tell them where they are, wifi connections to the internet, and significant processing power. Perhaps most importantly, there are billions of smartphones already in use.

Late last year a small team led by Daniel Whiteson and Michael Mulhearn put out a paper making the case for such a world-wide network of smartphones. The paper is backed up by lab tests of the smartphone cameras and simulations of ultra-high energy cosmic ray showers. The results indicate that if we can assemble roughly a thousand square-kilometer-sized clusters, each containing a thousand phones, the exposure would be roughly equivalent to that of the Pierre Auger Observatory. The paper quickly garnered attention, as indicated by the “altmetric” summary below.

After the initial press release, more than 50,000 people signed up to the Crayfis project! That’s a great start. The Crayfis apps for iOS and Android are currently in beta testing and should be ready soon. I’ve joined this small project by helping develop the iOS app and the website, which are both a lot of fun. All you have to do is plug your phone in and set it camera-down, probably at night while you are sleeping. If your phone thinks it has been hit by a cosmic ray, it will upload the data to the Crayfis servers. Later, we will look for groups of phones that were hit simultaneously, which indicates that it’s not just noise but a real cosmic ray shower.
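
To give a flavor of what that coincidence search could look like, here is a minimal sketch in Python. The data format, time window, and grouping scheme are all invented for illustration; the real Crayfis pipeline is certainly more sophisticated.

```python
from collections import defaultdict

# Hypothetical hit records: (timestamp in seconds, latitude, longitude).
# These values are made up just to exercise the function.
hits = [
    (1000.0012, 46.23, 6.01),
    (1000.0015, 46.24, 6.02),
    (1000.0013, 46.23, 6.03),
    (2500.7000, 40.71, -74.00),   # a lone hit far away in time and space
]

def candidate_showers(hits, time_window=0.01, grid=0.1):
    """Group hits into crude (time bin, lat bin, lon bin) buckets and keep
    buckets with more than one phone -- a single isolated hit is most
    likely noise, while several phones firing together suggest a shower."""
    buckets = defaultdict(list)
    for t, lat, lon in hits:
        key = (round(t / time_window), round(lat / grid), round(lon / grid))
        buckets[key].append((t, lat, lon))
    return [group for group in buckets.values() if len(group) > 1]

print(candidate_showers(hits))   # only the three clustered hits survive
```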

The image below shows a screenshot of the live monitor of the Crayfis network so far — check it out, it’s fun to play with. As you can see, Crayfis is already a world-wide network and may soon be able to claim the title of world’s largest particle detector.

Crayfis: A global network of smartphones


 

What’s stopping you? Turn your phone into a cosmic ray detector, join the search, and get the app!

 

————————————————————————————————-


Help the Crayfis project grow, like us on facebook and

Kyle Cranmer is a Professor of physics and data science at New York University. His blog is at theoryandpractice.org.


Looking Forward to 2015: Analysis Techniques

Adam Davis
Tuesday, January 27th, 2015

With 2015 a few weeks old, it seems like a fine time to review what happened in 2014 and to look forward to the new year and the restart of data taking. Along with many other interesting physics results, LHCb saw its 200th publication, a test of lepton universality. With protons about to enter the LHC, and the ALICE and LHCb detectors recording muon data from transfer line tests between the SPS and LHC (see also here), the start of data-taking is almost upon us. For some implications, see Ken Bloom’s post here. Will we find supersymmetry? Split Higgs? Nothing at all? I’m not going to speculate on that, but I would like to review two recent results from LHCb and the analysis techniques which enabled them.

The first result I want to discuss is the \(Z(4430)^{-}\). The first evidence for this state came from the Belle Collaboration in 2007, with subsequent studies in 2009 and in 2013. BaBar also searched for the state, and while they did not see it, they did not rule it out.

The LHCb collaboration searched for this state using the specific decay mode \(B^0\to \psi’ K^{+} \pi^{-} \), with the \(\psi’\) decaying to two muons. For more reading, see the nice writeup from earlier in 2014. As in the Belle analyses, which reconstructed the \(\psi’\) from muons or electrons, the trick here is to look for bumps in the \(\psi’ \pi^{-}\) mass distribution. If a peak appears which is not described by the conventional two- and three-quark states (the mesons and baryons we know and love), it must come from a state involving a \(c \overline{c}d\overline{u}\) quark combination. The search is performed in two ways: a model-dependent search, which looks at the \(K\pi\) and \(\psi’\pi\) invariant mass and decay angle distributions, and a “model-independent” search, which looks for structure in the \(K\pi\) system induced by a resonance in the \(\psi’\pi\) system and does not invoke any exotic resonances.

At the end of the day, it is found in both cases that the data are not described without including a resonance for the \(Z(4430)^-\).

Now, it appears that we have a resonance on our hands, but how can we be sure? In the context of the aforementioned model-dependent analysis, the amplitude for the \(Z(4430)^{-}\) is modeled as a Breit-Wigner amplitude, which is a complex number. If this amplitude is plotted in the complex plane as a function of the invariant mass of the resonance, a circular shape is traced out. This is characteristic of a resonance. Therefore, by fitting the real and imaginary parts of the amplitude in six bins of \(\psi’\pi\) invariant mass, the shape can be directly compared to that of an expected resonance. That’s exactly what’s done in the plot below:

The argand plane for the Z(4430)- search. Units are arbitrary.


What is seen is that the data (black points) roughly follow the circular shape given by the Breit-Wigner resonance (red). The outliers are pulled away due to detector effects. The shape quite clearly follows the circular characteristic of a resonance. This kind of plot is called an Argand diagram.
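
To see why a Breit-Wigner amplitude traces out a circle, you can compute one yourself. The short sketch below uses an illustrative mass and width (not the measured \(Z(4430)^-\) parameters) and simply plots the real versus imaginary part of the amplitude as the mass is scanned across the resonance.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy Breit-Wigner amplitude; mass and width are illustrative numbers only.
m0, gamma = 4.43, 0.18                       # resonance mass and width in GeV
m = np.linspace(m0 - 2 * gamma, m0 + 2 * gamma, 200)

# A(m) = 1 / (m0^2 - m^2 - i*m0*Gamma): as m sweeps through the resonance,
# the real part of the denominator changes sign while the imaginary part
# stays fixed, so A traces out (an arc of) a circle in the complex plane.
amplitude = 1.0 / (m0**2 - m**2 - 1j * m0 * gamma)

plt.plot(amplitude.real, amplitude.imag, ".")
plt.xlabel("Re A")
plt.ylabel("Im A")
plt.title("Toy Breit-Wigner amplitude in the Argand plane")
plt.show()
```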

 

Another analysis technique to identify resonances was used to find the two newest particles by LHCb:

Depiction of the two Xi_b resonances found by the LHCb Collaboration. Credit to Italic Pig (http://italicpig.com/blog/)


Or perhaps seen as

 

Xi_b resonances, depicted by Lison Bernet.


Any way that you draw them, the two new particles, the \(\Xi_b’^-\) and \(\Xi_b^{*-}\), were seen by the LHCb collaboration a few months ago. Notably, the paper was released almost 40 years to the day after the discovery of the \(J/\psi\) was announced, which sparked the November Revolution and the understanding that mesons and baryons are composed of quarks. The \(\Xi_b’^-\) and \(\Xi_b^{*-}\) baryons are yet another example of the quark model at work. The two particles are shown in \(\delta m \equiv m_{candidate}(\Xi_b^0\pi_s^-)-m_{candidate}(\Xi_b^0)-m(\pi)\) space below.


\(\Xi_b’^-\) and \(\Xi_b^{*-}\) mass peaks shown in \(\delta(m_{candidate})\) space.

Here, the search is performed by reconstructing \(\Xi_b^0 \pi^-_s\) decays, where the \(\Xi_b^0\) decays to \(\Xi_c^+\pi^-\), and \(\Xi_c^+\to p K^- \pi^+\). The label \(\pi_s\) is only used to distinguish that pion from the other pions. The peaks are clearly visible. Now, we know that there are two resonances, but how do we determine whether or not the particles are the \(\Xi_b’^-\) and \(\Xi_b^{*-}\)? The answer is to fit what are called the helicity distributions of the two particles.

To understand the concept, let’s consider a toy example. First, let’s say that particle A decays to B and C, as \(A\to B C\). Now, let’s let particle C also decay, to particles D and F, as \(C\to D F\). In the frame where A decays at rest, the decay looks something like the following picture.


Simple Model of \(A\to BC\), \(C\to DF\)

There should be no preferential direction for B and C to decay if A is at rest, and they will decay back to back by conservation of momentum. Likewise, the same would be true if we jump to the frame where C is at rest; D and F would have no preferential decay direction. Therefore, we can play a trick. Let’s take the picture above and, exactly at the point where C decays, jump to its rest frame. We can then measure the directions of the outgoing particles, and we can define a helicity angle \(\theta_h\) as the angle between C’s flight direction in A’s rest frame and D’s flight direction in C’s rest frame. I’ve shown this in the picture below.


Helicity Angle Definition for a simple model
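
Here is a minimal numpy sketch of how one might compute that angle from four-vectors. The function names, conventions, and example four-vectors are mine (not LHCb’s); the inputs are (E, px, py, pz) vectors given in the lab frame.

```python
import numpy as np

def boost_to_rest_frame(p, frame):
    """Boost four-vector p = (E, px, py, pz) into the rest frame of `frame`
    (both four-vectors given in the same starting frame)."""
    beta = -frame[1:] / frame[0]
    b2 = beta @ beta
    if b2 == 0.0:
        return p.copy()
    gamma = 1.0 / np.sqrt(1.0 - b2)
    bp = beta @ p[1:]
    e = gamma * (p[0] + bp)
    mom = p[1:] + ((gamma - 1.0) / b2) * bp * beta + gamma * beta * p[0]
    return np.concatenate(([e], mom))

def cos_helicity_angle(p_a, p_c, p_d):
    """cos(theta_h) for A -> B C, C -> D F:
    1) express C and D in A's rest frame,
    2) boost D into C's rest frame,
    3) take the angle between C's flight direction (in A's frame)
       and D's momentum (in C's frame)."""
    c_in_a = boost_to_rest_frame(p_c, p_a)
    d_in_a = boost_to_rest_frame(p_d, p_a)
    d_in_c = boost_to_rest_frame(d_in_a, c_in_a)
    n_c = c_in_a[1:] / np.linalg.norm(c_in_a[1:])
    n_d = d_in_c[1:] / np.linalg.norm(d_in_c[1:])
    return float(n_c @ n_d)

# Made-up four-vectors just to exercise the functions.
p_a = np.array([6.0, 0.0, 0.0, 0.0])   # A at rest, so A's rest frame is the lab
p_c = np.array([3.6, 0.0, 0.0, 2.0])   # C flying along +z
p_d = np.array([2.0, 0.5, 0.0, 1.5])   # one of C's daughters
print(cos_helicity_angle(p_a, p_c, p_d))
```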

If there is no preferential direction of the decay, we would expect a flat distribution of \(\theta_h\). The important caveat here is that I’m not including anything about angular momentum, spin or otherwise, in this argument. We’ll come back to that later. Now, we can identify A as the \(\Xi_b’\) or \(\Xi_b^*\) candidate, C as the \(\Xi_b^0\), and D as the \(\Xi_c^+\) candidates used in the analysis. The actual data are shown below.


Helicity angle distributions for the \(\Xi_b’\) and \(\Xi_b^{*}\) candidates (upper and lower, respectively).

While it appears that the lower-mass distribution may have variations, it is statistically consistent with being flat. The extra power of such an analysis is that if we now consider the angular momentum of the particles themselves, there are implied selection rules which will alter the distributions above, and which allow particle spin hypotheses to be excluded or validated simply from the shape of the distribution. This is the rationale for the extra fit in the plot above. As it turns out, both distributions being flat allows for the identification of the \(\Xi_b’^-\) and the \(\Xi_b^{*-}\), but does not allow other spin assignments to be conclusively ruled out.

With the restart of data taking at the LHC almost upon us (go look on Twitter for #restartLHC), if you see a claim for a new resonance, keep an eye out for Argand Diagrams or Helicity Distributions.


2015: The LHC returns

Ken Bloom
Saturday, January 10th, 2015

I’m really not one for New Year’s resolutions, but one that I ought to make is to do more writing for the US LHC blog.  Fortunately, this is the right year to be making that resolution, as we will have quite a lot to say in 2015 — the year that the Large Hadron Collider returns!  After two years of maintenance and improvements, everyone is raring to go for the restart of the machine this coming March.  There is still a lot to do to get ready.  But once we get going, we will be entering a dramatic period for particle physics — one that could make the discovery of the Higgs seem humdrum.

The most important physics consideration for the new run is the increase of the proton collision energy from 8 TeV to 13 TeV.  Remember that the original design energy of the LHC is 14 TeV — 8 TeV was just an opening step.  As we near the 14 TeV point, we will be able to do the physics that the LHC was meant to do all along.  And it is important to remember that we have no feasible plan to build an accelerator that can reach a higher energy on any near time horizon.  While we will continue to learn more as we record more and more data, through pursuits like precision measurements of the properties of the Higgs boson, it is increases in energy that open the door to the discovery of heavy particles, and there is no major energy increase coming any time soon.  If there is any major breakthrough to be made in the next decade, it will probably come within the first few years of it, as we get our first look at 13 TeV proton collisions.

How much is our reach for new physics extended with the increase in energy?  One interesting way to look at it is through a tool called Collider Reach that was devised by theorists Gavin Salam and Andreas Weiler.  (My apologies to them if I make any errors in my description of their work.)  This tool makes a rough estimate of the mass scale of new physics that we could have access to at a new LHC energy given previous studies at an old LHC energy, based on our understanding of how the momentum distributions of the quarks and gluons inside the proton evolve to the new beam energy.  There are many assumptions made for this estimate — in particular, that the old data analysis will work just as well under new conditions.  This might not be the case, as the LHC will be running not just at a higher energy, but also a higher collision rate (luminosity), which will make the collisions more complicated and harder to interpret.  But the tool at least gives us an estimate of the improved reach for new physics.

During the 2012 LHC run at 8 TeV, each experiment collected about 20 fb-1 of proton collision data.  In the upcoming “Run 2” of the LHC at 13 TeV, which starts this year and is expected to run through the middle of 2018, we expect to record about 100 fb-1 of data, a factor of five increase.  (This is still a fairly rough estimate of the future total dataset size.)  Imagine that in 2012, you were looking for a particular model of physics that predicted a new particle, and you found that if that particle actually existed, it would have to have a mass of at least 3 TeV — a mass 24 times that of the Higgs boson.  How far in mass reach could your same analysis go with the Run 2 data?  The Collider Reach tool tells you:

Collider Reach estimate of the 13 TeV mass reach with 100 fb-1, as a function of the 8 TeV reach.

Using the horizontal axis to find the 3 TeV point, we then look at the height of the green curve to tell us what to expect in Run 2.  That’s a bit more than 5 TeV — a 70% increase in the mass scale that your data analysis would have sensitivity to.

But you are impatient — how well could we do in 2015, the first year of the run?  We hope to get about 10 fb-1 this year. Here’s the revised plot:

Collider Reach estimate of the 13 TeV mass reach with 10 fb-1, as a function of the 8 TeV reach.

The reach of the analysis is about 4 TeV. That is, with only 10% of the data, you get 50% of the increase in sensitivity that you would hope to achieve in the entire run.  So this first year counts!  One year from now, we will know a lot about what physics we have an opportunity to look at in the next few years — and if nature is kind to us, it will be something new and unexpected.
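
For fun, here is a toy version of the idea behind such an estimate: find the mass at the new energy and luminosity whose expected signal yield matches the yield at the old reach. The real Collider Reach tool uses actual parton distribution functions; the power-law "luminosity" function below is a crude stand-in of my own, so don't expect it to reproduce the ~5 TeV number exactly.

```python
import numpy as np
from scipy.optimize import brentq

def toy_parton_lumi(mass_tev, sqrt_s_tev, a=6.0, b=1.5):
    """Very crude stand-in for a parton luminosity: falls steeply as the
    produced mass approaches the collision energy. NOT a real PDF-based
    calculation; the exponents are arbitrary."""
    tau = (mass_tev / sqrt_s_tev) ** 2
    if tau >= 1.0:
        return 0.0
    return (1.0 - np.sqrt(tau)) ** a / tau ** b

def mass_reach(old_mass, old_sqrt_s, old_lumi, new_sqrt_s, new_lumi):
    """Mass at the new energy/luminosity whose expected yield matches the
    yield at the old reach (the Collider Reach idea, with a toy function)."""
    old_yield = old_lumi * toy_parton_lumi(old_mass, old_sqrt_s)
    f = lambda m: new_lumi * toy_parton_lumi(m, new_sqrt_s) - old_yield
    return brentq(f, old_mass, 0.99 * new_sqrt_s)

# A 3 TeV reach with 20/fb at 8 TeV: what do 100/fb at 13 TeV buy us?
print(mass_reach(3.0, 8.0, 20.0, 13.0, 100.0))
```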

So what might this new physics be?  What are the challenges that we face in getting there?  How are physicists preparing to meet them?  You’ll be hearing a lot more about this in the year to come — and if I can keep to my New Year’s resolution, some of it you’ll hear from me.


Where the wind goes sweeping ’round the ring?

Ken Bloom
Thursday, October 23rd, 2014

I travel a lot for my work in particle physics, but it’s usually the same places over and over again — Fermilab, CERN, sometimes Washington to meet with our gracious supporters from the funding agencies.  It’s much more interesting to go someplace new, and especially somewhere that has some science going on that isn’t particle physics.  I always find myself trying to make connections between other people’s work and mine.

This week I went to a meeting of the Council of the Open Science Grid that was hosted by the University of Oklahoma Supercomputing Center for Education and Research in Norman, OK.  It was already interesting that I got to visit Oklahoma, where I had never been before.  (I think I’m up to 37 states now.)  But we held our meeting in the building that hosts the National Weather Center, which gave me an opportunity to take a tour of the center and learn a bit more about how research in meteorology and weather forecasting is done.

OU is the home of the largest meteorology department in the country, and the center hosts a forecast office of the National Weather Service (which produces forecasts for central and western Oklahoma and northern Texas, at the granularity of one hour and one kilometer) and the National Severe Storms Laboratory (which generates storm watches and warnings for the entire country — I saw the actual desk where the decisions get made!).  So how is the science of the weather like and not like the science that we do at the LHC?

(In what follows, I offer my sincere apologies to meteorologists in case I misinterpreted what I learned on my tour!)

Both are fields that can generate significant amounts of data that need to be interpreted to obtain a scientific result.  As has been discussed many times on the blog, each LHC experiment records petabytes of data each year.  Meteorology research is performed by much smaller teams of observers, which makes it hard to estimate their total data volume, but the graduate student who led our tour told us that he is studying a mere three weather events, but he has more than a terabyte of data to contend with — small compared to what a student on the LHC might have to handle, but still significant.

But where the two fields differ is what limits the rate at which the data can be understood.  At the LHC, it’s all about the processing power needed to reconstruct the raw data by performing the algorithms that turn the voltages read out from millions of amplifiers into the energies and momenta of individual elementary particles.  We know what the algorithms for this are, we know how to code them; we just have to run them a lot.  In meteorology, the challenge is getting to the point where you can even make the data interpretable in a scientific sense.  Things like radar readings still need to be massaged by humans to become sensible.  It is a very labor-intensive process, akin to the work done by the “scanner girls” of the particle physics days of yore, who carefully studied film emulsions by eye to identify particle tracks.  I do wonder what the prospects are in meteorology for automating this process so that it can be handed off to computers instead.  (Clearly this has to apply more towards forefront research in the field about how tornadoes form and the like, rather than to the daily weather predictions that just tell you the likelihood of tornado-forming conditions.)

Weather forecasting data is generally public information, accessible by anyone.  The National Weather Service publishes it in a form that has already had some processing done on it so that it can be straightforwardly ingested by others.  Indeed, there is a significant private weather-forecasting industry that makes use of this, and sells products with value added to the NWS data.  (For instance, you could buy a forecast much more granular than that provided by the NWS, e.g. for the weather at your house in ten-minute intervals.)  Many of these companies rent space in buildings within a block of the National Weather Center.  The field of particle physics is still struggling with how to make our data publicly available (which puts us well behind many astronomy projects which make all of their data public within a few years of the original observations).  There are concerns about how to provide the data in a form that will allow people who are not experts to learn something from the data without making mistakes.  But there has been quite a lot of progress in this in recent years, especially as it has been recognized that each particle physics experiment creates a unique dataset that will probably never be replicated in the future.  We can expect an increasing number of public data releases in the next few years.  (On that note, let me point out the NSF-funded Data and Software Preservation for Open Science (DASPOS) project that I am associated with on its very outer edges, which is working on some aspects of the problem.)  However, I’d be surprised if anyone starts up a company that will sell new interpretations of LHC data!

Finally, here’s one thing that the weather and the LHC have in common — they’re both always on!  Or, at least we try to run the LHC for every minute possible when the accelerator is operational.  (Remember, we are currently down for upgrades and will start up again this coming spring.)  The LHC experiments have physicists on duty 24 hours a day, monitoring data quality and ready to make repairs to the detectors should they be needed.  Weather forecasters are also on shift at the forecasting center and the severe-storm center around the clock.  They are busy looking at data being gathered by their own instruments, but also from other sources.  For instance, when there are reports of tornadoes near Oklahoma City, the local TV news stations often send helicopters out to go take a look.  The forecasters watch the TV news to get additional perspectives on the storm.

Now, if only the weather forecasters on shift could make repairs to the weather just like our shifters can fix the detector!


Let there be beam!

Adam Davis
Wednesday, October 15th, 2014

It’s been a little while since I’ve posted anything, but I wanted to write a bit about some of the testbeam efforts at CERN right now. In the middle of July this year, the Proton Synchrotron, or PS, the second ring of boosters/colliders which are used to get protons up to speed to collide in the LHC, saw its first beam since the shutdown at the end of Run I of the LHC. In addition to providing beam to experiments like CLOUD, the beam can also be used to create secondary particles of up to 15 GeV/c momentum, which are then used for studies of future detector technology. Such a beam is called a testbeam, and all I can say is WOOT, BEAM! I must say that being able to take accelerator data is amazing!

The next biggest milestone is the testbeams from the SPS, which started on the 6th of October. This is the last ring before the LHC. If you’re unfamiliar with the process used to get protons up to the energies of the LHC, a great video can be found at the bottom of the page.

Just to be clear, test beams aren’t limited to CERN. Keep your eyes out for a post by my friend Rebecca Carney in the near future.

I was lucky enough to be part of the test beam effort of LHCb, which was testing both new technology for the VELO and for the upgrade of the TT station, called the Upstream Tracker, or UT. I worked mainly with the UT group, testing a sensor technology which will be used in the 2019 upgraded detector. I won’t go too much into the technology of the upgrade right now, but if you are interested in the nitty-gritty of it all, I will instead point you to the Technical Design Report itself.

I just wanted to take a bit to talk about my experience with the test beam in July, starting with walking into the experimental area itself. The first sight you see upon entering the building is a picture reminding you that you are entering a radiation zone.

The Entrance!!

Then, as you enter, you see a large wall of radioactive concrete.

Don’t lick those!

This is where the beam is dumped. Following along here, you get to the control room, which is where all the data taking stuff is set up outside the experimental area itself. Lots of people are always working in the control room, focused and making sure to take as much data as possible. I didn’t take their picture since they were working so hard.

Then there’s the experimental area itself.

The Setup! To find the hardhat, look for the orange and green racks, then follow them towards the top right of the picture.

Ah, beautiful. 🙂

There are actually 4 setups here, but I think only three were being used at this time (click on the picture for a larger view). We occupied the area where the guy with the hardhat is.

Now the idea behind a tracker testbeam is pretty straightforward. A charged particle flies by, and many very sensitive detector planes record where the charged particle passed. These planes together form what’s called a “telescope.” The setup is completed when you add a detector to be tested either in the middle of the telescope or at one end.


Cartoon of a test beam setup. The blue indicates the “telescope”, the orange is the detector under test, and the red is the trajectory of a charged particle.

 

From the timing information and signals from these detectors, the trajectory of the particle can be determined. Now, you compare the position which your telescope gives you to the position you record in the detector you want to test, and voila, you have a way to understand the resolution and abilities of your tested detector. After that, the game is statistics. Ideally, you want the tested detector to be in the middle of the telescope, so that you have information on where the charged particle passed on either side of it; this gives the best resolution, but it can work if you’re on one end or the other, too.
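
In code, the core of that comparison is nothing more than a straight-line fit through the telescope hits and a residual at the device under test. The numbers below are invented, and a real analysis would of course handle both coordinates, alignment, and multiple scattering.

```python
import numpy as np

# Toy telescope: z positions (mm) of the sensitive planes and the x hit
# positions (mm) they recorded for one track. All numbers are made up.
telescope_z = np.array([0.0, 20.0, 40.0, 80.0, 100.0, 120.0])
telescope_x = np.array([1.02, 1.21, 1.39, 1.80, 2.01, 2.18])

# Straight-line fit x(z) = slope * z + intercept through the telescope hits.
slope, intercept = np.polyfit(telescope_z, telescope_x, 1)

# The device under test sits at z = 60 mm, in the middle of the telescope.
dut_z = 60.0
dut_measured_x = 1.63                     # what the tested sensor reported
predicted_x = slope * dut_z + intercept   # where the track should have hit

residual = dut_measured_x - predicted_x
print(f"predicted x = {predicted_x:.3f} mm, residual = {residual*1000:.0f} um")
# Collecting residuals over many tracks, their spread tells you the
# resolution of the sensor under test (after accounting for the telescope's
# own pointing resolution).
```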

This is the setup which we have been using for the testbeam at the PS.  We’ll be using a similar setup for the testbeam at the SPS next week! I’ll try to write a follow-up post on that when we finish!

And finally, here is the promised video.

 


Teaming up on top and Higgs

Ken Bloom
Monday, October 6th, 2014

While the LHC experiments are surely turning their attention towards the 2015 run of the collider, at an energy nearly double that of the previous run, we’re also busy trying to finalize and publish measurements using the data that we already have in the can.  Some measurements just take longer than others, and some it took us a while to get to.  And while I don’t like tooting my own horn too much here at the US LHC blog, I wanted to discuss a new result from CMS that I have been working on with a student, Dan Knowlton, here at the University of Nebraska-Lincoln, along with collaborators from a number of other institutions.  It’s been in the works for so long that I’m thrilled to get it out to the public!

(This is one of many CMS results that were shown for the first time last week at the TOP 2014 conference.  If you look through the conference presentations, you’ll find that the top quark, which has been around for about twenty years now, has continued to be a very interesting topic of study, with implications for searches for new physics and even for the fate of the universe.  One result that’s particularly interesting is a new average of CMS top-quark mass measurements, which is now the most accurate measurement of that quantity in the world.)

The LHC experiments have studied the Higgs boson through many different Higgs decay modes, and many different production mechanisms also.  Here is a plot of the expected cross sections for different Higgs production mechanisms as a function of Higgs mass; of course we know now that the Higgs has a mass of 125 GeV:

The most common production mechanism has a Higgs being produced with nothing else, but it can also be produced in association with other particles.  In our new result, we search for a Higgs production mechanism that is so much more rare that it doesn’t even appear on the above plot!  The mechanism is the production of a Higgs boson in association with a single top quark, and in the standard model, the cross section is expected to be 0.018 pb, about an order of magnitude below the cross section for Higgs production in association with a top-antitop pair.  Why even bother to look for such a thing, given how rare it is?

The answer lies in the reason for why this process is so rare.  There are actually two ways for this particular final state to be produced. Here are the Feynman diagrams for them:

   

In one case, the Higgs is radiated off the virtual W, while in the other it comes off the real final-state top quark.  Now, this is quantum mechanics: if you have two different ways to connect an initial and final state, you have to add the two amplitudes together before you square them to get a probability for the process.  It just so happens that these two amplitudes largely destructively interfere, and thus the production cross section is quite small.  There isn’t anything deep at work (e.g. no symmetries that suppress this process), it’s just how it comes out.

At least, that’s how it comes out in the standard model.  We assume certain values for the coupling factors of the Higgs to the top and W particles that appear in the diagrams above.  Other measurements of Higgs properties certainly suggest that the coupling factors do have the expected values, but there is room within the constraints for deviations.  It’s even possible that one of the two coupling values has the exact opposite sign from what we expect.  In that case, the destructive interference between the two amplitudes would become constructive, and the cross section would be almost a factor of 13 larger than expected!
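
A two-line toy calculation shows how dramatic that sign flip is. The complex numbers below are not the real tHq matrix elements; their relative size is simply chosen so that the constructive-to-destructive ratio lands near the factor of 13 quoted above.

```python
# Toy illustration of two interfering amplitudes; the values are made up.
a_w   = 1.00 + 0.0j    # "Higgs radiated off the W" amplitude (illustrative)
a_top = -0.57 + 0.0j   # "Higgs radiated off the top" amplitude (illustrative)

sm_like = abs(a_w + a_top) ** 2   # destructive: standard-model-like sign
flipped = abs(a_w - a_top) ** 2   # constructive: anomalous coupling sign

print(sm_like, flipped, flipped / sm_like)   # ratio comes out ~13
```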

The new result from CMS is a search for this anomalous production of the Higgs in association with a single top quark.  CMS already has a result for a search in which the Higgs decays to a pair of photons; this new result describes a search in which the Higgs decays to bottom quarks.  That is a much more common Higgs decay mode, so there ought to be more events to see, but at the same time the backgrounds are much higher.  The production of a top-antitop pair along with an extra jet of hadrons that is mis-identified as arising from a bottom quark looks very much like the targeted Higgs production mechanism.  The top-antitop cross section is about 1000 times bigger than that of the anomalous production mechanism that we are looking for, and thus even a tiny bottom mis-identification rate leads to a huge number of background events.  A lot of the work in the data analysis goes into figuring out how to distinguish the (putative) signal events from the dominant background, and then verifying that the estimations of the background rates are correct.

The analysis is so challenging that we predicted that even by throwing everything we had at it, the best we could expect to do was to exclude the anomalous Higgs production process at a level of about five times the predicted rate for it.  When we looked at the data, we found that we could exclude it at about seven times the anomalous rate, roughly in line with what we expected.  In short, we do not see an anomalous rate for anomalous Higgs production!  But we are able to set a fairly tight limit, at around 1.8 pb.

What do I like about this measurement?  First, it’s a very different way to try to measure the properties of the Higgs boson.  The measurements we have are very impressive given the amount of data that we have so far, but they are not very constraining, and there is enough wiggle room for some strange stuff to be going on.  This is one of the few ways to probe the Higgs couplings through the interference of two processes, rather than just through the rate for one dominant process.  All of these Higgs properties measurements are going to be much more accurate in next year’s data run, when we expect to integrate more data and all of the production rates will be larger due to the increase in beam energy.  (For this anomalous production process, the cross section will increase by about a factor of four.)  In this particular case, we should be able to exclude anomalous Higgs couplings through this measurement…or, if nature surprises us, we will actually observe them!  There is a lot of fun ahead for Higgs physics (and top physics) at the LHC.

I’ve also really enjoyed working with my CMS colleagues on this project.  Any measurement coming out of the experiment is truly the work of thousands of people who have built and operated the detector, gotten the data recorded and processed, developed and refined the reconstruction algorithms, and defined the baselines for how we identify all kinds of particles that are produced in the proton collisions.  But the final stages of any measurement are carried out by smaller groups of people, and in this case we worked with colleagues from the Catholic University of Louvain in Belgium, the Karlsruhe Institute of Technology in Germany, the University of Malaya in Malaysia, and the University of Kansas (in Kansas).  We relied on the efforts of a strong group of graduate students with the assistance of harried senior physicists like myself, and the whole team did a great job of supporting each other and stepping up to solve problems as they arose.  These team efforts are one of the things that I’m proud of in particle physics, and that make our scientists so successful in the wider world.


Calm before the storm: Preparing for LHC Run2

Emily Thompson
Wednesday, September 17th, 2014

It’s been a relatively quiet summer here at CERN, but now as the leaves begin changing color and the next data-taking period draws nearer, physicists on the LHC experiments are wrapping up their first-run analyses and turning their attention towards the next data-taking period. “Run2”, expected to start in the spring of 2015, will be the biggest achievement yet for particle physics, with the LHC reaching a higher collision energy than has ever been produced in a laboratory before.

As someone who was here before the start of Run1, the vibe around CERN feels subtly different. In 2008, while the ambitious first-year physics program of ATLAS and CMS was quite broad in scope, the Higgs prospects were certainly the focus. Debates (and even some bets) about when we would find the Higgs boson – or even if we would find it – cropped up all over CERN, and the buzz of excitement could be felt from meeting rooms to cafeteria lunch tables.

Countless hours were also spent in speculation about what it would mean for the field if we *didn’t* find the elusive particle that had evaded discovery for so long, but it was Higgs-centric discussion nonetheless. If the Higgs boson did exist, the LHC was designed to find this missing piece of the Standard Model, so we knew we were eventually going to get our answer one way or another.

Slowly but surely, the Higgs boson emerged in Run1 data. (via CERN)

Now, more than two years after the Higgs discovery and armed with a more complete picture of the Standard Model, attention is turning to the new physics that may lie beyond it. The LHC is a discovery machine, and was built with the hope of finding much more than predicted Standard Model processes. Big questions are being asked with more tenacity in the wake of the Higgs discovery: Will we find supersymmetry? Will we understand the nature of dark matter? Is the lack of “naturalness” in the Standard Model a fundamental problem or just the way things are?

The feeling of preparedness is different this time around as well. In 2008, besides the data collected in preliminary cosmic muon runs used to commission the detector, we could only rely on simulation to prepare the early analyses, which induced a bit of skepticism about how much we could trust our pre-run physics and performance expectations. Compounded with the LHC quench incident of September 19, 2008, after the first week of beam, which destroyed over 30 superconducting magnets and delayed collisions until the end of 2009, no one knew what to expect.

Expect the unexpected…unless it’s a cat.

Fast forward to 2014: we have an increased sense of confidence stemming from our Run1 experience, having put our experiments to the test all the way from data acquisition to event reconstruction to physics analysis to publication…done at a speed which surpassed even our own expectations. We know to what extent we can rely on the simulation, and know how to measure the performance of our detectors.

We also have a better idea of what our current analysis limitations are, and have been spending this LHC shutdown period working to improve them. Working meeting agendas, usually with the words “Run2 Kick-off” or “Task Force” in the title, have been filled with discussions of how we will handle data in 2015, with what precision we can measure objects in the detector, and what our early analysis priorities should be.

The Run1 dataset was also used as a dress rehearsal for future runs, where for example, many searches employed novel techniques to reconstruct highly boosted final states often predicted in new physics scenarios. The aptly-named BOOST conference recently held at UCL this past August highlighted some of the most state-of-the-art tools currently being developed by both theorists and experimentalists in order to extend the discovery reach for new fundamental particles further into the multi-TeV region.

Even prior to Run1, we knew that such new techniques would have to be validated in data in order to convince ourselves they would work, especially in the presence of extreme pileup (i.e., multiple, less-interesting interactions in the proton bunches we send around the LHC ring…a side effect of increased luminosity). While the pileup conditions in 7 and 8 TeV data were only a taste of what we’ll see in Run2 and beyond, Run1 gave us the opportunity to try out these new techniques in data.

One of the first ever boosted hadronic top candidate events recorded in the ATLAS detector, where all three decay products (denoted by red circles) can be found inside a single large jet, denoted by a green circle. (via ATLAS)

Conversations around CERN these days sound similar to those we heard before the start of Run1…what if we discover something new, or what if we don’t, and what will that mean for the field of particle physics? Except this time, the prospect of not finding anything is less exciting. The Standard Model Higgs boson was expected to be in a certain energy range accessible at the LHC, and if it was excluded it would have been a major revelation.

There are plenty of well-motivated theoretical models (such as supersymmetry) that predict new interactions to emerge around the TeV scale, but in principle there may not be anything new to discover at all until the GUT scale. This dearth of any known physics processes spanning a range of orders of magnitude in energy is often referred to as the “electroweak desert.”

Physicists taking first steps out into the electroweak desert will still need their caffeine. (via Dan Piraro)

Particle physics is entering a new era. Was the discovery of the Higgs just the beginning, with something unexpected to find in the new data, or will we be left disappointed? Either way, the LHC and its experiments struggled through the growing pains of Run1 to produce one of the greatest discoveries of the 21st century, and if new physics is produced in the collisions of Run2, we’ll be ready to find it.


ICHEP at a distance

Ken Bloom
Friday, July 11th, 2014

I didn’t go to ICHEP this year.  In principle I could have, especially given that I have been resident at CERN for the past year, but we’re coming down to the end of our stay here and I didn’t want to squeeze in one more work trip during a week that turned out to be a pretty good opportunity for one last family vacation in Europe.  So this time I just kept track of it from my office, where I plowed through the huge volume of slides shown in the plenary sessions earlier this week.  It was a rather different experience for me from ICHEP 2012, which I attended in person in Melbourne and where we had the first look at the Higgs boson.  (I’d have to say it was also probably the pinnacle of my career as a blogger!)

Seth’s expectations turned out to be correct — there were no earth-shattering announcements at this year’s ICHEP, but still a lot to chew on.  The Standard Model of particle physics stands stronger than ever.  As Pauline wrote earlier today, the particle thought to be the Higgs boson two years ago still seems to be the Higgs boson, to the best of our abilities to characterize it.  The LHC experiments are starting to move beyond measurements of the “expected” properties — the dominant production and decay modes — into searches for unexpected, low-rate behavior.  While there are anomalous results here and there, there’s nothing that looks like more than a fluctuation.  Beyond the Higgs, all sectors of particle physics look much as predicted, and some fluctuations, such as the infamous forward-backward asymmetry of top-antitop production at the Tevatron, appear to have subsided.  Perhaps the only ambiguous result out there is that of the BICEP2 experiment which might have observed gravitational waves, or maybe not.  We’re all hoping that further data from that experiment and others will resolve the question by the end of the year.  (See the nice talk on the subject of particle physics and cosmology by Alan Guth, one of the parents of that field.)

This success of the Standard Model is both good and bad news.  It’s good that we do have a model that has stood up so well to every experimental test that we have thrown at it, in some cases to startling precision.  You want models to have predictive power.  But at the same time, we know that the model is almost surely incomplete.  Even if it can continue to work at higher energy scales than we have yet explored, at the very least we seem to be missing some particles (those that make up the dark matter we know exists from astrophysical measurements) and it also fails to explain some basic observations (the clear dominance of matter over antimatter in the universe).  We have high hopes for the next run of the LHC, which will start in Spring 2015, in which we will have higher beam energies and collision rates, and a greater chance of observing new particles (should they exist).

It was also nice to see the conference focus on the longer-term future of the field.  Since the last ICHEP, every region of the world has completed long-range strategic planning exercises, driven by recent discoveries (including that of the Higgs boson, but also of various neutrino properties) and anchored by realistic funding scenarios for the field.  There were several presentations about these plans during the conference, and a panel discussion featuring leaders of the field from around the world.  It appears that we are having a nice sorting out of which region wants to host which future facility, and when, in such a way that we can carry on our international efforts in a straightforward way.  Time will tell if we can bring all of these plans to fruition.

I’ll admit that I felt a little left out by not attending ICHEP this year.  But here’s the good news: ICHEP 2016 is in Chicago, one of the few places in the world that I can reach on a single plane flight from Lincoln.  I have marked my calendar!


P5 and the fifth dimension that Einstein missed

Ken Bloom
Tuesday, May 27th, 2014

Among the rain
and lights
I saw the figure 5
in gold
on a red
firetruck
moving
tense
unheeded
to gong clangs
siren howls
and wheels rumbling
through the dark city.

William Carlos Williams, “The Great Figure”, 1921

Ever since the Particle Physics Project Prioritization Panel (P5) report was released on Thursday, May 22, I have been thinking very hard about the number five. Five is in the name of the panel, it is embedded in the science that the report describes, and in my opinion, the panel has figured out how to manipulate a fifth dimension. Please give me a chance to explain.

Having had a chance to read the report, let me say that I personally am very impressed by it and very supportive of the conclusions drawn and the recommendations made. The charge to P5 was to develop “an updated strategic plan for the U.S. that can be executed over a ten-year timescale, in the context of a twenty-year global vision for the field.” Perhaps the key phrase here is “can be executed”: this must be a plan that is workable under funding scenarios that are more limited than we might wish. It requires making some hard decisions about priorities, and these priorities must be set by the scientific questions that we are trying to address through the techniques of particle physics.

Using input from the Snowmass workshop studies that engaged a broad swath of the particle-physics community, P5 has done a nice job of distilling the intellectual breadth of our field into a small number of “science drivers”. How many? Well, five of course:

• Use the Higgs boson as a new tool for discovery
• Pursue the physics associated with neutrino mass
• Identify the new physics of dark matter
• Understand cosmic acceleration: dark energy and inflation
• Explore the unknown: new particles, interactions, and physical principles

I would claim that four of the drivers represent imperatives that are driven by recent and mostly unexpected discoveries — exactly how science should work. (The fifth and last listed is really the eternal question of particle physics.) While the discovery of the Higgs boson two years ago was dramatic and received a tremendous amount of publicity, it was not totally unexpected. The Higgs is part of the standard model, and all indirect evidence was pointing to its existence; now we can use it to look for things that actually are unexpected. The observation of the Higgs was not the end of an era, but the start of a new one. Meanwhile, neutrino masses, dark matter and dark energy are all outside our current theories, and they demand explanation that can only come through further experimentation. We now have the technical abilities to do these experiments. These science drivers are asking exciting, fundamental questions about how the universe came to be, what it is made of and how it all interacts, and they are questions that, finally, can be addressed in our time.

But, how to explore these questions in a realistic funding environment? Is it even possible? The answer from P5 is yes, if we are clever about how we do things. I will focus here on the largest projects that the P5 report addresses, the ones that cost at least $200M to construct; the report also discusses many medium-size and small efforts, and recommends hard choices on which we should continue to pursue and which, despite having merit, simply cannot fit into realistic funding scenarios. The three biggest projects are the LHC and its high-luminosity upgrade that should be completed about ten years from now; a long-baseline neutrino experiment that would create neutrinos at Fermilab and observe them in South Dakota; and a high-energy electron-positron collider, the International Linear Collider (ILC), that could do precision studies of the Higgs boson but is at least ten years away from realization. They are all interesting projects that each address at least two of the science drivers, but is it possible for the U.S. to take a meaningful role in all three? The answer is yes…if you understand how to use the fifth dimension.

The high-luminosity LHC emerged as “the first high-priority large-category project” in the program recommended by P5, and it is to be executed regardless of budget scenario. (See below about the use of the word “first” here.)  As an LHC experimenter who writes for the U.S. LHC blog, I am of course a bit biased, but I think this is a good choice. The LHC is an accelerator that we have in hand; there is nothing else that could be built in the next ten years that can do anything like it, and we must fully exploit its potential. It can address three of the science drivers — the Higgs, dark matter, and the unknown. U.S. physicists form the largest national contingent in each of the two big multi-purpose experiments, ATLAS and CMS, and the projects depend on U.S. participation and expertise for their success. While we can never make any guarantees of discovery, I personally think that the LHC gives us as good a chance as anything, and that it will be an exciting environment to work in over the coming years.

P5 handled the long-baseline neutrino experiment by presenting some interesting challenges to the U.S. and global particle physics communities. While there is already a plan to build this project, in the form of a proposed experiment called LBNE, it was considered to be inadequate for the importance of the science. The currently proposed LBNE detector in South Dakota would be too small to collect enough data on a timescale that would give interesting and conclusive results. Even the proponents of LBNE recognized these limitations.  So, P5 recommends that the entire project “should be reformulated under the auspices of a new international collaboration, as an internationally coordinated and internationally funded program, with Fermilab as the host,” that will truly meet the scientific demands. It wouldn’t just be a single experiment, but a facility — the Long-Baseline Neutrino Facility (LBNF).

This is a remarkable strategic step. First, it makes the statement that if we are going to do the science, we must do it well. LBNF would be bigger than LBNE, and also much better in terms of its capabilities. It also fully integrates the U.S. program into the international community of particle physics — it would commit the U.S. to hosting a major facility that would draw world-wide collaboration and participation. The U.S. will hold up its end of the efforts to build particle-physics facilities that scientists from all over the world can take part in, just as CERN has successfully done with the LHC. To organize this new facility will take some time, such that the peak costs of building LBNF will be pushed to a time later than the peak costs of upgrading the LHC.

One of the important ideas of special relativity is that the three dimensions of space and one dimension of time are placed on an equal footing. Two events in space-time that have given spatial and time separations in one frame of reference will have different spatial and time separations in a different frame. With LBNF, P5 has postulated a fifth dimension that must be considered: cost. If we were to try to upgrade the LHC and build LBNF at the same time, the cost would be more than we could afford, even with international participation. But by spacing out these two events in time, doing the HL-LHC first and LBNF second, the cost per year of these projects has become smaller; time and cost have been put on a more equal footing. Why didn’t Einstein think of that?

Thus, it is straightforward to set the international LBNF as “the highest-priority large project in its timeframe.” The title of the P5 report is “Building for Discovery”; LBNF will be the major project that the U.S. will build for discoveries in the areas of neutrino masses and exploration of the unknown.

As for the ILC, which Japan has expressed an interest in building, the scientific case for it is strong enough that “the U.S. should engage in modest and appropriate levels of ILC accelerator and detector design” no matter what the funding scenario. How much involvement there will be will depend on the funds available, and on whether the project actually goes forward. We will understand this better within the next few years. If the ILC is built, it will be a complement to the LHC and let us explore the properties of the Higgs and other particles in precise detail. With that, P5 has found a way for the U.S. to participate in all three major projects on the horizon, if we are careful about the timing of the projects and accept reasonable bounds on what we do with each.

These are the headlines from the report, but there is much more to it. The panel emphasizes the importance of maintaining a balance between the funds spent to build new facilities, to operate those facilities, and to do the actual research that leads to scientific discovery at the facilities. In recent years, there have been few building projects in the pipeline, and the fraction of the U.S. particle-physics budget devoted to new projects has languished at around 15%. P5 proposes that this be raised to the 20-25% level and maintained there, so that there will always be a push to create facilities that can address the scientific drivers — building for discovery. The research program is what funds graduate students and postdoctoral researchers, the future leaders of the field, and is where many exciting new physics ideas come from. Research has also been under financial pressure lately, and P5 proposes that it should not receive less than 40% of the budget. In addition, long-standing calls to invest in research and development that could lead to cheaper particle accelerators, more sensitive instrumentation, and revolutionary computational techniques are repeated.

This strategic vision is laid out in the context of three different funding scenarios. The most constrained scenario imagines flat budgets through 2018, and then annual increases of 2%, which is likely below the rate of inflation and thus would represent effectively shrinking budgets. The program described could be carried out, but it would be very challenging. LBNF could still be built, but it would be delayed. Various other projects would be cancelled, reduced or delayed. The research program would lose some of its capabilities. It would make it difficult for the U.S. to be a full international partner in particle physics, one that would be capable of hosting a large project and thus being a global leader in the field. Can we do better than that? Can we instead have a budget that grows at 3% per year, closer to the rate of inflation? The answer is ultimately up to our elected leaders. But I hope that we will be able to convince them, and you, that the scientific opportunities are exciting, and that the broad-based particle-physics community’s response to them is visionary while also being realistic.

Finally, I would like to offer some words on the use of logos. Since the last P5 report, in 2008, the U.S. particle physics program has relied on a logo that represented three “frontiers” of scientific exploration:

The three-frontiers logo.

It is a fine way to classify the kinds of experiments and projects that we pursue, but I have to say that the community has chafed a bit under this scheme. These frontiers represent different experimental approaches, but a single physics question can be addressed through multiple approaches. (Only the lack of time has kept me from writing a blog post titled “The tyranny of Venn diagrams.”) Indeed, in his summary presentation about the Energy Frontier for the Snowmass workshop, Chip Brock of Michigan State University suggested a logo that represented the interconnectedness of these approaches:

Chip Brock's interlocking-rings version of the frontiers logo.

“Building for Discovery” brings us a new logo, one that represents the five science drivers as five interlocked crescents:

The new P5 logo: five interlocked crescents.

I hope that this logo does an even better job of emphasizing the interconnectedness not just of experimental approaches to particle physics, but also of the five (!) scientific questions that will drive research in our field over the next ten to twenty years.

Of course, I’m also sufficiently old that this logo reminded me of something else entirely:

The American Revolution Bicentennial logo.

Maybe we can celebrate the P5 report as the start of an American revolution in particle physics? But I must admit that with P5, 5 science drivers and 5 dimensions, I still see the figure 5 in gold:

"I Saw the Figure 5 in Gold", Charles Demuth, 1928

