
Ken Bloom | USLHC | USA


LHC 2015: what’s different in four years?

Monday, December 21st, 2015

After a long-anticipated data run, LHC proton-proton running concludes in early November.  A mere six weeks later, on a mid-December afternoon, the ATLAS and CMS collaborations present their first results from the full dataset to a packed CERN auditorium, with people all over the world watching the live webcast.  Both collaborations see slight excesses in events with photon pairs; the CMS excess is quite modest, but the ATLAS data show something that could be interpreted as a peak.  If it holds up with additional data, it would herald a major discovery.  While the experimenters caution that the results do not have much statistical significance, news outlets around the world run breathless stories about the possible discovery of a new particle.

December 15, 2015? No — December 13, 2011, four years ago.  That seminar presented what we now know were the first hints of the Higgs boson in the LHC data.  At the time, everyone was hedging their bets, and saying that the effects we were seeing could easily go away with more data.  Yet now we look back and know that it was the beginning of the end for the Higgs search.  And even at the time, everyone was feeling pretty optimistic.  Yes, we had seen effects of that size go away before, but at this time four years ago, a lot of people were guessing that this one wouldn’t (while still giving all of the caveats).

But while both experiments are reporting an effect at 750 GeV — and some people are getting very excited about it — it seems to me that caution is needed here, more so than was needed with the emerging evidence for the Higgs boson.  What’s different about what we’re seeing now compared to what we saw in 2011?

I found it instructive to look back at the presentations of four years ago.  Then, ATLAS had an effect in diphotons around an invariant mass of 125 GeV that had a 2.8 standard deviation local significance, which was reduced to 1.5 standard deviations when the “look elsewhere effect” (LEE) was taken into account.  (The LEE exists because if there is a random fluctuation in the data, it might appear anywhere, not just the place you happen to be looking, and the statistical significance needs to be de-weighted for that.)  In CMS, the local significance was 2.1 standard deviations.  Let’s compare that to this year, when both experiments see an effect in diphotons around an invariant mass of 750 GeV.  At ATLAS, it’s a 3.6 standard deviation local effect, which is reduced to 2.0 standard deviations after the LEE.  For CMS the respective values are 2.6 and 1.2 standard deviations.  So it sounds like the 2015 signals are even stronger than the 2011 ones, although, on their own, they are still quite weak when we consider that five standard deviations is the usual standard for claiming a discovery, since a fluctuation of that size would be very unlikely.
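The local-to-global deflation can be sketched in a few lines.  This is only a toy illustration of the look-elsewhere effect, not either experiment's actual correction; in particular, `n_trials` (an effective number of independent mass windows) is a made-up parameter for the sketch.

```python
import math

def z_to_p(z):
    """One-sided tail probability of a z-sigma excess (Gaussian)."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def p_to_z(p):
    """Invert z_to_p by bisection -- plenty accurate for a toy."""
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if z_to_p(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def global_significance(z_local, n_trials):
    """Crude LEE correction: the fluctuation could have appeared in
    any of n_trials independent search windows, so the chance of
    seeing one somewhere is larger than the local p-value."""
    p_local = z_to_p(z_local)
    p_global = 1.0 - (1.0 - p_local) ** n_trials
    return p_to_z(p_global)
```

With a hypothetical 100 independent windows, a 3.6 standard deviation local excess deflates to a bit over 2 standard deviations globally — the same ballpark as the numbers quoted above.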

But the 2011 signals had some other things going for them.  The first was experimental: there were simultaneous excesses in other channels that were consistent with what you’d expect from a Higgs decay.  This included in particular the ZZ channel, which had a low expected rate, but also very low backgrounds and excellent mass resolution.  In 2011, both experiments had the beginnings of signals in ZZ too (although at a slightly different putative Higgs mass value) and some early hints in other decay channels.  There were multiple results supporting the diphotons, whereas in 2015 there are no apparent excesses in other channels indicating anything at 750 GeV.

And on top of that, there was something else going for the Higgs in December 2011: there was good reason to believe it was on the way.  From a myriad of other experiments we had indirect evidence that a Higgs boson ought to exist, and in a mass range where the LHC effects were showing up.  This indirect evidence came through the interpretation of the “standard model” theory that had done an excellent job of describing all other data in particle physics and thus gave us confidence that it could make predictions about the Higgs too.  And for years, both the Tevatron and the LHC had been slowly but surely directly excluding other possible masses for the Higgs.  If a Higgs were going to show up, it made perfect sense for it to happen right where the early effects were being observed, at just that level of significance with so little data.

Do we have any of that with the 750 GeV effect in 2015?  No.  There are no particular reasons to expect this decay with this rate at this mass (although in the wake of last week’s presentations, there have been many conjectures as to what kind of new physics could make this happen).  Thus, one can’t help but think that this is some kind of fluctuation.  If you look at enough possible new-physics effects, you have a decent chance of seeing some number of fluctuations at this level, and that seems to be the most reasonable hypothesis right now.

But really there is no need to speculate.  In 2016, the LHC should deliver ten times as much data as it did this year.  That’s even better than what happened in 2012, when the LHC exceeded its 2011 performance by a mere factor of five.  We can anticipate another set of presentations in December 2016, and by then we will know for sure if 2015 gave us a fluctuation or the first hint of a new physical theory that will set the research agenda of particle physics for years to come.  And if it is the latter, I will be the first to admit that I got it wrong.


What have we learned from the LHC in 2015?

Saturday, December 5th, 2015

The Large Hadron Collider is almost done running for 2015.  Proton collisions ended in early November, and now the machine is busy colliding lead nuclei.  As we head towards the end-of-year holidays, and the annual CERN shutdown, everyone wants to know — what have we learned from the LHC this year, our first year of data-taking at 13 TeV, the highest collision energies we have ever achieved, and the highest we might hope to have for years to come?

We will get our first answers to this question at a CERN seminar scheduled for Tuesday, December 15, where ATLAS and CMS will be presenting physics results from this year’s run.  The current situation is reminiscent of December 2011, when the experiments had recorded their first significant datasets from LHC Run 1, and we saw what turned out to be the first hints of the evidence for the Higgs boson that was discovered in 2012.  The experiments showed a few early results from Run 2 during the summer, and some of those have already resulted in journal papers, but this will be our first chance to look at the broad physics program of the experiments.  We shouldn’t have expectations that are too great, as only a small amount of data has been recorded so far, much less than we had in 2012.  But what science might we hope to hear about next week?  

Here is one thing to keep in mind — the change in collision energy affects particle production rates, but not the properties of the particles that are produced.  Any measurement of a particle production rate is inherently interesting at a new collision energy, since it is a measurement that has never been done before.  Thus any production-rate measurement that is possible with this amount of data would be a good candidate for presentation.  (The production rates of top quarks at 13 TeV have already been measured by both CMS and ATLAS; maybe there will be additional measurements along these lines.)

We probably won’t hear anything new about the Higgs boson.  While the Higgs production rates are larger than in the previous run, the amount of data recorded is still relatively small compared to the 2010-12 period.  This year, the LHC has delivered about 4 fb-1 of data, which could be compared to the 5 fb-1 that was delivered in 2011.  At that time there wasn’t enough data to say anything definitive about the Higgs boson, so it is hard to imagine that there will be much in the way of Higgs results from the new data (not even the production rate at 13 TeV), and certainly nothing that would tell us anything more about its properties than we already know from the full Run 1 dataset of 25 fb-1.  We’ll all probably have to wait until sometime next year before we will know more about the Higgs boson, and if anything about it will disagree with what we expect from the standard model of particle physics.

If there is anything to hope for next week, it is some evidence for new, heavy particles.  Because the collision energy has been increased from 8 TeV to 13 TeV, the ability to create a heavy particle of a given mass has increased too.  A little fooling around with the “Collider Reach” tool (which I had discussed here) suggests that even the small dataset we have in hand now gives us a better chance of observing such particles than the entire Run 1 dataset did, as long as the particle masses are above about 3 TeV.  Of course there are many theories that predict the existence of such particles, the most famous of which is supersymmetry.  But so far there has been scant evidence of any new phenomena in previous datasets.  If we were to get even a hint of something at a very high mass, it would definitely focus our scientific efforts for 2016, when we might get about ten times as much data as we did this year.

Will we get that hint, like we did with the Higgs boson four years ago?  Tune in on December 15 to find out!


Double time

Thursday, August 27th, 2015

In particle physics, we’re often looking for very rare phenomena, which are highly unlikely to happen in any given particle interaction. Thus, at the LHC, we want to have the greatest possible proton collision rates; the more collisions, the greater the chance that something unusual will actually happen. What are the tools that we have to increase collision rates?

Remember that the proton beams are “bunched” — there isn’t a continuous current of protons in a beam, but a series of smaller bunches of protons, each only a few centimeters long, with gaps of many centimeters between each bunch.  The beams are then timed so that bunches from each beam pass through each other (“cross”) inside one of the big detectors.  A given bunch can have 10^11 protons in it, and when two bunches cross, perhaps tens of the protons in each bunch — a tiny fraction! — will interact.  This bunching is actually quite important for the operation of the detectors — we can know when bunches are crossing, and thus when collisions happen, and then we know when the detectors should really be “on” to record the data.

If one were to have a fixed number of protons in the machine (and thus a fixed total amount of beam current), one could imagine two ways to create the same number of collisions: have N bunches per beam, each with M protons, or 2N bunches per beam, each with M/sqrt(2) protons.  The more bunches in the beam, the more closely spaced they would have to be, but that can be done.  From the perspective of the detectors, the second scenario is much preferred.  That’s because you get fewer proton collisions per bunch crossing, and thus fewer particles streaming through the detectors.  The collisions are much easier to interpret if you have fewer of them per crossing; among other things, you need less computer processing time to reconstruct each event, and you will make fewer mistakes in the event reconstruction because there aren’t so many particles all on top of each other.
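The trade-off can be made concrete with a few lines of arithmetic.  The cross-section constant below is arbitrary (only the scaling with bunch count and bunch population matters), and the bunch numbers are illustrative:

```python
import math

def collisions(n_bunches, protons_per_bunch, sigma_eff=1e-21):
    """Toy model: collisions per crossing scale as the product of the
    two bunch populations (M * M), total rate as bunches * M^2."""
    per_crossing = sigma_eff * protons_per_bunch**2
    total = n_bunches * per_crossing
    return per_crossing, total

N, M = 1380, 1.0e11
pileup_a, total_a = collisions(N, M)                      # scheme 1
pileup_b, total_b = collisions(2 * N, M / math.sqrt(2))   # scheme 2
# Same total collision rate, but half as many collisions per crossing
# in scheme 2 -- which is why the detectors prefer it.
```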

In the previous LHC run (2010-12), the accelerator had “50 ns spacing” between proton bunches, i.e. bunch crossings took place every 50 ns.  But over the past few weeks, the LHC has been working on running with “25 ns spacing,” which would allow the beam to be segmented into twice as many bunches, with fewer protons per bunch.  It’s a new operational mode for the machine, and thus some amount of commissioning and tuning and so forth are required.  A particular concern is “electron cloud” effects due to stray particles in the beampipe striking the walls and ejecting more particles, which is a larger effect with smaller bunch spacing.  But from where I sit as one of the experimenters, it looks like good progress has been made so far, and as we go through the rest of this year and into next year, 25 ns spacing should be the default mode of operation.  Stay tuned for what physics we’re going to be learning from all of this!


Starting up LHC Run 2, step by step

Thursday, June 11th, 2015

I know what you are thinking. The LHC is back in action, at the highest energies ever! Where are the results? Where are all the blog posts?

Back in action, yes, but restarting the LHC is a very measured process. For one thing, when running at the highest beam energies ever achieved, we have to be very careful about how we operate the machine, lest we inadvertently damage it with beams that are mis-steered for whatever reason. The intensity of the beams — how many particles are circulating — is being incrementally increased with successive fills of the machine. Remember that the beam is bunched — the proton beams aren’t continuous streams of protons, but collections that are just a few centimeters long, spaced out by at least 750 centimeters. The LHC started last week with only three proton bunches in each beam, only two of which were actually colliding at an interaction point. Since then, the LHC team has gone to 13 bunches per beam, and then 39 bunches per beam. Full-on operations will be more like 1380 bunches per beam. So at the moment, the beams are of very low intensity, meaning that there are not that many collisions happening, and not that much physics to do.
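As a quick sanity check on those spacing numbers (the ring circumference is the only figure here that is not in the text above):

```python
c = 299_792_458.0            # speed of light, m/s
spacing_s = 25e-9            # 25 ns bunch spacing at design operation
ring_m = 26_659.0            # LHC circumference, meters

distance_m = c * spacing_s   # distance between neighboring bunches
max_slots = ring_m / distance_m
# distance_m is about 7.5 m -- the "750 centimeters" quoted above --
# and about 3,560 slots fit on the ring, though gaps needed for the
# beam-dump kicker and injection reduce the usable number well below
# that, toward the bunch counts mentioned in the text.
```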

What’s more, the experiments have much to do also to prepare for the higher collision rates. In particular, there is the matter of “timing in” all the detectors. Information coming from each individual component of a large experiment such as CMS takes some time to reach the data acquisition system, and it’s important to understand how long that time is, and to get all of the components synchronized. If you don’t have this right, then you might not be getting the optimal information out of each component, or worse still, you could end up mixing up information from different bunch crossings, which would be disastrous. This, along with other calibration work, is an important focus during this period of low-intensity beams.

But even if all these things were working right out of the box, we’d still have a long way to go until we had some scientific results. As noted already, the beam intensities have been low, so there aren’t that many collisions to examine. There is much work to do yet in understanding the basics in a revised detector operating at a higher beam energy, such as how to identify electrons and muons once again. And even once that’s done, it will take a while to make measurements and fully vet them before they could be made public in any way.

So, be patient, everyone! The accelerator scientists and the experimenters are hard at work to bring you a great LHC run! Next week, the LHC takes a break for maintenance work, and that will be followed by a “scrubbing run”, the goal of which is to improve the vacuum in the LHC beam pipe. That will allow higher-intensity beams, and position us to take data that will get the science moving once again.


Ramping up to Run 2

Thursday, March 19th, 2015

When I have taught introductory electricity and magnetism for engineers and physics majors at the University of Nebraska-Lincoln, I have used a textbook by Young and Freedman. (Wow, look at the price of that book! But that’s a topic for another day.) The first page of Chapter 28, “Sources of Magnetic Field,” features this photo:


It shows the cryostat that contains the solenoid magnet for the Compact Muon Solenoid experiment.  Yes, “solenoid” is part of the experiment’s name, as it is a key element in the design of the detector.  There is no other magnet like it in the world.  It can produce a 4 Tesla magnetic field, 100,000 times greater than that of the earth.  (We actually run at 3.8 Tesla.)  Charged particles that move through a magnetic field take curved paths, and the stronger the field, the greater the curvature.  The more the path curves, the more accurately we can measure it, and thus the more accurately we can measure the momentum of the particle.
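The relation behind that statement is the standard one for a unit-charge track in a magnetic field, pT [GeV/c] ≈ 0.3 · B [T] · r [m].  A minimal sketch — the 100 GeV momentum and 1 m lever arm below are illustrative choices, not figures from the text:

```python
def bending_radius_m(pt_gev, b_tesla):
    """Radius of curvature of a charge-1 particle: r = pT / (0.3 B),
    with pT in GeV/c, B in tesla, r in meters."""
    return pt_gev / (0.3 * b_tesla)

def sagitta_m(pt_gev, b_tesla, lever_arm_m):
    """How far the track bows away from a straight chord of length L:
    s = L^2 / (8 r)."""
    return lever_arm_m**2 / (8.0 * bending_radius_m(pt_gev, b_tesla))
```

For example, a 100 GeV track in the 3.8 T field curves with a radius of nearly 90 m, so over a 1 m chord it bows away from a straight line by only about a millimeter and a half — which is why both a strong field and very precise position measurements are needed.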

The magnet is superconducting; it is kept inside a cryostat that is full of liquid helium. With a diameter of seven meters, it is the largest superconducting magnet ever built. When in its superconducting state, the magnet wire carries more than 18,000 amperes of current, and the energy stored is about 2.3 gigajoules, enough energy to melt 18 tons of gold. Should the temperature inadvertently rise and the magnet become normal conducting, all of that energy needs to go somewhere; there are some impressively large copper conduits that can carry the current to the surface and send it safely to ground. (Thanks to the CMS web pages for some of these fun facts.)
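A back-of-envelope check on those numbers, using only the figures quoted above.  The inductance that falls out is derived from them, not an official CMS specification:

```python
energy_j = 2.3e9      # quoted stored energy, joules
current_a = 18_000.0  # "more than 18,000 amperes", per the text

# For a magnet, stored energy E = (1/2) L I^2, so the coil's
# effective inductance implied by the quoted numbers is:
inductance_h = 2.0 * energy_j / current_a**2
# roughly 14 henries -- an enormous value for a single magnet.
```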

With the start of the LHC run just weeks away, CMS has turned the magnet back on by slowly ramping up the current. Here’s what that looked like today:


You can see that they took a break for lunch!  It is only the second time since the shutdown started two years ago that the magnet has been ramped back up, and now we’re pretty much going to keep it on for at least the rest of the year.  From the experiment’s perspective, the long shutdown is now over, and the run is beginning.  CMS is now prepared to start recording cosmic rays in this configuration, as a way of exercising the detector and using the observed muons to improve our knowledge of the alignment of detector components.  This is a very important milestone for the experiment as we prepare for operating the LHC at the highest collision energies ever achieved in the laboratory!


Video contest: Rock the LHC!

Tuesday, March 17th, 2015

Do you think the Large Hadron Collider rocks? I sure do, and as the collider rocks back to life in the coming weeks (more on that soon), you can celebrate by entering the Rock the LHC video contest. It’s simple: you make a short video about why you are excited about research at the LHC, and submit it to a panel of physicists and communication experts. The producer of the best video will win an all-expenses paid trip for two to Fermilab in Batavia, IL, the premier particle physics laboratory in the United States, for a VIP tour. What a great way to celebrate the restart of the world’s most powerful particle collider!

This contest is funded by the University of Notre Dame, and please note the rules — you must be over 18 and a legal U.S. resident currently residing in the 50 states or the District of Columbia to enter. The deadline for entries is May 31.

As we get ready for the collider run of a lifetime, let’s see how creative you can be about the exciting science that is ahead of us!



Behind the scenes of our “Big Bang Theory” post

Tuesday, February 10th, 2015
Well, that was fun!

At 8 PM ET on February 5, 2015, Quantum Diaries ran a post that was tied to “The Troll Manifestation”, an episode of “The Big Bang Theory” (TBBT) that was being aired at exactly that time.  This was generated in partnership with the show’s writers, staff and advisers. What happens when you couple a niche-interest website to one of the most popular TV shows in the United States? The QD bloggers and support staff had a great time getting ready for this synergistic event and tracking what happened next.  Here’s the story behind the story.

I’ve mentioned previously, in my largely unheralded essay about the coffee culture at CERN, that I have known David Saltzberg, UCLA faculty member and science adviser to TBBT, for a very long time, since we were both students in the CDF group at The University of Chicago.  On January 14, David contacted me (and fellow QD blogger Michael DuVernois) to say that Quantum Diaries was going to be mentioned in an episode of the show that was going to be taped in the coming week.  David wanted to know if I could sign a release form allowing them to use the name of the blog.

I couldn’t — the blog is not mine, but is operated by the InterAction Collaboration, which is an effort of the communications organizations of the world’s particle physics laboratories.  (They signed the release form.)  But I did come up with an idea.  David had said that the show would refer to a Quantum Diaries blog post about a paper that Leonard and Sheldon had written.  Why not actually write such a post and put it up on the site?  A real blog post about a fake paper by fake scientists.  David was intrigued; he discussed it with the TBBT producers, and they liked the idea too.  The show was to air on February 5.  Game on!

David shared the shooting script with me, and explained that this was one of the rare TBBT episodes in which he didn’t just add in some science, but also had an impact on the plot.  He had described his own experience of talking about something with a theorist colleague, and getting the response, “That’s an interesting idea — we should write a paper about it together!”  I myself wouldn’t know where to get started in that situation.  This gave me the idea for how to write about the episode.  The script had enough information about Leonard and Sheldon’s paper for me to say something intelligible about it.  The fun for me in writing the post was in figuring out how to point to the show without giving it all away too quickly.  I ran my text by David, who passed it on to the show’s producers, and everyone enjoyed it.  We knew there was some possibility that the show’s social media team would promote the QD site through their channels; their Facebook page has 33 million likes and their Twitter account has 3.1 million followers.

Meanwhile, the Quantum Diaries team sprang into action.  Kelen Tuttle, the QD webmaster, told the other bloggers for the site about our opportunity to gain national recognition for the blog, and encouraged everyone to generate some exciting new content.  Regular QD readers might have noticed all the bloggers becoming very voluble in the past week!  Kevin Munday and his team at Xeno Media prepared the site for the possible onslaught of visitors — remember, twenty million people watch TBBT each week! — by migrating the site to the CloudFlare content delivery network, with 30 data centers worldwide, and protecting the site against possible security issues.

We all crossed our fingers for Thursday night.  I spent Thursday at Fermilab, and was flying back to Lincoln in the afternoon, scheduled to land at 6:43 PM, a few minutes before the 7 PM air time.  When I got home, I started keeping an eye on the computer.  The blog post was up, but was TBBT going to say anything about it?  Alas, their Twitter feed was quiet during the show.  (No, I didn’t watch — I have to admit that we watch so little television that we couldn’t figure out which channel it might be on!)

All of us involved were a bit disappointed that evening.  But David took up the case again with the CBS interactive team the next day, and was told that they’d put out a tweet as long as we changed our blog post to link to the archive of the show.  We did that, and then at 12:45 Central Time, we got the shout-out that we were hoping for:

So what happens when a TV audience of around 20 million people hears a website (which may or may not be real) mentioned in a show?  Or when 3.1 million fans of a TV show receive a tweet pointing to a blog post?  The Quantum Diaries traffic metrics tell the tale.  Here is a plot of the number of visitors to the site during the past four weeks, including the February 5 air date and the February 6 tweet date:
When the blog was mentioned on the air, there was a definite spike in activity, and an even bigger spike on the day after, when the tweet went out. Traffic on the site was up by a factor of four thanks to TBBT!

However, the plot doesn’t show the absolute scale. On February 6, the site had about 4600 visitors, compared to a typical level of 800-1000 visitors. This means that only 0.1% of people who saw the TBBT tweet actually went and clicked on the link that took them to QD. This is nowhere near the level of activity we saw when the Higgs boson was discovered. TBBT may be a great TV show, but it’s no fundamental scientific discovery.
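For the record, the 0.1% figure follows directly from the numbers above (taking the midpoint of the typical traffic range as the baseline):

```python
visitors_spike = 4600      # Feb 6 visitors, from the text
visitors_typical = 900     # midpoint of the usual 800-1000
followers = 3.1e6          # TBBT Twitter followers

click_through = (visitors_spike - visitors_typical) / followers
# about 0.0012, i.e. roughly 0.1% of the tweet's audience
```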

However, the story did have some pretty strong legs in Nebraska.  My employer, the University of Nebraska-Lincoln, graciously wrote a story about my involvement in the show and promoted it pretty heavily through social media.  This led to a couple of appearances on some news programs that enjoy making local links to national stories (if you could call this a national story).  I found it a bit surreal and was reminded that I need to get a haircut and clean my desk.

Thank you to David Saltzberg for making this possible, and to the TBBT producers and writers who were supportive, and of course to all of my colleagues at Quantum Diaries who did a lot of writing and technical preparation last week.  (A special shout-out to Kelen Tuttle, who left QD for a new position at Invitae this week; at least we sent her off with a Big Bang!) And if you have discovered this blog because of “The Troll Manifestation”, I hope you stay for a while!  These are great times for particle physics — the Large Hadron Collider starts up again this year, we’re planning an exciting international program of neutrino physics that will be hosted in the United States, and we’re scanning the skies for the secrets of cosmology.  We particle physicists are excited about what we do and want to share some of our passion with you.  And besides, now we know that Stephen Hawking reads Quantum Diaries — shouldn’t you read it too?


Theory and experiment come together — bazinga!

Thursday, February 5th, 2015

Regular readers of Quantum Diaries will know that in the world of particle physics, there is a clear divide between the theorists and the experimentalists. While we are all interested in the same big questions — what is the fundamental nature of our world, what is everything made of and how does it interact, how did the universe come to be and how might it end — we have very different approaches and tools. The theorists develop new models of elementary particle interactions, and apply formidable mathematical machinery to develop predictions that experimenters can test. The experimenters develop novel instruments, deploy them on grand scales, and organize large teams of researchers to collect data from particle accelerators and the skies, and then turn those data into measurements that test the theorists’ models. Our work is intertwined, but ultimately lives in different spheres. I admire what theorists do, but I also know that I am much happier being an experimentalist!

But sometimes scientists from the two sides of particle physics come together, and the results can be intriguing. For instance, I recently came across a new paper by two up-and-coming physicists at Caltech. One, S. Cooper, has been a noted prodigy in theoretical pursuits such as string theory. The other, L. Hofstadter, is an experimental particle physicist who has been developing a detector that uses superfluid liquid helium as an active element. Superfluids have many remarkable properties, such as friction-free flow, which can make them very challenging to work with in particle detectors.

Hofstadter’s experience in working with a superfluid in the lab gave him new ideas about how it could be used as a physical model for space-time. There have already been a number of papers that posit a theory of the vacuum as having properties similar to that of a superfluid. But the new paper by Cooper and Hofstadter takes this theory in a different direction, positing that the universe actually lives on the surface of such a superfluid, and that the negative energy density that we observe in the universe could be explained by the surface tension. The authors have difficulty generating any other testable hypotheses from this new theory, but it is inspiring to see how scientists from the two sides of physics can come together to generate promising new ideas.

If you want to learn more about this paper, watch “The Big Bang Theory” tonight, February 5, 2015, on CBS. And Leonard and Sheldon, if you are reading this post — don’t look at the comments. It will only be trouble.

In case you missed the episode, you can watch it here.

Like what you see here? Read more Quantum Diaries on our homepage, subscribe to our RSS feed, follow us on Twitter, or befriend us on Facebook!


2015: The LHC returns

Saturday, January 10th, 2015

I’m really not one for New Year’s resolutions, but one that I ought to make is to do more writing for the US LHC blog.  Fortunately, this is the right year to be making that resolution, as we will have quite a lot to say in 2015 — the year that the Large Hadron Collider returns!  After two years of maintenance and improvements, everyone is raring to go for the restart of the machine this coming March.  There is still a lot to do to get ready.  But once we get going, we will be entering a dramatic period for particle physics — one that could make the discovery of the Higgs seem humdrum.

The most important physics consideration for the new run is the increase of the proton collision energy from 8 TeV to 13 TeV.  Remember that the original design energy of the LHC is 14 TeV — 8 TeV was just an opening step.  As we near the 14 TeV point, we will be able to do the physics that the LHC was meant to do all along.  And it is important to remember that we have no feasible plan to build an accelerator that can reach a higher energy on any near time horizon.  While we will continue to learn more as we record more and more data, through pursuits like precision measurements of the properties of the Higgs boson, it is increases in energy that open the door to the discovery of heavy particles, and there is no major energy increase coming any time soon.  If there is any major breakthrough to be made in the next decade, it will probably come within the first few years of it, as we get our first look at 13 TeV proton collisions.

How much is our reach for new physics extended with the increase in energy?  One interesting way to look at it is through a tool called Collider Reach that was devised by theorists Gavin Salam and Andreas Weiler.  (My apologies to them if I make any errors in my description of their work.)  This tool makes a rough estimate of the mass scale of new physics that we could have access to at a new LHC energy given previous studies at an old LHC energy, based on our understanding of how the momentum distributions of the quarks and gluons inside the proton evolve to the new beam energy.  There are many assumptions made for this estimate — in particular, that the old data analysis will work just as well under new conditions.  This might not be the case, as the LHC will be running not just at a higher energy, but also a higher collision rate (luminosity), which will make the collisions more complicated and harder to interpret.  But the tool at least gives us an estimate of the improved reach for new physics.

During the 2012 LHC run at 8 TeV, each experiment collected about 20 fb-1 of proton collision data.  In the upcoming “Run 2” of the LHC at 13 TeV, which starts this year and is expected to run through the middle of 2018, we expect to record about 100 fb-1 of data, a factor of five increase.  (This is still a fairly rough estimate of the future total dataset size.)  Imagine that in 2012, you were looking for a particular model of physics that predicted a new particle, and you found that if that particle actually existed, it would have to have a mass of at least 3 TeV — a mass 24 times that of the Higgs boson.  How far in mass reach could the same analysis go with the Run 2 data?  The Collider Reach tool tells you:


Using the horizontal axis to find the 3 TeV point, we then look at the height of the green curve to tell us what to expect in Run 2.  That’s a bit more than 5 TeV — a 70% increase in the mass scale that your data analysis would have sensitivity to.

But you are impatient — how well could we do in 2015, the first year of the run?  We hope to get about 10 fb-1 this year. Here’s the revised plot:


The reach of the analysis is about 4 TeV. That is, with only 10% of the data, you get 50% of the increase in sensitivity that you would hope to achieve in the entire run.  So this first year counts!  One year from now, we will know a lot about what physics we have an opportunity to look at in the next few years — and if nature is kind to us, it will be something new and unexpected.
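The idea behind these estimates can be sketched in a few lines of code.  Roughly speaking, you ask for the new mass at which the expected number of signal events matches what the old search had, using the parton luminosity of the beams at each energy.  Here is a toy version in Python — with a single made-up parton density standing in for the real proton structure, so this is an illustration of the scaling logic, not the actual Collider Reach calculation:

```python
import math

def toy_pdf(x):
    # A made-up, steeply falling parton density -- a stand-in for real
    # proton structure functions, chosen only for illustration.
    return (1.0 - x) ** 5 / x

def parton_luminosity(tau, steps=2000):
    # L(tau) = integral over x from tau to 1 of (dx/x) f(x) f(tau/x),
    # evaluated by the trapezoidal rule in log(x).
    lo, hi = math.log(tau), 0.0
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = math.exp(lo + i * h)
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * toy_pdf(x) * toy_pdf(tau / x)
    return total * h

def reach(m_old, e_old, lumi_old, e_new, lumi_new):
    # Solve for m_new such that the expected event yield at the new
    # energy/luminosity matches the old one:
    #   lumi_new * L((m_new/e_new)^2) = lumi_old * L((m_old/e_old)^2)
    target = lumi_old * parton_luminosity((m_old / e_old) ** 2)
    lo, hi = m_old, e_new  # the new reach lies between these, so bisect
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if lumi_new * parton_luminosity((mid / e_new) ** 2) > target:
            lo = mid  # still more events than the target: mass can go higher
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Old search: 3 TeV reach at 8 TeV with 20 fb-1.
# New runs: 13 TeV with 100 fb-1 (full Run 2) or 10 fb-1 (first year).
print("full Run 2 reach (TeV):", round(reach(3.0, 8.0, 20.0, 13.0, 100.0), 2))
print("first-year reach (TeV):", round(reach(3.0, 8.0, 20.0, 13.0, 10.0), 2))
```

With a realistic set of parton densities in place of `toy_pdf`, this kind of calculation reproduces the numbers quoted above; with the toy density the exact values will differ, but the qualitative behavior — a big jump in reach from the energy increase, and most of it arriving already with the first slice of data — comes out the same way.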

So what might this new physics be?  What are the challenges that we face in getting there?  How are physicists preparing to meet them?  You’ll be hearing a lot more about this in the year to come — and if I can keep to my New Year’s resolution, some of it you’ll hear from me.


Where the wind goes sweeping ’round the ring?

Thursday, October 23rd, 2014

I travel a lot for my work in particle physics, but it’s usually the same places over and over again — Fermilab, CERN, sometimes Washington to meet with our gracious supporters from the funding agencies.  It’s much more interesting to go someplace new, and especially somewhere that has some science going on that isn’t particle physics.  I always find myself trying to make connections between other people’s work and mine.

This week I went to a meeting of the Council of the Open Science Grid that was hosted by the Oklahoma University Supercomputing Center for Education and Research in Norman, OK.  It was already interesting that I got to visit Oklahoma, where I had never been before.  (I think I’m up to 37 states now.)  But we held our meeting in the building that hosts the National Weather Center, which gave me an opportunity to take a tour of the center and learn a bit more about how research in meteorology and weather forecasting is done.

OU is the home of the largest meteorology department in the country, and the center hosts a forecast office of the National Weather Service (which produces forecasts for central and western Oklahoma and northern Texas, at the granularity of one hour and one kilometer) and the National Severe Storms Laboratory (which generates storm watches and warnings for the entire country — I saw the actual desk where the decisions get made!).  So how is the science of the weather like and not like the science that we do at the LHC?

(In what follows, I offer my sincere apologies to meteorologists in case I misinterpreted what I learned on my tour!)

Both are fields that can generate significant amounts of data that need to be interpreted to obtain a scientific result.  As has been discussed many times on the blog, each LHC experiment records petabytes of data each year.  Meteorology research is performed by much smaller teams of observers, which makes it hard to estimate their total data volume, but the graduate student who led our tour told us that he is studying a mere three weather events, but he has more than a terabyte of data to contend with — small compared to what a student on the LHC might have to handle, but still significant.

But where the two fields differ is what limits the rate at which the data can be understood.  At the LHC, it’s all about the processing power needed to reconstruct the raw data by performing the algorithms that turn the voltages read out from millions of amplifiers into the energies and momenta of individual elementary particles.  We know what the algorithms for this are, we know how to code them; we just have to run them a lot.  In meteorology, the challenge is getting to the point where you can even make the data interpretable in a scientific sense.  Things like radar readings still need to be massaged by humans to become sensible.  It is a very labor-intensive process, akin to the work done by the “scanner girls” of the particle physics days of yore, who carefully studied film emulsions by eye to identify particle tracks.  I do wonder what the prospects are in meteorology for automating this process so that it can be handed off to computers instead.  (Clearly this has to apply more towards forefront research in the field about how tornadoes form and the like, rather than to the daily weather predictions that just tell you the likelihood of tornado-forming conditions.)

Weather forecasting data is generally public information, accessible by anyone.  The National Weather Service publishes it in a form that has already had some processing done on it so that it can be straightforwardly ingested by others.  Indeed, there is a significant private weather-forecasting industry that makes use of this, and sells products with value added to the NWS data.  (For instance, you could buy a forecast much more granular than that provided by the NWS, e.g. for the weather at your house in ten-minute intervals.)  Many of these companies rent space in buildings within a block of the National Weather Center.  The field of particle physics is still struggling with how to make our data publicly available (which puts us well behind many astronomy projects which make all of their data public within a few years of the original observations).  There are concerns about how to provide the data in a form that will allow people who are not experts to learn something from the data without making mistakes.  But there has been quite a lot of progress in this in recent years, especially as it has been recognized that each particle physics experiment creates a unique dataset that will probably never be replicated in the future.  We can expect an increasing number of public data releases in the next few years.  (On that note, let me point out the NSF-funded Data and Software Preservation for Open Science (DASPOS) project that I am associated with on its very outer edges, which is working on some aspects of the problem.)  However, I’d be surprised if anyone starts up a company that will sell new interpretations of LHC data!

Finally, here’s one thing that the weather and the LHC have in common — they’re both always on!  Or, at least we try to run the LHC for every minute possible when the accelerator is operational.  (Remember, we are currently down for upgrades and will start up again this coming spring.)  The LHC experiments have physicists on duty 24 hours a day, monitoring data quality and ready to make repairs to the detectors should they be needed.  Weather forecasters are also on shift at the forecasting center and the severe-storm center around the clock.  They are busy looking at data being gathered by their own instruments, but also from other sources.  For instance, when there are reports of tornadoes near Oklahoma City, the local TV news stations often send helicopters out to go take a look.  The forecasters watch the TV news to get additional perspectives on the storm.

Now, if only the weather forecasters on shift could make repairs to the weather just like our shifters can fix the detector!