Ken Bloom | USLHC | USA

Place your bets: 25 or 50?

Note to readers: this is my best attempt to describe some issues in accelerator operations; I welcome comments from people more expert than me if you think I don’t have things quite right.

The operators of the Large Hadron Collider seek to collide as many protons as possible. The experimenters who study these collisions seek to observe as many proton collisions as possible. Everyone can agree on the goal of maximizing the number of collisions that can be used to make discoveries. But the accelerator physicists and the particle physicists might part ways over just how those collisions are best delivered.

Let’s remember that the proton beams that circulate in the LHC are not a continuous current like you might imagine running through your electric appliances. Instead, the beam is bunched — about 10¹¹ protons are gathered in a formation that is about as long as a sewing needle, and each proton beam is made up of 1380 such bunches. As the bunches travel around the LHC ring, they are separated by 50 nanoseconds in time. This bunching is necessary for the operation of the experiments — it ensures that collisions occur only at certain spots along the ring (where the detectors are), and the experiments can know exactly when the collisions are occurring and synchronize the response of the detector to that time. Note that because there are so many protons in each beam, there can be multiple collisions each time two bunches pass by each other. At the end of the last LHC run, there were typically 30 collisions per bunch crossing.
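
To put those numbers together, here is a minimal back-of-the-envelope sketch in Python. The beams’ revolution frequency, roughly 11,245 turns per second (set by the ring circumference), isn’t quoted above and is added here as an assumed machine parameter; everything else comes from the figures in this paragraph.

```python
# Ballpark rates for the bunch structure described above.

PROTONS_PER_BUNCH = 1e11   # ~10^11 protons per bunch (from the text)
BUNCHES_PER_BEAM = 1380    # bunches per beam at 50 ns spacing (from the text)
REVOLUTION_FREQ = 11245.0  # turns per second (assumed machine parameter)
PILEUP = 30                # collisions per crossing at the end of the last run

crossing_rate = BUNCHES_PER_BEAM * REVOLUTION_FREQ  # bunch crossings per second
collision_rate = crossing_rate * PILEUP             # pp collisions per second

print(f"bunch crossings per second: {crossing_rate:.2e}")   # ~1.6e7
print(f"pp collisions per second:   {collision_rate:.2e}")  # ~4.7e8
```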

There are several ways to maximize the number of collisions that occur. Increasing the number of protons in each bunch will certainly increase the number of collisions. Or, one could imagine increasing the total number of bunches per beam, and thus the number of bunch crossings. The collision rate increases like the square of the number of particles per bunch, but only linearly with the number of bunches. On the face of it, then, it would make more sense to add more particles to each bunch rather than to increase the number of bunches if one wanted to maximize the total number of collisions.
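
The quadratic-versus-linear scaling is easy to check with a toy calculation; the overall normalization is arbitrary, and only the relative factors matter:

```python
# Collision rate scales like (number of bunches) x (protons per bunch)^2.

def relative_rate(n_bunches, protons_per_bunch):
    """Collision rate up to an overall constant: rate ~ n_b * N^2."""
    return n_bunches * protons_per_bunch**2

base = relative_rate(1380, 1.0)
print(relative_rate(1380, 2.0) / base)  # doubling N:    4x the collisions
print(relative_rate(2760, 1.0) / base)  # doubling n_b:  2x the collisions
```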

But the issue is slightly more subtle than that. The more collisions that occur per bunch crossing, the harder the collisions are to interpret. With 30 collisions happening at the same time, one must contend with hundreds, if not thousands, of charged-particle tracks that cross each other and are harder to reconstruct, which means more computing time to process the event. With more going on in each event, the most important parts of the event are increasingly obscured by everything else, degrading the energy and momentum resolution needed to help identify the decay products of particles like the Higgs boson. So from the perspective of an experimenter at the LHC, one wants to maximize the number of collisions while having as few collisions per bunch crossing as possible, to keep the interpretation of each bunch crossing simple. This argument favors increasing the number of bunches, even if that might ultimately mean fewer total collisions than could be obtained by increasing the number of protons per bunch. It’s not very useful to record collisions that you can’t interpret because the events are just too busy.
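
The experimenters’ preference can be put in numbers: at a fixed total collision rate, spreading the protons over more bunches lowers the pileup per crossing. A minimal sketch, again assuming the ~11,245/s revolution frequency used earlier:

```python
REVOLUTION_FREQ = 11245.0  # turns per second (assumed machine parameter)

def pileup(total_collision_rate, n_bunches):
    """Mean number of collisions per bunch crossing at a given total rate."""
    return total_collision_rate / (n_bunches * REVOLUTION_FREQ)

rate = 4.7e8  # pp collisions per second, the ballpark from the earlier sketch
print(pileup(rate, 1380))  # 50 ns spacing: ~30 collisions per crossing
print(pileup(rate, 2760))  # 25 ns spacing: ~15, much easier to interpret
```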

This is the dilemma that the LHC and the experiments will face as we get ready to run in 2015. In the current jargon, the question is whether to run with 50 ns between collisions, as we did in 2010-12, or 25 ns between collisions. For the reasons given above, the experiments generally prefer to run with a 25 ns spacing. At peak collision rates, the number of collisions per crossing is expected to be about 25, a number that we know we can handle on the basis of previous experience. In contrast, the LHC operators generally prefer the 50 ns spacing, for a variety of operational reasons, including being able to focus the beams better. The total number of collisions delivered per year could be about twice as large with 50 ns spacing…but with many more collisions per bunch crossing, perhaps by a factor of three. This is possibly more than the experiments could handle, and it could well be necessary to limit the peak beam intensities, and thus the total number of collisions, to allow the experiments to operate.
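
Putting the two 2015 scenarios side by side with the rough factors quoted above (these are the post’s ballpark numbers, not official projections):

```python
PILEUP_25NS = 25           # expected collisions per crossing at 25 ns
YIELD_FACTOR_50NS = 2.0    # ~2x the collisions per year at 50 ns (rough)
PILEUP_FACTOR_50NS = 3.0   # ~3x the pileup at 50 ns (rough)

pileup_50ns = PILEUP_25NS * PILEUP_FACTOR_50NS  # ~75 collisions per crossing

print(f"25 ns: pileup ~{PILEUP_25NS}, relative yearly collisions ~1.0")
print(f"50 ns: pileup ~{pileup_50ns:.0f}, relative yearly collisions ~{YIELD_FACTOR_50NS}")
```

A pileup around 75 is far beyond the ~25 that previous experience shows the experiments can handle, which is exactly why peak intensities might have to be limited in the 50 ns scenario.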

So how will the LHC operate in 2015 — at 25 ns or 50 ns spacing? One factor is that the machine has only done test runs at 25 ns spacing, to understand what issues might be faced. The LHC operators will re-commission the machine with 50 ns spacing, with the intention of switching to 25 ns spacing later, as soon as a couple of months later if all goes well. But then imagine that 50 ns running works very well from the outset. Would the collision pileup issues motivate the LHC to change the bunch spacing? Or would the machine operators prefer to keep going with a machine that is operating well?

In ancient history I worked on the CDF experiment at the Tevatron, which was preparing to start running again in 2001 after some major reconfigurations. It was anticipated that the Tevatron was going to start out with a 396 ns bunch spacing and then eventually switch over to 132 ns, just like we’re imagining for the LHC in 2015. We designed all of the experiment’s electronics to be able to function in either mode. But in the end, 132 ns running never happened; increases in collision rates were achieved by increasing beam currents. This was less of an issue at the Tevatron, as the overall collision rate was much smaller, but the detectors still ended up operating with numbers of collisions per bunch crossing much larger than they were designed for.

In light of that, I find myself asking — will the LHC ever operate in 25 ns mode? What do you think? If anyone would like to make an informal wager (as much as is permitted by law) on the matter, let me know. We’ll pay out at the start of the next long shutdown at the end of 2017.
