Ken Bloom | USLHC | USA


Double time

In particle physics, we’re often looking for very rare phenomena, which are highly unlikely to happen in any given particle interaction. Thus, at the LHC, we want to have the greatest possible proton collision rates; the more collisions, the greater the chance that something unusual will actually happen. What are the tools that we have to increase collision rates?

Remember that the proton beams are “bunched” — there isn’t a continuous current of protons in a beam, but a series of smaller bunches of protons, each only a few centimeters long, with gaps of many centimeters between each bunch.  The beams are then timed so that bunches from each beam pass through each other (“cross”) inside one of the big detectors.  A given bunch can have about 10^11 protons in it, and when two bunches cross, perhaps a few tens of the protons in each bunch — a tiny fraction! — will interact.  This bunching is actually quite important for the operation of the detectors — we know when bunches are crossing, and thus when collisions happen, and then we know when the detectors should really be “on” to record the data.
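To put rough numbers on this, here is a back-of-the-envelope sketch.  The bunch spacing and the number of interactions per crossing are illustrative assumptions based on the figures quoted in the post, not official machine parameters:

```python
# Back-of-the-envelope numbers for LHC bunch crossings.
# Values are illustrative assumptions, not official machine parameters.
bunch_spacing_ns = 25                  # assumed time between bunch crossings
crossing_rate_hz = 1e9 / bunch_spacing_ns
protons_per_bunch = 1e11               # "about 10^11 protons" from the post
interactions_per_crossing = 20         # "a few tens of the protons", taken as ~20

fraction = interactions_per_crossing / protons_per_bunch
print(f"crossing rate: {crossing_rate_hz / 1e6:.0f} MHz")       # 40 MHz at 25 ns
print(f"fraction of protons interacting: {fraction:.0e}")       # a tiny fraction indeed
```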

Suppose you wanted a given total collision rate.  You could imagine two ways to get it: have N bunches per beam, each with M protons, or 2N bunches per beam, each with M/sqrt(2) protons.  (The number of collisions in a single crossing scales as the square of the number of protons per bunch, so doubling the number of crossings lets you get by with half as many collisions per crossing.)  The more bunches in the beam, the more closely spaced they would have to be, but that can be done.  From the perspective of the detectors, the second scenario is much preferred.  That’s because you get fewer proton collisions per bunch crossing, and thus fewer particles streaming through the detectors.  The collisions are much easier to interpret if you have fewer collisions per crossing; among other things, you need less computer processing time to reconstruct each event, and you will make fewer mistakes in the event reconstruction because there aren’t so many particles all on top of each other.
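The tradeoff can be checked with a couple of lines of arithmetic.  N and M below are hypothetical placeholder values, and the rate is relative (arbitrary units):

```python
import math

def relative_collision_rate(n_bunches, protons_per_bunch):
    # Crossings per second scale with the bunch count; collisions per
    # crossing scale with the square of the bunch population.
    return n_bunches * protons_per_bunch ** 2

N, M = 1000, 1.0e11                  # hypothetical baseline configuration
scenario_a = relative_collision_rate(N, M)
scenario_b = relative_collision_rate(2 * N, M / math.sqrt(2))
assert math.isclose(scenario_a, scenario_b)   # same total collision rate

# But pileup (collisions per single crossing) is halved in scenario B:
pileup_ratio = (M / math.sqrt(2)) ** 2 / M ** 2
print(round(pileup_ratio, 6))        # 0.5
```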

In the previous LHC run (2010–12), the accelerator had “50 ns spacing” between proton bunches, i.e. bunch crossings took place every 50 ns.  But over the past few weeks, the LHC has been working on running with “25 ns spacing,” which would allow the beam to be segmented into twice as many bunches, with fewer protons per bunch.  It’s a new operational mode for the machine, and thus some amount of commissioning and tuning is required.  A particular concern is the “electron cloud” effect, in which stray particles in the beampipe strike the walls and eject more particles; the effect grows as the bunch spacing shrinks.  But from where I sit as one of the experimenters, it looks like good progress has been made so far, and as we go through the rest of this year and into next year, 25 ns spacing should be the default mode of operation.  Stay tuned for what physics we’re going to be learning from all of this!
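For a sense of scale, the ring circumference and the speed of light give a rough count of how many bunch slots halving the spacing opens up.  This is only a sketch: the real machine counts slots in RF buckets just under 25 ns, and many slots are deliberately left empty for injection and abort gaps, so the filled-bunch count is lower:

```python
CIRCUMFERENCE_M = 26659      # LHC ring circumference in meters
C = 299_792_458              # speed of light, m/s (protons are ultra-relativistic)

revolution_time_s = CIRCUMFERENCE_M / C
slots_50ns = revolution_time_s / 50e-9
slots_25ns = revolution_time_s / 25e-9

# Halving the spacing doubles the number of available slots:
print(round(slots_50ns), round(slots_25ns))   # roughly 1800 vs 3600 slots
```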

  • Greg Fystro

That is very interesting information. Is the fill pattern symmetric, with a bunch every 50/25 ns filling the entire ring? And why does the 25 ns spacing lead to a stronger electron cloud effect? What is the mechanism that causes this?

  • Kud

    Stray electrons cause issues in the beam pipes when they pass close to the beam and get accelerated by a positively charged proton bunch passing by so much that they will release further particles (electrons and sometimes atoms/ions) when they impact the beam pipe wall. 25 ns means that the odds of released particles interacting with the beam again, thus getting accelerated too and releasing further particles, are much higher.

The fill pattern is not symmetric. Magnets are required for feeding further protons into the ring, and just as a railway switch couldn’t instantly flip between two adjacent wagons of a train moving faster than walking pace, these magnets also have a certain delay – the more powerful the beam, the stronger the magnetic field required for switching, and the longer the switching takes.

First the PS accelerator ring is filled with protons from a linear accelerator. They’re accelerated, then fed into the SPS ring. 4 PS fills are collected there (with a small gap in between, due to switching), then the whole SPS ring content is accelerated further and fed into the LHC itself, leaving an even bigger switching gap this time. (IIRC) 4 SPS fills fit into the LHC, but after them a big gap is left. This big gap is needed for the final switch – the kicker for the LHC beam dump, which must be able to redirect the beam at full energy into the dump. That’s needed because a gone-wild beam could wreak havoc on the LHC, so the beam is discarded when its parameters deviate too much from expectations (or something else goes wrong).

    The LHC is thus filled with a proton pattern like this: #.#.#.#…#.#.#.#…#.#.#.#…#.#.#.#…… after which it starts again. Each # is a series of proton bunches with 25 ns spacing.
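The nested gap structure the commenter describes can be sketched in a few lines.  The train lengths and gap sizes below are made-up placeholders chosen to keep the output short, not real LHC numbers:

```python
def ps_batch(n_bunches=4):
    # One PS batch: a contiguous train of bunches at 25 ns spacing
    return "#" * n_bunches

def sps_fill(n_batches=4, small_gap=1):
    # Several PS batches, with a small injection-kicker gap between them
    return ("." * small_gap).join(ps_batch() for _ in range(n_batches))

def lhc_fill(n_sps_fills=4, big_gap=3, abort_gap=6):
    # Several SPS fills with a bigger switching gap, plus the long
    # abort gap kept empty for the beam-dump kicker to ramp up
    return ("." * big_gap).join(sps_fill() for _ in range(n_sps_fills)) + "." * abort_gap

print(lhc_fill())
```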

  • Kud

Please see my answer above; I hit the wrong button.

  • nikkkom

How long do people need to wait for the beam dump to “cool” before it’s safe to stand near it?