Gavin Hesketh | CERN | Switzerland


So you built the world’s highest energy particle accelerator

What’s the first thing you do? Find the Higgs? Well, actually, you measure charged hadron spectra (see the first paper with LHC collision data from ALICE). It’s something all the LHC experiments are doing, so in case you were wondering why (or what), I thought I’d try to explain. And for some reason, only after writing all this did I see Zoe’s nice post on the ALICE analysis. D’oh. Well, there are not many LHC results to talk about… yet…

On the multi-purpose experiments like ATLAS and CMS, these measurements are made by what is usually called the “Minimum Bias” group (maybe on ALICE and LHCb too, I’m not sure). The “minimum bias” part relates to how these collisions end up being recorded: by firing a “minimum bias” trigger. Some triggers are highly biased and only record collisions where, for example, a muon was produced. A minimum bias trigger only requires that something happened: that there was a collision. It gets a bit more complicated, because there are also “diffractive” collisions, but this post is already going to be long enough! Normally the rate of collisions is too high to record them all, so only a small (random) fraction are actually recorded. But for the first LHC runs, the rate was low enough that as many as possible were kept. This is one of the nice things about minimum bias (and why it was the first paper): you don’t have to run for very long to get a lot of events!
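The logic of a minimum bias trigger, with its random “keep 1 in N” prescale, can be sketched very roughly like this (the event format and function names here are hypothetical, purely for illustration; in a real experiment the “something happened” decision comes from dedicated detector electronics):

```python
import random

def minimum_bias_trigger(event, prescale=1000):
    """Keep a random 1-in-`prescale` fraction of events showing any activity.

    `event` is assumed to be a dict with a 'hits' count standing in for
    the real "something happened" condition in the detector.
    """
    something_happened = event["hits"] > 0
    return something_happened and random.random() < 1.0 / prescale

# During the first low-rate LHC runs the prescale could effectively be 1,
# i.e. keep (nearly) every collision that fired the trigger:
events = [{"hits": h} for h in [3, 0, 7, 1]]
recorded = [e for e in events if minimum_bias_trigger(e, prescale=1)]
```

With `prescale=1` every event with activity is kept; at high collision rates the prescale is raised so only a random fraction survives, which keeps the selection unbiased.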

Then the measurements themselves, which are things like dN/deta and dN/dpT. So, the proton is a ball of stuff: mostly quarks and gluons. When you smash two of these together, it is really the quarks and gluons that interact. Most of the time, the interactions are fairly “soft” (low energy), and the result is a spray of particles from the remains of the two protons. A seasonal analogy: take a snowball and throw it at a wall; you’ll get a satisfying thud and bits of snowball fly everywhere. Now, if you have a friend who is also willing to take part in this experiment, try throwing two snowballs so they hit head on in the air. This is pretty much like these “soft” proton interactions: bits of snowball flying everywhere. The aim here is to measure the debris of the protons in these collisions, and quantify it by looking at, for example, dN/deta. The N here is the number of charged particles produced in the debris; eta (the pseudorapidity) is a measure of the angle from the beam, with eta=0 perpendicular (transverse) to the beam direction. So, it’s a measurement of the average number of particles produced at different angles from the beam. Similarly, dN/dpT is the average number of particles produced with different momenta in the direction transverse to the beam.
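For the curious, pseudorapidity has a simple closed form in terms of the polar angle theta measured from the beam axis: eta = -ln(tan(theta/2)). A minimal sketch:

```python
import math

def pseudorapidity(theta):
    """Pseudorapidity eta = -ln(tan(theta/2)),
    where theta is the polar angle from the beam axis, in radians."""
    return -math.log(math.tan(theta / 2.0))

# A particle emitted perpendicular to the beam (theta = 90 degrees)
# has eta = 0; smaller angles (closer to the beam) give larger |eta|.
eta_transverse = pseudorapidity(math.pi / 2)
eta_forward = pseudorapidity(math.pi / 4)
```

So a flat dN/deta distribution means roughly the same number of charged particles per unit of this angular variable, which is what these measurements map out.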

Now, why is this interesting? Well, depends who you ask, but here is my bias: occasionally, a quark or gluon will be carrying a large fraction of the proton momentum, and a “hard” (high energy) interaction will take place: a Z might be produced, or (we hope) the Higgs or some new particle. This is where the snowball analogy breaks down: it’s like throwing two snowballs, and a tiger being produced when they collide (see Figure 1). Quantum mechanics is strange…

Figure 1: What happens when you stretch an analogy too far

Anyway, even when there is a hard interaction which produces the Higgs, there will still be some debris from the remains of the protons. And as the LHC reaches high beam intensity, there will probably be a couple (or more) of soft interactions between different protons at the same time. In other words, a lot of this debris flying around. We need to understand it, because when all this stuff hits the detector it might make it harder to find the Higgs signal.

And the problem is that this kind of soft interaction is very hard to calculate. We have some excellent models, but they have been tuned to measurements made in the past, and different models tend to diverge when extrapolated up to the energies we will soon see at the LHC. So we really don’t know exactly what the debris will look like, and repeating these minimum bias measurements is essential. And how do you know you are measuring this correctly? Well, compare to previous measurements and see if you get the same answer. So, it’s very convenient that the LHC has so far run at 450 GeV per beam, giving a centre-of-mass energy of 900 GeV (earlier CERN experiments like UA1 and UA5 measured minimum bias at this energy), and at 2.36 TeV centre-of-mass (very close to the Tevatron energy, where CDF measured minimum bias).
