
Aidan Randle-Conde | Université Libre de Bruxelles | Belgium


What the L?!

There are few things that particle physicists like to talk about more than luminosity (known affectionately as “L”). We measure it obsessively, we boast about it shamelessly and we never forget to mention it in our papers, plots and talks. So what’s the big deal? What is luminosity and why is it important?

The concept of instantaneous luminosity is borrowed from the field of astrophysics, where it’s used to describe how much energy a star gives off. To calculate the instantaneous luminosity, simply measure how much energy flows through a surface in an interval of time. To get the instantaneous luminosity in particle physics, simply swap energy for the number of particles and the definition is the same!
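Written out as a formula (just a sketch of the definition, not the full accelerator-physics treatment), that picture is:

    L_{\mathrm{inst}} = \frac{1}{A}\,\frac{\mathrm{d}N}{\mathrm{d}t}

where dN/dt is the number of particles crossing the surface per unit time and A is the area of the surface, so the instantaneous luminosity has units of inverse area per unit time.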

The instantaneous luminosity is a measure of how many particles (blue) pass through a surface of unit area (yellow) in unit time (not shown).

Well, not quite. If you take a quick look at any of the experiments at the LHC you’ll notice that there are two beams, so to get any meaningful measurement of luminosity you’ll have to account for the flow of particles in both beams, a task which doesn’t seem easy! In order to use the concept of instantaneous luminosity we need to apply some knowledge of special relativity. We imagine that the protons in one of the beams are all at rest, and count how many protons from the other beam pass through per unit area and unit time. (The instantaneous luminosity makes more sense for fixed target experiments, where there is only one beam and the other matter is kept at rest. This is how most early experiments operated, and we’ve been stuck using luminosity ever since!)

In itself, the instantaneous luminosity is useless to us; to make any real use of it we must combine it with a cross section. A cross section is used to describe how often some process happens, and the analogy is very simple! Imagine placing lots of targets in front of the beam of particles, each one representing a different process. The larger targets will be hit by more protons, so we’ll see those processes more often. A larger cross section means a higher rate for that process! To get the number of events where that process happens (per unit time) we just multiply the cross section by the instantaneous luminosity, and that tells us how many “hits” we can expect. Simple!
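In equation form (a sketch, with σ standing for the cross section of whichever process you care about):

    R = \sigma \times L_{\mathrm{inst}}

where R is the rate of that process. The units work out neatly: a cross section (an area) times an instantaneous luminosity (an inverse area per unit time) gives events per unit time.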

Since having a larger instantaneous luminosity means having more events, we want to do everything we can to increase it. We can do that in quite a few ways, and the most obvious is to increase the number of protons in the beam. After all, each proton carries its own set of tiny (very very tiny) targets, and since the cross section of a given process is the same for each proton, you can increase the total size of a given target by increasing the number of protons. Another way to increase the instantaneous luminosity is to cram the same number of protons into a narrower beam, and this is called squeezing. After a while we start to reach physical limits of what we can achieve (this is due to phase space factors, beam shape parameters and all sorts of fascinating properties of the beam that would make for another blog post!) so we need to resort to simpler methods. One of the most effective methods is to increase the number of bunches in the LHC ring: instead of cramming more protons into the same part of the ring, we put more protons in the previously empty regions of the ring.
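To get a feel for which knobs matter, here is a back-of-the-envelope sketch in Python using the textbook formula for two Gaussian beams colliding head on. It ignores crossing-angle and other geometric reduction factors, and the input values are illustrative ballpark numbers rather than official LHC parameters:

    # Rough collider luminosity for head-on Gaussian beams:
    #   L = f_rev * n_bunches * N1 * N2 / (4 * pi * sigma_x * sigma_y)
    import math

    f_rev     = 11245          # revolution frequency of the ring in Hz (~11 kHz for the LHC)
    n_bunches = 1380           # number of colliding bunch pairs (illustrative)
    N1 = N2   = 1.15e11        # protons per bunch (illustrative)
    sigma_x = sigma_y = 3.2e-3 # transverse beam size at the collision point, in cm (~30 microns)

    L_inst = f_rev * n_bunches * N1 * N2 / (4 * math.pi * sigma_x * sigma_y)
    print(f"L_inst ~ {L_inst:.1e} per cm^2 per second")   # roughly 1.6e33

    # More protons per bunch (N1, N2), more bunches (n_bunches) or a tighter
    # squeeze (smaller sigma_x, sigma_y) all push the luminosity up.

The number that comes out is around 10^33 per square centimetre per second, which matches the typical value mentioned below.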

The proton presents many different processes, and each process has its own cross section. This diagram is not at all to scale, and the QCD cross section is much larger than the other cross section shown!

As usual, things aren’t quite as simple as this. There are many different processes, each with its own cross section. Some of them are much, much larger than others, and most of the larger cross sections are boring to us, so if we want to get to the interesting physics we need a way to artificially reduce the sizes of the boring cross sections. (It would be nice if we could increase the sizes of the interesting cross sections instead, but that’s not physically possible at the LHC!) The notoriously large cross section at the LHC is the quantum chromodynamics (QCD) cross section, which dominates everything we see, and for most people it’s an annoyance that makes it harder to find the interesting physics. To reduce the effective cross sections of these processes we use a prescale, which is very simple. We only record events that fire the trigger, and the trigger looks for different kinds of events. A prescale tells the trigger to ignore a fixed proportion of a specific kind of event, and that way we can record fewer boring events and save our precious resources for the most interesting ones.
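To make the idea concrete, here is a toy sketch (hypothetical code, nothing like the real trigger software): a prescale of 1000 means that only one in every thousand events firing that trigger gets recorded.

    import random

    def run_trigger(n_events, fire_probability, prescale):
        """Toy trigger path: count how many events get recorded when only
        one in every `prescale` firing events is kept."""
        fired = recorded = 0
        for _ in range(n_events):
            if random.random() < fire_probability:   # does this event fire the trigger?
                fired += 1
                if fired % prescale == 0:            # the prescale: keep 1 in N
                    recorded += 1
        return recorded

    # A very common ("boring") trigger with a heavy prescale versus a rare,
    # interesting trigger recorded every time it fires.
    print(run_trigger(100_000, fire_probability=0.5,   prescale=1000))  # ~50 events recorded
    print(run_trigger(100_000, fire_probability=0.001, prescale=1))     # ~100 events recorded

The heavily prescaled path still gets written out often enough to study, but it no longer swamps the rare, unprescaled one.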

Now if you see a plot from a collaboration you’ll often see the luminosity written on the plot, but this is not the instantaneous luminosity, it’s the integrated luminosity. To get the integrated luminosity we add up (integrate) the instantaneous luminosity over the time during which it was delivered. This means that it has units of inverse area, and when we multiply it by a cross section we get a number of events. This is why the integrated luminosity is so important to us: if we know the cross section for a process and we know the integrated luminosity, we can work out how many events we expect to see and compare that to how many we actually see. This tells us when to expect a discovery, and when we’ve found something truly new and interesting!
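As a toy example with made-up numbers, take a process with a cross section of 50 pb and a dataset of 1 fb^-1 of integrated luminosity, which is 1000 pb^-1 (inverse picobarns, a unit explained below):

    N_{\mathrm{events}} = \sigma \times L_{\mathrm{int}} = 50\ \mathrm{pb} \times 1000\ \mathrm{pb}^{-1} = 50{,}000

so we would expect about fifty thousand such events, before any detector efficiencies are taken into account.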

A typical mass spectrum plot, proudly declaring the integrated luminosity for all to see. (arXiv:1103.6218v1 [hep-ex])

It seems elegant and simple, but personally I find the whole thing is spoiled by the choice of units, and converting between them is ever so slightly baffling (probably not something I should admit to in public!) Instantaneous luminosity is usually measured in cm^-2 s^-1, which is an odd choice. In these units a typical value is 10^33, which is an unimaginably large number! This is almost inevitable, because luminosity varies so widely between experiments and as new technologies become available. If we choose new units now to make the numbers more manageable, they’ll still become ridiculously large in the future. To confuse things further, the integrated luminosity is usually measured in inverse barns (as in “You can’t hit a barn with that!”). A barn is 10^-28 m^2 (or 10^-24 cm^2), so this makes the integrated luminosity a little bit easier to express in terms that don’t make my head spin. But even after that, our integrated luminosities need prefixes to make the numbers nice, so you’ll often see integrated luminosities written in inverse picobarns (pb^-1) or inverse femtobarns (fb^-1), and then the smaller the prefix, the larger the amount of integrated luminosity! I find that the easiest way to remember whether I need to multiply or divide by 1,000 to convert the units is to just go with what feels wrong and it’ll be right. Smaller inverse areas mean larger numbers of events. If that isn’t a crazy choice of units, I don’t know what is!
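For a feel for the conversions, here is a small sketch with illustrative numbers: one day of running at the typical instantaneous luminosity of 10^33 cm^-2 s^-1 adds up to a bit under 0.1 fb^-1.

    # Convert an instantaneous luminosity delivered over one day into inverse picobarns.
    BARN_IN_CM2 = 1e-24            # 1 barn = 1e-24 cm^2, so 1 cm^-2 = 1e-24 b^-1

    L_inst  = 1e33                 # instantaneous luminosity in cm^-2 s^-1 (typical value)
    seconds = 24 * 60 * 60         # one day of running

    L_int_cm2  = L_inst * seconds          # integrated luminosity in cm^-2
    L_int_barn = L_int_cm2 * BARN_IN_CM2   # ... in inverse barns
    L_int_pb   = L_int_barn / 1e12         # 1 pb^-1 = 1e12 b^-1

    print(f"{L_int_pb:.0f} pb^-1")         # ~86 pb^-1, i.e. just under 0.1 fb^-1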

To get an idea of a typical integrated luminosity, let’s think about how much data we’d need to see a standard model Higgs boson of mass 200 GeV. Let’s imagine we see 100 events which are not consistent with known backgrounds. To make our job easier, let’s think about the “gold plated” decay of H→ZZ and Z→ll, where l is a charged lepton. The branching fraction for this decay is about 25% for H→ZZ and about 7% for Z→ll, and let’s assume we are 50% efficient at reconstructing a Z. Altogether we’d need to produce about 80,000 Higgs bosons to see 100 events of this type. Dividing by the cross section of Higgs production at 200 GeV gives us an integrated luminosity of 16 ab^-1. That’s a lot of events! Luckily, there are many more final states we can explore, and when we add it all up, it turns out we’ll have enough data to be sensitive to a standard model Higgs before too long.
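Schematically, just to spell out the chain of factors in that kind of estimate (with ε_Z standing for the efficiency to reconstruct each Z), the number of gold plated events we expect is:

    N_{\mathrm{seen}} = L_{\mathrm{int}} \times \sigma_{H} \times \mathrm{BR}(H \to ZZ) \times \mathrm{BR}(Z \to \ell\ell)^{2} \times \epsilon_{Z}^{2}

so the required integrated luminosity is just the target number of events divided by everything else on the right-hand side.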

That’s all very impressive, but the punchline comes from the world of “low high energy physics”, for example the BaBar experiment. Whenever I want to tease my friends at the LHC, I remind them that my previous experiment had 550 fb^-1 of data, about 5,000 times what we have right now, and a number the LHC will not reach any time soon!

You can usually tell what kind of physicist you’re talking to immediately by asking them what the luminosity is at the LHC. An experimental physicist will tell you in terms of data (i.e. inverse barns), whereas an accelerator physicist will tell you in terms of beams (i.e. cm^-2 s^-1). I find it quite amusing that the accelerator physicists generally find everything up to the point of collision deeply fascinating, and everything after that a frightful bore that makes their work even more complicated, whereas the experimental physicists think the other way around!
