CERN | Geneva | Switzerland


Why don’t we just say collision rate?

The LHC run for 2011 is just getting underway, and it’s poised to serve up a menu of exciting new physics. While 2010 was a learning year, with an equitable divide between physics running and machine development, 2011 is a physics year. The aim is to establish good running conditions as quickly as possible, and then to run solidly for physics until the end of the year. The beam energy will remain at 3.5 TeV in 2011, but there will be a big drive to increase the luminosity by at least a factor of three compared to what was achieved in 2010. I guess that makes now a good time for that post I promised on luminosity…

Luminosity gives a measure of how many collisions are happening in a particle accelerator, so we’re often asked why we don’t just say collision rate. It’s a very reasonable question. The answer is because luminosity isn’t strictly speaking the collision rate: it measures how many particles we’re able to squeeze through a given space in a given time. That doesn’t mean that those particles will all collide, but the more we can squeeze into a given space, the more likely they are to collide.

The best place to start is with what physicists refer to as a cross-section. Usually, a cross-section is a measure of the size of something. For example, a barn door has a bigger cross-section – it covers a larger area – than a cat flap. In particle physics, a cross-section is a measure of the probability of something happening, and it’s measured in units called… barns. In reality, a barn is a huge cross-section, and most processes have cross-sections measured in tiny fractions of a barn.

When protons collide in the LHC, many things can happen: the protons can just glance off each other, or they can collide more violently, producing any of a range of new particles. Each of these processes has its own cross-section. The cross-section for the production of a Higgs particle, for example, is very small – at the scale of nanobarns, or billionths of a barn – which means that Higgs particles, if they exist, will be produced very rarely.

When looking for something that rare, the more collisions you have, the more likely it is that the rare thing will happen, and that’s why particle physicists care so much about luminosity. It works a bit like this: if you multiply the luminosity of the beam by the cross-section for any process, such as Higgs production, you get the rate at which you can expect that process to happen. If you multiply the luminosity by the sum of the cross-sections for all possible processes, you get the total collision rate.
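In code, that relationship is just a multiplication once the units are made consistent: a barn is 10⁻²⁴ square centimetres. Here is a minimal sketch in Python, using a purely illustrative luminosity and a hypothetical 1-nanobarn cross-section (neither figure comes from the post itself):

```python
BARN_TO_CM2 = 1e-24  # 1 barn = 1e-24 square centimetres

def event_rate(luminosity_cm2_s, cross_section_barn):
    """Expected events per second: rate = luminosity x cross-section."""
    return luminosity_cm2_s * cross_section_barn * BARN_TO_CM2

# Illustrative values: a luminosity of 1e32 per cm^2 per second and a
# hypothetical process with a cross-section of 1 nanobarn (1e-9 barn).
print(event_rate(1e32, 1e-9))  # 0.1 events per second
```

A rarer process (smaller cross-section) or a dimmer beam (lower luminosity) both scale the expected rate down in exactly the same way.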

To help figure out the different ways that luminosity contributes to the collision rate, think of rolling marbles from opposite ends of a corridor. If you’ve just got one person at each end of the corridor, the chances they’ll get their marbles to collide are small: the luminosity is pretty low. But put lots of people at each end and the luminosity goes up. Similarly, if you increase the cross-section by rolling footballs down the corridor, the luminosity remains the same, but since a football has a bigger cross-section than a marble, the number of collisions goes up. It’s not a perfect analogy, but I hope you get the idea.

The LHC’s design luminosity is 10³⁴ per square centimetre per second. That’s a big number, and although we can’t say exactly how many collisions it equates to, we can say that it’s around 600 million collisions per second on average. When the LHC run ended in 2010, the luminosity was around 2×10³² per square centimetre per second, giving a few million proton collisions per second.
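The 600-million figure can be recovered with the same luminosity-times-cross-section arithmetic. As a rough sketch, assuming a total proton–proton cross-section of about 60 millibarn (an order-of-magnitude figure of my own, not a number quoted in the post):

```python
BARN_TO_CM2 = 1e-24                  # 1 barn = 1e-24 cm^2
sigma_total = 60e-3 * BARN_TO_CM2    # assumed ~60 millibarn total cross-section, in cm^2

design_luminosity = 1e34             # cm^-2 s^-1, the LHC design luminosity
collision_rate = design_luminosity * sigma_total
print(collision_rate)                # around 600 million collisions per second
```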

The other way that physicists use luminosity is to add it all up, or integrate it. If you do that, you get a measure of the total number of collisions that have happened. So, for example, the integrated luminosity recorded by the ATLAS and CMS experiments in 2010 was around 45 inverse picobarns, which translates to over 3000 billion collisions recorded. And what about the Higgs particle? With the peak luminosity reached by the LHC last year, we might have expected Higgs particles to be produced at the rate of a handful a day, which is not yet enough for us to see a signal above the background from known and well-understood physics. To claim a discovery, physicists need to see a statistically significant excess over and above what they expect to see from known physics, and they measure the degree of significance using a quantity called a standard deviation, or sigma for short. But that’s a subject for another post…
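The integrated-luminosity arithmetic works the same way, only without the time dimension: inverse picobarns multiplied by a cross-section in barns gives a plain count of collisions. A sketch using the same assumed total cross-section of roughly 60 millibarn (again my assumption, not a figure from the post):

```python
PICOBARN = 1e-12                   # in barns

integrated_lumi = 45 / PICOBARN    # 45 inverse picobarns -> 4.5e13 inverse barns
sigma_total_barn = 0.06            # assumed total cross-section, ~60 millibarn

n_collisions = integrated_lumi * sigma_total_barn
print(n_collisions)                # ~2.7e12: a few thousand billion collisions
```

The result lands at the few-thousand-billion scale quoted for the 2010 run, which is the point of the exercise: integrated luminosity is a bookkeeping device for total collisions delivered.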

James Gillies
