Posts Tagged ‘luminosity’

What the L?!

Tuesday, April 19th, 2011

There are few things that particle physicists like to talk about more than luminosity (known affectionately as “L”). We measure it obsessively, we boast about it shamelessly and we never forget to mention it in our papers, plots and talks. So what’s the big deal? What is luminosity and why is it important?

The concept of instantaneous luminosity is borrowed from the field of astrophysics, where it’s used to describe how much energy a star gives off. To calculate the instantaneous luminosity, simply measure how much energy flows through a surface in an interval of time. To get the instantaneous luminosity in particle physics, simply swap energy for the number of particles and the definition is the same!

The instantaneous luminosity is a measure of how many particles (blue) pass through a surface of unit area (yellow) in unit time (not shown).

Well, not quite. If you take a quick look at any of the experiments at the LHC you’ll notice that there are two beams, so to get any meaningful measurement of luminosity you’ll have to take into account the flows of particles in both beams, a task which doesn’t seem easy! In order to use the concept of instantaneous luminosity we need to apply some knowledge of special relativity. We imagine that the protons in one of the beams are all at rest, and see how many protons from the other beam pass through per unit area and unit time. (The instantaneous luminosity makes more sense for fixed target experiments, where there is only one beam and the other matter is kept at rest. This is how most early experiments operated, and we’ve been stuck using luminosity ever since!)

In itself, the instantaneous luminosity is useless to us; to make any real use of it we must combine it with a cross section. A cross section is used to describe how often some process happens, and the analogy is very simple! Imagine placing lots of targets in front of the beam of particles, each one representing a different process. The larger targets will be hit by more protons, so we’ll see those processes more often. A larger cross section means a higher rate for that process! To get the number of events where that process happens (per unit time) we just multiply the cross section by the luminosity, and that tells us how many “hits” we can expect. Simple!
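
To make the multiplication concrete, here is a minimal sketch in Python. The cross section and luminosity values are purely illustrative assumptions, not official figures:

```python
# Event rate = cross section x instantaneous luminosity.
# All numbers below are illustrative, not official LHC values.

cross_section_cm2 = 1e-36   # a hypothetical process with a 1 picobarn cross section (1 pb = 1e-36 cm^2)
inst_luminosity = 1e33      # instantaneous luminosity in cm^-2 s^-1

rate = cross_section_cm2 * inst_luminosity
print(f"Expected rate: {rate:.0e} events per second")   # -> 1e-03 events per second
```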

Since having a larger instantaneous luminosity means having more events, we want to do everything we can to increase it. We can do that in quite a few ways, and the most obvious is to increase the number of protons in the beam. After all, each proton has its own tiny (very very tiny) targets, and since the cross section of a given process is the same for each proton, you can increase the total size of a given target by increasing the number of protons. Another way to increase the instantaneous luminosity is to cram the same number of protons into a narrower beam, and this is called squeezing. After a while we start to reach physical limits of what we can achieve (this is due to phase space factors, beam shape parameters and all sorts of fascinating properties of the beam that would make for another blog post!) so we need to resort to other methods. One of the most effective is to increase the number of bunches in the LHC ring: instead of cramming more protons into the same part of the ring, we put more protons in the empty regions of the ring.

The proton presents many different processes, and each process has its own cross section. This diagram is not at all to scale, and the QCD cross section is much larger than the other cross section shown!

As usual, things aren’t quite as simple as this. There are many different processes, each with its own cross section. Some of them are much, much larger than others, and most of the larger cross sections are boring to us, so if we want to get to the interesting physics we need a way to artificially reduce the sizes of the boring cross sections. (It would be nice if we could increase the sizes of the interesting cross sections instead, but that’s not physically possible at the LHC!) The notoriously large cross section at the LHC is the quantum chromodynamics (QCD) cross section, which dominates everything we see, and for most people it’s an annoyance that makes it harder to find the interesting physics. To reduce the recorded rates of these processes we use a prescale, which is very simple. We only record events that fire the trigger, and the trigger looks for different kinds of events. A prescale tells the trigger to ignore a certain proportion of a specific kind of event, and that way we can record fewer boring events and save our precious resources for the most interesting ones.

Now if you look at a plot from a collaboration you’ll often see the luminosity written on the plot, but this is not the instantaneous luminosity, it’s the integrated luminosity. To get the integrated luminosity we multiply the instantaneous luminosity by the time interval over which it was delivered (or, more precisely, integrate it over time). This means that it has units of inverse area, and when we multiply it by a cross section we get a number of events. This is why the integrated luminosity is so important to us: if we know the cross section for a process and we know the integrated luminosity, we can work out how many events we expect to see and compare that to how many we actually see. This tells us when to expect a discovery, and when we have found something truly new and interesting!
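
As a rough sketch of that bookkeeping, assuming for simplicity a constant instantaneous luminosity and made-up numbers:

```python
# Integrated luminosity = instantaneous luminosity x running time (for a
# constant luminosity), and expected events = cross section x integrated
# luminosity. Illustrative numbers only.

PB_IN_CM2 = 1e-36                  # 1 picobarn = 1e-36 cm^2

inst_luminosity = 1e33             # cm^-2 s^-1, held constant for simplicity
run_time_s = 10 * 3600             # ten hours of stable beams

integrated_cm2 = inst_luminosity * run_time_s        # 3.6e37 cm^-2
integrated_pb = integrated_cm2 * PB_IN_CM2           # 36 pb^-1

cross_section_pb = 5.0             # hypothetical process
expected_events = cross_section_pb * integrated_pb   # ~180 events

print(f"{integrated_pb:.0f} pb^-1 collected, {expected_events:.0f} events expected")
```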

A typical mass spectrum plot, proudly declaring the integrated luminosity for all to see. arXiv:1103.6218v1 [hep-ex]

It seems elegant and simple, but personally I find the whole thing is spoiled by the choice of units, and converting things is ever so slightly baffling (probably not something I should admit to in public!) Instantaneous luminosity is usually measured in cm^-2 s^-1, which is an odd choice. In these units a typical value is 10^33, which is an unimaginably large number! This is almost inevitable because luminosity varies so widely between experiments and as new technologies become available. If we choose new units now to make the numbers more manageable, they’ll still become ridiculously large in the future. To confuse things further, the integrated luminosity is usually measured in inverse barns (as in “You can’t hit a barn with that!”). A barn is 10^-28 m^2, so this makes the integrated luminosity a little bit easier to express in terms that don’t make my head spin. But even after that, our integrated luminosities need prefixes to make the numbers nice, so you’ll often see integrated luminosities written in inverse picobarns (pb^-1) or inverse femtobarns (fb^-1), and then the smaller the prefix, the larger the amount of integrated luminosity! I find that the easiest way to remember whether I need to multiply or divide by 1,000 to convert the units is to just go with what feels wrong and it’ll be right.  The smaller the area unit you invert, the more events the same number represents. If that isn’t a crazy choice of units, I don’t know what is!
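
Here is one way to keep the conversions straight, again with an assumed (typical, but not official) instantaneous luminosity:

```python
# Converting a day of running at a fixed instantaneous luminosity into the
# "inverse barn" family of units. Values are illustrative.

BARN_IN_CM2 = 1e-24          # 1 barn = 1e-24 cm^2 (= 1e-28 m^2)

inst_lumi = 1e33             # cm^-2 s^-1
one_day = 24 * 3600          # seconds

integrated_cm2 = inst_lumi * one_day                  # ~8.6e37 cm^-2
integrated_inv_b = integrated_cm2 * BARN_IN_CM2       # ~8.6e13 b^-1
integrated_inv_pb = integrated_inv_b / 1e12           # ~86 pb^-1   (1 pb^-1 = 1e12 b^-1)
integrated_inv_fb = integrated_inv_pb / 1e3           # ~0.086 fb^-1 (1 fb^-1 = 1e3 pb^-1)

print(f"{integrated_inv_pb:.1f} pb^-1 is the same data as {integrated_inv_fb:.3f} fb^-1")
```

The same dataset is a big number in pb^-1 and a small one in fb^-1, which is exactly the multiply-or-divide-by-1,000 trap described above.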

To get an idea of a typical integrated luminosity, let’s think about how much data we’d need to see a standard model Higgs boson of mass 200 GeV. Let’s imagine we see 100 events which are not consistent with known backgrounds. To make our job easier, let’s think about the “gold plated” decay of H→ZZ with Z→ll, where l is a charged lepton. The branching fraction is about 25% for H→ZZ and about 7% for Z→ll, and let’s assume we are 50% efficient at reconstructing a Z. Altogether we’d need to produce about 80,000 Higgs bosons to see 100 events of this type. Dividing by the cross section for Higgs production at 200 GeV gives us an integrated luminosity of about 16 fb^-1. That’s a lot of data! Luckily, there are many more final states we can explore, and when we add it all up, it turns out we’ll have enough data to be sensitive to a standard model Higgs before too long.
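
A back-of-the-envelope version of that counting: the rounded branching fractions come from the text above, while the production cross section is an assumed, illustrative value that you would in practice look up for the chosen Higgs mass and beam energy.

```python
# Rough counting for the "gold plated" H -> ZZ -> 4 leptons channel.
# Branching fractions are the rounded values quoted above; the cross section
# is an assumed, illustrative number.

n_wanted = 100                 # signal events we want to see
br_h_to_zz = 0.25              # H -> ZZ
br_z_to_ll = 0.07              # Z -> ll, per Z

per_higgs = br_h_to_zz * br_z_to_ll ** 2     # fraction of Higgs bosons giving 4 leptons
higgs_needed = n_wanted / per_higgs          # ~82,000 (before any efficiency losses)

sigma_pb = 5.0                               # assumed production cross section in pb
lumi_needed_pb = higgs_needed / sigma_pb     # ~16,000 pb^-1, i.e. roughly 16 fb^-1

# Folding in reconstruction efficiencies would push these numbers up further.
print(f"Higgs bosons needed: {higgs_needed:,.0f}")
print(f"Integrated luminosity needed: {lumi_needed_pb:,.0f} pb^-1")
```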

That’s all very impressive, but the punchline comes from the world of “low high energy physics”, for example the BaBar experiment. Whenever I want to tease my friends at the LHC, I remind them that my previous experiment had 550 fb^-1 of data, about 5,000 times what we have right now, and a number the LHC will not reach any time soon!

You can usually tell what kind of physicist you’re talking to immediately by asking them what the luminosity is at the LHC. An experimental physicist will tell you in terms of data (i.e. inverse barns) whereas an accelerator physicist will tell you in terms of beams (i.e. cm^-2 s^-1). I find it quite amusing that the accelerator physicists generally find everything up to the point of collision deeply fascinating, and everything after that a frightful bore that makes their work even more complicated, whereas the experimental physicists think the other way around!

Imagine you’re in charge of a budget for a large organization of a few thousand people who are experts in their field.  Imagine that if you don’t spend some of the money in the budget, you can’t keep what you’ve saved: it will be lost forever.  Now imagine that there’s another group of a few thousand experts with exactly the same budget, right down to the last penny.

That’s the kind of scenario that we face at the LHC, except the budget is in time and not money.  We count proton collisions and not dollars.  The LHC is delivering world record luminosities right now, and the different experiments are getting as much data as they can.  For LHCb and ALICE there is pressure to perform, but between ATLAS and CMS the competition is cutthroat.  They’re literally looking at the same protons and racing for the same discoveries.  Any slight advantage one side can get in terms of data is crucial.

What does any of this have to do with my work at ATLAS?  Well, I’m one of the trigger rates experts for pileup.  When we take data we can’t record every proton collision; there are simply too many.  Instead, we pick out the interesting events and save those.  To find the interesting events we use the trigger, and we only record events when the trigger fires.  Even when we exclude most of the uninteresting events we still have more data than we can handle!  To get around this problem we have prescales, which is where we only keep a certain fraction of the events that fire a given trigger.  The trigger is composed of a range of trigger lines, which can be independent of one another, and each trigger line has its own prescale.
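
As a toy illustration of what a prescale does (this is not ATLAS code; the trigger line names and prescale values are invented):

```python
import random

# Toy prescale: a prescale of N keeps, on average, 1 out of every N events
# that fire a given trigger line. Names and numbers are invented.

def passes_prescale(prescale: int) -> bool:
    return random.randrange(prescale) == 0

menu = {                      # hypothetical trigger menu: line name -> prescale
    "low_pt_jet": 1000,       # boring but enormous rate: keep about 1 in 1000
    "single_muon": 10,
    "dimuon": 1,              # unprescaled: keep everything
}

recorded = {name: 0 for name in menu}
n_fired = 100_000             # pretend every line fires on every event
for _ in range(n_fired):
    for name, prescale in menu.items():
        if passes_prescale(prescale):
            recorded[name] += 1

for name, count in recorded.items():
    print(f"{name:>12}: fired {n_fired} times, recorded roughly {count}")
```

Real trigger systems may apply prescales with deterministic counters rather than random draws, but the effect on the recorded rate is the same.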

A high pileup event at ATLAS

High pileup scenarios. Can you count the vertices? (ATLAS Collaboration)

The term “pileup” refers to the number of proton collisions per bunch crossing (roughly how many interactions we can expect to see when we record an event).  When I came to ATLAS from BaBar I had to get used to a whole new environment and terminology.  The huge lists of trigger lines alone made my head spin, and so far pileup has been the strangest concept I’ve had to deal with.  Why take a scenario that is already overwhelmingly complicated, with one of the most intricate machines in the world, and make it even harder to understand, for the sake of a few more events?  Because we’re in competition with CMS, that’s why, and everything counts.  The image on the right shows a typical event with multiple interactions.  Even counting the number of vertices is difficult!

Balancing the different prescales is where things get interesting, because we have to decide how we’re going to prescale each trigger.  We have to make sure that we take as much data as possible, but also that we don’t over-burden our data taking system.  It’s a fine balancing act and it’s hard to predict.  Our choice of trigger prescales is informed by what the physicists want from the dataset, and what range of event types will maximize our output.  The details of what kinds of events we want are a very hotly debated topic and one that is best left to a separate blog post!  For now, we’ll assume that the physicists can come up with a set of prescales that match the demands of their desired dataset.  What usually happens then is that the trigger menu experts ask what would happen if things were a little different, if we increased or decreased a certain prescale.

The effects of proton burning on luminosity. (LHC)

We need to pick the right times to change the prescales, and it turns out that as we keep taking data, the luminosity decreases because we lose protons when they interact.  This is known as proton burning, and you can see the small but noticeable effect of this in the image above.  As we burn more protons we can change the prescales to keep the rate of data-taking high, and that’s where my work comes in.  The rates for different trigger lines depend on pileup in different ways, so understanding how they act in different scenarios allows us to change the prescales in just the right way.  We can make our trigger very versatile, picking up the slack by changing prescales on interesting trigger lines, and pushing our systems to the limit.  My job is to investigate the best way to make these predictions, and use the latest data to do this.  The pileup scenarios change quite rapidly, so keeping up to date is a full time job!  And every second spent working on this means more protons have been burned and more collisions have taken place.
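
Here is a sketch of the kind of bookkeeping involved, with made-up numbers: the luminosity decays as protons are burned off, the rate of a trigger line falls with it, and loosening the prescale part-way through the fill claws some of that rate back. (Real rate predictions are harder precisely because different lines do not scale with pileup in the same simple way.)

```python
import math

# Illustrative only: exponential luminosity decay during a fill, a trigger line
# whose rate is assumed to scale linearly with luminosity, and a prescale that
# is loosened as the fill ages to keep the recorded rate up.

initial_lumi = 2.0e33          # cm^-2 s^-1 at the start of the fill
lifetime_h = 15.0              # effective luminosity lifetime in hours
rate_at_start_hz = 500.0       # unprescaled rate of the line at initial_lumi

def lumi(t_hours: float) -> float:
    return initial_lumi * math.exp(-t_hours / lifetime_h)

def recorded_rate(t_hours: float, prescale: int) -> float:
    return rate_at_start_hz * (lumi(t_hours) / initial_lumi) / prescale

for t, prescale in [(0.0, 10), (5.0, 10), (5.0, 7), (10.0, 5)]:
    print(f"t = {t:4.1f} h, prescale {prescale:2d}: {recorded_rate(t, prescale):5.1f} Hz recorded")
```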

It’s not an easy task; it forces me to think about things I’ve never considered before, and it keeps the competition at the forefront of my mind.  I knew I’d be in a race for discovery when I joined ATLAS, but I never realized just how intense it would be.  It’s exciting and a little nerve-wracking.  I don’t want to think about how many protons pass by in the time it takes to write a blog post.  Did we record enough of them?  Probably.  Can we do better?  Almost certainly.  There’s always more space in this budget, and always pressure to stretch it that little bit further.

This morning, the LHC ended proton-proton collisions for 2010. What an exciting year! Seven months ago, there had never been particle collisions at a center of mass energy of 7 TeV in an accelerator. Now, LHC physicists are busy combing through the mountains of data that have been accumulated since. True, the collisions are still a factor of two below design energy, and a factor of a hundred below design collision rate — we have a long way to go. However, the improvements we have seen this year have been very encouraging and show that we are well on our way to getting there. We have seen the instantaneous collision rate (luminosity, really, for those who want the right technical term) increase by a factor of 100,000 over the past seven months. As a result, the bulk of the collision data has actually arrived within the past month. Everyone has had to be on their toes to keep up with it.

Today thus marks the end of at least one era. With the heavy-ion run about to start and then an extended technical stop to begin in early December, we don’t expect proton collisions again until late February 2011. This break of at least three months gives everyone a chance to chew on the data that we do have in hand. This dataset is thus going to be the basis for a raft of papers that are going to be published in 2011. At the very least, this data will be used to re-establish a variety of standard-model processes at this energy scale, and will be able to exclude a number of theories of new phenomena. (Or, if we are very lucky, discover some new phenomena!) On top of that is another intriguing possibility: it is possible (but certainly not yet confirmed!) that in 2011 the LHC will run at a collision energy of 8 TeV rather than 7 TeV. This decision will likely be made at the Chamonix workshop in January, where CERN will set the run plan for the year. A move to 8 TeV will increase the production rate of a variety of particles, including the much-sought Higgs boson (if such a thing actually exists). If this happens, then it is likely that 7 TeV collisions will never be done again, in which case the data we have collected this year, and the measurements done with them, will be something unique in the history of particle physics.

Your LHC physicists will be hard at work over the next few months to fully explore the 2010 data. Watch this space for more news about the science that will come out of it!

A great fill!

Friday, September 24th, 2010

Yesterday and last night (US time), the LHC had a really amazing fill. CERN DG Rolf Heuer just sent a message about it, and it says it so well I’m just going to copy it here:

A long period of machine development paid dividends last night with a game-changing fill in the LHC. As I write this, the fill, which started colliding at 19:00 yesterday evening, has just wound down. Both ATLAS and CMS have posted integrated luminosities of over 680 inverse nanobarns, and the initial luminosity for the fill doubles the previous record at 2 x 10^31 cm^-2 s^-1.

But it’s not the records that are important this time – it’s normal that in the start-up phase of a new machine, records will fall like autumn leaves – what’s significant here is that the LHC’s performance this fill significantly exceeded some crucial design parameters, opening up the path to much better still to come.

Last night’s fill was the first with 56 bunches arranged in trains of eight bunches per train. The significance of bunch train running is that we can configure the orbits such that more bunches collide in the experiments, so even though the number of bunches may not be much higher, the collision rate is. For example, last night’s 56-bunch fill had 47 bunches colliding at ATLAS, CMS and LHCb, with 16 colliding in ALICE, whose needs are lower. This compares to a maximum of 36 colliding bunches out of 48 total before we introduced bunch trains.

A big jump in luminosity was clearly expected in moving to bunch trains and colliding more bunches. What came as a pleasant surprise is that it was accompanied by an exceptional beam lifetime of 40 hours, and less disruption to the beams caused by packing more protons into a smaller space (in technical terms, the beam-beam tune shift was much less destructive to the beams than anticipated). This result means that the LHC operators have more leeway in operational parameters in the quest for higher luminosity.

The plan for today and the weekend is to run for one more fill with 56 bunches in trains of eight before moving on to 104 bunches in 13 trains of eight, with 93 bunches colliding in ATLAS and CMS. Ultimately, the LHC will run with 2808 bunches in each beam, so there’s still a long way to go. We’ll get there slowly but surely by adding bunches to each train until the trains meet in a single machine-filling train. That will take time, but for the moment, last night’s fill puts us well on the way to achieving the main objective for 2010: a luminosity of 10^32 cm^-2 s^-1.

To put this in perspective: the much-heralded LHC results at the July ICHEP conference were based on about 250 inverse nanobarns of data, drawn from about 350 inverse nanobarns delivered by the LHC from April through mid-June. Yesterday, in less than one day, the LHC delivered almost double that entire amount! The state of play will be changing quite rapidly on all LHC experiments. Stay tuned!

OK, I’ll admit it — instead of writing blog posts or reviewing results that are headed for ICHEP or doing something else productive, I find myself all too easily distracted by information on the current status of the LHC. As the gallant accelerator physicists work to push the machine to higher beam intensities and collision rates, I’m eager to learn about each little bit of progress. It definitely has some meaning to me — the more collisions the LHC produces, the more the experiments can record, and the greater the chance that we will see any particular physics process take place. Especially as we get close to the big ICHEP conference, we are all curious about how much data we might record before then, because that will determine what measurements might possibly be ready. (Of course it’s also determined by how quickly we can push the data through data analyses, how well we can understand detector performance and so forth; let’s not put all of the burden on the LHC.)

It’s not like I can do anything to make the luminosity go up, but I feel better (or at least distracted) by knowing what’s going on at this minute. This is akin to scoreboard watching in baseball, where the outfielders in one game might have their eye on the scoreboard above them to see how the competition is doing. (In fact, back at the Cornell Electron Storage Ring, the display that showed the luminosity numbers for the past 24 hours was called the “scoreboard”, so the analogy fits.)

So, if you want to play along at home, here are a few Web pages you can keep an eye on. Some of these have been mentioned in previous posts on this blog, but it’s been a little while and I’ll give a few more details.

To know what’s happening right now, check out LHC Page 1, which gives the current machine status and the (very) short-term running plan. Here you’ll typically see plots of the amount of beam current and the beam energy in the LHC, and, during periods of collisions for “physics” (i.e. data-taking by the experiment as opposed to studies of collisions done to optimize machine performance) there will be plots of the observed instantaneous luminosity reported by each of the four experiments. (Instantaneous luminosity is a measure of collision rate; its units of inverse centimeter squared per second deserve explanation in a second post.) The experiment reports can also be seen on the LHC Operation page. At other times, it will show the status of preparing to go to collisions, such as “ramp” (increasing beam energy to 3.5 TeV) or “squeeze” (focusing the beams to increase the collision rate). There are also helpful short messages about what’s going on, such as “this fill for physics” or the somewhat unnerving “experts have been called.”

The medium term run plan can be seen on the LHC Coordination screen. Here you can see the goals for the coming week, what administrative limits are currently in place to protect the machine, and the planned activities for the next few shifts.

While the collision rate is interesting, what really counts is the “integrated luminosity”, or the total number of collisions that have taken place. Up-to-date charts can be found here; the data go back to March 30, the start of 3.5 TeV operations. You can see here that the integrated luminosity has been increasing exponentially in time (when the LHC is not in studies periods or technical stops). If the collision rate were the same all the time, the integral would only increase linearly, so this demonstrates just how quickly the LHC physicists are figuring out how to make the machine go.
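
The point about exponential growth is easy to see with a toy comparison (all numbers invented): if the collision rate never improved, the integral would climb linearly, but a rate that keeps doubling quickly dwarfs it.

```python
# Toy comparison of integrated luminosity growth, in arbitrary units:
# a machine whose weekly delivery is constant versus one that doubles each week.

weeks = 12
constant_delivery = 1.0        # per week, arbitrary units

total_constant = 0.0
total_doubling = 0.0
for week in range(weeks):
    total_constant += constant_delivery
    total_doubling += constant_delivery * 2 ** week

print(f"Constant rate: {total_constant:.0f} units after {weeks} weeks (linear growth)")
print(f"Doubling rate: {total_doubling:.0f} units after {weeks} weeks (exponential growth)")
```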

That’s what I’ve been keeping an eye on. OK, all of you stop looking at Facebook, and distract yourselves with the LHC instead!

Feeling squeezed

Tuesday, April 27th, 2010

Here I am at CERN, after fairly smooth travels. (At least this time I didn’t show up with the flu.) The weather here is very nice for this time of the year, and the only evidence I can see for the eruption of Eyjafjallajokull (I love that name!) is somewhat lower attendance than usual for the semiannual CMS computing and software workshop. A number of people who had planned on flying here last week had their flights rescheduled far enough into the future such that it was not worthwhile for them to come.

While changing planes at Washington Dulles, I ran into a colleague (headed in the other direction, back to Chicago from CERN) who had some very good news to report. Over the weekend, LHC operators tried “squeezing” the beams for the first time, as Mike had alluded to last week. This is a focusing of the beams that, like the name says, squeezes them so that all the particles are closer together. A greater density of beam particles means that there is a greater chance that the particles in opposing bunches will actually collide. And that was in fact what happened — the observed collision rate went up, by about a factor of ten. It’s not every day that you gain a factor of ten! As a result, more collisions were recorded in a single day than had been recorded in the entire month beforehand.

The next steps include things like adding more protons to each bunch, and adding more bunches to each beam. We hope to get another four factors of ten in collision rate yet this year. The big question is how quickly they will come. But in any case, it is very encouraging to see such progress.

How much data, how soon?

Sunday, February 7th, 2010

First off, we should mention here that CMS’s first paper from collision data has now been accepted for publication by the Journal of High Energy Physics. It’s a measurement of the angular distribution and momentum spectrum of charged particles produced in proton collisions at 0.9 and 2.36 TeV, using about 50,000 collision events recorded in December. It is really wonderful that this result could be turned around so quickly! The first of many papers to come, we hope.

Meanwhile, as already mentioned here, we now have the news of the run plan for the LHC. CERN is preparing for the longest continuous accelerator run in its history, 18 to 24 months. The inverse femtobarn of data to be recorded in that time is a lot, and will give us an opportunity to make many interesting measurements. Whether any of them will be evidence of new physics, I for one am not going to speculate! But if nothing else, this plan sets out what our LHC life for the next ~three years is going to look like.

But a shorter-term question comes to mind — 1 fb^-1 over 18 to 24 months is one thing. But what about just the next few months? There is a major international conference coming up in July. What sort of LHC results might be ready by then? That will depend in part on how many collisions are delivered. I’ve seen various estimates for that, but they vary by an order of magnitude depending on the level of optimism, so I’d rather not guess. It will also depend on the experiments’ performance. How efficiently can we record those collisions? How quickly can we process them? How soon will we understand various parts of the detectors well enough to make quality measurements? How smart and clever can we be throughout the entire process? How much sleep is everyone going to get?

Ask me again in July. Meanwhile, game on.

LHC #9, poised to take #1 soon?

Monday, December 14th, 2009

The successful restart of the LHC ranks #9 on Time magazine’s list of the top 10 scientific discoveries of 2009. That’s not bad considering that the LHC only had its first collisions last week and is still some time away from having the integrated luminosity to make big discoveries. Despite this, the LHC has set new records for the highest energy particle collisions made by humankind, and it was no small task to get this far.

If everything goes smoothly, we’re looking at 3.5 TeV per beam collisions in 2010, maybe going up to 5 TeV. High energies are sexy and look good for the press, but discoveries are all about finding an excess in the rate of some process (as we discussed in an earlier post, also Regina’s latest). In order to observe this excess, we need lots of data. Why is this? Suppose you wanted to know if Kobe Bryant or LeBron James has a higher shooting percentage. After just a few games, you could look at the stats but they would be difficult to trust: maybe one player had an off day, etc. But over the course of the entire season, the accumulated stats become more trustworthy.

Particle physicists measure how much data they have in “inverse picobarns.” After next year the good folks at the LHC expect to have a couple hundred inverse picobarns of data. By comparison, the Tevatron at Fermilab has recorded something on the order of inverse femtobarns, i.e. thousands of inverse picobarns of data. That’s in the ballpark (conservatively) where physicists can really start looking for the subtle hints that exotic particles have been created.

What does this all mean? Well, it means that unless nature is very kind, 2010 might still be a bit early for “paradigm shifting” discoveries. I should mention two things: (1) people are keeping their eyes out in case nature is this kind, and (2) there’s still a lot of very important science to be done in this period (e.g. top quark mass measurements).

After 2010 the LHC will have a “long” shut down to prepare to ramp up to 7 TeV per beam collisions. That’s when the machine will really ramp up its search for things like supersymmetry, extra dimensions, dark matter, and the Higgs (if we don’t discover it sooner). Then the LHC can aim for #1 on Time magazine’s list of scientific discoveries.

[If any of my fellow US/LHC bloggers have more updated information about 2010 expectations, please correct me!]

Flip
