Aidan Randle-Conde | Université Libre de Bruxelles | Belgium


Ramping up

At the moment the LHC is making the transition from no beams to stable beams. It’s a complicated process that needs many cross-checks and calibrations, so it takes a long time (they have been working on the transition since mid-February). The energy is increasing from 7 TeV to 8 TeV, and the beams are being squeezed tighter, which means more luminosity, more data, and better performance. As the LHC prepares for stable beams, so do the experiments. I can only see what is happening within ATLAS, but the story will be the same for CMS and LHCb.
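To get a rough sense of why squeezing the beams helps, the instantaneous luminosity of head-on Gaussian beams scales inversely with the transverse beam sizes. A minimal sketch of that scaling, using illustrative numbers rather than official LHC parameters:

```python
import math

def luminosity(f_rev, n_bunches, n1, n2, sigma_x, sigma_y):
    """Instantaneous luminosity (cm^-2 s^-1) for head-on Gaussian beams:
    L = f_rev * n_b * N1 * N2 / (4 * pi * sigma_x * sigma_y)."""
    return f_rev * n_bunches * n1 * n2 / (4 * math.pi * sigma_x * sigma_y)

# Illustrative numbers only (not the machine's actual settings):
L = luminosity(f_rev=11245,            # LHC revolution frequency, Hz
               n_bunches=1380,         # colliding bunch pairs
               n1=1.5e11, n2=1.5e11,   # protons per bunch
               sigma_x=2e-3, sigma_y=2e-3)  # beam sizes in cm (20 microns)
print(f"L = {L:.1e} cm^-2 s^-1")

# Halving the beam size ("squeezing") quadruples the luminosity:
ratio = luminosity(11245, 1380, 1.5e11, 1.5e11, 1e-3, 1e-3) / L
print(ratio)  # 4.0
```

With these sample numbers the result comes out in the 10^33 cm^-2 s^-1 range, the right ballpark for the LHC in this era, and the quadratic payoff from squeezing is clear.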

As the LHC moves through its checks and changes its beam parameters, the experiments have an opportunity to request special beam setups. We can ask the LHC to “splash” the detector with beam in order to calibrate our hardware, much like the famous first-beam plots we saw in 2008. In addition to splashes we can also request very low pileup runs to test our simulation. “Pileup” refers to the average number of collisions we expect every time the beams cross in the detectors; by increasing the pileup we cram as many events as we can into the limited running time available to us. In 2011 our pileup was about 15, and in 2012 it will increase to about 20–30. That is why I was surprised to find out that we can use a pileup of 0.01 for some of our simulation calibrations!
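The number of collisions in a given bunch crossing follows Poisson statistics around the average pileup. A small sketch of why a pileup of 0.01 is so useful for calibration (the function name here is my own, not an experiment tool):

```python
import math

def poisson_prob(k, mu):
    """Probability of exactly k collisions in one bunch crossing
    when the average pileup is mu: P(k) = mu^k * e^(-mu) / k!"""
    return mu**k * math.exp(-mu) / math.factorial(k)

# At mu = 20 (typical 2012 running), a crossing with exactly one
# collision is vanishingly rare:
print(poisson_prob(1, 20.0))

# At mu = 0.01, almost every crossing that has any activity at all
# contains exactly one clean, isolated collision:
frac_single = poisson_prob(1, 0.01) / (1 - poisson_prob(0, 0.01))
print(frac_single)  # ~0.995
```

That near-total purity of single-collision events is what makes low-pileup runs ideal for comparing detector response against simulation.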

First ATLAS splash from 2008 (ATLAS Collaboration)

The timetable for ramping up the LHC is announced as far in advance as possible, but it’s subject to small changes and delays as new problems arise. In general the LHC outperforms expectations, delivering higher luminosities than promised and keeping stable beams for longer than expected, so when we factor in both unexpected problems and unexpectedly good performance, we have to take the timetable with a pinch of salt. We expect to get stable beams around Easter weekend. You can see the timetable in the PDF document provided by the LHC team.

In the meantime the ATLAS hardware has been checked and maintained to get it in good working order for data taking. Thresholds are fine-tuned to suit the new beam conditions, and the trigger menu is updated to make the best use of the available data. There are plenty of decisions to be made and discussions to be had to make sure the hardware is ready for stable beams. Today I got a glimpse of the checks performed on the electromagnetic calorimeter, the trigger system, and some changes to the muon systems. It’s easy to lose sight of how much work goes into maintaining the machine!

The LHC team preparing for beams.

As the hardware improves, so does the software. Software is often a source of frustration for analysts, because the collaboration develops its own software and that software is sometimes “bleeding edge”. As we learn more about the data, and about the differences between data and simulation, we improve our software, which means we constantly get new recommendations, especially as conferences approach. There is a detailed version-tracking system in place to manage these changes, and it can be difficult to keep up to date with it all. Unfortunately, updated software usually means analyzing the data or simulation again, which is time-consuming and headache-inducing in itself. That is how things worked in 2011. This year we have already learned a lot about how the data look, so we can start with much better simulation and an improved release of all the software. This should make analyses easier and life simpler for everyone (an important consideration, given our wide range of experience with software and knowledge of physics processes).

The banks of supercomputers are ready and waiting...

Putting all this together: we will have higher-energy beams giving us more data, a better-functioning detector based on previous experience, improved simulation, and more stable and simpler software. This is exciting on the one hand, but a bit intimidating on the other, because it means the weak link in the chain could be the physicists performing the analyses! Plenty of analyses are limited by the statistics of the dataset or the resolution of the detector, or stymied by last-minute changes in the software or bugs in the simulation. If we hit the ground running in 2012, we could find ourselves with analyses limited by how often the physicists are willing to stay at work until 3am to get the job done.
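“Limited by statistics” has a concrete meaning: for a simple counting measurement, the relative Poisson uncertainty shrinks as one over the square root of the number of events. A quick sketch (my own illustration, not an ATLAS tool):

```python
import math

def relative_stat_uncertainty(n_events):
    """Relative statistical (Poisson) uncertainty on a counted
    signal: sqrt(N) / N = 1 / sqrt(N)."""
    return 1.0 / math.sqrt(n_events)

# Quadrupling the dataset halves the statistical error bar:
print(relative_stat_uncertainty(100))   # 0.1  (10%)
print(relative_stat_uncertainty(400))   # 0.05 (5%)
```

This is why more luminosity translates so directly into sharper results, and why every extra week of stable beams matters to a statistics-limited analysis.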

I’ve already explained why 2012 is going to be exciting in terms of results in another blog post. Now it looks like it will bring a whole new set of challenges for us. Bring it on, 2012, bring it on.
