Anna Phan | USLHC | USA

Detector monitoring…

Greetings from the LHCb detector control room everybody! For the past few days, I’ve been waking up very early in the morning and cycling here to do my part in keeping the LHCb detector running and recording as much data as possible.

It’s probably been mentioned by other people in previous posts, but the LHC particle physics detectors[*] are constantly monitored, 24 hours a day, 365 days a year[**]. There are various levels of detector and data monitoring: the first level consists of people in the various detector control rooms, called online shifts; the second level consists of people on call, called expert shifts; and the third level consists of people doing remote monitoring of data quality and reconstruction, called offline shifts[***].
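
To picture the three monitoring levels, here is a very rough Python sketch of the escalation chain; the tier names, descriptions and the helper function are my own illustrative simplification, not any experiment's actual procedure or software.

    # Illustrative sketch only: the three monitoring tiers described in the post,
    # modelled as a simple escalation chain. Names and structure are made up.
    MONITORING_TIERS = [
        ("online shift", "shifters in the detector control room, around the clock"),
        ("expert shift", "on-call experts phoned when the control room cannot solve a problem"),
        ("offline shift", "remote monitoring of data quality and reconstruction"),
    ]

    def tiers_involved(highest_tier_needed):
        """Return the monitoring tiers that get involved, in escalation order."""
        return [name for name, _ in MONITORING_TIERS[: highest_tier_needed + 1]]

    print(tiers_involved(1))  # ['online shift', 'expert shift']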

Each experiment requires a different number of people at each monitoring level, depending on what is deemed necessary. For example, LHCb has 2 people on shift in the control room here at P8; I believe CMS has 5 people in theirs at P5, while ATLAS has 12 over at P1. These online shifts are 8 hours each, the morning one running from 7am to 3pm, the evening one from 3pm to 11pm and the night one from 11pm to 7am. These people are in charge of making sure that the detectors are running smoothly, that event selections go as planned, that data is being read out properly and that there are no obvious problems.
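
As a toy illustration of the shift boundaries just mentioned, here is a small Python sketch that maps an hour of the day to the corresponding online shift; the function is purely illustrative and not part of any real shift-planning tool.

    # Toy sketch: classify an hour of the day (0-23) into the online shifts
    # described above: morning 07:00-15:00, evening 15:00-23:00, night 23:00-07:00.
    def online_shift(hour):
        if not 0 <= hour <= 23:
            raise ValueError("hour must be between 0 and 23")
        if 7 <= hour < 15:
            return "morning"
        if 15 <= hour < 23:
            return "evening"
        return "night"  # the night shift wraps around midnight

    print(online_shift(8))   # morning
    print(online_shift(22))  # evening
    print(online_shift(2))   # night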

Online at LHCb, we know we’re doing a good job if our detector efficiency is high. We want to record as many interesting collision events as possible during stable beam periods for our physics analyses. Time lost to detector problems is data lost. Above is a nice pie chart of the detector performance for the year during stable beams. I say nice, as we are approximately 90% efficient; we have been able to record around 90% of the luminosity which the LHC has delivered to us this year. Of course it would be better if we were at 100%, but this is not really possible given the time required to get the detector from standby to ready (ramping up the HV on the subdetectors and moving the VELO into position). We try very hard to reduce the other two slices of the pie, which are related to subdetector and readout issues.
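
To make the efficiency number concrete, here is a minimal sketch of the arithmetic: the efficiency is simply the luminosity recorded divided by the luminosity the LHC delivered. The numbers and loss categories below are invented placeholders, not real LHCb figures.

    # Minimal sketch: recording efficiency = recorded / delivered luminosity.
    # All numbers are made-up placeholders, not real LHCb values.
    delivered_lumi = 1000.0   # luminosity delivered by the LHC (arbitrary units)
    recorded_lumi = 900.0     # luminosity actually recorded by the detector

    # Hypothetical breakdown of the luminosity lost while the detector was not ready
    losses = {
        "HV ramp and VELO closing": 60.0,
        "subdetector problems": 25.0,
        "readout problems": 15.0,
    }

    efficiency = recorded_lumi / delivered_lumi
    print("Recording efficiency: {:.1%}".format(efficiency))  # -> 90.0%
    for cause, lost in losses.items():
        print("  lost to {}: {:.1%}".format(cause, lost / delivered_lumi))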

If things don’t look good in the control room, and we can’t figure out why, we call the people on expert shift to diagnose the problem. Expert shifts usually last a week, and require carrying a phone and being within approximately half an hour of the control room. The experts also need to attend the daily morning run meetings where the detector plan for the day is laid out. When there are stable beams, the optimal plan is obviously to take as much data as possible, but sometimes stable beams are needed for subdetector scans, which are important for understanding how the beam radiation is affecting the electronics. When there aren’t stable beams, the plans can include subdetector calibration or firmware and software upgrades.

Oooh! The LHC is injecting beam now, I better get back to work[****]!

—————————————-
[*] I apologise for not mentioning ALICE at all, but I don’t know anybody from that collaboration well enough to ask random questions like how their shifts are run.

[**] Okay, that was a bit of an exaggeration: the detectors aren’t monitored 365 days a year; they are monitored according to the LHC schedule. For example, we don’t have shifts over the winter shutdown period, and the number of shifts is reduced during machine development and technical stop periods.

[***] I’m generalising here; each of the experiments actually calls their shifts by different names. Random fact: LHCb calls their expert shifts piquet shifts. As far as I can tell, this is the French word for stake or picket, but I haven’t been able to figure out why this is the word used to describe these shifts.

[****] Guess I should mention that I’m on data manager shift at the moment, so my job is to check the quality of the data we are recording.

  • Pierre Maxted

    My favourite blog post so far. I love the sense it gives that the LHC is _really_ running _right now_. The photo reminds me of many nights I have spent sitting in anonymous-looking computer rooms, but rather than the world’s biggest experiment underneath me (is it underneath you?), I have been on top of a mountain with a telescope next door. I’m lucky that I often get an immediate feel for the results of my research as the data comes off the telescope. How can anyone have the patience to design an experiment that will take 20 years to build and then will run for 10 years to give a 5-sigma result?

  • Anna Phan

    Dear Pierre,

    Thanks for the comment. The detector hall is approximately 100m below the control room; just outside the room are the access elevators down to the pit.

    I’ve spent a fair bit of time in this control room and the ATLAS one during my PhD; they have very different feels to them. What I like about the LHCb one is that it’s the same control room they used for DELPHI in the LEP era, so it has a bit of a retro feel to it. There is also the LHCb emergency panel, which is kind of cool. It consists of a schematic of the detector with big red emergency stop buttons for each of the subdetectors. I try to stay at least two metres from it so I don’t accidentally hit any of them!

    Regarding timelines, it was never meant to take 20 years to build the LHC accelerator and associated experiments, and there are lots of analyses which don’t need ten years of data. All the experiments have published papers on the data we have already.

    Cheers,
    Anna

  • entropy

    I’m just curious, can the LHC predict earthquakes as the Tevatron does? I think it can, but I haven’t seen any data of that kind from the LHC. Can you tell us something about this?

  • entropy

    Also, it looks like an intensive care ward, shifts and monitors :)

  • Anna Phan

    Dear entropy,

    Why do you always increase? Sorry, I just had to ask. Just to clarify, the Tevatron does not predict earthquakes; it can detect them, as the beam instrumentation is sensitive to seismic activity. I actually believe that this is true of any sufficiently large and sensitive piece of equipment. I’m pretty sure that multiple systems all around the LHC ring are sensitive enough to detect seismic activity. In fact, I have found an article from the CERN Courier where some equipment in the ATLAS cavern used to monitor any deformation or movement of the detector supports actually felt the Sumatra earthquake of Boxing Day, 2004.

    In fact, the beam energy of the previous CERN collider, LEP, was sensitive to the phases of the moon and to the currents from passing French TGV trains, as you can see in this poster if you are interested. The machine operators determined that the TGV trains were affecting the beam energy during a French rail strike, when the effect disappeared, only to reappear again when the strike ended. Somebody then had the brilliant idea of checking the timetable, and voilà, the incidents were 100% correlated.

    Cheers,
    Anna

  • Stephen Girolami

    Hi Anna, I cannot ask an intelligent question to you, but I enjoy reading your posts, and you obviously love your job. Keep up the good work and I hope you guys find some amazing new insights into the Standard Model. Stephen

  • Stephen Girolami (http://www.plumbology.co.uk)

    As an ex-C++ programmer, I’d be interested to know what programming languages are used at CERN.

  • Anna Phan

    Dear Stephen,

    Thanks for your comment and questions. The main programming languages currently used at CERN are C++ and Python, though Java and Fortran are also used, as well as SQL and Oracle databases. I’m sure there are others which are also used.

    Cheers,
    Anna

  • Stefan

    Nice post! About “piquet”: the meaning is probably hard to find in a French dictionary because it is more or less a French helvetism (in the German-speaking part it exists as “Pikett”). It just means an on-call duty or a person being on call, e.g. fire fighters, doctors or sometimes also particle physicists ;-) .

  • Stephen Girolami (http://www.plumbology.co.uk)

    Dear Anna

    Can you recommend a book that explains the consequences of the speed of light being a constant? I recently read a book called ‘E=MC2’ by Brian Cox, and could not get my head around the fact that the passage of time can be different for people depending on how fast they are travelling.

    Thanks
    Stephen

  • Gabriel Ybeles Smit

    Hi Stephen,

    Of course, the authoritative resource is the following article by Albert Einstein (here translated into English, and obviously technical ;)): http://www.fourmilab.ch/etexts/einstein/specrel/www/

    You might want to check out “Mr. Tompkins in Wonderland” by George Gamow. (http://www.zenker.se/Books/gamow.shtml)

    And have a look at this website: http://www.spacetimetravel.org/tuebingen/tuebingen.html for simulations of how objects look when travelling at the speed of light.

    Cheers,
    Gabriel