
Posts Tagged ‘control room’

Shifting expectations

Saturday, April 14th, 2012

It’s 2012. We have stable beams. We’re at 8 TeV. We’re taking data and I’m sitting in the ATLAS Control Room again. Fans of my blog will remember my previous on-shift posts and, yes, today I had an awesome breakfast of roasted duck (a special treat from a visiting professor).

So ATLAS Control Room, we meet again...

The last time I took shifts was about six months ago, and since then we’ve had a shutdown. Both the LHC and ATLAS have used this break as an opportunity to make substantial improvements and move things around a bit. The change to 8 TeV came at the same time as a change in the luminosity calibration. For some reason it looks like CMS are getting about 10% more collisions than ATLAS. That’s a little unnerving.

The writing's on the wall, literally. CMS have more collisions than we do.

As the beam conditions have changed, so has the Trigger Shifter’s desk. Performing the checks used to take me about 20 minutes, but with the new layout it took me an hour. Hopefully it will get quicker as I get used to the new system! Since I’m supposed to perform these checks about once an hour, I could spend my whole shift staring at one set of histograms! That’s the kind of environment that leads to simple mistakes which could cost data.

Just when things were going well I heard a sound over the intercom and all my trigger rates dropped to 0 Hz. There were no error messages, nothing seemed to be wrong with the detector, and every system seemed to be working fine. After discussing the situation with colleagues in the Control Room I realized that it was a scheduled beam dump. A scheduled beam dump. We don’t get those often, and the training doesn’t include an MP3 file of the “scheduled beam dump” sound. But then again it’s 1:00am and it’s been six months since I was last on shift, so I think I can be forgiven for forgetting what a scheduled beam dump sounds like.

Discussing the beam dump with the other shifters.

I’ll be on shift tonight and for the next two nights, racking up credit for SMU and keeping the trigger alive. If all goes well it’s a good chance to catch up on work, write a few blog posts and get some time to ponder the bigger challenges in my analyses. For a few days I’m essentially free from all meetings and distractions, giving me the time and space to sort out all the little problems that have built up over the past few weeks. The broken code, the old emails, the unasked questions. Shifts are great.

If you liked this post you might also like:
On shift
The best and worst moment on shift


Location, Location, Location

Thursday, January 19th, 2012

If I had to pick one thing that’s definitely better on my old experiment, ATLAS, than on my new experiment, CMS — and especially if I had to pick something I could write publicly without getting into trouble — it would be this: the ATLAS detector is across the street from the rest of CERN. I’m not sure how that was decided, but once you know that, you know where CMS has to be: on the other side of the ring, 5 or 6 miles away. That’s because the detectors have the same goals and need the same beam conditions, and two diametrically opposite points on the LHC are where identical beam conditions are easiest to provide. The pre-existing caverns from the LEP collider, whose tunnel the LHC now uses, probably also helped determine where the detectors are.

In any case, it used to be that when I wanted to work on my detector, I had only to go across the street. Now I have to drive out of Switzerland and several miles into France. Except, I don’t like driving. So I’ve been working on alternate means of transportation. A few months ago I walked. Last night I had to go to downtown Geneva, so I took the bus. It’s actually pretty good, although the bus stop is a mile away from CMS. There’s also the shift shuttle, which runs from the main CERN site to CMS every 8 hours via a rather roundabout route. And I can bike, once the weather gets better and I get myself a little more road-worthy. To be honest, every option for getting here is much slower than driving, but I enjoy figuring out ways to get places enough that I’m going to keep trying for a while.

I have plenty of chances to try, because I’ll be here in the CMS control room a lot of the time over the next few weeks. Right now, I’m learning and helping with the pixel detector calibration effort. (We’re changing the operating temperature, so all the settings have to be checked.) Soon I’ll be learning to take on-call shifts. So the more I stay here, the more I learn. I got here this morning, and I won’t leave tonight until about 11 pm. I could take the shift shuttle back — or maybe I’ll just get a ride.


Walking Across the LHC

Monday, November 28th, 2011

About a month ago, I walked back to Saint-Genis-Pouilly, France from the CMS experiment site after my last meeting of the day, which basically amounts to walking the width of the LHC ring: about 6 miles. Here are a few pictures from the walk:

More pictures, and commentary, on Google+…


It’s that moment when you realize something serious and exciting has happened, but it’s 5:45am and you have to wake somebody up to sort it out. As the LHC ramps up it’s my role to make sure that the trigger is ready. This means looking at the bunch structure in the LHC and checking that ATLAS knows what this structure looks like. It’s as simple as pressing a few buttons and updating a database, and if everything goes smoothly we have nothing to worry about.

This time it was a bit different, because the LHC used a bunch structure they had never used before. When I pressed the button I was actually telling ATLAS something new and witnessing one of those rare transitions in the normal running of the LHC! (Jim’s post gives a great explanation of what bunch structures are and how the LHC team design them.) Then I checked the instructions, and they told me I had to wake someone up and tell them about the change. Nobody likes to be woken up at 5:45am, especially if they have an important meeting the next day. To make matters worse, I know the guy on the other end of the line (although he was so sleepy I didn’t recognize his voice at first!). At that point I remembered what my flatmate had told me when he was on call and got woken up at night. He said, “What we do would be easy if they just gave us two minutes to think about it. We need time to wake up!” So, feeling bad about waking up the expert, I told him I’d call back in five minutes. There was a flurry of messages on the electronic logbook and short conversations in the Control Room, and then it was time to call again. This time the voice on the other end of the line was more alert and a bit happier! He said everything was fine: I could proceed as normal, and as long as there were no serious problems we could take data as usual.

We have beams!

The LHC just declared stable beams. Now the fun begins…


Detector monitoring…

Friday, September 9th, 2011

Greetings from the LHCb detector control room everybody! For the past few days, I’ve been waking up very early in the morning and cycling here to do my part in keeping the LHCb detector running and recording as much data as possible.

It’s probably been mentioned by other people in previous posts, but the LHC particle physics detectors[*] are constantly monitored, 24 hours a day, 365 days a year[**]. There are various levels of detector and data monitoring: the first level consists of people in the various detector control rooms, called online shifts; the second level consists of people on call, called expert shifts; and the third level consists of people doing remote monitoring of data quality and reconstruction, called offline shifts[***].

Each experiment requires a different number of people at each monitoring level, depending on what is deemed necessary. For example, LHCb has 2 people on shift in the control room here at P8; I believe CMS has 5 people in theirs at P5, while ATLAS has 12 over at P1. These online shifts are 8 hours each, the morning one running from 7am to 3pm, the evening one from 3pm to 11pm and the night one from 11pm to 7am. The people on them are in charge of making sure that the detectors are running smoothly, event selections go as planned, data is getting read out properly and there are no obvious problems.
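As a minimal sketch of how the clock maps onto those three blocks (the shift times are from the paragraph above; the function name is my own invention):

```python
# A toy mapping of the time of day onto the three 8-hour online shift blocks
# described above. Shift times are from the post; the function is illustrative.

def online_shift(hour: int) -> str:
    """Return which online shift a given hour of the day (0-23) falls into."""
    if 7 <= hour < 15:
        return "morning (7am-3pm)"
    if 15 <= hour < 23:
        return "evening (3pm-11pm)"
    return "night (11pm-7am)"

assert online_shift(8) == "morning (7am-3pm)"
assert online_shift(23) == "night (11pm-7am)"  # 11pm starts the night shift
```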

Online at LHCb, we know we’re doing a good job if our detector efficiency is high. We want to record as many interesting collision events as possible during stable beam periods for our physics analyses; time lost to detector problems is data lost. Above is a nice pie chart of the detector performance for the year during stable beams. I say nice, as we are approximately 90% efficient: we have been able to record around 90% of the luminosity which the LHC has delivered to us this year. Of course it would be better if we were at 100%, but this is not really possible given the time required to get the detector from standby to ready (ramping up the HV on the subdetectors and moving the VELO into position). The other two slices of the pie, related to subdetector and readout issues, we try very hard to reduce.
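For anyone who wants the arithmetic behind that pie chart spelled out, here is a rough sketch; the luminosity numbers are invented for illustration, and only the ~90% figure and the loss categories come from the text:

```python
# Back-of-the-envelope data-taking efficiency, as described above.
# All numbers are illustrative, not real LHCb figures.

delivered = 1000.0  # luminosity delivered by the LHC (arbitrary units)
losses = {          # luminosity lost while the detector wasn't ready
    "ramping HV / moving the VELO": 60.0,
    "subdetector issues": 25.0,
    "readout issues": 15.0,
}

recorded = delivered - sum(losses.values())
print(f"efficiency: {recorded / delivered:.0%}")        # -> 90%
for cause, lost in losses.items():
    print(f"  lost to {cause}: {lost / delivered:.1%}")
```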

If things don’t look good in the control room and we can’t figure out why, we call the people on expert shift to diagnose the problem. Expert shifts usually last a week, and require carrying a phone and staying within approximately half an hour of the control room. The experts also need to attend the daily morning run meetings where the detector plan for the day is laid out. When there are stable beams, the optimal plan is obviously to take as much data as possible, though sometimes stable beams are needed for subdetector scans, which are important for understanding how the beam radiation is affecting the electronics. When there aren’t stable beams, the plans can include subdetector calibration or firmware and software upgrades.

Oooh! The LHC is injecting beam now; I’d better get back to work[****]!

—————————————-
[*] I apologise for not mentioning ALICE at all, but I don’t know anybody from that collaboration well enough to ask random questions like how their shifts are run.

[**] Okay, that was a bit of an exaggeration: the detectors aren’t monitored 365 days a year; they are monitored according to the LHC schedule. For example, we don’t have shifts over the winter shutdown period, and the number of shifts is reduced during machine development and technical stop periods.

[***] I’m generalising here; each of the experiments actually calls each of their shifts by different names. Random fact: LHCb calls their expert shifts “piquet” shifts. As far as I can tell, this is the French word for stake or picket, but I haven’t been able to figure out why this is the word used to describe these shifts.

[****] Guess I should mention that I’m on data manager shift at the moment, so my job is to check the quality of the data we are recording.


Scoring Points!

Sunday, May 30th, 2010

In our collaboration (CMS), every institution involved is required to do a certain number of shifts watching the detector, making sure it runs smoothly while recording data.  The number of required shifts depends on the number of members in the institution’s group, and it’s up to each group to split up the work among their members as they wish.

For example, this means that if a professor doesn’t want to do shifts, their scientists, post-docs, or students must do them.

One complicating factor is that not every shift is worth the same.  The least popular shifts, or the ones “harder” to do – like overnight shifts – are worth more than others.

Here’s how many points each shift is worth in our collaboration:

  • Weekday morning shift (7am-3pm) is 0.75 points
  • Weekday afternoon shift (3pm-11pm) is 0.75 points, and
  • Weekday overnight shift (11pm-7am) is 1.5 points.
  • Weekend shifts add an extra 0.5 points to the above.

And since we are asked to do 24 points’ worth of shifts in a year, what kind of shift is most attractive to me, an unmarried, childless, young graduate student?

The weekend-overnight shifts, of course!  At 2 points each they’re pretty attractive.
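If you want to check that arithmetic, here is a quick sketch; the point values are copied from the list above, while the names and the quota constant are mine:

```python
# Shift-point arithmetic from the post: base points per shift type,
# +0.5 for weekends, and the 24-point yearly quota.

import math

BASE_POINTS = {"morning": 0.75, "afternoon": 0.75, "overnight": 1.5}
WEEKEND_BONUS = 0.5
YEARLY_QUOTA = 24.0

def shift_points(kind: str, weekend: bool = False) -> float:
    return BASE_POINTS[kind] + (WEEKEND_BONUS if weekend else 0.0)

for kind in BASE_POINTS:
    for weekend in (False, True):
        pts = shift_points(kind, weekend)
        n = math.ceil(YEARLY_QUOTA / pts)
        label = ("weekend " if weekend else "weekday ") + kind
        print(f"{label}: {pts} points -> {n} shifts to hit the quota")
```

Twelve weekend overnights versus thirty-two weekday mornings: you can see the appeal.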

Sometimes you have to take the shifts you can get, however, so I’ll be doing weekday overnight shifts starting tomorrow.

Control room for the CMS detector.


If you had walked into the CMS control room (P5) today, the 8th of March 2010, you would have seen an almost all-women crew at the controls.  It was my last day on-call for the CMS high level trigger system, so I had to attend the daily meeting at CMS P5.  It was fun to see an overwhelming number of women.

I haven’t been paying much attention, and I don’t know the statistics, but I have the feeling that there’s usually a good mix of women and men in the control room. As a matter of fact, this past week (when I was on-call and had to go to P5 every day) both run field managers were women, and I guess they continue for this week.

The fun part of today was that they managed to schedule women for 32 out of the 34 shift positions required to run the CMS experiment; or at least that’s what I was told.  I am sure those two other spots were not filled by women because the women who could cover them are very busy.  Like my boss, for example, who was supposed to be here for this day but couldn’t make it because she is rather busy with some other CMS responsibilities in the US.

Now, I am curious whether they could manage to do the inverse, i.e., have mostly men scheduled for shifts.  That would be an interesting exercise; it wouldn’t be easy for sure, as many women in CMS have essential expertise in many areas.

All in all it was a good day; it definitely felt like a special day, and that’s always a lot of fun.  It smelled very nice too!

Hope all women had a good day!!

Edgar Carrera (Boston University)


Live LHC Status Page

Wednesday, March 3rd, 2010

There is a status page, available on the web, that we watch while in our detector control room; it shows what the LHC beam people are up to.  This is a live, constantly updated picture.

What you have to remember is there are the accelerator people (LHC) and then there are the detector people (CMS, ATLAS, LHCb, ALICE…) and we’re all in separate control rooms far from each other.  So the LHC, which provides the proton collisions, has to keep the detector people informed about what’s going to happen and when. This status page is one way of keeping the detector people informed.

The large graph in the middle (if present) shows the “intensity vs time” of the two proton beams (B1 & B2).  Anything less than “1E9” is effectively zero – no beam.  While “intensity” is a count of the number of protons in the beam, the “energy” of these protons is a different thing, and is listed at the top-center.  At the time of this writing, I can see “E: 450 GeV”.  That number will make its way up to 3,500 GeV in several weeks.

The “comments” box is also useful to watch, as it typically tells us what the LHC beam people are about to do. There, one might also see phrases like “beam dumped” which means the beam was purposefully thrown away – slammed into a giant wall underground.  “Injection” means putting protons into an accelerator.  If you’re curious, there is a full list of LHC acronyms you might see.

Keep an eye out for green “true” images in the lower right by “Stable Beam”, because that’s when we can have good proton collisions to record.
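As a toy illustration of those reading rules (the 1E9 threshold, the energy readout and the “Stable Beam” flag are from the text above; the function and argument names are mine, not the real feed format):

```python
# A toy interpreter for the status-page values described above.
# Not the real LHC feed -- just the reading rules from this post.

def describe(intensity_b1: float, intensity_b2: float,
             energy_gev: float, stable_beams: bool) -> str:
    NO_BEAM = 1e9  # anything below ~1E9 protons effectively means "no beam"
    if intensity_b1 < NO_BEAM and intensity_b2 < NO_BEAM:
        return "No beam in the machine."
    msg = f"Beams circulating at {energy_gev:.0f} GeV"
    if energy_gev <= 450:
        msg += " (injection energy)"
    if stable_beams:
        msg += " -- stable beams: good collisions to record!"
    return msg

print(describe(5e10, 4.8e10, 450, False))  # beams in, still at injection energy
print(describe(5e10, 4.8e10, 3500, True))  # ramped up and declared stable
```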

Mike


I happened to be on-call for the CMS High Level Trigger (HLT) system during the week all LHC experiments saw their first collisions, so here I describe (after having some time to breathe) my experience.

All the hardware subsystems in the CMS experiment have two kinds of people taking care of operations.  The ones on the front line are the so-called “shifters”, operators who sit in front of several computer screens in the control room and whose job is to monitor closely the performance of each component and take rapid action in case something goes wrong.  Each shift is usually 8 hours long and there is always someone doing this; the operations are 24/7.  The other kind are the “experts”, who are on-call 24/7 in case there is a major problem or a more involved task that needs to be done.  For this first week, however, shifters and experts were intensively working together in the control room making sure everything worked as planned.

For software subsystems, like the HLT, there are also shifters, but they usually sit somewhere else (like in the remote control room across the Atlantic, at the LPC at Fermilab) and take the usual 8-hour shifts.  The CMS control room at P5 is always connected via video with the other remote stations, including Fermilab, DESY, the CMS Meyrin centre, etc.

The experts are of two kinds, the primary and the secondary.  The team of people in charge of expert support rotates between these two states.  The primary is usually the main expert, who carries a cell phone all the time in case there is an “emergency” call from the control room.  The secondary is there for backup, in case the primary needs support or is unreachable for any reason. The week before the collisions week I was secondary, and the primary responsibility was transferred to me the day of first collisions, so it was a very exciting (also quite stressful) moment.

The HLT system is a crucial part of the experiment.  After the first level of triggering (called L1), the HLT is responsible for deciding what goes onto tape and what doesn’t.  For the expected first collisions, of course, there was no room for mistakes.  We had to be able to record these events and make sure we didn’t miss them because of issues like the timing synchronization of the beam with our trigger (L1), the timing of the subdetectors, or any other eventuality.  The beam conditions for these first pilot runs were not as stable as usual (and the detectors were not fully calibrated; we need collision events for that!), so we needed to make sure we considered all scenarios.  On the Saturday and Sunday before Monday, November 23rd (the day of first collisions), everyone was working very enthusiastically to prepare for this.  I remember sitting down with the Run Coordinator (the person in charge of all operations), together with experts on the data acquisition, in order to define a strategy and adapt quickly to the expected (and not so expected) beam conditions.  We worked intensively to make sure the small modifications that needed to be made were carefully executed.

By Monday morning we were ready and very confident that if the delivered beams were to collide at the CMS detector, we were going to be able to see the collisions and record them.  Unfortunately, on Monday afternoon (when most experiments saw their first collisions), CMS did not see any collision candidate; everything seemed to be consistent with beam gas, or at most something colliding outside the detector.  Worry and stress could briefly be noticed in the faces in the control room.  But there was no time for that: for many it was the culmination of years of work, and for all of us the beginning of an exciting program, so we went back to work to confirm our explanations of what had happened.  I could feel the adrenaline flowing in small but appreciable quantities; I imagine this chemical flooded many physicists’ bodies that day.

Soon, however, we (CMS+LHC) found out that the beams were not optimized for collisions at P5 during the afternoon, so we tried again in the evening: the LHC circulated two beams again, now optimized for CMS, and it was marvelous.  The displays showed beautiful events.  There was applause and champagne!! The machine works!!

Edgar Carrera (Boston University)


CMS Detector Control Room

Thursday, November 19th, 2009


I’m getting word there will be circulating beams as early as tomorrow evening – another LHC milestone!  (As mentioned on CERN’s Twitter.)  First collisions are not too far away after that.

The image above is an almost-live, regularly updated view of the CMS control room – CMS being one of the two general-purpose detectors at CERN. (See the image correctly at the US LHC blog site.) Using some fancy CSS I overlaid text labeling the different areas of the room.

I’ll be on shift in the Trigger area starting next week.  There are about 6 wide-screen monitors back there that I’ll be watching to keep track of (too) many things.  (The Trigger decides which collision events to record or throw away.)

Feel free to spy on people in there.  Geneva is +6 hours from New York and +9 hours from Seattle, so it might be late there compared to your time, but people are on shift 24 hours a day!
