
Posts Tagged ‘shift’

A Change of Pace

Monday, February 4th, 2013

Some physicists and engineers from Purdue and DESY, and me, at the beamline we used to test new pixel designs

Every so often, a physicist needs a vacation from doing data analysis for the Higgs boson search. A working vacation, something that gets you a little closer to the actual detector you work on. So last week, I was at the DESY laboratory in Hamburg, Germany, helping a group of physicists and engineers study possible changes to the design of individual pixels in the CMS Pixel Detector. (I’ve written before about how a pixel detector works.) We were at DESY because they had an electron beam we could use, and we wanted to study how the new designs performed with actual particles passing through them. Of course, the new designs can’t be produced at large scale for a few years — but we do plan to run CMS for many, many years to come, and eventually we will need to upgrade and replace its pixel detector.

What do you actually do at a testbeam? You sit there as close to 24 hours a day as you can — in shifts, of course. You take data. You change which new design is in the beam, or you change the angle, or you change the conditions under which it’s running. Then you take more data. And you repeat for the entire week.

So do any of the new designs work better? We don’t know yet. It’s my job to install the software to analyze the data we took, and to help study the results, and I haven’t finished yet. And yes, even “working on the detector” involves analyzing data — so maybe it wasn’t so much of a vacation after all!


Shifting expectations

Saturday, April 14th, 2012

It’s 2012. We have stable beams. We’re at 8 TeV. We’re taking data and I’m sitting in the ATLAS Control Room again. Fans of my blog will remember my previous on-shift posts and, yes, today I had an awesome breakfast of roasted duck (a special treat from a visiting professor).

So ATLAS Control Room, we meet again...

The last time I took shifts was about 6 months ago, and since then we’ve had a shutdown. Both the LHC and ATLAS have used this break as an opportunity to make substantial improvements and move things around a bit. The change to 8 TeV came at the same time as a change in the luminosity calibration. For some reason it looks like CMS are getting about 10% more collisions than ATLAS is. That’s a little unnerving.

The writing's on the wall, literally. CMS have more collisions than we do.

As the beam conditions have changed, so has the Trigger Shifter’s desk. Performing the checks used to take me about 20 minutes, but with the new layout it took me one hour. Hopefully as I get used to the new system it will be quicker! Since I’m supposed to perform these checks about once an hour, I could spend my whole shift staring at one set of histograms! That’s the kind of environment that leads to simple mistakes which could cost data.

Just when things were going well I heard a sound over the intercom and all my trigger rates dropped to 0 Hz. There were no error messages, nothing seemed to be wrong with the detector and every system seemed to be working fine. After discussing the situation with colleagues in the Control Room I realized that it was a scheduled beam dump. A scheduled beam dump. We don’t get those often, and the training doesn’t include an MP3 file of the “scheduled beam dump” sound. But then again it’s 1:00am and it’s been 6 months since I was last on shift, so I think I can be forgiven for forgetting what a scheduled beam dump sounds like.

Discussing the beam dump with the other shifters.

I’ll be on shift tonight and for the next two nights, racking up credit for SMU and keeping the trigger alive. If all goes well it’s a good chance to catch up on work, write a few blog posts and get some time to ponder the bigger challenges in my analyses. For a few days I’m essentially free from all meetings and distractions, giving me the time and space to sort out all the little problems that have built up in the past few weeks. The broken code, the old e-mails, the unasked questions. Shifts are great.

If you liked this post you might also like:
On shift
The best and worst moment on shift


Location, Location, Location

Thursday, January 19th, 2012

If I had to pick one thing that’s definitely better on my old experiment, ATLAS, than on my new experiment, CMS — and especially if I had to pick something I could write publicly without getting into trouble — it would be this: the ATLAS detector is across the street from the rest of CERN. I’m not sure how that was decided, but once you know that, you know where CMS has to be: on the other side of the ring, 5 or 6 miles away. That’s because the two detectors have the same goals and need the same beam conditions, and two diametrically opposite points on the LHC ring are the easiest places to provide identical conditions. The pre-existing caverns from the LEP collider, whose tunnel the LHC now uses, probably also helped determine where the detectors are.

In any case, it used to be that when I wanted to work on my detector, I had only to go across the street. Now I have to drive out of Switzerland and several miles into France. Except, I don’t like driving. So I’ve been working on alternate means of transportation. A few months ago I walked. Last night I had to go to downtown Geneva, so I took the bus. It’s actually pretty good, although the bus stop is a mile away from CMS. There’s also the shift shuttle, which runs from the main CERN site to CMS every 8 hours via a rather roundabout route. And I can bike, once the weather gets better and I get myself a little more road-worthy. To be honest, every option for getting here is much slower than driving, but I enjoy figuring out ways to get places enough that I’m going to keep trying for a while.

I have plenty of chances to try, because I’ll be here in the CMS control room a lot of the time over the next few weeks. Right now, I’m learning and helping with the pixel detector calibration effort. (We’re changing the operating temperature, so all the settings have to be checked.) Soon I’ll be learning to take on-call shifts. So the more I stay here, the more I learn. I got here this morning, and I won’t leave tonight until about 11 pm. I could take the shift shuttle back — or maybe I’ll just get a ride.


It’s that moment when you realize something serious and exciting has happened, but it’s 5:45am and you have to wake somebody up to sort it out. As the LHC ramps up it’s my role to make sure that the trigger is ready. This means looking at the bunch structure in the LHC and checking that ATLAS knows what this structure looks like. It’s as simple as pressing a few buttons and updating a database, and if everything goes smoothly we have nothing to worry about.
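For the curious, the gist of those “few buttons” is something like the toy sketch below. Every name and number in it is invented for illustration; the real ATLAS trigger tools and database are far more involved than a simple comparison of two sets.

# Toy sketch only: functions, patterns and numbers are invented for illustration.
def read_lhc_filling_scheme():
    # In reality this would come from the accelerator; here, a toy set of
    # filled bunch-crossing IDs.
    return {1, 101, 201, 301, 401}

def read_trigger_bunch_groups():
    # What the trigger currently assumes is filled (also a toy set).
    return {1, 101, 201, 301}

def check_bunch_structure():
    lhc = read_lhc_filling_scheme()
    trigger = read_trigger_bunch_groups()
    if lhc == trigger:
        print("Bunch structure unchanged; nothing to do.")
    else:
        print(f"New filling scheme: {len(lhc)} filled bunches "
              f"(trigger expected {len(trigger)}); updating the database.")
        # ...and, as described above, this is the point where the shifter
        # checks the instructions and phones the on-call expert.

check_bunch_structure()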

This time it was a bit different, because the LHC used a bunch structure they had never used before. When I pressed the button I was actually telling ATLAS something new and witnessing one of those rare transitions in the normal running of the LHC! (Jim’s post gives a great explanation about what bunch structures are and how the LHC team design them.) Then I checked the instructions, and they told me I had to wake someone up and tell them about the change. Nobody likes to be woken up at 5:45am, especially if they have an important meeting the next day. To make matters worse, I know the guy on the other end of the line (although he was so sleepy I didn’t recognize his voice at first!). At that point I remembered what my flatmate had told me when he was on call and got woken up at night. He said “What we do would be easy if they just gave us two minutes to think about it. We need time to wake up!” So, feeling bad about waking up the expert, I told him I’d call back in 5 minutes. There was a flurry of messages on the electronic logbook and short conversations in the Control Room, and then it was time to call again. This time the voice on the other end of the line was more alert and a bit happier! He said everything was fine: I could proceed as normal, and as long as there are no serious problems we can take data as we usually do.

We have beams!

The LHC just declared stable beams. Now the fun begins…


Detector monitoring…

Friday, September 9th, 2011

Greetings from the LHCb detector control room everybody! For the past few days, I’ve been waking up very early in the morning and cycling here to do my part in keeping the LHCb detector running and recording as much data as possible.

It’s probably been mentioned by other people in previous posts, but the LHC particle physics detectors[*] are constantly monitored, 24 hours a day, 365 days a year[**]. There are various levels of detector and data monitoring: the first level consists of people in the various detector control rooms, called online shifts; the second level consists of people on call, called expert shifts; and the third level consists of people doing remote monitoring of data quality and reconstruction, called offline shifts[***].

Each experiment requires a different number of people at each monitoring level, depending on what is deemed necessary. For example, LHCb has 2 people on shift in the control room here at P8. I believe CMS has 5 people in theirs at P5 while ATLAS has 12 over at P1. These online shifts are 8 hours each, the morning one running from 7am to 3pm, the evening one running from 3pm to 11pm and the night one running from 11pm to 7am. These people are in charge of making sure that the detectors are running smoothly, that event selection goes as planned, that data is being read out properly and that there are no obvious problems.

Online at LHCb, we know we’re doing a good job if our detector efficiency is high. We want to record as many interesting collision events as possible during stable beam periods for our physics analyses. Time lost to detector problems is data lost. Above is a nice pie chart of the detector performance for the year during stable beams. I say nice, as we are approximately 90% efficient; we have been able to record around 90% of the luminosity which the LHC has delivered to us this year. Of course it would be better if we were at 100%, but this is not really possible given the time required to get the detector from standby into ready (ramping up the HV on the subdetectors and moving the VELO into position). The other two slices of the pie, related to subdetector and readout issues, we try very hard to reduce.
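To make that efficiency number concrete, here is a back-of-the-envelope sketch. The luminosity figures below are invented for illustration (the real accounting is done fill by fill by the online monitoring); the point is simply that efficiency is recorded luminosity divided by delivered luminosity.

# Illustrative numbers only: the luminosity values are made up for this
# example; the real bookkeeping is done per fill by the online monitoring.
delivered_lumi = 1000.0  # luminosity delivered by the LHC (arbitrary units)
recorded_lumi = 900.0    # luminosity actually recorded by the detector

efficiency = recorded_lumi / delivered_lumi
print(f"Data-taking efficiency: {efficiency:.1%}")  # -> 90.0%

# The missing ~10% is what the other pie slices account for: time spent
# ramping the HV and moving the VELO in, plus subdetector and readout problems.
print(f"Luminosity lost: {delivered_lumi - recorded_lumi:.0f} (same units)")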

If things don’t look good in the control room, and we can’t figure out why, we call the people on expert shift to diagnose the problem. Expert shifts usually last a week, and require carrying a phone and being within approximately half an hour of the control room. They also need to attend the daily morning run meetings where the detector plan for the day is laid out. When there are stable beams, the optimal plan is obviously to take as much data as possible, but sometimes stable beams are needed for subdetector scans which are important for understanding how the beam radiation is affecting the electronics. When there aren’t stable beams, then the plans can include subdetector calibration or firmware and software upgrades.

Oooh! The LHC is injecting beam now, I better get back to work[****]!

—————————————-
[*] I apologise for not mentioning ALICE at all, but I don’t know anybody from that collaboration well enough to ask random questions like how their shifts are run.

[**] Okay, that was a bit of an exaggeration: the detectors aren’t monitored 365 days a year; they are monitored according to the LHC schedule. For example, we don’t have shifts over the winter shutdown period and the number of shifts is reduced during machine development and technical stop periods.

[***] I’m generalising here; each of the experiments actually calls each of their shifts by different names. Random fact: LHCb calls their expert shifts “piquet” shifts. As far as I can tell, this is the French word for stake or picket, but I haven’t been able to figure out why this is the word used to describe these shifts.

[****] Guess I should mention that I’m on data manager shift at the moment, so my job is to check the quality of the data we are recording.


Why are you still doing night shifts?

Thursday, November 13th, 2008

This is a question I’ve received recently from a couple of my friends in the theory community.  Theoretical particle physicists are pretty smart people, and they do know a little something about particle detectors — so if they’re wondering, then I’m sure some of you will be curious too!  This is also a chance to see a snapshot of my psychological state at the end of a night shift: I wrote all of this to explain what I was doing between 6:20 and 6:45 in the morning a couple weeks ago.  My only edits are two places where I wrote something incorrect and replaced it with a new explanation in brackets.

To summarize: I’m busy this week and getting an easy entry out of cutting and pasting from my gChat log.

Again, the question was (more or less), “Why are you still doing night shifts when the accelerator, and large parts of the ATLAS detector, are off?”  Here’s my answer:

06:22 calibrate the detector
the pixel detector has 80 million channels (i.e. pixels, 400 x 50 microns)
06:23 they actually live, physically, on about 1700 modules, which talk to various hierarchically-organized computers
06:24 [to transmit the data the 100 meters to the counting room without high voltage or repeaters] we have optical links for transmitting the data from inside the detector until it gets outside
thus we need lasers to turn digital signals into optical light, and then we also need to convert the light back
the lasers have to be timed and powered correctly, as does whatever reads the information
06:25 at the moment, the ATLAS pixel detector isn’t using some fraction like [3%] of its modules, because they aren’t set correctly. in some cases, they may be impossible to set correctly until we can open the detector and replace components — which may be many years
but in other cases, the automatic-setting didn’t work, and we have to take a closer look.
06:26 some experts were in here today to try to recover a few such modules by taking that closer look; now I’m running scans that tell us if they were successful or not.
06:27 that’s only one example of the kind of thing we do. there are a lot of things you can set on every module, and we have to get them all set right.
06:38 [My friend asks why we run all night, and if we run all the time]
06:43 me: yes, we have finite time, and lots of work to do
and clearly more people than pixel detectors.
06:44 once the cooling goes off, in a few weeks, we have to turn the modules off. then there’s only a few kinds of calibration scans/studies we can do

It’s worth noting that now, two weeks later, all the optical links are working well, except for a very few that are hard-core unrecoverable — thanks to the work of the experts who looked at the tuning and the very small contribution I made by running scans for them overnight. Our night shifts continue, with a few nights each from over a dozen people in this month alone. Although the details of the work at the moment are different, the overall plan is the same: to have our subdetector, the last one installed, be as ready as the rest of ATLAS when data finally arrives next year!
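For a rough flavour of what “running scans that tell us if they were successful” boils down to, here is a toy sketch. The module names, values and the threshold are all invented; a real optical-link scan produces far richer output than a single number per module.

# Toy example: module names, values and the threshold are invented.
# Idea: each scan gives a per-module figure of merit (here, the fraction of
# scan points where the optical link decoded cleanly); flag the modules that
# still need expert attention.
scan_results = {
    "L0_B11_S2_M3": 1.00,
    "L1_B04_S1_M6": 0.997,
    "L2_B23_S2_M1": 0.42,   # a module the experts would have to revisit
    "D1_A02_M4": 0.00,      # possibly unrecoverable until the detector is opened
}

THRESHOLD = 0.99
bad_modules = sorted(name for name, frac in scan_results.items() if frac < THRESHOLD)

print(f"{len(bad_modules)} of {len(scan_results)} modules below threshold:")
for name in bad_modules:
    print(f"  {name}: {scan_results[name]:.1%} of scan points OK")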


First ATLAS Pixel Tracks!

Sunday, September 14th, 2008

I’m on my 18th hour of training shifts since Saturday morning, getting in as much time in the control room as I can, and it’s been a very exciting time. One of my colleagues has just discovered that, last night, we recorded the first cosmic ray tracks in the ATLAS pixel detector!

First ATLAS Pixel Detector Track!

This is very exciting news for us; we’re working right up to the wire to make sure our pixel detector is able to run stably along with the rest of the detector. Collisions are coming soon soon soon!

Update (Sept 15): In response to two excellent questions in the comments, I wrote in a little more detail what you’re looking at in the picture. I figure the explanations might as well go in the entry:

1. What’s the perspective? Where’s the LHC?

You’re looking at the inner part of the ATLAS detector, which is wrapped around one of the collision points of the LHC. The large image in the upper left is a cross-section of the detector; the white dot in the very center is where the LHC beam pipe is. The image along the bottom shows the same tracks from the side; the LHC beam pipe isn’t shown, but it would run horizontally (along the Y’ = 0 cm line).

2. What do the dot colors mean? What’s the line?

All the dots are the actual points at which we have a signal from our detector. The red dots represent the signal that we think was left by a charged particle when it passed through, and the red line is the path we think that particle took (i.e. the “track”). The green dots are also signals in the detector, but we think they’re random firings in our electronics, because we can’t make any tracks out of them.

It may look like a lot of electronic noise, because there are more hits from random firings than from the track. But remember that there were only one or two tracks to be found, whereas we have over eighty million pixels in our detector. Thus the fraction of noisy pixels was actually quite small, and certainly didn’t interfere with finding the track. We also have a list of especially noisy pixels that we can “mask” (i.e. ignore), which will bring down the noise by quite a lot but which we haven’t begun to use yet.
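To put a rough number on “quite small” (the noise-hit count below is invented, since the exact figure for this event isn’t given here): even a few hundred random firings in one event is a tiny fraction of eighty million pixels.

# Back-of-the-envelope only: the noise-hit count is a made-up illustration;
# the pixel count is the approximate real channel count of the detector.
n_pixels = 80_000_000   # roughly eighty million pixels
n_noise_hits = 300      # hypothetical number of random firings in one event

occupancy = n_noise_hits / n_pixels
print(f"Noise occupancy in this event: {occupancy:.1e}")  # about 4e-06, i.e. ~0.0004%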


Night and Day

Friday, September 12th, 2008

It’s Saturday morning, and I’m up at 6 AM again for my fourth training shift in eight days. I’m tired. I’ve not only been dealing with getting up very early, but also with staying up late: on Wednesday, I was the “live from the control room” connection for a San Francisco Bay Area party to celebrate the start of the LHC. The party was in the evening there, which meant the middle of the night here, and so for me, Circulation Day stretched from 9 AM until 6 AM the following morning, when I finally left work. That made for a very abbreviated Thursday, because I had a shift yesterday (Friday) at 7 AM as well.

Anyway, I’ve actually had it easy with the shifts so far, because training shifts are all day shifts.  I’m (probably) almost done with them though, and ready to start running the station on my own. (There are experts on call if something happens that I’ve never seen, thankfully!)  I’ve just been asked to submit my shift availability for October, and here it is:

Seth's shift availability for October

Green means I’m willing to take a shift at that time, red means I can’t; the horizontal axis is the 31 days of the month, while the three vertical entries are the 7-3 day shift, the 3-11 evening shift, and the 11-7 night shift.  There are two things you should note:

  1. You can probably guess which weekend I’m meeting a friend in Zagreb, Croatia.
  2. I’m willing to take as many night shifts as day shifts — which means that I can be put on as many night shifts as the shift scheduler thinks is reasonable. Three or four nights in a row is not unusual at all.

Fortunately, as a new shifter I’ll still be on the day shift for a bit, so I at least won’t be waking the experts up when I screw up and have to call them.  But there’s work to be done, and I have to be willing to work (almost) all the time.  And you know what?  I’m thrilled to be doing it.


Training Shift Liveblog

Thursday, September 4th, 2008

It may be bedtime back in the United States, but here in Geneva it’s six in the morning, and I’ve just dragged myself out of bed. That’s because I have a “day” shift, which for some bizarre reason begins at 7 AM; thus I’ll have to leave my apartment in downtown Geneva in complete darkness. This is actually only a training shift, but I’m still very excited; I’ve spent a long time writing various analysis software, and it will be exciting to really get my hands on the detector!

I was actually in the control room for a few hours yesterday evening, watching one of the first times our pixel detector has been integrated with the whole “combined run,” and hoping to see a track.  It was very crowded then; we’ll see how things look at 7 AM.


Getting Ready

Wednesday, September 3rd, 2008

I’m usually fairly reserved about my enthusiasm, but I have to admit that now even I am getting excited about first beam.

The ATLAS pixel detector is up and running in the pit, and I’ve been working hard this week on looking at the data from calibration scans. Since I wrote a lot of the tools for looking at large quantities of pixel calibration data in a systematic way, I’m the most up-to-speed on using them; and since we have to be calibrated and ready to run very soon, there’s a lot of demand for those skills. Being useful, and having a lot to do, makes me happy. I get up early in the morning ready to come to work, and leave only reluctantly in the evening when I’m too tired to get anything done.

I’ve also been trying hard to get all the training I need to run pixel detector shifts, and it looks like my efforts have borne fruit. I have “training shifts” on Friday and Monday, and hopefully after that I’ll be able to do things on my own. The only downside is that the day shifts now start at 7 AM—it’s a good thing I’ve been getting up early ready to come to work!
