
Archive for May, 2008

In Final Position

Saturday, May 31st, 2008

It is a moment both longed for and dreaded: the closing of the calorimeters. To us on TileCal, the closing of the calorimeters says two things. On one hand it says, ‘What a relief! Finally, no more repair work.’ And on the other it says, ‘What! We are closed? Now there is no more repair work.’ The thing about hardware people is that we get very, very uncomfortable when we can’t actually touch the hardware. So the closing is a very painful but necessary transition. While we are all very glad that the calorimeters have finally closed, we are all still a bit nervous about it. But this will fade.

The Tile calorimeter is divided into three parts, two of which are movable. There is one ‘barrel’ section about six meters in length and two ‘extended barrels’ (one on each side of the barrel), each about three meters long. In the ‘open’ position, the two extended barrels can be moved about three meters away from the barrel section. Moving the extended barrels back into their usual position (next to the barrel) is an engineering feat in its own right. Not only is each extended barrel big and heavy, it also has thousands of cables connected to it. As it took years to connect all the cables going to the extended barrels, it is not possible to disconnect them before moving. Instead the cables were made longer (for some slack) and then put in ‘flexible trays’, so that when the extended barrels move, the flexible trays can move with them. Hence no recabling!

This picture was taken during the movement of one of the extended barrels. The perspective is from the bottom of the extended barrel looking up. The blue boxes on the outside are where Tile’s power supplies are located. But despite our unhappiness with no longer being able to touch the hardware, now that the extended barrels are in their final position, we can certainly breathe easier. The movement went very smoothly; nothing crashed or crushed or squished or squeezed. And now we are one step closer to being ready for beam!

Atlas Calorimeter


[Image credit: http://www.ignorancia.org/]

Like most other large particle physics experiments, CMS has a lot of management structure, physicists who effectively are just managers. As you can see, these organizational charts are usually drawn as a lot of inter-connected boxes, which is why positions like this are sometimes referred to as boxes. Most of the important boxes, like the spokesman, our representative to the rest of the world, are elected by the collaboration. In the case of mere lower-level convenors, a team of wise senior physicists typically just finds you worthy, then nominates you, and if you accept you have the job. Particularly for post-doctoral researchers these positions are quite coveted, as holding one proves (if you do your job well) that you have some form of leadership capability, one of the alleged requirements for a tenure-track job.

Today is a special day for me, as I have agreed to help run the CMS pixel detector software group for a year (at least). I find this all highly exciting, as I suspect I will be learning a lot in this time, not only about our detector but also about how particle physics experiments, or at least CMS, are run behind the scenes. I even have a title, as I am now a Detector Performance Group convenor for the CMS pixel offline software. My own acronym and a box to put it on, whoo whoo! Essentially the title means that I have to make sure the software that is used to analyze and reconstruct pixel data is in a good state. And that means keeping track of all the different activities that go on in its development, making sure things stay up to date, etc. And that means… guess what: meetings.

So, I got my little (and yes this really is quite a minute) box. I wonder what’s next. I suspect many more meetings.


I had the idea to live-blog from the ATLAS control room this past weekend, since I was going to be there on shift. But since there was going to be actual work to do, and I should be doing it instead of blogging, I decided not to post live but instead just to type notes into my laptop as I had the chance. I cleaned up the notes a little today, but below is basically what I typed while I was there.

I was supposed to be on shift from 3pm to 9pm Geneva time in the ATLAS control room, at the Liquid Argon Calorimeter desk. The plan was to detect and record muons from cosmic rays with the ATLAS detector. Muons are constantly created when cosmic-ray particles from space collide with particles in the atmosphere. The muons travel from the atmosphere all the way down underground to the ATLAS detector, and we can detect them. We have been doing this for many months now, with more and more of the ATLAS detector as it is installed. It is a nice test of our equipment before the LHC starts colliding protons inside our detector in a few months, and nice practice for everyone here in operating the detector. Anyway, without further ado…

Saturday, May 24, 2008:

3pm: Start of shift. There are 3 shifters working together, which is probably too many in the long term, but for now it’s not so bad for training purposes, since a lot of people have little or no experience in the control room. The 3 of us arrive and meet the 3 people who have been there since 9am on the previous shift, so actually there are 6 people here for a little while.

3:01pm: There is an ongoing problem and the 6 of us will try to figure it out. The problem is that we can see, on one of the monitoring displays, that there is no data coming from one of the parts of the detector. It shows up as a blank spot in a plot that shows the average energy recorded in every channel.

3:10pm: After some investigation it looks like everything was okay yesterday, and sometime between midnight and 4am the data went missing. Nobody was here overnight, and as far as we know nobody was working at the time to mess things up, so it’s a mystery.

(more…)


Swiss Wine

Monday, May 26th, 2008

I had never heard of great Swiss wine, but I couldn’t refuse when some friends were heading out to the apparently famous “caves ouvertes”. All the wineries around CERN (I didn’t know there were so many) are open for tasting over the weekend. Although it started out with the typical gray weather, the sun managed to find its way out and it turned into a beautiful Saturday afternoon. The Geneva transportation board had arranged shuttles to drive you around, so that drunks are not running over wine tasters on the narrow, hilly roads of Satigny and the neighboring villages. Many were even brave enough to bike, but the thought of an uphill road after a few glasses makes me nauseous. Still, I would recommend taking your bike out there if you are not drinking; it is quite beautiful.

So the typical wines, from what I gathered, were chasselas, gamay, gamaret and pinot noir. There were some merlots and others (btw I am not a wine expert), but nothing came across as spectacular. I found myself thinking that the 4 Euro Bordeaux from Champion last week was more satisfying. I am not really qualified to say anything intelligent about wines, and perhaps I didn’t manage to traverse all the fine Swiss wineries, but I was a bit disappointed, though not about the free wine, of course.

However, I did learn that there is a very good place for RIBS somewhere in Dardagny; I’ll have to check it out soon.


Big Explosions

Sunday, May 25th, 2008

Hubble Telescope image of the Crab Nebula

Now that I’ve gotten your attention with the entry title, I of course have to admit that there are no big explosions at CERN. That’s a good thing, too, because I’m talking about really big explosions.

CERN, like any big laboratory or university, has a fair number of lectures and colloquia on various topics in physics. One of the great things about being a physicist, and a physics student in particular, is that going to these lectures counts as work, at least if it doesn’t get in the way of things that have to be done. Since my work this week was mostly meetings about getting a new project and passing the old one off to another person, along with writing an ATLAS Infernal Internal Note on the old project, I had the opportunity and need for any educational breaks I could find.

As it happened, there were three very interesting talks by Princeton Professor Adam Burrows. Their nominal subject was “Black Holes and Neutron Stars,” but what he really wanted to show was stars exploding. The first talk, which was definitely my favorite, had a lot of movies and simulations of exactly that. A particularly pretty example is this movie of a Type Ia Supernova:

The neat thing about that video is that, not only does it look good, it’s also a real simulation. One of the main things I learned from the talks is that a substantial obstacle to understanding the details of supernovae is a lack of computing power: there are a lot of ideas about how they work exactly, but none of them come out quite right in simplified simulations. For example, Type II Supernovae probably need to lose their spherical symmetry so that the explosion can spread along one axis while new material collapses into the core from other directions, but it’s not clear exactly how this happens, and it can’t be simulated properly in only two dimensions.

Jokes about avoiding real work aside, it’s quite valuable for physicists to keep up with work in fields that are somewhat removed from our own work; you never know what interesting connections might come up. The details of supernovae have a lot of particle physics in them; for example, there are a tremendous number of neutrinos produced. In fact, neutrino detectors were the first instruments to “see” Supernova 1987a, because the weakly-interacting neutrinos escaped from the star a few hours ahead of the rest of the explosion.

[Image credit: NASA, ESA, J. Hester and A. Loll (Arizona State University)]


Tiers on my pillow

Friday, May 23rd, 2008

And now, the long-promised explanation of the CMS distributed computing system. (I know, you have been on the edge of your seats all this time.)

Let’s start by considering boundary conditions. First, the LHC will produce a lot of data. Every year, the CMS detector will produce something like a petabyte of raw data. A petabyte is a million gigabytes, and if I did the calculation right, stored on a set of DVDs it would stack up twice as high as the Nebraska state capitol, a famously tall building (if you know your Nebraska). This data needs to be processed (which usually means adding more information to it, making it bigger), stored and analyzed. On top of that there is an even larger amount of simulated data — if you are looking for new physics, you have to simulate it first so you know exactly what detector signatures you are looking for. Thus, we are talking many petabytes of data per year that we must work with.
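For the curious, here is the back-of-the-envelope version of that DVD estimate as a few lines of Python. It is only a sketch: the disc capacity (4.7 GB single-layer), the disc thickness (1.2 mm) and the capitol height (roughly 122 m) are round numbers I am assuming, not figures from the post.

    # Back-of-the-envelope check of the DVD-stack comparison above.
    PETABYTE_GB = 1_000_000      # 1 PB expressed in GB, as in the text
    DVD_CAPACITY_GB = 4.7        # single-layer DVD (assumed round number)
    DVD_THICKNESS_MM = 1.2       # thickness of one disc (assumed round number)
    CAPITOL_HEIGHT_M = 122       # Nebraska state capitol, about 400 ft (assumption)

    n_dvds = PETABYTE_GB / DVD_CAPACITY_GB
    stack_height_m = n_dvds * DVD_THICKNESS_MM / 1000.0

    print(f"{n_dvds:,.0f} DVDs, stacked ~{stack_height_m:.0f} m high")
    print(f"that is ~{stack_height_m / CAPITOL_HEIGHT_M:.1f}x the capitol")
    # -> roughly 213,000 DVDs, a stack of ~255 m, about twice the capitol

So the claim holds up: about two capitols' worth of DVDs per year, before any processing or simulation.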

Second, you may not notice this while tapping on your laptop, but computers require a significant amount of power and cooling for their operation. This has become a constraint on operating data centers; last year I went to a conference on computing in high-energy physics, and the whole week ended up being about power and cooling. (Yes, I was able to stay awake.) No single site can deploy enough power and cooling to support all of the computing needed for CMS data processing and analysis.

So, our answer is to run a highly distributed computing system, with centers spread around the globe. Now, this does present significant organizational challenges, but it also allows us to make use of computing expertise in every CMS country, and it gives people a sense of ownership — my vice-chancellor for research was much more interested in helping to pay for computers in Nebraska than he would have been in sending computers to Switzerland.

To keep the system manageable, we’ve imposed a tiered hierarchy on it. Different computing centers are given different responsibilities, and are designed to meet those responsibilities. (“Design” here means how much CPU or disk they have, what sort of networking they need, and so on.) A too-cool-for-school graphic showing how the whole thing works can be found here. The Tier-0 facility at CERN receives data directly from the detector, reconstructs events, and writes a copy of the output to tape. This may not sound like much, but it saturates the resources that are available at CERN.

Data is then transferred to Tier-1 centers. CMS has seven of these, in the US (at Fermilab), the UK, France, Spain, Italy, Germany and Taiwan. These centers store some fraction of the data that come from CERN, and as we gain a better understanding of our detector behavior and of how we want to reconstruct the data, they also re-reconstruct their fraction of the data every now and then. They also make “skims” of these events — a particular physics measurement typically relies on only a portion of all the collisions that we record, so we split the data into different subsamples that will each be enriched in certain kinds of events.

Note that in all this no one has yet made a plot that will appear in a journal publication! This starts to happen at Tier-2 sites; that’s where skims get placed for general users to analyze them. There are about forty of these sites spread over five continents, and they are also responsible for generating all of that simulated data mentioned earlier. This makes the Tier-2 sites very diverse and dynamic facilities — they are responsible to many different people trying to do many different things.
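To pull the three tiers together in one place, here is a minimal sketch of the division of labor described above, written out as a little Python lookup table. The grouping is mine, purely for summary; it is not any official CMS data structure.

    # The tier responsibilities described in this post, as a simple lookup table.
    CMS_TIERS = {
        "Tier-0 (CERN)": [
            "receive raw data directly from the detector",
            "reconstruct events",
            "write a copy of the output to tape",
        ],
        "Tier-1 (7 centers: US/Fermilab, UK, France, Spain, Italy, Germany, Taiwan)": [
            "store a fraction of the data coming from CERN",
            "re-reconstruct that fraction as detector understanding improves",
            "make skims enriched in particular kinds of events",
        ],
        "Tier-2 (~40 sites on five continents)": [
            "host skims for general users to analyze",
            "generate the simulated data",
        ],
    }

    for tier, duties in CMS_TIERS.items():
        print(tier)
        for duty in duties:
            print("  -", duty)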

I have surely rambled on enough for a single posting, so some other time I will write about some of the particular challenges we face in making this system work. Suffice it to say that I spend a lot of time thinking about it. I try not to let it keep me up at night, but sometimes the title turns out to be true. Sorry, I needed to come up with a title for this post, and while “Trail of tiers” was more appropriate, it also has negative connotations in Native American history.


On the Volga

Friday, May 23rd, 2008

For all the non-blogging I’ve been doing, I can’t say that I haven’t been giving most of my life to the upcoming LHC run. The main distraction was a recent trip to an ATLAS workshop in Dubna, Russia, on the Volga River, next to a huge reservoir (which someone there called the “Moscow Sea”). While I’ve heard of Dubna for years, as I’ve had collaborators on previous experiments hailing from there, I had never been there, much less to Russia in general. Can’t say that anymore.

The workshop (“Heavy Ion Physics with the ATLAS Detector”) was early last week, and took place in a conference center on Veksler Street, well outside the lab itself. It turns out that just as it’s getting harder and harder to get our non-US colleagues into our national labs, it’s getting equally laborious to get us into foreign labs. So while I didn’t get to see their facilities, we did hear a nice talk about their planned new low-energy heavy-ion collider facility (NICA). And the workshop participants (half local, half international) presented a nice set of talks, both on ATLAS capabilities for heavy-ion physics and on Russian involvement in the other heavy-ion efforts at the LHC, CMS and ALICE. My talk, on bulk observables at RHIC, can be found here — for your enjoyment.

Finally, when the workshop was over we took a half-day trip to Sergiyev Posad, home of the Troitse-Sergiyeva Lavra monastery — the spiritual center of the Russian Orthodox Church. Fascinating — especially the private tour of their collection of icons.

And if you’re really curious, you can check out my photos of the Dubna and Sergiyev Posad parts of my journey on my flickr page. I also spent a day in Moscow on either end, and that was amazing as well — more on that on my personal page.


Time’s up, and this time it’s serious! All the big experiments at the LHC are gearing up for collisions within the next month, and for ALICE the numbers are staggering. Assuming we are running about six months of proton-proton collisions and one month of heavy-ion collisions per year (i.e. 30 weeks of continuous operation), the commitment it takes from each and every member of the collaboration is substantial.

The ALICE experiment consists of 18 detectors and 6 so-called general systems (experiment control, detector control, central trigger processor, high level trigger, data acquisition and offline monitoring). In the start-up phase, which is scheduled to last at least the remainder of 2008 and maybe most of the 2009 run, the experiment requires not only a steady 24/7 shift crew but also a substantial number of on-call experts. At the moment the conservative estimates are that at any given time 24 persons need to be on shift and 41 persons need to be on-call experts. In 2009 the on-site shift crew is supposed to shrink to 17 persons, with the goal of reaching steady-state operation with a 10-person shift crew by 2010. The counting house is laid out accordingly, but at least for 2008 and most of 2009 it will get very crowded.

Now ALICE is a big collaboration with more than 1000 Ph.D.’s at this moment, so these resource requirements should be easy to distribute across the whole collaboration, right? Well, even with so many people, the number of eight-hour shifts for each individual Ph.D. is still daunting. My institute, Wayne State University, is one of the larger U.S. participants in ALICE, but even with four Ph.D.’s our responsibility comes to only 0.882% of the total shifts. Still, with a total shift allotment of 17,490 shifts in 2008 and 16,185 in 2009, each of our four Ph.D.’s needs to take around 40 shifts per year, and assuming we take one shift per day we will be at ALICE at least around 1.5 months per year.
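If you want to check those numbers yourself, here is the arithmetic as a short Python sketch, using only the figures quoted above (nothing external assumed):

    # Shift arithmetic from the figures quoted above.
    total_shifts = {2008: 17_490, 2009: 16_185}   # ALICE-wide shift allotment
    our_fraction = 0.00882                         # Wayne State's 0.882% share
    n_phds = 4                                     # Ph.D.'s at our institute

    for year, total in total_shifts.items():
        per_person = total * our_fraction / n_phds
        print(f"{year}: ~{per_person:.0f} eight-hour shifts per person")
    # -> about 39 shifts each in 2008 and 36 in 2009, i.e. "around 40 per year";
    #    taken one per day, that is on the order of 1.5 months at CERN per person.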

Graduate students will carry a big load of these shifts in the coming years, but the early startup phase will likely have to be covered by the existing Ph.D.’s. This is a major commitment which requires substantial travel funds and time allotments for university folks like myself. It is definitely not cheap to do physics abroad. Besides the bad exchange rate of the American dollar, the housing situation in and around Geneva is a major headache for many of us. A whole trek of people will steadily have to commute between the U.S. and Geneva from now on. The total commitment of the U.S. institutions to the ALICE shift total is presently around 5%, which is equivalent to about 850 shifts in 2008. But I would assume the shift load for the U.S. in ATLAS and CMS is considerably higher.

For many students this is a great opportunity to see the world and learn about different cultures besides just doing science within an international community. But all of it needs to be well planned. Apartments need to be rented, transportation needs to be provided etc. etc. So it takes a BIG effort to do BIG science, and if you do it from abroad it might even take a little more.


Event Viewing

Thursday, May 22nd, 2008

Being able to visualize events in the detector is critical to understanding whether everything is functioning properly. But actually creating a program to display events is, in practice, incredibly difficult. I have the utmost respect for people who attempt it.

Obviously the big hurdle to event viewing is trying to display a three-dimensional detector on a two-dimensional screen. ATLAS has two solutions to this. One is Atlantis, the tried-and-true event viewer. The philosophy of Atlantis is to try and present the ATLAS detector in every two-dimensional slice possible, such as in this picture here.

Atlantis Event Viewer

From top left going clockwise, you see the full detector as if you were looking down the beam pipe, then the same cross section zoomed in on the calorimeters, then again the same cross section showing the inner detector, then a ‘bird’s eye’ view looking down on the beam pipe, and lastly a side profile of the detector (where the beam pipe is now the horizontal plane).

Atlantis as a tool is very useful but as for style… hmmm, not so much. It does have that retro look and while retro in fashion is considered acceptable, retro in computing is generally not.

Our second option is Virtual Point 1, or VP1. VP1 takes the opposite approach, going totally three-dimensional and allowing the user to place himself or herself at any point in the detector. In this picture, the view point is outside the calorimeter.

Atlas VP1 Viewer

The detector is just a shadow, barely visible in the picture, and only the hits are shown (in yellow here). While VP1 definitely has that more modern feel, the jury is still out for me. It kind of reminds me of Tron. And it is too touchy. You accidentally hold the mouse button down too long and you are transported to some strange view point. And then you have no idea where you are, or what you are looking at.

It is a thankless job, that is for sure!


Whacking Moles at the LHC

Tuesday, May 20th, 2008

When I was in undergraduate school at UC-Irvine, I lived in a Newport Beach summer rental during the winter, so it was fairly cheap for the area. It was next to the beach, so I could fall asleep to the sound of the ocean. Nearby, there was an entertainment area, the Balboa Fun Zone, with an arcade (the area was in the INXS video “Devil Inside”). It was full of video games (this was the late 1980’s), which I am generally bad at. However, it did have Skee-Ball, where you roll a ball into a series of rings, the smallest at the center giving the most points. You collected tickets as you played, and could redeem them for a prize at the end. I loved the Skee-Ball, and would play for quite a while, redeeming my tickets for some useless trinket at the end.

At the same arcade, there was a game called Whac-a-Mole. This consisted of little mole heads that popped up and you hit them back down again (with a mallet that looked like a giant marshmallow on a stick). I tried it once or twice, but it was too close to video games for me. I am not great at hand-eye coordination exercises.

Today we are doing studies with the trigger again. I am using this period of time to check and see if two fixes I made worked. They seem to have worked, but two more popped up! I was just reminded of this game. I take my (soft) mallet and whack the moles down, and then they just pop up again, somewhere else. I hope when the game is done, and the moles are gone, I get enough tickets to redeem them for a really nice prize.
