Ken Bloom | USLHC | USA

Another day at the office

Tuesday, October 8th, 2013

I suppose that my grandchildren might ask me, “Where were you when the Nobel Prize for the Higgs boson was announced?” I was at CERN, where the boson was discovered, thus giving the observational support required for the prize. And was I in the atrium of Building 40, where CERN Director General Rolf Heuer and hundreds of physicists had gathered to watch the broadcast of the announcement? Well no; I was in a small, stuffy conference room with about twenty other people.

We were in the midst of a meeting where we were hammering out the possible architecture of the submission system that physicists will be using to submit computing jobs for analyzing the data in the next LHC run and beyond. Not at all glamorous, I know. But that’s my point: the work that is needed to make big scientific discoveries, be it the Higgs or whatever might come next (we hope!), is usually not the least bit glamorous. It’s a slog, where you have to work with a lot of other people to figure out all the difficult little details. And you really have to do this day after day, to make the science work. And there are many aspects of making science work — building advanced scientific instruments, harnessing the power of computers, coming up with clever ways to look at the data (and not making mistakes while you’re at it), and working with colleagues to build confidence in a measurement. Each one of them takes time, effort and patience.

So in the end, today was just another day at the office — where we did the same things we’ve been doing for years to make this Nobel Prize possible, and are laying the groundwork for the next one.

CERN’s universe is ours!

Sunday, September 29th, 2013

This past weekend, CERN held its first open days for the public in about five years. This was a big, big deal. I haven’t heard any final statistics, but the lab was expecting about 50,000 visitors on each of the two days. (Some rain on Sunday might have held down attendance.) Thus, the open days were a huge operation — roads were shut down, and Transports Publics Genevois was running special shuttle buses between the Meyrin and Prévessin sites and the access points on the LHC ring. The tunnels were open to people who had reserved tickets in advance — a rare opportunity, and one that is only possible during a long shutdown such as the one currently underway.

A better CERN user than me would have volunteered for the open days. Instead, I took my kids to see the activities. We thought that the event went really well. I was bracing for it to be a mob scene, but in the end the Meyrin site was busy but not overrun. (Because the children are too small, we couldn’t go to any of the underground areas.) There were many eager orange-shirted volunteers at our service, as we visited open areas around the campus. We got to see a number of demonstrations, such as the effects of liquid-nitrogen temperatures on different materials. There were hands-on activities for kids, such as assembling your own LHC and trying to use a scientific approach to guessing what was inside a closed box. Pieces of particle detectors and LHC magnets were on display for all to see.

But I have to say, what really got my kids excited was the Transport and Handling exhibit, which featured CERN’s heavy lifting equipment. They rode a scissors lift that took them to a height of several stories, and got to operate a giant crane. Such a thing would never, ever happen in the US, which has a very different culture of legal liability.

I hope that all of the visitors had a great time too! I anticipate that the next open days won’t be until the next long shutdown, which is some years away, but it will be well worth the trip.

Aces high

Thursday, September 19th, 2013

Much as I love living in Lincoln, Nebraska, having a long residence at CERN has some advantages. For instance, we do get much better traffic of seminar and colloquium speakers here. (I know, you were thinking about chocolate.) Today’s colloquium in particular really got me thinking about how we do, or don’t, understand particle physics today.

The speaker was George Zweig of MIT. Zweig has been to CERN before — almost fifty years ago, when he was a postdoctoral fellow. (This was his first return visit since then.) He had just gotten his PhD at Caltech under Richard Feynman, and was busy trying to understand the “zoo” of hadronic particles that were being discovered in the 1960’s. (Side note: Zweig pointed out today that at the time there were 26 known hadronic particles…19 of which are no longer believed to exist.) Zweig developed a theory that explained the observations of the time by positing a set of hadronic constituents that he called “aces”. (He thought there might be four of them, hence the name.) Some particles were made of two aces (and thus called “deuces”) and others were made of three (and called “treys”). This theory successfully explained why some expected particle decays didn’t actually happen in nature, and gave an explanation for differences in masses between various sets of particles.

Now, reading this far along, you might think that this sounds like the theory of quarks. Yes and no — it was Murray Gell-Mann who first proposed quarks, and had similar successful predictions in his model. But there was a critical difference between the two theories. Zweig’s aces were meant to be true physical particles — concrete quarks, as he referred to them. Gell-Mann’s quarks, by contrast, were merely mathematical constructs whose physical reality was not required for the success of the theory. At the time, Gell-Mann’s thinking held sway; I’m no expert on this period in the history of theoretical particle physics. But my understanding was that the Gell-Mann approach was more in line with the theory fashions of the day, and besides, if you could have a successful theory that didn’t have to introduce some new particles that were themselves sketchy (their electric charges had to be fractions of the electron charge, and they apparently couldn’t be observed anyway), why would you?

Of course, we now know that Zweig’s interpretation is more correct; this was even becoming apparent a few short years later, when deep-inelastic scattering experiments at SLAC in the late 1960’s discovered that nucleons had smaller constituents, but at that time it was controversial to actually associate those with the quarks (or aces). For whatever reason, Zweig left the field of particle physics and went on to a successful career as a faculty member at MIT, doing work in neurobiology that involved understanding the mechanisms of hearing.

I find it a fascinating tale of how science actually gets done. How might it apply to our science today? A theory like the standard model of particle physics has been so well tested by experiment that it is taken to be true without controversy. But theories of physics beyond the standard model, the sort of theories that we’re now trying to test at the LHC, are much less constrained. And, to be sure, some are more popular than others, because they are believed to have a certain inherent beauty to them, or because they fit well with patterns that we think we observe. I’m no theorist, but I’m sure that some theories are currently more fashionable than others. But in the absence of experimental data, we can’t know that they are right. Perhaps there are some voices that are not being heard as well as they need to be. Fifty years from now, will we identify another George Zweig?

Prioritizing the future

Monday, September 9th, 2013

As I’ve discussed a number of times, the United States particle physics community has spent the last nine months trying to understand what the exciting research and discovery opportunities are for the next ten to twenty years, and what sort of facilities might be required to exploit them. But what comes next? How do we decide which of these avenues of research are the most attractive, and, perhaps most importantly, which can actually be achieved, given that we work within finite budgets, need the right enabling technologies to be available at the right times, and must plan in partnership with researchers around the world?

In the United States, this is the job of the Particle Physics Project Prioritization Panel, or P5. What is this big mouthful? First, it is a sub-panel of the High Energy Physics Advisory Panel, or HEPAP. HEPAP is the official body that can advise the Department of Energy and the National Science Foundation (the primary funders of particle physics in the US, and also the sponsors of the US LHC blog) on programmatic direction of the field in the US. As an official Federal Advisory Committee, HEPAP operates in full public view, but it is allowed to appoint sub-panels that are under the control of and report to HEPAP but have more flexibility to deliberate in private. This particular sub-panel, P5, was first envisioned in a report of a previous HEPAP sub-panel in 2001 that looked at, among other things, the long-term planning process for the field. The original idea was that P5 would meet quite regularly and continually review the long-term roadmap for the field and adjust it according to current conditions and scientific knowledge. However, in reality P5’s have been short-lived and been re-formed every few years. The last P5 report dates from 2008, and obviously a lot has changed since then — in particular, we now know from the LHC that there is a Higgs boson that looks like the one predicted in the standard model, and there have been some important advances in our understanding of neutrino mixing. Thus the time is ripe to take another look at the plan.

And so it is that a new P5 was formed last week, tasked with coming up with a new strategic plan for the field “that can be executed over a 10 year timescale, in the context of a 20-year global vision for the field.” P5 is supposed to take into account the latest developments in the field and to use the Snowmass studies as inputs. The sub-panel is to consider what investments are needed to fulfill the scientific goals, what mix of small, medium and large experiments is appropriate, and how international partnerships can fit into the picture. Along the way, they are also being asked to provide a discussion of the scientific questions of the field that is accessible to non-specialists (along the lines of this lovely report from 2004) and articulate the value of particle-physics research to other sciences and society. Oh, and the sub-panel is supposed to have a final report by May 1. No problem at all, right?

Since HEPAP’s recommendations will drive the plan for the field, it is very important that this panel does a good job! Fortunately, there are two good things going for it. First, the membership of the panel looks really great — talented and knowledgeable scientists who are representative of the demographics of the field and include representatives from outside the US. Second, they are being asked to make their recommendations in the context of fairly optimistic budget projections. Let us only hope that these come to pass!

Watch this space for more about the P5 process over the coming eight months.

Snowmass: in Frontierland

Tuesday, August 6th, 2013

What an interesting but exhausting week it has been here at the Snowmass workshop in Minneapolis. I wrote last week about the opening of the workshop. In the following days, we followed what seemed to me like a pretty original schedule for a workshop. Each morning, we bifurcated (or multi-furcated, if that’s a word) into overlapping parallel sessions in which the various working groups were trying to finalize their studies. There were joint sessions between groups, in which, for instance, people studying some physics frontier were interacting with the people studying the facilities or instrumentation needed to realize the physics goals. Every afternoon we have gathered for plenary (or semi-plenary) sessions, featuring short talks on the theory and experimental work undergirding some physics topic, followed by a discussion of “tough questions” about the topic that challenged its importance in the grand scheme of things and the value of pursuing an experimental program on it. We would close each day with a panel discussion on broader policy questions, such as what is the proper balance between domestic and off-shore facilities, or how to make the case for long-term science.

It is a lot of work to put together and participate in a program like this, and overall everyone did a great job of giving well-prepared and thoughtful presentations. I should also take this opportunity to thank our hosts at the University of Minnesota for their successful management of a complicated and ever-evolving program that involved 700 physicists, most of whom registered at the last minute. (And special personal thanks to my Minneapolis in-laws, who made my visit easy!)

We’ve now gotten through the closing sessions, in which we heard summary reports from all the “frontier” working groups. I’m still digesting what everyone had to say, but here is one thing I think I know: there is general agreement that the frontiers that we are organizing our science around are not themselves science topics but approaches that can tell us about many different topics in different ways. For instance, I was quite taken with the news that cosmology can help us set bounds on the total mass of the different kinds of neutrinos; this will help us understand the neutrino spectrum with complementary information to that provided by accelerator-based neutrino experiments. Everyone is really looking to the other “frontiers” to see how we can create a program of research that can attack important physics questions in the most comprehensive possible way. And I think that a number of speakers have gone out of their way to point out that discoveries on someone else’s “frontier” may fundamentally change our understanding of the world.

(On a related note, it is also clear that we are all bothered by the tyranny of Venn diagrams. I am hoping to find time to write again about how many times a graphic of three intersecting circles appeared over the course of the week, and what amount of irony was implied each time.)

Since this is the US LHC blog, I should also mention that the LHC came out well in the discussions. It is clear that there is a lot of potential for understanding and discovery at this machine, both when we increase the energy in 2015 and when we (hopefully) run in a high-luminosity mode later on in which we will attempt to increase the size of the dataset by a factor of ten. We expect to learn a tremendous amount about the newly-discovered, very strange Higgs boson, and hope to discover TeV-scale particles that make it possible for the Higgs to be what it is. From a more practical point of view, it is currently the only high-energy particle collider operating in the world, and it will stay that way for at least a decade. We must do everything we can to exploit the capabilities of this unique facility.

Where do we go from here? The results of the workshop, the handiwork of hundreds of physicists working over the course of a year, will get written up as a report that is meant to inform future deliberations. It is quite clear that we have more projects that have great physics potential, and that we really want to execute, than we have the resources to carry out. In some ways, it is a good problem to have. But some hard choices will have to be made, and it won’t be long until we have convened a Particle Physics Project Prioritization Panel that will be charged with making recommendations on how we do this. I’m in no position to guess the outcome, but whatever it turns out to be, I suspect that our entire field is going to have to stand behind it and advocate it if we are to realize any, if not all, of our visions of the frontiers of particle physics.

Snowmass: one big happy family

Monday, July 29th, 2013

Let me say this much about the Community Summer Study 2013, also known as “Snowmass on the Mississippi”: it feels like a family reunion. There are about 600 people registered for the meeting, and since in the end we are a small field, I know a lot of them. I’m surrounded by people I grew up with, people I’ve worked with before, people I work with now, and people with whom I’d really like to work someday. I find it a little overwhelming. Besides trying to learn some science, we’re all trying to catch up with each other’s lives and work.

As with any family, we have our differences on some issues. We know that there are diverse views on what the most important issues are and what are the most promising pathways to scientific discovery. But also, as with any family, there is a lot more that unites us than divides us. As Nigel Lockyer, the incoming director of Fermilab, put it, we will probably have little trouble finding consensus on what the important science questions are. Today’s speakers emphasized that we will need to approach these questions with multiple approaches, and there was mutual respect for the work being done in all of the study groups.

The challenge, of course, is how to accommodate all of these approaches within an envelope of finite resources, and how to strike a balance between near-term operations and long-term (if not very long-term) projects. As our speakers from the funding agencies pointed out, we are in a particularly challenging time for this due to national political and fiscal circumstances. Setting priorities will be a difficult job, and one that will only come after the Snowmass study has laid out all the possibilities.

The workshop continues for another eight days, and if you are interested in particle physics and the future of the field, I hope you’ll be keeping an eye on it. The agenda page linked above has a pointer to a live video stream, presentations are also being recorded for future viewing, and various people are tweeting their way along with hashtag #Snowmass or #Snowmass2013. There are a lot of exciting ideas being discussed this week, some of which can have a transformative effect on the field. Stay with us!

Bs on the frontiers

Monday, July 22nd, 2013

At this week’s EPS conference we have seen the release of new results from both CMS and LHCb on a search for a very rare decay of the Bs meson, to a muon-antimuon pair. I’ve written about this before (yikes, two years ago!); this decay turns out to be amazingly sensitive to possible new physics processes. This is in part because the decay (a flavor-changing neutral-current process, forbidden at leading order) is highly suppressed in the standard model, and thus the presence of additional particles could have a big impact on the decay rate. Just what impact depends on the particle; depending on the model, the rate for this decay could be either increased or decreased. It’s a somewhat unusual situation — you have something interesting to say either if you see this decay when you don’t expect to (because you don’t yet expect to have sensitivity due to insufficient data), or if you don’t see the decay when you do expect to.

But in this particular case, the standard model wins again. CMS and LHCb now have essentially identical results, both claiming observation of this process at the rate predicted by the standard model. (LHCb had shown the first clear evidence of this decay last November.) I’m not going to claim any great expertise on this topic, but this result should put stronger constraints on theories such as supersymmetry, as it will restrict the characteristics of possible SUSY particles. In addition, this observation is the culmination of years of searching for this decay. I reproduce the CMS plot of the history of the searches below; over the course of about 25 years, our ability to detect this decay has improved by a factor of about 10,000.

But here’s what’s really on my mind: I’m thinking about this measurement in the context of the Snowmass workshop, which begins one week from today in Minneapolis. The studies of the workshop have been divided up into categories of “frontiers”, where the physics can fall into Energy, Intensity or Cosmic Frontiers. This categorization arises from the 2008 report of a US HEP program planning committee. It is certainly a useful intellectual organization of the work that we do in particle physics that is easy to explain to people outside the field. The Department of Energy budget for particle physics is now also organized according to these frontiers.

But where exactly does this Bs measurement fit? The physics of quark flavors and the search for rare decays would be considered part of the Intensity Frontier. But the measurements are being done at the LHC, which is considered an Energy Frontier facility because it has the largest collision energy of any accelerator ever built, and the process is sensitive to the effects of putative particles of very high mass. This is just one example of physics measurements that cut across frontiers. Another that comes to mind is that the LHC experiments have sufficient sensitivity to the production of potential dark-matter particles that in some cases, they can be competitive with searches done in non-accelerator experiments that are classified as being in the Cosmic Frontier.

Heading into next week’s workshop, I am hoping that we will be cognizant of the interconnections among all the research that we do, regardless of how it might get classified for accounting purposes. We have many ways to explore each of our physics questions, and we need to figure out how to pursue as many of them as possible within the resources that are available.

Your summer travel options

Friday, June 14th, 2013

Now that summer is fully here, are you feeling that old wanderlust, the desire to hit the open road? Well then, there are a lot of interesting places to go on the physics conference circuit between now and Labor Day. There are many fabulous locations on the menu, and who knows, you might get to hear the first public presentation of an exciting new physics result. While it’s true that what many would consider the most glamorous stuff from the LHC has already been pushed out (at the highest priority), you can be assured that scientists are hard at work on new results, and of course there are many other particle-physics experiments that are doing important work. So, find your frequent-flyer card and make sure you’ve changed the oil, and let’s see where you might be headed this summer:

  • 2013 Lepton Photon Conference, San Francisco, CA, June 24-29, hosted by SLAC. This is definitely the most prestigious conference this year; it is the international conference that is the odd-numbered year complement to the ICHEP meetings that are held in even-numbered years. Last year’s ICHEP saw the announcement of the observation of the Higgs boson, and if someone wants to make a big splash this year, they will do it at Lepton Photon. I have previously discussed how ICHEP works; the Lepton Photon series has a similarly storied history, but is slightly different in format, in that there are only plenary overview talks rather than a series of shorter, more focused presentations. San Francisco is always a great destination, and a fine place to consider the physics of the cable car and plate tectonics.
  • 2013 European Physical Society Conference on High Energy Physics, Stockholm, Sweden, July 18-24. If results aren’t ready in time for Lepton Photon, they could be ready in time for EPS. This conference also appears in odd-numbered years, and with a format that has both parallel and plenary sessions, there are many opportunities for younger people to present their work. It is probably the premier particle-physics conference in Europe this year. Thanks to the tilted axis of the earth, and the position of Stockholm at 59 degrees north of the equator, you’ll be able to enjoy 17 hours and 40 minutes of daylight each day at this conference…starting at 4 AM each morning.
  • Community Summer Study 2013, aka Snowmass on the Mississippi, Minneapolis, MN, July 29-August 6. This isn’t really a conference, but it is the culmination of the year-long effort of the US particle-physics community to define its long-range plan. With the discovery of the Higgs boson and important developments in neutrino physics, we have better clues on what we should be trying to study in the future. Now we have to understand what facilities are best for this science, and what the technical barriers are to building and exploiting them. But we have to realize that we’re working with a finite budget, and we’ll have to do some hard thinking to understand how to set priorities. You might think that Minneapolis doesn’t have much on San Francisco or Stockholm, but my wife is from there, so I have traveled there many times and I think it’s a great place to visit. You can contemplate the balancing forces and torques on the “Spoonbridge and Cherry” sculpture at the Walker Art Center, or the aerodynamics of Mary Tyler Moore’s hat on the Nicollet Mall.
  • 2013 Meeting of the American Physical Society Division of Particles and Fields, Santa Cruz, CA, August 13-17. Like the EPS conference, DPF also meets in odd-numbered years and is a chance for the US particle physics community to gather. It’s one of my favorite conferences, with a broad program of particle physics, and it is neither too big nor too small. It is especially friendly to younger people presenting their own work. Measurements that weren’t ready for the earlier conferences could still get a good audience here. Yes, you might have gone to nearby San Francisco in June, but Santa Cruz has a totally different feel, and you can study the hydrodynamics that power the redwood trees that are all over the campus.

And you might ask, where am I going this summer? I’d love to get to all of these, but I have another destination this summer — I will be moving my family to Geneva for a sabbatical year at CERN in July. It’s a little disappointing to be missing some of the action in the US, but I’m looking forward to an exciting year. I will be returning to the US for the Snowmass workshop, where I’m co-leading a working group, but that’s about it for conferences for me this summer. That will still be plenty exciting, and I’ll do my best to report all the news about it here.

Place your bets: 25 or 50?

Thursday, May 23rd, 2013

Note to readers: this is my best attempt to describe some issues in accelerator operations; I welcome comments from people more expert than me if you think I don’t have things quite right.

The operators of the Large Hadron Collider seek to collide as many protons as possible. The experimenters who study these collisions seek to observe as many proton collisions as possible. Everyone can agree on the goal of maximizing the number of collisions that can be used to make discoveries. But the accelerator physicists and particle physicists might part ways over just how those collisions might best be delivered.

Let’s remember that the proton beams that circulate in the LHC are not a continuous current like you might imagine running through your electric appliances. Instead, the beam is bunched — about 10¹¹ protons are gathered in a formation that is about as long as a sewing needle, and each proton beam is made up of 1380 such bunches. As the bunches travel around the LHC ring, they are separated by 50 nanoseconds in time. This bunching is necessary for the operation of the experiments — it ensures that collisions occur only at certain spots along the ring (where the detectors are) and the experiments can know exactly when the collisions are occurring and synchronize the response of the detector to that time. Note that because there are so many protons in each beam, there can be multiple collisions each time two bunches pass by each other. At the end of the last LHC run, there were typically 30 collisions that occurred per bunch crossing.

There are several ways to maximize the number of collisions that occur. Increasing the number of protons in each bunch will certainly increase the number of collisions. Or, one could imagine increasing the total number of bunches per beam, and thus the number of bunch crossings. The collision rate increases like the square of the number of particles per bunch, but only linearly with the number of bunches. On the face of it, then, it would make more sense to add more particles to each bunch rather than to increase the number of bunches if one wanted to maximize the total number of collisions.

But the issue is slightly more subtle than that. The more collisions that occur per beam crossing, the harder the collisions are to interpret. With 30 collisions happening at the same time, one must contend with hundreds, if not thousands, of charged particle tracks that cross each other and are harder to reconstruct, which means more computing time to process the event. With more stuff going on in each event, the most important parts of the event are increasingly obscured by everything else that is going on, degrading the energy and momentum resolution that are needed to help identify the decay products of particles like the Higgs boson. So from the perspective of an experimenter at the LHC, one wants to maximize the number of collisions while having as few collisions per bunch crossing as possible, to keep the interpretation of each bunch crossing simple. This argument favors increasing the number of bunches, even if this might ultimately mean having fewer total collisions than could be obtained by increasing the number of protons per bunch. It’s not very useful to record collisions that you can’t interpret because the events are just too busy.

This is the dilemma that the LHC and the experiments will face as we get ready to run in 2015. In the current jargon, the question is whether to run with 50 ns between collisions, as we did in 2010-12, or 25 ns between collisions. For the reasons given above, the experiments generally prefer to run with a 25 ns spacing. At peak collision rates, the number of collisions per crossing is expected to be about 25, a number that we know we can handle on the basis of previous experience. In contrast, the LHC operators generally prefer the 50 ns spacing, for a variety of operational reasons, including being able to focus the beams better. The total number of collisions delivered per year could be about twice as large with 50 ns spacing…but with many more collisions per bunch crossing, perhaps by a factor of three. This is possibly more than the experiments could handle, and it could well be necessary to limit the peak beam intensities, and thus the total number of collisions, to allow the experiments to operate.
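To make the scaling argument concrete, here is a minimal back-of-the-envelope Python sketch. The bunch counts and relative intensities below are illustrative placeholders that I have chosen for the example, not actual machine parameters; the point is simply that the total rate grows linearly with the number of bunches but quadratically with the bunch intensity, while the pileup per crossing depends only on the bunch intensity.

    # Toy comparison of bunch-spacing scenarios (illustrative numbers only).
    # Total collision rate ~ n_bunches * intensity^2; pileup per crossing ~ intensity^2.

    def relative_rate(n_bunches, intensity):
        # Arbitrary units: linear in the number of bunches, quadratic in bunch intensity.
        return n_bunches * intensity ** 2

    def relative_pileup(intensity):
        # Arbitrary units: collisions per crossing scale with the bunch intensity squared.
        return intensity ** 2

    scenarios = [
        {"label": "50 ns spacing", "n_bunches": 1380, "intensity": 1.0},
        {"label": "25 ns spacing", "n_bunches": 2760, "intensity": 1.0 / 2 ** 0.5},
    ]

    for s in scenarios:
        print(s["label"],
              "| relative rate:", round(relative_rate(s["n_bunches"], s["intensity"])),
              "| relative pileup:", round(relative_pileup(s["intensity"]), 2))

    # With these made-up numbers the two scenarios deliver the same total rate,
    # but the 25 ns case has half the pileup per crossing, which is why the
    # experiments prefer the shorter spacing.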

So how will the LHC operate in 2015 — at 25 ns or 50 ns spacing? One factor in this is that the machine has only done test runs at 25 ns spacing, to understand what issues might be faced. The LHC operators will re-commission the machine with 50 ns spacing, with the intention of switching to 25 ns spacing later, perhaps after as little as a couple of months if all goes well. But then imagine that 50 ns running works very well from the outset. Would the collision pileup issues motivate the LHC to change the bunch spacing? Or would the machine operators just like to keep going with a machine that is operating well?

In ancient history I worked on the CDF experiment at the Tevatron, which was preparing to start running again in 2001 after some major reconfigurations. It was anticipated that the Tevatron was going to start out with a 396 ns bunch spacing and then eventually switch over to 132 ns, just like we’re imagining for the LHC in 2015. We designed all of the experiment’s electronics to be able to function in either mode. But in the end, 132 ns running never happened; increases in collision rates were achieved by increasing beam currents. This was less of an issue at the Tevatron, as the overall collision rate was much smaller, but the detectors still ended up operating with numbers of collisions per bunch crossing much larger than they were designed for.

In light of that, I find myself asking — will the LHC ever operate in 25 ns mode? What do you think? If anyone would like to make an informal wager (as much as is permitted by law) on the matter, let me know. We’ll pay out at the start of the next long shutdown at the end of 2017.

Shutdown? What shutdown?

Sunday, March 24th, 2013

I must apologize for being a bad blogger; it has been too long since I have found the time to write. Sometimes it is hard to understand where the time goes, but I know that I have been busy with helping to get results out for the ski conferences, preparing for various reviews (of both my department and the US CMS operations program), and of course the usual day-to-day activities like teaching.

The LHC has been shut down for about two months now, but that really hasn’t made anyone less busy. It is true that we don’t have to run the detector now, but the CMS operations crew is busy taking it apart for various refurbishing and maintenance tasks. There is a detailed schedule for what needs to be done in the next two years, and it has to be observed pretty carefully; there is a lot of coordination required to make sure that the necessary parts of the detector are accessible as needed, and of course to make sure that everyone is working in a safe environment (always our top priority).

A lot of my effort on CMS goes into computing, and over in that sector things in many ways aren’t all that different from how they were during the run. We still have to keep the computing facilities operating all the time. Data analysis continues, and we continue to set records for the level of activity from physicists who are preparing measurements and searches for new phenomena. We are also in the midst of a major reprocessing of all the data that we recorded during 2012, making use of our best knowledge of the detector and how it responds to particle collisions. This started shortly after the LHC run finished, and will probably take another couple of months.

There is also some data that we are processing for the very first time. Knowing that we had a two-year shutdown ahead of us, we recorded extra events last year that we didn’t have the computing capacity to process in real time, but could save for later analysis during the shutdown. This ended up essentially doubling the number of events we recorded during the last few months of 2012, which gives us a lot to do. Fortunately, we caught a break on this — our friends at the San Diego Supercomputer Center offered us some time on their facility. We had to scramble a bit to figure out how to include it in the CMS computing system, but now things are happily churning away with 5000 processors in use.

The shutdown also gives us a chance to make relatively invasive changes to how we organize the computing without potentially disrupting critical operations. Our big goal during this period is to make all of the computing facilities more flexible and generic. For the past few years, particular tasks have often been bound to particular facilities, in particular those that host large tape archives. But that can lead to inefficiencies; you don’t want computers sitting idle at one site while another site is backed up because it has particular features that are in demand. For instance, since we are reprocessing all of the data events from 2012, we also need to reprocess all of the simulated events, so that they match the real data. This has typically been done at the Tier-1 centers, where the simulated events are archived on tape. But recently we have shifted this work to the Tier-2 centers; the input datasets are still at the Tier-1s, but we read them over the Internet using the “Any Data, Anytime, Anywhere” technology that I’ve discussed before. That lets us use the Tier-2s effectively when they might otherwise have been idle.
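To give a flavor of what “reading over the Internet” looks like from a user’s point of view, here is a minimal PyROOT sketch. The redirector hostname, file path, and tree name are placeholders that I made up for illustration; they are not the actual CMS configuration, and the real transfers are steered by the production system rather than opened by hand like this.

    # Minimal sketch: open a file through an XRootD redirector instead of a local path.
    # The hostname, path, and tree name below are illustrative placeholders.
    import ROOT

    url = "root://xrootd-redirector.example.org//store/data/example/file.root"
    f = ROOT.TFile.Open(url)              # ROOT streams the file over the network
    if f and not f.IsZombie():
        tree = f.Get("Events")            # assumed tree name for this example
        if tree:
            print("entries read remotely:", tree.GetEntries())
        f.Close()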

Indeed, we’re trying to figure out how to use any available computing resource out there effectively. Some of these resources may only be available to us on an opportunistic basis, and can be taken away from us quickly when they are needed by their owner, on the timescale of perhaps a few minutes. This is different from our usual paradigm, in which we assume that we will be able to compute for many hours at a time. Making use of short-lived resources requires figuring out how to break up our computing work into smaller chunks that can be easily cleaned up when we have to evacuate a site.
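As a toy illustration of that idea, here is a short Python sketch that splits a long processing task into small, independent event ranges, so that a reclaimed worker node costs at most one small chunk of work. The event counts and chunk size are arbitrary numbers for the example, not CMS parameters.

    # Toy sketch: split a big job into small chunks that each finish quickly, so
    # that an opportunistic worker node can be vacated with little lost work.

    def split_into_chunks(first_event, last_event, chunk_size):
        # Yield (start, stop) event ranges, each small enough to finish in minutes.
        start = first_event
        while start <= last_event:
            stop = min(start + chunk_size - 1, last_event)
            yield (start, stop)
            start = stop + 1

    chunks = list(split_into_chunks(0, 9_999_999, 50_000))
    print(len(chunks), "chunks; first:", chunks[0], "last:", chunks[-1])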

But computing resources include both processors and disks, and we’re trying to find ways to use our disk space more efficiently too. This problem is a bit harder — with a processor, when a computing job is done with it, the processor is freed up for someone else to use, but with disk space, someone needs to actively go and delete files that aren’t being used anymore. And people are paranoid about cleaning up their files, for fear of deleting something they might need at an arbitrary time in the future! We’re going to be trying to convince people that many files on disk aren’t getting accessed, and it’s in our interest to automatically clean them up to make room for data that is of greater interest, with the understanding that the deleted data can be restored if necessary.
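Here is a minimal Python sketch of what an automatic cleanup policy along those lines could look like, just to make the idea concrete. It flags files that have not been read for a long time as deletion candidates; the directory and staleness threshold are placeholders, and the real CMS tools reason about whole datasets and site catalogs rather than individual files on a local disk.

    # Toy sketch: find files that have not been accessed recently, as candidates
    # for automatic cleanup (assuming they can be restored from tape if needed).
    import os
    import time

    STALE_AFTER_DAYS = 180                 # arbitrary threshold for this example

    def stale_files(top_dir, stale_after_days=STALE_AFTER_DAYS):
        cutoff = time.time() - stale_after_days * 86400
        for dirpath, _dirnames, filenames in os.walk(top_dir):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    if os.stat(path).st_atime < cutoff:   # last access time
                        yield path
                except OSError:
                    pass                                  # vanished or unreadable; skip

    for path in stale_files("/storage/example_user_area"):  # placeholder directory
        print("cleanup candidate:", path)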

In short, there is a lot to do in computing before the LHC starts running again in 24 months, especially if you consider that we really want to have it done in 12 months, so that we have time to fully commission new systems and let people get used to them. Just like the detector, the computing has to be ready to make discoveries on the first day of the run!
