Posts Tagged ‘computing’

This story appeared in Fermilab Today March 3.

The Linux operating system produced at Fermilab enabled the laboratory and other high-energy physics institutions to build large physics data analysis clusters using affordable, commercially available computers. The photo shows computer clusters in the laboratory's Grid Computing Center. Credit: Fermilab

For more than 12 years, Fermilab has supplied thousands of individuals in the scientific community with the operating system that forms the foundation for their exploration of the universe’s secrets. The Linux operating system produced at Fermilab enabled the laboratory and other high-energy physics institutions to build large physics data analysis clusters using affordable, commercially available computers.

The newest version of Scientific Linux is now available.

Fermilab began packaging and distributing Scientific Linux in 2004 to the broad high-energy physics community. At that time, it was used on only 1,500 machines. Today, Scientific Linux is run on tens of thousands of machines and is the operating system that powers some of the world’s largest physics experiments, including some experiments at the Large Hadron Collider. The newest version, Scientific Linux 6, is put together by the Fermilab Computing Division, specifically the Fermilab Experiments Facilities Department, and by DESY, CERN and other laboratories and universities across the world.

“This version of Scientific Linux continues a tradition of technical excellence,” said Jason Allen, head of Fermilab Experiments Facilities Department in the laboratory’s Computing Division. “This product is the result of users worldwide who have contributed, tested and provided feedback for this release.”

Fermilab modifies Scientific Linux, the base product, to include security measures and other laboratory-specific elements, creating Scientific Linux Fermi. The newest version, Scientific Linux Fermi 6, will be released at Fermilab later this year.

 – Kimberly Myles and Edward Simmonds


Top left image shows SDSS-III's view of a small part of the sky, centered on the galaxy Messier 33. The middle top picture is a zoomed-in image on M33, showing the spiral arms of this galaxy, including the blue knots of intense star formation. The top right-hand image shows a further zoomed-in image of M33 highlighting one of the largest areas of intense star formation in that galaxy. Credit: SDSS

The world’s largest, digital, color image of the night sky became public this month. It provides a stunning image and research fodder for scientists and science enthusiasts, thanks to the Sloan Digital Sky Survey, which has a long connection to Fermilab.

Oh, yeah, and the image is free.

The image, which would require 500,000 high-definition TVs to view in its full resolution, comprises data collected since the start of the survey in 1998.
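
That number is easy to sanity-check. A back-of-the-envelope version in Python, assuming full-HD (1920×1080) screens and taking the image as roughly a trillion pixels (round, illustrative figures, not exact survey values):

```python
# Rough check of the "500,000 high-definition TVs" figure.
# Both numbers below are round, illustrative values.
pixels_in_image = 1.0e12       # the image is over a trillion pixels
pixels_per_tv = 1920 * 1080    # one full-HD (1080p) screen
tvs_needed = pixels_in_image / pixels_per_tv
print(round(tvs_needed))       # roughly 480,000, i.e. ~500,000 TVs
```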

“This image provides opportunities for many new scientific discoveries in the years to come,” said Bob Nichol, SDSS-III scientific spokesperson and professor at University of Portsmouth.

Fermilab oversaw all image processing and distribution of data to researchers and the public from 1998 through 2008, for the first seven batches of data. These batches make up a large chunk of the groundbreaking, more-than-a-trillion-pixel image. The eighth batch of raw, reduced data, which was released along with the image at the 217th meeting of the American Astronomical Society in Seattle, was processed by Lawrence Berkeley National Laboratory. LBNL, New York University and Johns Hopkins University distributed that data. Fermilab’s SDSS collaboration members now focus solely on analysis.

“This is one of the biggest bounties in the history of science,” said Mike Blanton, professor from New York University and leader of the data archive work in SDSS-III, the third phase of SDSS.  “This data will be a legacy for the ages, as previous ambitious sky surveys like the Palomar Sky Survey of the 1950s are still being used today. We expect the SDSS data to have that sort of shelf life.”

The release expands the sky coverage of SDSS to include a sizable view of the south galactic pole. Previously, SDSS only imaged small, spread-out slivers of the southern sky. Increasing coverage of the southern sky will aid the Dark Energy Survey and the Large Synoptic Survey Telescope, both southern-sky surveys that Fermilab participates in.

Comparing the two portions of the sky also will help astrophysicists pinpoint any asymmetries in the type or number of large structures, such as galaxies. Cosmic-scale solutions to Albert Einstein’s equations of general relativity assume that the universe is spherically symmetric, meaning that on a large enough scale, the universe would look the same in every direction.

Finding asymmetry would mean the current understanding of the universe is wrong and turn the study of cosmology on its head, much as the discovery of particles not included in the Standard Model would do for collider physics.

“We would have to rethink our understanding of cosmology,” said Brian Yanny, Fermilab’s lead scientist on SDSS-III. So far the universe seems symmetric.

Whether or not the SDSS data reveals asymmetry, it undoubtedly will continue to provide valuable insight into our universe and fascinate amateur astronomers and researchers.

Every year since the start of the survey, at least one paper about the SDSS has made the list of the top 10 astronomy papers of the year. More than 200,000 people have classified galaxies from their home computers using SDSS data and projects including Galaxy Zoo and Galaxy Zoo 2.

In the three months leading up to the image’s release, a record number of queries, akin to click counts on a Web page, occurred on the seventh batch of data. During that time, 90 terabytes of pictures and sky catalogues were downloaded by scientists and the public. That equates to about 150,000 one-hour-long CDs.

Scientists will continue to use the old data and produce papers from it for years to come. Early data also works as a check on the new data to make sure camera or processing flaws didn’t produce data anomalies.

“We still see, for instance, that data release six gets considerable hits, and papers still come out on that in the hundreds per year,” Yanny said.

So far, SDSS data has been used to discover nearly half a billion astronomical objects, including asteroids, stars, galaxies and distant quasars. This new eighth batch of data promises even more discoveries.

Fermilab passed the job of data processing and distribution on to others in 2008. The eighth batch of data was processed by Lawrence Berkeley National Laboratory and distributed by LBNL, New York University and Johns Hopkins University.

Fermilab’s four remaining SDSS collaboration members now focus solely

Illustration of the concept of baryon acoustic oscillations, which are imprinted in the early universe and can still be seen today in galaxy surveys like BOSS. Credit: Chris Blake and Sam Moorfield and SDSS.

on analysis. They are expected to produce a couple dozen papers during the next few years. The group touches on all four of SDSS-III’s sky surveys but focuses mainly on the Baryon Oscillation Spectroscopic Survey, or BOSS, which will map the 3-D distribution of 1.5 million luminous red galaxies.

“BOSS is closest to our scientists’ interests because its science goals are to understand dark energy and dark matter and the evolution of the universe,” Yanny said.

For more information see the following:

* Larger images of the SDSS maps in the northern and southern galactic hemispheres are available here and here.

* Sloan’s YouTube channel provides a 3-D visualization of the universe.

* Technical journal papers describing DR8 and the SDSS-III project can be found on the arXiv e-Print server.

* EarthSky has a good explanation of what the colors in the images represent and how SDSS is part of an ongoing tradition of sky surveys.

* The Guardian newspaper has a nice article explaining all the detail that can be seen in the image.

— Tona Kunz


To celebrate its 30th anniversary, Discover magazine created a list of The 12 Most Important Trends in Science Over the Past 30 Years. High-energy particle physics and Fermilab played a part in three of these 12 game-changing research breakthroughs. Here’s a look at these Discover-selected trends and Fermilab’s contributions to them.

Trend: The Web Takes Over

Pictured is Fermilab's 2001 home page, which was designed in 1996. Twenty years ago, Fermilab helped to pioneer the URL. It launched one of the first Web sites in the country in 1992. Credit: Fermilab

The first concept for what would become the World Wide Web was proposed by a high-energy particle physicist in 1989 to help physicists on international collaborations share large amounts of data. The first WWW system was created for high-energy physicists in 1991 under the guidance of CERN. 

A year later, Fermilab became the second institution in the United States to launch a website. It also helped initiate the switch to easy-to-remember domain name addresses rather than Internet Protocol addresses, which are strings of numbers. This switch helped spur the growth of the Internet and the WWW.

Particle physics also secured a place in sports history through its computing savvy. A softball club at CERN, composed of mostly visiting European and American physicists, many connected to Fermilab, was the first ball club in the world to have a page on the World Wide Web, beating out any team from Major League Baseball.

Trend: Universe on a Scale

The field of cosmology has advanced and created a more precise understanding of the evolution and nature of the universe. This has brought high-energy particle physics, cosmology and astronomy closer together. They have begun to overlap in the key areas of dark energy, dark matter and the evolution of the universe. Discover magazine cites as particularly noteworthy in these areas the first precise measurement of the cosmic microwave background, or CMB, radiation left over from the Big Bang and the discovery, with the aid of supernovas, that the expansion of the universe is accelerating.

Dark Energy Camera under construction at Fermilab. Credit: Fermilab

Fermilab physicists study the CMB with the Q/U Imaging Experiment, or QUIET. They study dark energy with several experiments, most notably the long-running Sloan Digital Sky Survey, the Dark Energy Survey, which will be operational at the end of the year, and the Large Synoptic Survey Telescope, potentially operating at the end of the decade or mid-next decade.

Trend: Physics Seeks the One

During the last few decades the particle physics community has sought to build a mammoth international machine that can probe the tiniest particles of matter not seen in nature since just after the time of the Big Bang.

Initially, this machine was planned for the United States and named the Superconducting Super Collider. Scientists and engineers from Fermilab helped with the design and science suite of experiments for the SSC, which was under construction in Texas until it was canceled in 1993.

A similar machine, the Large Hadron Collider in Switzerland, did take shape, starting operation in 2008. Fermilab played a key role in the design, construction and R&D of the accelerator with expertise garnered through the Tevatron accelerator construction, cutting-edge superconducting magnet technology and project managers.

The U.S. CMS remote operation center at Fermilab. Credit: Fermilab

Fermilab now serves as a remote operation center for CMS, one of the two largest experiments at the LHC. Many physicists work on CMS as well as one of the Tevatron’s detector teams, DZero and CDF.  The United States has the largest national contingent within CMS, accounting for more than 900 physicists in the 3,600-member collaboration.

Fermilab’s computing division serves as one of two “Tier-1” computing distribution centers in the United States for LHC data. In this capacity, Fermilab provides storage and processing capacity for data collected at the LHC that is analyzed by physicists at Fermilab and sent to U.S. universities for analysis there.

Discover magazine cited as a goal of the LHC the search for the Higgs boson, a theorized particle thought to endow other particles with mass, which allows gravity to act upon them so they can form together to create everything in the visible world, such as people, planets and plants. The LHC and the Tevatron are racing to find the Higgs first. The Tevatron has an advantage searching in the lower mass range and the LHC in the higher mass range. Theorists suspect the Higgs lives in the lower mass range. So far, the Tevatron has greatly narrowed the possible hiding places for the Higgs in this range.

— Tona Kunz


Millions of Simulations

Thursday, June 17th, 2010

Proton-Proton collision simulation "jobs" for the CMS detector running on the grid.

To compare with the data we record from our detector (CMS), we need to run a few simulations…well more like billions of simulations.

Each “job” in the plot above is actually a program running on a computer at a university.  Each program typically simulates a few hundred, or a few thousand, proton-proton collisions.  Each individual “collision simulation” calculates what a certain kind of collision would look like in our 12,500-ton detector.

And I don’t mean they just make pretty pictures.  A single simulation really consists of: some particles within each proton interact with some probability, they produce other particles with some probability, those particles decay to other particles with some probability, and so on…  Eventually, stable particles are made and the passage of those particles through the detector is also simulated.

As you can imagine, this requires a lot of random numbers.  One mistake that happens sometimes is that different jobs have the same initial ‘seed’ for the random numbers, and this results in duplication of simulations.  Not only is that a waste of CPU-cycles, but it also means a fuller range of collision possibilities doesn’t get simulated.
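
A minimal sketch of the fix, in Python: derive each job's seed from an identifier that is unique to that job, so no two jobs replay the same random sequence. The run/job numbering scheme here is hypothetical, not the actual CMS bookkeeping:

```python
import random

def seed_for_job(run_id, job_index):
    # Combine the run and job numbers into a distinct seed per job.
    # (Hypothetical scheme for illustration, not the real CMS one.)
    return run_id * 1_000_000 + job_index

def simulate_collisions(run_id, job_index, n):
    # Each job gets its own random stream, seeded deterministically,
    # so results are reproducible but never duplicated across jobs.
    rng = random.Random(seed_for_job(run_id, job_index))
    return [rng.random() for _ in range(n)]  # stand-in for real physics

# Different jobs in the same run produce different "collisions"...
assert simulate_collisions(42, 0, 5) != simulate_collisions(42, 1, 5)
# ...while re-running the same job reproduces its output exactly.
assert simulate_collisions(42, 0, 5) == simulate_collisions(42, 0, 5)
```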

My job at times is to herd thousands of simulation jobs to various places and monitor them: make sure they don’t crash and that they finish in a timely fashion to return the needed data.

By the way, when I wrote the job monitoring script that makes plots like the one above (written in Python and using matplotlib), I tried using their school colors when I could, but sometimes that resulted in colors that were too similar or confusing.
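
The “too similar” problem can be handled mechanically. Here is one sketch in Python (the sites, colors, and distance threshold are all made up for illustration; the real script may do something quite different): prefer each site's school color, but fall back to a spare palette color whenever the school color sits too close to one already chosen.

```python
def rgb_distance(a, b):
    # Crude similarity measure: Euclidean distance in RGB space.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def pick_colors(site_colors, fallback_palette, min_dist=60):
    # Assign each site its school color unless it clashes with a
    # color already taken; then use the next fallback color instead.
    chosen, spares = {}, iter(fallback_palette)
    for site, color in site_colors.items():
        if any(rgb_distance(color, c) < min_dist for c in chosen.values()):
            color = next(spares)
        chosen[site] = color
    return chosen

# Illustrative school colors: two near-identical reds and one gold.
sites = {
    "Nebraska": (208, 0, 0),
    "Wisconsin": (197, 5, 12),
    "Purdue": (206, 184, 136),
}
palette = [(0, 102, 204), (0, 153, 0)]  # distinct spare colors
colors = pick_colors(sites, palette)
assert colors["Wisconsin"] == (0, 102, 204)  # red clash resolved
assert colors["Purdue"] == (206, 184, 136)   # gold kept as-is
```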


Feeling squeezed

Tuesday, April 27th, 2010

Here I am at CERN, after fairly smooth travels. (At least this time I didn’t show up with the flu.) The weather here is very nice for this time of the year, and the only evidence I can see for the eruption of Eyjafjallajokull (I love that name!) is somewhat lower attendance than usual for the semiannual CMS computing and software workshop. A number of people who had planned on flying here last week had their flights rescheduled far enough into the future such that it was not worthwhile for them to come.

While changing planes at Washington Dulles, I ran into a colleague (headed in the other direction, back to Chicago from CERN) who had some very good news to report. Over the weekend, LHC operators tried “squeezing” the beams for the first time, as Mike had alluded to last week. This is a focusing of the beams that, like the name says, squeezes them so that all the particles are closer together. A greater density of beam particles means that there is a greater chance that the particles in opposing bunches will actually collide. And that was in fact what happened — the observed collision rate went up, by about a factor of ten. It’s not every day that you gain a factor of ten! As a result, more collisions were recorded in a single day than had been recorded in the entire month beforehand.
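
The arithmetic behind that factor of ten is simple: the collision rate scales inversely with the transverse beam area, so shrinking the beam size at the collision point by roughly a factor of three in each transverse direction multiplies the rate by roughly ten. A sketch in Python, with purely illustrative numbers (not the actual LHC beam parameters):

```python
def rate_gain(sigma_before, sigma_after):
    # Collision rate ~ 1 / (transverse beam area) ~ 1 / sigma^2
    # for a round beam, so the gain is the squared size ratio.
    return (sigma_before / sigma_after) ** 2

# Illustrative only: squeezing the beam ~3.2x in each transverse
# direction yields roughly the factor-of-ten increase seen that weekend.
print(rate_gain(1.0, 0.316))  # ~10
```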

The next steps include things like adding more protons to each bunch, and adding more bunches to each beam. We hope to get another four factors of ten in collision rate yet this year. The big question is how quickly they will come. But in any case, it is very encouraging to see such progress.


This past Monday we had our annual US CMS Tier-2 computing workshop. Once again, we held our workshop as part of the Open Science Grid All-Hands Meeting. Those of you who have been reading the blog for more than a year will remember that last year this meeting was held at the totally neat LIGO facility in Louisiana. This year the meeting was at totally neat…Fermilab! OK, I’ve been to Fermilab before, so no travelogue this time, but as usual it was good to meet so many collaborators face to face.

I don’t want to jinx ourselves, but I’m feeling pretty good about the state of the computing for the experiment right now. As we reviewed the status of the seven CMS Tier-2 sites in the United States and two in Brazil, we generally saw that everyone is operating pretty stably and happily. A year ago, there was a lot of discontent with existing large-scale disk storage systems. But since then we’ve developed and implemented some new systems, and there have been a lot of improvements in the existing systems, so it all just looks a lot better.

That being said, this is all just a dress rehearsal — we’ll see how it really goes when thousands of physicists start using the system to do hundreds of data analyses. Now that the LHC running schedule has been defined for the coming three years, we have a much better handle on the needed computing resources for this period. Overall, we’re going to be running at lower collision rates than previously anticipated, but with pretty much the same livetime. This means that we’ll be recording the same number of events we would have at higher collision rates, implying that the density of interesting physics will be smaller. It creates a more challenging situation for the computing, but at least we now know what has to be done, and have a reasonably good idea of how to get there.

As for the second half of the title — the real excitement was on my trip home. I had an 8:10 AM flight out of O’Hare, which would arrive in Lincoln around 9:40, giving me plenty of time to be ready for my 12:30 PM class. But there was fog in Chicago, and an aircraft was late, and then the crew was swapped, and then the aircraft was sent to Peoria instead while we waited for the crew, and in the end we didn’t leave until around 10:45. The plane touched down on the runway in Lincoln at 11:57. And I was in my classroom just on time. Ah, lovely Lincoln, where the airport is small, you park right next to the airport, and you can drive to campus in minutes!


Local news

Friday, November 27th, 2009

Admittedly, it is a little harder to follow all the LHC excitement if you are here in the US rather than at CERN.  The announcement of first collisions on Monday came while I was teaching my class, and I’ve been trying to piece together the whole story by talking to our people over there and reading the slides from various meetings.  Of note was a public meeting at CERN yesterday (yes, Thanksgiving Day, another impediment if you are in the US) with presentations from Steve Myers, CERN’s director for accelerators, and the four LHC experiments.  See the slides and video here.  As everyone else has been saying, the past week has been a thrill (or at least a vicarious one!) for the LHC, the four experiments on the ring, and really all of HEP.  Check out Myers’s slides in particular, where he documents just how far we have come in the past fourteen months.  The experiments have turned around information from these first few collisions very quickly; some detectors are already able to reconstruct decays of the neutral pion, for instance.  We have huge expectations for the next set of collisions and then for the increases in collision energy that will follow.

My particular contribution to CMS has been in computing, and I’m happy to say that all of that has gone quite smoothly so far.  The prompt reconstruction of events went off without a hitch, and data was flowing very quickly out of CERN to the Tier-1 and Tier-2 sites.  We soon lost track of how many sites had copies of the collision data, and now we’re seeing plenty of people use the distributed computing system to analyze it.  When the next round of collisions comes, we’ll be ready to do it all again.

So while it’s hard to follow the news up to the minute, I’m still connected to the start of a great particle physics adventure.  I’m trying to drag the rest of Nebraska along with me — we managed to get a release placed in the local paper, and if you read this post soon enough, you can hear me at 8:30 AM Central time on Saturday 11/28 on KZUM, Lincoln’s community radio station.  I’ve already taped the interview; let’s hope I didn’t sound incoherent!  (At least when I type the blog posts, there is a backspace key….).


October, exercised

Friday, October 9th, 2009

Here at CMS, we are in the midst of something that, I guess for lack of a better name, has been dubbed the “October exercise.” For the past week and the week to come, we have been trying to get as many people as possible to use the distributed computing system just as they would if they were doing a real analysis with real data. A new set of simulations has been released, and people are trying to work them through the system and their data analyses as quickly as possible, to demonstrate the turnaround time and the scale at which we will be hammering the computing clusters that are distributed around the world.

Halfway through, I would have to consider this at least something of a success. I don’t have anything resembling an accurate count of how many people have gotten involved, but it seems that we are seeing lots of people who had just been doing their data-analysis work on local computing clusters now trying to use the grid for the first time. Tens of individual exercises have been designed by the dozen-ish CMS physics groups, each with multiple steps involving processing, writing and transferring data. As someone who has been working on the distributed computing for some years now, it is encouraging to see so many new people try out the system, and be successful more often than not.

On the other hand, it’s not as if everything has gone perfectly. A number of new tools and rules were developed just in advance of the exercise, and running these things out of the box at scale has been a bit bumpy. We were certainly aware of the weaknesses in the system, but now they are on full display. One thing that has proved particularly challenging is the “staging out” of outputs made by users in their processing jobs. In CMS computing, different datasets get distributed to different computing sites, and physicists who want to run on those datasets send their jobs to those sites. But everyone has a “home” site, and the output of the jobs has to be returned to the home site. This means that the data must be transferred from a somewhat random site X to the user’s site Y, and not every site Y can handle the volume of transfers that might be coming in. We’re keeping an eye on this and thinking about how we can improve it in the future.
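
One way to picture the mitigation (a toy sketch only; the real CMS stage-out machinery is considerably more involved, and these names are invented): cap the number of concurrent transfers each home site will accept, queueing the rest until a slot frees up.

```python
from collections import defaultdict, deque

class StageOutQueue:
    """Toy scheduler: at most `cap` concurrent transfers per destination site."""

    def __init__(self, cap):
        self.cap = cap
        self.active = defaultdict(int)     # destination -> running transfers
        self.waiting = defaultdict(deque)  # destination -> queued transfer ids

    def request(self, dest, transfer_id):
        # Start immediately if the destination has spare capacity;
        # otherwise park the transfer in that destination's queue.
        if self.active[dest] < self.cap:
            self.active[dest] += 1
            return True
        self.waiting[dest].append(transfer_id)
        return False

    def finish(self, dest):
        # A transfer completed: promote the oldest waiting one, if any.
        self.active[dest] -= 1
        if self.waiting[dest]:
            self.active[dest] += 1
            return self.waiting[dest].popleft()
        return None

q = StageOutQueue(cap=2)
assert q.request("site_Y", "t1")       # starts right away
assert q.request("site_Y", "t2")       # starts right away
assert not q.request("site_Y", "t3")   # site_Y full: t3 waits
assert q.finish("site_Y") == "t3"      # a slot frees, t3 is promoted
```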

After a week of this, I’d have to say that it’s somewhat exhausting to try to keep up with all that’s going on. And we don’t even have data yet — how exhausted will I be then? But on the flip side, I’m glad that we’re learning all of these lessons now, rather than a month or two from now.


Not a day at the beach

Saturday, April 25th, 2009

Only two weeks left until the end of the academic year! This is always a very busy period, which is my excuse for not writing anything recently. Very little academic business gets done around the university during the summer, so all sorts of things need to get wrapped up before we get to the end of the term, and there are always so many year-end events for our students too. And of course I still have my class to teach; this is going fairly smoothly, but I will probably need every last minute in the next two weeks (or at least until I have prepared the final exam) to bring it to a happy ending.

As it happens, I also have a cluster of research-related travel right now — not helpful for getting my teaching done, but it gives me something to write about. I spent some of this week in San Diego, where those of us working on CMS software and computing gathered to discuss the state of the world. These meetings are more typically at CERN, but someone (I’m not even sure who, actually) came up with the brilliant idea of doing them next to an ocean this time instead. That’s great for me — not the ocean part, so much, but it’s always a challenge for me to get to CERN, what with the long distance and the fact that it’s hard to go for less than a week. For these meetings, I was able to teach on Tuesday morning and catch a flight here that night, and still attend most of the workshop.

As has been true for some time, the question we have been struggling with is whether we are ready for the start of the LHC, and if not, what we have to do to get there. I think that the greatest value of this meeting (heck, any meeting, I suppose) was to bring together groups of people who don’t usually talk. It turns out that there were cases of people working on different aspects of particular problems who had very different understandings of some of the issues. For instance, there was a dispute over whether “24 hours” actually meant 24 hours, or something more like 48 hours. And in some cases, one group of people didn’t know about work that another group was doing that could in fact be very useful to the first group. In short, there’s nothing like actually getting people in the same room to explain themselves to each other.

But once again, I was struck by just how complicated this experiment will be. The challenge from the computing perspective is how interconnected everything is. We want to make sure that a user can’t do anything that could essentially knock over a site (or possibly the whole distributed computing system) by accident. Certainly there were times in the meetings when someone would ask, “why do we have to make it so hard?” but honestly, sometimes it just is that hard.

Anyhow, next week I’ll be in Denver for the April general meeting of the American Physical Society. I’ll write about it then…much more physics content, I promise!


Why is computing interesting?

Friday, July 11th, 2008

Given the tedium of what I need to deal with day to day on the computing, what is it that makes computing interesting?  Let me make a comparison with what is going on in the collision halls.  My colleagues underground at CERN are working very hard as we head towards LHC startup.  There are some very tight time constraints at this point, and they are working with very complex systems that are pushing the limits of their technologies.  And as we head into these final weeks, the separate systems that have been under development for years must be integrated into one large experiment.  It’s a tremendous task, and I don’t want to take anything away from what they are doing.

However, they are starting to get out of the woods.  The door to the collision hall will be shut at some point, and very little can be changed after that.  And the number of people who will interact directly with those systems is relatively small; a team of experts, who will continue to make a lot of effort to make their hardware work and keep it running happily.  Most of their work will be hidden to the world; physicists will be happy to see lots of silicon hits on tracks, but they will only have a vague idea of how much labor went into that.  (I’ll say again, the hardware guys are under-appreciated!)

In contrast, just about everyone on CMS will interact with the computing in some way, which means that my problems are just beginning.  Everyone will want to know where the datasets are.  Everyone will be trying to submit jobs.  Everyone will be trying to make plots.  Performance will be documented and updated regularly on Web pages.  This means that everyone will have an opinion on what works well and what doesn’t, and they won’t hesitate to voice it.  And all the computers are above ground, and software can be modified with a few keystrokes; we can tweak things endlessly, and we might well be called upon to do so.

So in fact this is a very human enterprise — we are building a system that 2000 motivated, smart and creative people will be using every day.  We need to make it work for each of them as individuals, while also making sure that the group as a whole is not harmed.  And while ultimately we have to build good systems, there is a lot of psychology and sociology involved too.  Everyone needs to actually buy in to the idea of distributed computing for it to work, which might be hard while we still work through all the kinks, and everyone will need to trust that they are being treated fairly.  One of my mentors said to me once, “If all of our problems were physics problems, this job would be easy.”  She was of course referring to the fact that we must work with people every step of the way.  Physics equations and plots are interesting, but the human aspect of the work adds an extra dimension.

It is on my mind today because I have been corresponding with some users who are having trouble running jobs on our site.  It sounds like there could be any number of things going on…many of which may have nothing to do with the performance of the cluster here.  But it doesn’t matter; I’m invested in getting the entire chain working, because we have to build confidence.  More to come, I’m sure.
