Mike Anderson | USLHC | USA


Millions of Simulations

Proton-Proton collision simulation "jobs" for the CMS detector running on the grid.

To compare with the data we record from our detector (CMS), we need to run a few simulations…well more like billions of simulations.

Each “job” in the plot above is actually a program running on a computer at a university.  Each program typically simulates a few hundred, or a few thousand, proton-proton collisions.  Each individual “collision simulation” calculates what a certain kind of collision would look like in our 12,500-ton detector.

And I don’t mean they just make pretty pictures.  A single simulation really consists of a chain of probabilities: some particles within each proton interact with some probability, those interactions produce other particles with some probability, those particles decay to yet other particles with some probability, and so on…  Eventually, stable particles are produced, and the passage of those particles through the detector is also simulated.
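The chain of probabilities above can be sketched as a toy Monte Carlo.  This is purely illustrative — the real CMS chain uses full event generators and detector simulation, and the particle names and probabilities below are made up:

```python
import random

def simulate_collision(rng):
    """Toy chain: a hypothetical 'parent' particle is produced with some
    probability, then decays (or not) until only stable particles remain."""
    stable = []
    if rng.random() < 0.3:              # parent produced in this collision?
        pending = ["parent"]
        while pending:
            p = pending.pop()
            if p == "parent" and rng.random() < 0.8:
                pending += ["daughter", "daughter"]  # decays to two daughters
            else:
                stable.append(p)                     # treated as stable
    return stable

rng = random.Random(42)                  # one seed per job (see below)
events = [simulate_collision(rng) for _ in range(1000)]
produced = sum(1 for e in events if e)
print(f"{produced} of 1000 toy collisions produced the parent particle")
```

A real job would then hand each stable particle to a detector simulation to compute the signals it would leave in the sub-detectors.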

As you can imagine, this requires a lot of random numbers.  One mistake that happens from time to time is that different jobs start with the same initial ‘seed’ for their random numbers, which results in duplicated simulations.  Not only does that waste CPU cycles, it also means a narrower range of collision possibilities gets simulated.
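One common way to avoid duplicate seeds is to derive each job’s seed deterministically from a unique job identifier, for example by hashing.  This is only an illustrative scheme, not the actual CMS bookkeeping — the function and campaign name here are hypothetical:

```python
import hashlib

def job_seed(campaign: str, job_index: int) -> int:
    """Derive a reproducible, per-job seed by hashing a campaign name
    together with the job's index (illustrative scheme)."""
    digest = hashlib.sha256(f"{campaign}:{job_index}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

# Ten thousand jobs, ten thousand distinct seeds -> no duplicated events.
seeds = [job_seed("simulation-2011", i) for i in range(10000)]
assert len(set(seeds)) == len(seeds)
```

Because the seed is a pure function of the job identifier, a crashed job can be resubmitted and will regenerate exactly the same events.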

At times my job is to herd thousands of simulation jobs to various sites, monitor them, make sure they don’t crash, and see that they finish in a timely fashion to return the needed data.

By the way, when I wrote the job-monitoring script that makes plots like the one above (written in Python, using matplotlib), I tried using each university’s school colors where I could, but sometimes that resulted in colors that were too similar to tell apart.
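One simple way to handle the too-similar-colors problem is to fall back to a distinct palette color whenever a school color is too close to one already in use.  This is a hypothetical sketch, not the actual monitoring script — the colors and threshold are made up:

```python
def rgb_distance(c1, c2):
    """Euclidean distance between two RGB colors given as 0-1 floats."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def pick_colors(school_colors, fallback_palette, min_dist=0.25):
    """Keep each site's school color unless it is too close to a color
    already chosen; otherwise substitute the next fallback color."""
    chosen = []
    fallback = iter(fallback_palette)
    for color in school_colors:
        if all(rgb_distance(color, c) >= min_dist for c in chosen):
            chosen.append(color)
        else:
            chosen.append(next(fallback))
    return chosen

# Hypothetical school colors: two nearly identical reds, then a blue.
schools = [(0.8, 0.1, 0.1), (0.82, 0.12, 0.1), (0.1, 0.1, 0.8)]
palette = [(0.1, 0.7, 0.1), (0.9, 0.6, 0.0)]
print(pick_colors(schools, palette))
# The second red is replaced by the first palette color (green).
```

The resulting list can then be passed straight to a matplotlib plotting call as per-series colors.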


One Response to “Millions of Simulations”

  1. Those of us who “crunch” for LHC@home are ready, willing, and able to do simulations, as we have done in the past.

    While we have been told that the magnitude of the data precludes our participation, I recently received several work units and completed them successfully on two of my five PCs.

    Send us some work.
