Last week Cornell hosted the sixth “Monte Carlo Tools for Beyond the Standard Model” mini-workshop and I thought it was a terrific success. Here “Monte Carlo” refers to the computer simulation techniques used to solve difficult problems such as the behavior of high energy particles at the LHC. The name is a reference to the famous casino in Monaco since these methods are based on random sampling.
Playing Dice with the LHC?
It’s not what it sounds like. At first glance, talking about ‘random sampling’ might make it sound like we don’t know what we’re doing. It’s actually quite the opposite. The theory of hadron colliders (which is mostly quantum chromodynamics) is well established, but actually calculating anything with it requires compromise.
I won’t go into details, but the rough sketch is that a high energy collision at the LHC does not look like the nice Feynman diagrams that we’ve been drawing (and that we can calculate easily):
Nope. In fact, the events look much, much more complicated (from S. Hoeche’s talk):
Needless to say, this is very difficult to calculate using pen and paper. In fact, the situation is even more difficult than it looks: many of the steps in this calculation require systematic approximations and are non-perturbative (hopeless to calculate with the usual methods). There’s more: the above picture is just what happens when there’s a high energy collision in vacuum. We also have to model how all of that interacts with the detector to give a picture more like this:
It is practically impossible to calculate a closed form expression for the Standard Model prediction for the distribution of detector signatures. What we can do, however, is actually simulate particle production and decay at each step of the process, so that the random evolution of the initial collision into the final detector signature follows the probability distribution of the “closed form expression” that we can’t write down. By doing this many times, we can determine that probability distribution simply by looking at the distribution of the Monte Carlo events.
This probably still sounds a little abstract—but it’s the analog of determining the interference pattern of the double slit system by actually doing the experiment with electrons and looking at the distribution of electron hits on the screen. Another nice example is to determine the area of a circle (or the value of π) by Monte Carlo.
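The circle example is worth spelling out, since it’s the simplest version of everything above. Here’s a minimal sketch in Python (the sample size is just an illustrative choice): scatter random points in the unit square and count the fraction that land inside the quarter circle of radius one; that fraction converges to π/4.

```python
import random

def estimate_pi(num_samples=1_000_000):
    """Estimate pi by throwing random points into the unit square
    and counting how many land inside the quarter circle of radius 1."""
    inside = 0
    for _ in range(num_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # The fraction of hits approximates the ratio of areas: (pi/4) / 1
    return 4.0 * inside / num_samples

if __name__ == "__main__":
    print(estimate_pi())  # approaches 3.1416 as num_samples grows
```

The estimate fluctuates from run to run, but the statistical uncertainty shrinks like 1/√N as the number of samples grows, which is exactly the behavior we bank on when generating millions of simulated collisions.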
Tools of the Trade
Suffice it to say that Monte Carlo is a very important tool in high energy physics. For example, the results of Monte Carlo studies are used to determine what sorts of events we should be looking at to find the cleanest signals of a Higgs or of new physics. This is especially relevant because the rate of high energy events at the LHC is actually much larger than our bandwidth for recording data, so we need to be able to trigger on particular events that we think are worth a closer look.
On the more theoretical side, Monte Carlo gives us a handle for mapping models of new physics to experimental signatures. For example, if we see a definitive signal of a new particle outside of the Standard Model, how can we begin to determine whether it is a supersymmetric partner, an extra dimensional resonance, or something else?
In between theory and experiment, there’s a lot of hard work done ‘in the trenches’ to develop better tools (both theoretical and computational) to model quantum chromodynamics. This work is often underappreciated in the field since it’s not glamorous enough to land in one of Dennis Overbye’s New York Times articles, but recently three of the leaders of this field received the 2012 Sakurai Prize. Congrats to Altarelli, Sjostrand, and Webber!
Theorists will wax poetic about espresso machines and long nights at a chalkboard, and experimentalists will tell you what it’s like to jump into the world’s largest scientific apparatus (armed with a vacuum cleaner), but the truth of the matter is that we spend a lot of time running computer simulations. We rely on the subset of the community that develops and maintains these tools, and occasionally we hold workshops (such as MC4BSM) to learn the latest and greatest of them.
MC4BSM 2012
Probably the first mystery of the MC4BSM series of workshops is the strange logo:
Apparently the illustration was done by a professional artist who is a friend of one of the organizers. The interpretation still isn’t clear to me, though it’s been suggested that it represents the “elephant in the room” associated with the lack of training opportunities for learning Monte Carlo techniques. Alternatively, it was also pointed out that it’s a different kind of “pink elephant.”
The workshops are geared toward an audience of theorists who don’t necessarily have a background in Monte Carlo methods. The “big idea” is connecting our models of new physics to experimental data (image from M. Perelstein’s slides):
The key to doing this efficiently has been to develop a pipeline of Monte Carlo tools that interface with one another and take a theorist’s model all the way to something that can be compared to real data; one example of such a pipeline is (image from C. Duhr’s talk):
The ovals are the different stages of the calculation; the first two or three can usually be done by hand by a careful graduate student, but from there on out we really rely on the Monte Carlo tools available to us. The red text highlights common programs used to connect each step, while the greenish text lists the standardized formats that let the programs communicate with one another.
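To give a sense of the shape of such a pipeline without tying it to any particular program, here is a purely schematic Python sketch. The function names are placeholders of my own, not the actual interfaces of any of the programs in the figure; the parenthetical format names refer to real community standards, but the code itself is illustrative only.

```python
# A purely schematic sketch of the tool chain described above. None of these
# functions call any real program; each is a placeholder that just passes a
# label down the chain, to show how every stage consumes the previous stage's
# standardized output.

def derive_feynman_rules(model: str) -> str:
    """Theorist's model -> Feynman rules in a standardized model format (e.g. UFO)."""
    return f"feynman_rules({model})"

def generate_parton_events(rules: str) -> str:
    """Feynman rules -> hard-scattering events (e.g. a Les Houches Event file)."""
    return f"parton_events({rules})"

def shower_and_hadronize(events: str) -> str:
    """Parton-level events -> showered, hadronized events (e.g. a HepMC record)."""
    return f"hadron_events({events})"

def simulate_detector(events: str) -> str:
    """Hadron-level events -> the reconstructed objects an analysis would see."""
    return f"detector_objects({events})"

if __name__ == "__main__":
    # The standardized intermediate formats are what let the stages compose:
    signature = simulate_detector(
        shower_and_hadronize(
            generate_parton_events(
                derive_feynman_rules("my_new_physics_model"))))
    print(signature)
```

In practice each arrow in the figure is a separate program with its own configuration and its own authors; the whole point of the standardized formats is that a theorist can swap out any one stage without having to rewrite the others.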
All of these programs are open source (though some depend on commercial software like Mathematica) and are developed by high energy physicists for high energy physicists.
Tutorial
The real highlight of the workshop was the pair of tutorial sessions, where attendees had a chance to play with various programs in a hands-on environment. The whole point of the meeting, after all, is to learn how to use these tools. The tutorial sessions allowed attendees to ask questions directly of the program developers and to build their own templates by solving a simple toy problem.
Unlike previous MC4BSM workshops, the organizers adopted a novel format for the tutorial sessions, which I thought worked very well. Each participant brought their own laptop and had a choice of which chain of programs they would use to solve the toy problem:
Instead of having representatives from each program give a short talk about how to install and run their code, users were left to themselves to jump in head first with their colleagues and then flag down the experts as needed. (The night before the workshop there was also a group installation session where people could work out the kinks in getting specific programs to compile on specific operating systems.)
Several of the graduate students there got their first taste of going through the entire chain of programs, while more senior researchers learned how to use tools other than the ones they’re used to.
The tutorial information is all available online for anyone who wants to follow along on their own. The material will eventually be made available as proceedings for the workshop; I think it will be a valuable resource for anyone interested in learning to use these tools.
Human vs. Machine
One of the running jokes at the workshop was that eventually we’d be able to select a few options in a smart phone app to cook up a model of new physics and then send it to a computing cluster to work out the detailed phenomenology—perhaps obviating the need for graduate students. However, one thing that computers cannot yet replace is the value of having face-to-face interactions with one’s colleagues.
I’ve said many times that physics is a social activity and that the field progresses through the collaborative efforts of the entire community. Meetings like MC4BSM are more than just ways to learn new tools; they’re also a chance to catch up with friends and colleagues and to bounce new ideas off one another.
One bright idea that was promptly shot down was a request to hold the next MC4BSM meeting at the Monte Carlo casino in Monaco.