We’ve gone pretty far with our series of posts about learning particle physics through Feynman diagrams. In our last post we summarized the Feynman rules for all of the known particles of the Standard Model. Now it’s time to fess up a little about the shortcomings of the Feynman diagram approach to calculations; in doing so, we’ll learn a little more about what Feynman diagrams actually represent as well as the kinds of physics that we must work with at a machine like the LHC.
When one diagram isn’t enough
Recall that mesons are bound states of quarks and anti-quarks which are confined by the strong force. This binding is inherently non-perturbative; in other words, the math behind our Feynman diagrams is not the right tool to analyze it. Let’s go into more detail about what this means. Consider the simplest Feynman diagram one might draw to describe the gluon-mediated interaction between a quark and an anti-quark:
Easy, right? Well, one thing that we have glossed over in our discussions of Feynman diagrams so far is that we can also draw much more complicated diagrams. For example, using the QCD Feynman rules we can draw something much uglier:
This is another physical contribution to the interaction between a quark and an anti-quark. It should be clear that one can draw arbitrarily many diagrams of this form, each more complicated than the last. What does this all mean?
Each Feynman diagram represents a term in a mathematical expression. The sum of these terms gives the complete probability amplitude for the process to occur. The really complicated diagrams usually give a much smaller contribution than the simple diagrams. For example, each additional internal photon line (edit Dec 11, thanks ChriSp and Lubos) gives a factor of roughly α=1/137 to the diagram’s contribution to the overall probability. (There are some subtleties here that are mentioned in the comments.) Thus it is usually fine to just take the simplest diagrams and calculate those. The contributions from more complicated diagrams are then very small corrections that are only important to calculate when experiments reach that level of precision. For those with some calculus background, this should sound familiar: it is simply a Taylor expansion. (In fact, most of physics is about making the right Taylor expansion.)
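For the calculus-inclined, here is a schematic version of what such an expansion looks like (a cartoon rather than the literal QED formula; the coefficients c_i just stand in for whatever the detailed diagram calculations give):

```latex
% Schematic perturbative expansion of a probability amplitude in QED.
% The c_i are placeholder coefficients; only the powers of alpha matter.
\mathcal{M} \;\approx\; \mathcal{M}_0 \left( 1 + c_1\,\alpha + c_2\,\alpha^2 + c_3\,\alpha^3 + \cdots \right),
\qquad \alpha \approx \tfrac{1}{137}
```

Since α is small, each successive term is roughly a hundred times smaller than the one before it, which is why keeping only the first diagram or two is usually good enough.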
However, QCD defies this approximation. It turns out that the simplest diagrams do not give the dominant contribution: both the simple diagram and the complicated diagram above give roughly the same contribution. One has to include many complicated diagrams to obtain a good approximate calculation. And by “many,” I mean almost all of them… and “almost all” of an infinite number of diagrams is quite a lot. For various reasons, these complicated diagrams are very difficult to calculate, and at the moment our normal approach is useless.
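Schematically, the trouble is that the analogous expansion for the strong force at low energies comes with a coupling that is not small:

```latex
% The same schematic series for low-energy QCD, where the strong
% coupling alpha_s is of order one rather than 1/137.
\mathcal{M} \;\approx\; \mathcal{M}_0 \left( 1 + c_1\,\alpha_s + c_2\,\alpha_s^2 + \cdots \right),
\qquad \alpha_s \sim 1
```

Each “correction” is now about as big as the thing it is supposed to correct, so truncating the series after a few diagrams is no longer a sensible approximation.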
There’s a lot of current research pushing in this direction (e.g. so-called holographic techniques and recent progress on scattering amplitudes), but let’s move on to what we can do.
QCD and the lattice
`Surely,’ said I, `surely that is something at my window lattice;
Let me see then, what thereat is, and this mystery explore –
— Edgar Allan Poe, “The Raven”
A different tool that we can use is called Lattice QCD. I can’t go into much detail about this since it’s rather far from my area of expertise, but the idea is that instead of using Feynman diagrams to calculate processes perturbatively—i.e. only taking the simplest diagrams—we can use computers to numerically solve for a related quantity. This related quantity is called the partition function and is a mathematical object from which one can straightforwardly calculate probability amplitudes. (I only mention the fancy name because it is completely analogous to an object of the same name that one meets in thermodynamics.)
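Very schematically (and sweeping an enormous amount of technical machinery under the rug), the partition function is a weighted sum over every possible configuration of the quark and gluon fields:

```latex
% Schematic (Euclidean) partition function: a sum over all field
% configurations phi weighted by the action S. On a lattice this
% becomes a finite (if enormous) integral that a computer can attack.
Z \;=\; \int \mathcal{D}\phi \; e^{-S[\phi]}
```

The thermodynamics analogy is that the weight e^(-S) plays the same role as the Boltzmann factor e^(-E/kT).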
The point is that the lattice techniques are non-perturbative in the sense that we don’t calculate individual diagrams; we simultaneously calculate all diagrams. The trade-off is that one has to put spacetime on a lattice, so that the calculations are actually done on a four-dimensional hyper-cube. The accuracy of this approximation depends on the lattice size and spacing relative to the physics that you want to study. (Engineers will be familiar with this idea from the use of Fourier transforms.) As usual, a picture is worth a thousand words; suppose we wanted to study the Mona Lisa:
The first image is the original. The second image comes from putting the image on a lattice: you can see that we lose detail about small features. Because things with small wavelengths have high energies, we call this an ultraviolet (UV) cutoff. The third image comes from having a smaller canvas size, so that we cannot see the big picture of the entire image. Because things with big wavelengths have low energies, we call this an infrared (IR) cutoff. The final image is meant to convey the limitations imposed by the combination of the UV and IR cutoffs; in other words, the restrictions from using a lattice of finite size and finite lattice spacing.
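If you’d like to play with this yourself, here is a small illustrative sketch (toy code for the picture analogy, not actual lattice QCD!) that mimics the two cutoffs on any grayscale image stored as a 2D array; the block size and the stand-in “Mona Lisa” are just placeholders:

```python
import numpy as np

def uv_cutoff(image, block=8):
    """Mimic a UV cutoff: average over block x block patches, so details
    smaller than the 'lattice spacing' are washed out."""
    h, w = image.shape
    h, w = h - h % block, w - w % block            # trim to whole blocks
    patches = image[:h, :w].reshape(h // block, block, w // block, block)
    return patches.mean(axis=(1, 3))               # one pixel per block

def ir_cutoff(image, keep=0.5):
    """Mimic an IR cutoff: crop to a smaller canvas, so structure larger
    than the box simply doesn't fit."""
    h, w = image.shape
    dh, dw = int(h * keep), int(w * keep)
    top, left = (h - dh) // 2, (w - dw) // 2
    return image[top:top + dh, left:left + dw]

# Stand-in "Mona Lisa": fine-grained noise on top of a large-scale gradient.
rng = np.random.default_rng(0)
mona = rng.random((256, 256)) + np.linspace(0, 1, 256)[:, None]

coarse  = uv_cutoff(mona)               # small features lost (image 2)
cropped = ir_cutoff(mona)               # big picture lost (image 3)
both    = ir_cutoff(uv_cutoff(mona))    # finite lattice: both at once (image 4)
```

The block averaging throws away wavelengths shorter than the lattice spacing, while the cropping throws away wavelengths longer than the box—exactly the pair of limitations in the last panel above.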
If you’re interested in only the broad features of the Mona Lisa’s face, then the lattice depiction above isn’t so bad. Of course, if you are a fine art critic, then the loss of small- and large-scale information is unforgivable. Currently, lattice techniques have a UV cutoff of around 3 GeV and an IR cutoff of about 30 MeV; this makes them very useful for calculating information about transitions between charm quarks (mass = 1.2 GeV) and strange quarks (mass = 100 MeV).
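For the curious, here is a rough back-of-the-envelope translation of those cutoffs into distances (my numbers, using the conversion ħc ≈ 197 MeV·fm): the lattice spacing a and the box size L work out to roughly

```latex
% Rough conversion of the quoted cutoffs into length scales,
% using hbar*c ~ 197 MeV fm.
a \;\sim\; \frac{\hbar c}{3~\text{GeV}} \;\approx\; 0.07~\text{fm},
\qquad
L \;\sim\; \frac{\hbar c}{30~\text{MeV}} \;\approx\; 7~\text{fm}
```

In other words, the grid has to be much finer than the hadrons you want to describe, while the box has to be much bigger than them.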
Translating from theory to experiment (and back)
When I was an undergraduate, I was always flummoxed that theorists would draw these deceptively simple looking Feynman diagrams on their chalkboards, while experimentalists had very complicated plots and graphs to represent the same physics. Indeed, you can tell whether a scientific paper or talk has been written by a theorist or an experimentalist based on whether it includes more Feynman diagrams or histograms. (This seems to be changing a bit as the theory community has made a concerted effort over the past decade to learn the lingo of the LHC. As Seth pointed out, this is an ongoing process.)
There’s a reason for this: analyzing experimental data is very different from writing down new models of particle interactions. I encourage you to go check out the sample event displays from CMS and ATLAS on the Symmetry Breaking blog for a fantastic and accessible discussion of what it all means. I can imagine fellow bloggers Jim and Burton spending a lot of time looking at similar event displays! (Or maybe not; I suspect that an actual analysis focuses more on data accumulated over many events rather than individual events.) As a theorist, on the other hand, I seem to be left with my chalkboard, connecting squiggly lines to one another. 🙂
Once again, part of the reason why we speak such different languages is non-perturbativity. One cannot take the straightforward Feynman diagram approach and use it when there are all sorts of strongly-coupled gunk flying around. For example, here’s a diagram for electron–positron annihilation from Dieter Zeppenfeld’s PiTP 2005 lectures:
The part in black, which is labeled “hard scattering,” is what a theorist would draw. As a test of your Feynman diagram skills, see if you can “read” the following: this diagram represents an electron and positron annihilating into a Z boson, which then decays into a top–anti-top pair. The brown lines also show the subsequent decay of each top into a W and an (anti-)bottom quark.
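In symbols (just transcribing the chain of decays described above):

```latex
% The hard-scattering chain drawn in black and brown in the diagram.
e^+ e^- \;\to\; Z \;\to\; t\,\bar{t},
\qquad
t \;\to\; W^+\, b, \quad \bar{t} \;\to\; W^-\, \bar{b}
```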
Great, that much we’ve learned from our previous posts. The big question is: what’s all that other junk?! That, my friend, is the result of QCD. You can see that the pink lines are gluons which are emitted from the final state quarks. These gluons can sprout off other gluons or quark–anti-quark pairs. All of these quarks and gluons must then hadronize into color-neutral hadron states, mostly mesons. These are shown as the grey blobs. These hadrons can in turn decay into other hadrons, depicted by the yellow blobs. Most of this happens before any of the particles reach the detector. Needless to say, there are many, many similar diagrams which should all be calculated to give an accurate prediction.
In fact, for the LHC it’s even more complicated, since even the initial states are colored and so they also spit off gluons (“hadronic junk”). Here’s a picture just to show how ridiculous these processes look at a particle-by-particle level:
Let me just remark that the two dark gray blobs are the incoming protons. The big red blob represents all of the gluons that these protons emit. Note that the actual “hard interaction,” i.e. the “core process” is gluon-gluon scattering. This is a bit of a subtle point, but at very high energies, the actual point-like objects which are interacting are gluons, not the quarks that make up the proton!
All of this hadronic junk ends up being sprayed through the experiments’ detectors. If some of the hadronic junk comes from a high-energy colored particle (e.g. a quark that came from the decay of a new heavy TeV-scale particle), then it is collimated into a cone of hadrons pointing in roughly the same direction, called a jet. (Image from Gavin Salam’s 2010 lectures at Cargese.)
Some terminology: parton refers to either a quark or gluon, LO means “leading-order”, NLO means “next-to-leading order.” The parton shower is the stage in which partons can radiate more low-energy partons, which then confine into hadrons. Now one can start to see how to connect our simple Feynman diagrams to the neat looking event reconstructions at the LHC: (image from Gavin Salam’s lectures again)
Everything except for the black lines is an example of what one would actually read off of an event display. This is meant to be a cross-section of the interaction point of the beamline. The blue lines come from a tracking chamber, basically layers of silicon chips that detect the passage of charged particles. The yellow and pink bars are readings from the calorimeters, which tell how much energy is deposited into chunks of dense material. Note how ‘messy’ the event looks experimentally: all of those hadrons obscure the so-called hard scattering (edit Dec 11, thanks to ChriSp), which is what we draw with Feynman diagrams.
So here’s the situation: theorists can calculate the “hard scattering” (black lines in the two diagrams above), but all of the QCD-induced stuff that happens after the hard scattering is beyond our Feynman diagram techniques and cannot be calculated from first principles. Fortunately, most of the non-perturbative effects can again be accounted for using computers. The real question is: given a hard scattering (a Feynman diagram), how often do the final state particles turn into each of the many possible hadronic configurations? This time one uses Monte Carlo techniques, where instead of calculating the probabilities of each hadronic final state, the computer randomly generates these final states according to some pre-defined probability distribution. If we run such a simulation over and over again, then we end up with a simulated distribution of events which should match experiments relatively well.
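To give a flavor of the Monte Carlo idea, here is a toy sketch with completely made-up probabilities (a real event generator such as Pythia or Herwig uses far more sophisticated distributions, tuned to decades of data):

```python
import random
from collections import Counter

# Made-up probabilities for how a single final-state quark might
# hadronize; these numbers are purely illustrative.
hadronization_table = {
    "2 pions":             0.45,
    "3 pions":             0.30,
    "kaon + pion":         0.15,
    "proton + antiproton": 0.10,
}

def generate_event(rng):
    """Draw one hadronic final state according to the table above."""
    r = rng.random()
    cumulative = 0.0
    for final_state, prob in hadronization_table.items():
        cumulative += prob
        if r < cumulative:
            return final_state
    return final_state  # guard against floating-point round-off

rng = random.Random(42)
events = Counter(generate_event(rng) for _ in range(100_000))

# The resulting simulated sample is what gets compared to real data.
for final_state, count in events.most_common():
    print(f"{final_state:>20}: {count / 100_000:.3f}")
```

Run enough of these pseudo-events through a simulation of the detector and you get a prediction you can lay directly on top of the experimental histograms.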
One might wonder why this technique should work. It seems like we’re cheating—where did these “pre-defined” probability distributions come from? Aren’t these what we want to calculate in the first place? The answer is that these probability distributions come from experiments themselves. This isn’t cheating, since the experiments reflect data about low-energy physics. This is well-known territory that we really understand. In fact, everything in this business of hadronic junk is low-energy physics. The whole point is that the only missing information is the high-energy hard scattering (ed. Dec 11)—but fortunately that’s the part that we can calculate! The fact that this works is a straightforward result of “decoupling,” or the idea that physics at different scales shouldn’t affect one another. (In this case physicists often say that the hadronic part of the calculation “factorizes.”)
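For those who like to see this written down, the factorization is usually expressed schematically as a product of pieces: parton distribution functions f, which are extracted from data, and the hard-scattering cross section, which is what we calculate from Feynman diagrams (this is only the schematic form; the full statement has more bells and whistles):

```latex
% Schematic factorization of a hadron-collider cross section:
% f_a, f_b are parton distribution functions measured in experiments,
% and sigma-hat is the calculable hard-scattering piece.
\sigma \;\approx\; \sum_{a,b} \int dx_1\, dx_2\;
    f_a(x_1)\, f_b(x_2)\; \hat{\sigma}_{ab \to X}(x_1, x_2)
```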
To summarize: theorists can calculate the hard scattering (ed. Dec 11) for their favorite pet models of new physics. This is not the whole story, since it doesn’t reflect what’s actually observed at a hadron collider. It’s not possible to calculate what happens next from first principles, but fortunately this isn’t necessary: we can just use well-known probability distributions to simulate many events and see what the model of new physics would predict in a large data set from an actual experiment. Now that we’re working our way into the LHC era, clever theorists and experimentalists are working on new ways to go the other way around and use the experimental signatures to try to recreate the underlying model.
As a kid I remember learning over and over again how a bill becomes a law. What we’ve shown here is how a model of physics (a bunch of Feynman rules) becomes a prediction at a hadron collider! (And along the way we’ve hopefully learned a lot about what Feynman diagrams are and how we deal with physics that can’t be described by them.)