We’ve gone pretty far with our series of posts about learning particle physics through Feynman diagrams. In our last post we summarized the Feynman rules for all of the known particles of the Standard Model. Now it’s time to fess up a little about the shortcomings of the Feynman diagram approach to calculations; in doing so, we’ll learn a little more about what Feynman diagrams actually represent as well as the kinds of physics that we must work with at a machine like the LHC.

## When one diagram isn’t enough

Recall that mesons are bound states of quarks and anti-quarks which are confined by the strong force. This binding force is very *non-perturbative*; in other words, the math behind our Feynman diagrams is not the right tool to analyze it. Let’s go into more detail about what this means. Consider the simplest Feynman diagram one might draw to describe the gluon-mediated interaction between a quark and an anti-quark:

Easy, right? Well, one thing that we have glossed over in our discussions of Feynman diagrams so far is that we can also draw much more complicated diagrams. For example, using the QCD Feynman rules we can draw something much uglier:

This is another physical contribution to the interaction between a quark and an anti-quark. It should be clear that one can draw arbitrarily many diagrams of this form, each more and more complicated than the last. What does this all mean?

Each Feynman diagram represents a term in a mathematical expression. The sum of these terms gives the complete probability amplitude for the process to occur. The really complicated diagrams usually give a much smaller contribution than the simple diagrams. For example, each ~~photon vertex~~ additional internal photon line (*edit Dec 11, thanks ChriSp and Lubos*) gives a factor of roughly α=1/137 to the diagram’s contribution to the overall probability. (There are some subtleties here that are mentioned in the comments.) Thus it is *usually* fine to just take the simplest diagrams and calculate those. The contributions from more complicated diagrams are then very small corrections that are only important to calculate when experiments reach that level of precision. For those with some calculus background, this should sound familiar: it is simply a Taylor expansion. (In fact, most of physics is about making the right Taylor expansion.)

*However*, QCD defies this approximation. The simplest diagrams do *not* give the dominant contribution! In fact, the simple diagram and the complicated diagram above give roughly the same contribution. One has to include many complicated diagrams to obtain a good approximate calculation. And by “many,” I mean almost *all* of them… and “almost all” of an infinite number of diagrams is quite a lot. For various reasons, these complicated diagrams are very difficult to calculate, and at the moment our normal approach is useless.
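To see how dramatic the difference is, here’s a little numerical sketch. (The series coefficients are made up; this is not a real QED or QCD calculation. Only the size of the couplings is realistic.) A perturbative prediction looks schematically like a sum of terms weighted by powers of a coupling; here we take every coefficient to be 1 and just vary the coupling:

```python
# Toy comparison (the series coefficients are made up; this is not a real
# QED or QCD calculation). A perturbative prediction looks schematically
# like sum_n c_n * g^n; here we take every c_n = 1 and just vary g.

def partial_sums(coupling, n_terms=6):
    """Running partial sums of sum_{n=0}^{n_terms-1} coupling**n."""
    total, sums = 0.0, []
    for n in range(n_terms):
        total += coupling ** n
        sums.append(total)
    return sums

qed_like = partial_sums(1 / 137)  # each extra order shifts the answer by <1%
qcd_like = partial_sums(1.0)      # each extra order is as big as the first

print(qed_like)  # settles down almost immediately
print(qcd_like)  # keeps growing: truncating after a few terms is hopeless
```

With the weak coupling, the partial sums settle down after the first term or two; with a coupling of order one, every “correction” is as big as the leading term, which is exactly why keeping only the simplest diagrams fails for QCD.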

There’s a lot of current research pushing this direction (e.g. so-called holographic techniques and recent progress on scattering amplitudes), but let’s move on to what we can do.

## QCD and the lattice

`Surely,’ said I, `surely that is something at my window lattice;

Let me see then, what thereat is, and this mystery explore -

— Edgar Allan Poe, “The Raven”

A different tool that we can use is called **Lattice QCD**. I can’t go into much detail about this since it’s rather far from my area of expertise, but the idea is that instead of using Feynman diagrams to calculate processes *perturbatively*—i.e. only taking the simplest diagrams—we can use computers to numerically solve for a related quantity. This related quantity is called the **partition function**, a mathematical object from which one can straightforwardly calculate probability amplitudes. (I only mention the fancy name because it is completely analogous to an object of the same name that one meets in thermodynamics.)

The point is that lattice techniques are non-perturbative in the sense that we don’t calculate individual diagrams; we simultaneously calculate all of them. The trade-off is that one has to put spacetime on a lattice, so that the calculations are actually done on a four-dimensional hyper-cube. The accuracy of this approximation depends on the lattice size and spacing relative to the physics that you want to study. (Engineers will be familiar with this idea from the use of Fourier transforms.) As usual, a picture is worth a thousand words; suppose we wanted to study the Mona Lisa:

The first image is the original. The second image comes from putting the image on a lattice: you can see that we lose details about small things. Because things with small wavelengths have high energies, we call this an ultraviolet (UV) cutoff. The third image comes from having a smaller canvas, so that we cannot see the big picture of the entire image. Because things with big wavelengths have low energies, we call this an infrared (IR) cutoff. The final image is meant to convey the limitations imposed by the combination of the UV and IR cutoffs; in other words, the restrictions from using a lattice of finite size and finite lattice spacing.

If you’re interested in only the broad features of the Mona Lisa’s face, then the lattice depiction above isn’t so bad. Of course, if you are a fine art critic, then the loss of small- and large-scale information is unforgivable. Currently, lattice techniques have a UV cutoff of around 3 GeV and an IR cutoff of about 30 MeV; this makes them very useful for calculating information about transitions between charm quarks (mass ≈ 1.2 GeV) and strange quarks (mass ≈ 100 MeV).

## Translating from theory to experiment (and back)

When I was an undergraduate, I was always flummoxed that theorists would draw these deceptively simple-looking Feynman diagrams on their chalkboards, while experimentalists had very complicated plots and graphs to represent the same physics. Indeed, you can tell whether a scientific paper or talk has been written by a theorist or an experimentalist based on whether it includes more Feynman diagrams or histograms. (This seems to be changing a bit as the theory community has made a concerted effort over the past decade to learn the lingo of the LHC. As Seth pointed out, this is an ongoing process.)

There’s a reason for this: working with experimental data is *very* different from writing down new models of particle interactions. I encourage you to go check out the sample event displays from CMS and ATLAS on the Symmetry Breaking blog for a fantastic and accessible discussion of what it all means. I can imagine fellow bloggers Jim and Burton spending a lot of time looking at similar event displays! (Or maybe not; I suspect that an actual analysis focuses more on data accumulated over many events rather than individual events.) As a theorist, on the other hand, I seem to be left with my chalkboard, connecting squiggly lines to one another.

Once again, part of the reason why we speak such different languages is non-perturbativity. One cannot take the straightforward Feynman diagram approach and use it when there is all sorts of strongly-coupled gunk flying around. For example, here’s a diagram for electron–positron scattering from Dieter Zeppenfeld’s PiTP 2005 lectures:

The part in black, which is labeled “hard scattering,” is what a theorist would draw. As a test of your Feynman diagram skills, see if you can “read” the following: this diagram represents an electron and positron annihilating into a *Z* boson, which then decays into a top–anti-top pair. The brown lines also show the subsequent decay of each top into a *W* and an (anti-)bottom quark.

Great, that much we’ve learned from our previous posts. The big question is: *what’s all that other junk?!* That, my friend, is the result of QCD. You can see that the pink lines are gluons which are emitted from the final state quarks. These gluons can sprout off other gluons or quark–anti-quark pairs. All of these quarks and gluons must then **hadronize** into color-neutral hadron states, mostly mesons; these are shown as the grey blobs. These hadrons can in turn decay into other hadrons, depicted by yellow blobs. Almost all of this happens before any of the particles reach the detector. Needless to say, there are many, many similar diagrams which should all be calculated to give an accurate prediction.

In fact, for the LHC it’s even more complicated, since even the initial states are colored and so they also spit off gluons (“hadronic junk”). Here’s a picture just to show how ridiculous these processes look at a particle-by-particle level:

Let me just remark that the two dark gray blobs are the incoming protons. The big red blob represents all of the gluons that these protons emit. Note that the actual “hard interaction,” i.e. the “core process” is gluon-gluon scattering. This is a bit of a subtle point, but at very high energies, the actual point-like objects which are interacting are gluons, not the quarks that make up the proton!

All of this hadronic junk ends up being sprayed through the experiments’ detectors. If some of the hadronic junk originates from a single high-energy colored particle (e.g. a quark that came from the decay of a new heavy TeV-scale particle), then the resulting hadrons are collimated into a cone pointing in roughly the same direction, called a **jet**. (Image from Gavin Salam’s 2010 lectures at Cargèse.)
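Just as a caricature of what “grouping the junk into jets” means, here’s a few lines of Python. (Real jet algorithms such as anti-kt are far more careful; the particles, energies, and cone size below are all invented for illustration.)

```python
# Toy "cone" jet finder (real algorithms such as anti-kt are far more
# careful; the particles, energies, and cone size here are all invented).
# Each particle is (energy, phi); we repeatedly take the most energetic
# unclustered particle as a seed and sweep up everything within the cone,
# ignoring the 2*pi wrap-around of the angle for simplicity.

def cone_jets(particles, cone=0.4):
    remaining = sorted(particles, key=lambda p: -p[0])  # hardest first
    jets = []
    while remaining:
        seed_phi = remaining[0][1]
        in_cone = [p for p in remaining if abs(p[1] - seed_phi) < cone]
        remaining = [p for p in remaining if abs(p[1] - seed_phi) >= cone]
        jets.append((sum(e for e, _ in in_cone), seed_phi))
    return jets

# Two sprays of hadronic junk in roughly opposite directions...
event = [(50, 0.00), (20, 0.10), (5, -0.05),   # spray around phi ~ 0
         (45, 3.10), (15, 3.20), (8, 3.05)]    # spray around phi ~ 3.1
print(cone_jets(event))  # ...reconstruct as two jets: (75, 0.0) and (68, 3.1)
```

The point is just that the six hadrons collapse into two energy deposits pointing back toward the two original colored particles.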

Some terminology: **parton** refers to either a quark or gluon, LO means “leading-order”, NLO means “next-to-leading order.” The **parton shower** is the stage in which partons can radiate more low-energy partons, which then confine into hadrons. Now one can start to see how to connect our simple Feynman diagrams to the neat looking event reconstructions at the LHC: (image from Gavin Salam’s lectures again)

Everything except for the black lines is an example of what one would actually read off of an event display. This is meant to be a cross-section of the interaction point of the beamline. The blue lines come from a **tracking chamber**, basically layers of silicon chips that detect the passage of charged particles. The yellow and pink bars are readings from the **calorimeters**, which tell how much energy is deposited into chunks of dense material. Note how ‘messy’ the event looks experimentally: all of those hadrons obscure the so-called ~~underlying event~~ **hard scattering** (*edit Dec 11, thanks to ChriSp*), which is what we draw with Feynman diagrams.
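The parton shower stage responsible for much of this mess can itself be caricatured in a few lines. (A toy model: the splitting probability and kinematics below are invented; real showers use QCD splitting functions.)

```python
import random

# Toy parton shower (the splitting probability and kinematics are invented;
# real showers use QCD splitting functions). A parton radiates with some
# probability, sharing its energy with the emitted parton, until everything
# stops radiating or drops below a cutoff where hadronization takes over.

def shower(energy, rng, cutoff=1.0, p_emit=0.5):
    """Return the list of final parton energies from one initial parton."""
    if energy < cutoff or rng.random() > p_emit:
        return [energy]                      # this parton stops radiating
    z = rng.uniform(0.1, 0.9)                # energy fraction of one branch
    return shower(z * energy, rng) + shower((1 - z) * energy, rng)

rng = random.Random(1)
partons = shower(100.0, rng)   # shower a single 100 GeV parton
print(len(partons))            # a spray of softer partons...
print(round(sum(partons), 6))  # ...whose energies still sum to 100.0
```

One hard parton turns into a whole spray of softer ones, which is why a single quark leg in a Feynman diagram shows up in the detector as a jet rather than a single track.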

So here’s the situation: theorists can calculate the hard scattering (black lines in the two diagrams above), but all of the QCD-induced stuff that happens *after* the hard scattering is beyond our Feynman diagram techniques and *cannot* be calculated from first principles. Fortunately, most of the non-perturbative effects can again be accounted for using computers. The real question is: given a hard scattering (a Feynman diagram), how often do the final state particles turn into each possible configuration of hadrons? This time one uses **Monte Carlo** techniques: instead of calculating the probabilities of each hadronic final state, the computer randomly generates final states according to some pre-defined probability distribution. If we run such a simulation over and over again, we end up with a simulated distribution of events which should match experiments relatively well.
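Here’s a sketch of the Monte Carlo idea in miniature. (The “hadronization” outcomes and their probabilities below are completely made up; real generators such as Pythia use distributions tuned to decades of low-energy data.)

```python
import random
from collections import Counter

# Sketch of the Monte Carlo idea. The "hadronization" outcomes and their
# probabilities below are completely made up; real event generators use
# distributions tuned to low-energy data.

FRAGMENTATION = {
    "2 pions":        0.50,
    "3 pions":        0.30,
    "kaon + pion":    0.15,
    "proton + pions": 0.05,
}

def generate_events(n, seed=0):
    """Draw n random final states and histogram them."""
    rng = random.Random(seed)
    outcomes, weights = zip(*FRAGMENTATION.items())
    return Counter(rng.choices(outcomes, weights=weights, k=n))

histogram = generate_events(10_000)
print(histogram.most_common())  # simulated frequencies track the input probabilities
```

Notice that we never *calculate* a probability here: we only sample from distributions we were handed, and with enough events the simulated histogram approaches them.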

One might wonder why this technique should work. It seems like we’re cheating: where did these “pre-defined” probability distributions come from? Aren’t they what we want to calculate in the first place? The answer is that these probability distributions come from experiments themselves. This isn’t cheating, since the experiments reflect data about *low-energy physics*: well-known territory that we really understand. In fact, everything in this business of hadronic junk is low-energy physics. The whole point is that the *only* missing information is the high-energy ~~underlying event~~ **hard scattering** (*ed. Dec 11*)—but fortunately that’s the part that we *can* calculate! The fact that this works is a straightforward result of “decoupling,” the idea that physics at different scales shouldn’t affect one another. (In this case physicists often say that the hadronic part of the calculation “factorizes.”)

To summarize: theorists can calculate the ~~underlying event~~ hard scattering (*ed. Dec 11*) for their favorite pet models of new physics. This is not the whole story, since it doesn’t reflect what’s actually observed at a hadron collider. It’s not possible to calculate what happens next from first principles, but fortunately this isn’t necessary: we can use well-known probability distributions to *simulate* many events and predict what a model of new physics would look like in a large data set from an actual experiment. Now that we’re working our way into the LHC era, clever theorists and experimentalists are working on ways to go the other direction: taking experimental signatures and trying to recreate the underlying model.

As a kid I remember learning over and over again how a bill becomes a law. What we’ve shown here is how a model of physics (a bunch of Feynman rules) becomes a prediction at a hadron collider! (And along the way we’ve hopefully learned a lot about what Feynman diagrams are and how we deal with physics that can’t be described by them.)

Nice, thanks.

Nicely done. Thanks!

Well-written, indeed. I suppose you would prefer longer comments, wouldn’t you?

Great post, but two comments:

A) When experimentalists talk about the “underlying event” they mean everything but the hard scattering.

B) More nit-picky: Adding a new diagram with an extra vertex in QED doesn’t always come with a factor of alpha in the amplitude. The new diagram can interfere with the others, and then the new contribution goes like sqrt(alpha).

Actually, there is no interference, since the initial or final state has an extra photon. But the argument would be true if you added two vertices.

Hi ChriSp, thanks for the correction! I can’t believe I let that slip by me… and I used the phrase incorrectly several times. -_-‘ I’ve now corrected them above, leaving the original text crossed-out as a reminder of my shame.

Regarding your comment (B), I knew someone would bring this up! I was trying to keep things brief (unsuccessfully) so I didn’t want to get into these details. I did try to be careful, though, and said that each QED vertex gives a factor of alpha to the *probability* (i.e. cross section), not the amplitude. This, I believe, is a correct statement modulo interference terms, which I didn’t want to get into.

And yes, it is true that for a given process one should include *two* additional vertices if you don’t want an additional external photon. (Of course, one should in principle include external soft photon emissions and such… but I suppose soft gluons are already taken care of by the hadronization programs?)

Anyway, the statement about the alpha suppression to the probability was meant to be very hand-wavy just to demonstrate the main idea of the expansion. You are correct that there are subtleties (cross terms, matching precise final states, etc).

Thanks for the comments,

F

Hi Flip, thank you for correcting the text. I apologize that I added to the confusion by writing amplitude when I meant cross section/probability in my second comment.

Dear Flip, I wanted to correct you about statement B but thought you were just popularly simplifying things.

But ChriSp has corrected you, and you have revealed that you meant it literally even though the statement is not 100% accurate. So let me tell you in some more detail why your statement is inaccurate.

The probability is computed as the probability amplitude squared (absolute value of that). The probability amplitude looks like

(Term0 + e.Term1 + e^2.Term2 + …)

Now, try to square it. What will you get?

Term0^2 + 2e.Term0.Term1 + e^2.(Term1^2 + 2.Term0.Term2) + …

Now, note that the leading contribution goes like Term0^2. But the subleading one, the first one that depends on the diagram Term1 with an extra vertex, is suppressed by one power of “e” only. That’s because the squaring of the amplitude contains the mixed terms that are products of the old term in the amplitude – without the new vertex – and the new one – with the vertex.

The probability formula also contains terms suppressed by e^2 but they come after that.

Just to be sure, we must also realize that the actual pure QED amplitudes for a process with fixed external particles actually contain either “only odd powers of e” or “only even powers of e” because one can only add new cubic QED vertices in pairs.

So in the real world, the expansion of the amplitude is

(Term0 + e^2.Term1 + e^4.Term2 + …)

and its square is

Term0^2 + 2e^2.Term0.Term1 + e^4.(Term1^2 + 2.Term0.Term2) + …

Note that adjacent terms in the actual probability above differ by a factor scaling like e^2, i.e. alpha. However, the first subleading term, proportional to Term0.Term1, while suppressed by alpha – and there is never a sqrt(alpha) relative factor appearing in the formulae for probabilities (in this respect, Flip would always be right) – actually comes from mixing the original minimal diagram Term0 with another diagram that has “two more QED vertices,” not one as Flip wrote.

So the simplest fix is to replace

“For example, each photon vertex gives a factor of roughly α=1/137”

by

For example, each pair of photon vertices (and for diagrammatic reasons, they can only be added in pairs) gives a factor of roughly α=1/137 …

If one computes inclusive cross sections, then the external states don’t have to be identical among the diagrams, and one can add probabilities one by one. But in that case, there is no interference (there can’t be any interference between states with different numbers of particles). The total probability has the form

Term0^2 + e^2.Term1^2 + …

because the diagrams Term0, Term1 – with different numbers of external photons – are squared before they’re added. So in this inclusive context, Flip’s original statement was totally right.
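(This power counting can also be checked mechanically. In the sketch below, an amplitude is represented just by the list of powers of e it contains, and squaring it produces every pairwise sum of powers; the specific power lists are only illustrative.)

```python
# Check of the power counting above: an amplitude is represented just by
# the powers of e appearing in it; its square (the probability) contains
# the products of every pair of terms, i.e. every pairwise sum of powers.

def squared_powers(amplitude_powers):
    return sorted({p + q for p in amplitude_powers for q in amplitude_powers})

# Hypothetical amplitude with a term at e^0 and one extra vertex at e^1:
print(squared_powers([0, 1]))     # [0, 1, 2]: a term linear in e appears

# Actual QED amplitude: vertices come in pairs, so only even powers of e,
# and the probability is an expansion in e^2 = alpha:
print(squared_powers([0, 2, 4]))  # [0, 2, 4, 6, 8]
```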

But I am probably just repeating what you said differently now.

Best wishes

Lubos


Hi Lubos — yep, I (and I think ChriSp) completely agree with what you’ve written. I’ve updated the original post to something more innocuous and have attributed you and ChriSp.

The point you make, that there is indeed a term linear in e from the cross term, is what I meant by “modulo interference terms,” and the subsequent points about whether one takes an exclusive versus inclusive cross section are what I was nodding to when I mentioned soft photons and such. But you’re right that I didn’t make this explicit and that I certainly didn’t think too carefully about it when I wrote the original post.

Anyway, I think (hope) everyone now agrees and further that what we agree upon is correct. For the sake of the intended outreach audience of the blog, I think I’ll leave the details of this discussion in the comments rather than making extensive revisions to the main post.

Actually, I guess I should mention that comments like these are one of the reasons why I really like blogs as a medium for discourse; we can have nice discussions to clarify subtle points (usually points that I miss originally).

It is also why I try to make corrections with the original text in “strike out” and with attributions to those who contribute constructive comments. Apart from intellectual honesty and issues of citations*, it is a nod to those who contribute polite corrections/further information that adds to the post.

(*–by ‘citation’ I mean hyperlinks, not actual citations from reputable works! This is not a peer/editor-reviewed journal!)

Anyway, thanks for the discussion everyone.

-F

Dear Flip, retroactively, I am actually sure you knew the correct answers before I wrote my comments so my addition was purely pedagogical. Your scratching is giving too much credit to me.

Reviewing the expansions, note that the cross sections are always expansions in alpha=e^2, not sqrt(alpha) – either because of squaring before summation, or because of pair-appearance of the vertices in sums that may interfere. This is no coincidence – only “alpha” is a real physical parameter. Funnily enough, there is a corresponding statement in M-theory but the exponent is not 2 but 3.

All simple enough physical expressions in M-theory are expansions in L_{Planck}^3. Note that Newton’s constant is L_{Planck}^9 – the exponent is the dimension minus two (just like it is 2 in 3+1 dimensions, area may be divided by Newton’s constant to get dimensionless entropies).

Also, the M2-branes have tension 1/L_{Planck}^3 and the M5-branes have tension scaling like 1/L_{Planck}^6 because the exponents are the worldvolume dimensionalities – all the exponents are multiples of 3. This appearance of multiples of 3 may be understood from some kind of U-duality argument, also reflected in del Pezzo surfaces via the mysterious duality.

However, the M-theory case is much more complicated because it’s not just about some perturbative counting of pairs of vertices etc. These are seemingly modest patterns but they do indicate that we don’t understand a certain conceptual argument that will make these things as obvious as the counting of vertices in perturbative expansions.

Cheers

LM

By the way, this is the LHC blog, and I am very curious whether we will hear anything nontrivial from Michigan (LHC First Data) on Tuesday.

However, we already know what we will hear from your competitors in the Fermilab:

http://motls.blogspot.com/2010/12/michigan-combined-tevatron-sees-3-sigma.html#more

The bbb MSSM channel is exciting, indeed.

Ah, now I understand why the alpha expansion was so interesting. I didn’t know about the third power of L(string) in string theory, though admittedly my string background is rather limited.

Thanks for the link about the Michigan conference.

Dear Flip, just a detail: L_{string}, referring to the typical length of the string moving in 10 dimensions, always appears in the even powers, just like the fine-structure constant.

It’s just the 11-dimensional L_{Planck}, the typical length scale of the 11-dimensional M-theory which contains no strings but only M2-branes, M5-branes etc., where you get powers that are multiples of 3. That’s a part of why M-theory (in d=11 etc.) remains more mysterious than (perturbative) string theory (in d=10).

A recent “membrane minirevolution” describing some exotic 3D theories with a Lagrangian has somewhat reduced this mystery but much of it remains.

Cheers

LM

awesome blog! i just came across it via a google image search.

one gripe. i find the first feynman diagram *much* uglier than the second.