
Flip Tanedo | USLHC | USA


More Feynman Diagrams

In a previous post we learned how to draw Feynman diagrams by drawing lines and connecting them. We started with a set of rules for how one could draw diagrams:


We could draw lines with arrows or wiggly lines and we were only permitted to join them using intersections (vertices) of the above form. These are the rules of the game. We then said that the arrowed lines are electrons (if the arrow goes from left to right) and positrons (if the arrow points in the opposite direction) while the wiggly lines are photons. The choice of rules is what we call a “model of particle interactions,” and in particular we developed what is called quantum electrodynamics, which is physics-talk for “the theory of electrons and photons.”

Where did it all come from?

One question you could ask now is: “Where did these rules come from? Why do they prohibit me from drawing diagrams with three wiggly lines intersecting?”

The short answer is that those are just the rules that we chose. Technically they came from a more mathematical formulation of the theory. It is not obvious at all, but the reason why we only allow that one particular vertex is that it is the only interaction that respects both (1) the spacetime (“Lorentz”) symmetry and (2) the internal ‘gauge’ symmetry of the theory. This is an unsatisfying answer, but we’ll gradually build up more complicated theories that should help shed some light on this. Just for fun, here’s the mathematical expression that encodes the same information as the Feynman rules above: [caution: I know this is an equation, but do not be scared!]

\mathcal{L} = \bar{\Psi} \left( i \gamma^\mu \partial_\mu - m \right) \Psi + e \, \bar{\Psi} \gamma^\mu A_\mu \Psi - \frac{1}{4} F_{\mu\nu} F^{\mu\nu}
Without going into details, the Psi represents the electron (the bar turns it into a positron) while the A is the photon. The number e is the ‘electric coupling’ and determines the charge of the electron. Because equations can be intimidating, we won’t worry about them here. In fact our goal will be to go in the opposite direction: we will see that we can learn quite a lot by only looking at Feynman diagrams and never doing any complicated math. The important point is that our cute rules for how to connect lines really capture most of the physics encoded in these ugly equations.

Now a quick parenthetical note because I’m sure some of you are curious: in the equation above, the partial is a kind of derivative. Derivatives tell us about how things change, and in fact this term tells us how the electron propagates through space. The term containing e and A tells us how the photon couples to the electron. The m term is the electron’s mass. We’ll have more to say about this down the road when we discuss the Higgs boson. Finally, the F’s are the “field strength” of the photon: they are the analog of the derivative term for the electron and tell us how the photon propagates through space. In fact, these F’s encode the electric and magnetic fields.

[Extra credit for advanced readers: notice that the electron mass term looks like the Feynman rule for a two-electron interaction with coupling strength m. You can see this by looking at the electron-electron-photon term and removing the photon.]

What we can learn from just looking at the rules

We learned that we could use our lines and intersections to draw diagrams that represent particle interactions. If you haven’t already, I encourage you to grab a piece of scratch paper and play with these Feynman rules. A good game to play is asking yourself whether a certain initial state can ever give you a certain final state. Here are a few exercises:

  1. You start with one electron. Can you ever end up with a final state positron? [Answer: yes! Draw one such diagram.]
  2. If you start with one electron, can you ever end up with more final state positrons than final state electrons? [Answer: no! Draw diagrams until you’re convinced it’s impossible.]
  3. Draw a diagram where an electron and a photon interact to produce 3 electrons, 2 positrons, and 2 photons. Draw a few more to get a feel for how many different ways one can do this.
  4. If you start with a photon, can you end up with a final state of only multiple photons? [This is actually a trick question; the answer is no, but this is a rather subtle quantum mechanical effect that’s beyond our scope. You should be able to draw a diagram that makes it look like the answer is ‘yes.’]

So here’s what you should get out of this: Feynman rules are a nice way to learn what kinds of particle interactions can and cannot occur (e.g. questions 1 and 2). In fact, the lesson you should have gleaned is that electric charge is conserved in each diagram, and this follows from the conservation of electric charge at each intersection. You can also see how complicated interactions can be reduced to simple interactions with “virtual particles” (intermediate particles that don’t appear in the initial or final state). We are able to do all of this simply by stating the Feynman rules of our theory and playing with drawings. No math or fancy technical background required.
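The charge-conservation lesson can even be turned into a little bookkeeping sketch. This is just a toy illustration (the particle names and the dictionary are my own, not standard physics software): since every vertex in our theory conserves charge, any diagram built from those vertices must conserve the total charge between initial and final states.

```python
# Toy bookkeeping: every QED vertex (electron-electron-photon) conserves
# electric charge, so any diagram built from those vertices conserves the
# total charge between the initial and final states.
CHARGE = {"electron": -1, "positron": +1, "photon": 0}

def total_charge(particles):
    return sum(CHARGE[p] for p in particles)

def can_connect(initial, final):
    """A necessary (not sufficient) condition for a diagram to exist."""
    return total_charge(initial) == total_charge(final)

# Exercise 1: one electron -> two electrons and one positron? Allowed.
print(can_connect(["electron"], ["electron", "electron", "positron"]))  # True

# Exercise 2: one electron -> more positrons than electrons? Forbidden.
print(can_connect(["electron"], ["positron", "positron", "electron"]))  # False
```

Of course, charge conservation alone doesn’t guarantee a diagram exists; it just rules out processes like exercise 2 immediately.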

Summing diagrams: an analogy to summing paths

There’s a lot more one could do with Feynman diagrams, such as calculating probabilities for interactions to occur. Actually doing this requires more formal math and physics background, but there’s still a lot that we can learn conceptually.

For example, there were two simple diagrams that we could draw to represent the scattering of an electron and a positron off of one another:


We recall that we can describe these interactions in words by “reading” them from left to right:

  • The first diagram shows an electron and a positron annihilating into a photon, which then “pair produces” into another electron and positron.
  • The second diagram shows an electron and a positron interacting by sending a photon between them. This is definitely a different process since the electron and positron never actually touch, unlike the first diagram.

Remember that these diagrams are actually shorthand for complex numbers. These numbers represent the probability amplitude for each of these processes to occur. In order to calculate the full probability that an electron and a positron will bounce off of one another, we have to add together these contributions as complex numbers.

What does this mean? This is just quantum mechanics at work! Recall an old post about the double slit experiment. We learned that quantum mechanics tells us that objects take all paths from an initial observed state to a final observed state. Thus if you see a particle at point A, the probability for it to show up at point B is determined by summing the probability amplitudes for each intermediate path.

The sum of diagrams above is a generalization of the exact same idea. Our initial observed state is an electron and a positron, each with some fixed [and observed] momentum. If you want to calculate the probability that these will interact and produce an electron and positron of some other momentum (e.g. they bounce off each other and head off in opposite directions), then one not only has to sum over the different intermediate paths, but also over the different intermediate interactions.
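To make the “add amplitudes, not probabilities” point concrete, here is a minimal sketch with made-up numbers (the two amplitudes below are pure illustration, not the result of any real calculation):

```python
# Quantum mechanics adds complex amplitudes, not probabilities.
# Toy amplitudes (invented for illustration) for the two scattering diagrams:
amp_annihilation = 0.3 + 0.4j   # e+ e- annihilate to a photon, which pair-produces
amp_exchange     = 0.2 - 0.1j   # e+ e- exchange a photon

total_amplitude = amp_annihilation + amp_exchange
probability = abs(total_amplitude) ** 2        # |A1 + A2|^2

# Naively adding the individual probabilities gives something different:
naive = abs(amp_annihilation) ** 2 + abs(amp_exchange) ** 2
print(probability, naive)  # the difference is the quantum interference term
```

The mismatch between the two numbers is exactly the interference that made the double slit experiment interesting.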

Again, a pause for the big picture: we’re not actually going to calculate anything since for most people, this isn’t as fun as drawing diagrams. But even just describing what one would calculate, we can see how things reduce to our simple picture of quantum mechanics: the double slit experiment.

Momentum Conservation

Each initial and final state particle has a well-defined momentum. (By ‘momentum’ I also include the particle’s total energy.) As one could guess, any physical diagram must satisfy the conservation of momentum. In fact, this is built into each intersection: we assume that the sum of the momentum going into each intersection (i.e. from the left) is equal to the momentum going out of it (to the right). Thus you cannot have two very low energy initial state electrons scattering into something with a very high energy final state.

Perhaps more obviously, this means that you cannot have diagrams where “nothing” turns into stuff, or something turns into nothing:


One will note that both of these diagrams are technically allowed by our diagrammatic Feynman rules. We thus have to impose momentum conservation as an additional Feynman rule.
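The momentum-conservation rule is easy to phrase as a check on 4-momenta. A minimal sketch (the function names are my own, and the tuples are just (E, px, py, pz) in arbitrary units):

```python
# Momentum conservation at each vertex: the summed 4-momentum (E, px, py, pz)
# flowing in must equal the summed 4-momentum flowing out.
def total(momenta):
    result = (0.0, 0.0, 0.0, 0.0)
    for p in momenta:
        result = tuple(a + b for a, b in zip(result, p))
    return result

def conserved(incoming, outgoing):
    return all(abs(a - b) < 1e-9
               for a, b in zip(total(incoming), total(outgoing)))

# "Nothing -> stuff" fails: the empty initial state has zero total momentum,
# but any final state with energy does not.
print(conserved([], [(1.0, 0.0, 0.0, 0.5)]))  # False

# A head-on e+ e- pair turning into a system at rest with energy 2 is fine:
print(conserved([(1.0, 0.0, 0.0, 1.0), (1.0, 0.0, 0.0, -1.0)],
                [(2.0, 0.0, 0.0, 0.0)]))      # True
```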

Here’s an exercise for slightly more advanced readers who know special relativity: convince yourself that momentum conservation prohibits any diagrams that only contain a single interaction, e.g.


A hint: consider going into the rest frame of the particles.

[A much simpler exercise for everyone: “read” each of these diagrams from left to right and describe what’s going on in words. Even though these are all variations of the rule for intersecting lines, how do these three diagrams differ in physical interpretation?]

It is straightforward to see that in the electron-positron scattering diagrams above, the momentum of the intermediate photon is fully determined by the momenta of the external particles. For example, in the first diagram the photon momentum must be the sum of the initial particle momenta. (As an exercise for advanced readers again: convince yourself that the intermediate photon is not ‘on shell’, i.e. the square of its 4-momentum doesn’t equal zero. This is okay because the photon is a virtual particle.)
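For readers who want to check the “off shell” claim numerically, here is a sketch with illustrative beam energies (my own numbers, in units where c = 1 and the electron mass is neglected next to the beam energy):

```python
# In the annihilation diagram, the intermediate photon carries the summed
# 4-momentum of the incoming pair. "On shell" for a photon would mean
# E^2 - |p|^2 = 0; for colliding beams it does not vanish.
p_electron = (1.0, 0.0, 0.0, 1.0)    # (E, px, py, pz), moving along +z
p_positron = (1.0, 0.0, 0.0, -1.0)   # equal energy, moving along -z

photon = tuple(a + b for a, b in zip(p_electron, p_positron))
E, px, py, pz = photon
mass_squared = E**2 - (px**2 + py**2 + pz**2)
print(mass_squared)  # 4.0, not 0: the intermediate photon is virtual
```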

Loop diagrams: a prelude for things to come

Now I’d like to pause to mention an ‘advanced topic’ that we’ll get to in a future post. If you’ve been diligent and have played with drawing different kinds of Feynman diagrams, you’ll have noticed that you can also draw diagrams that have closed loops, such as:


We call such graphs loop diagrams for obvious reasons. Diagrams without loops are called tree diagrams. It turns out that loop diagrams are rather special and introduce a few ‘deep’ issues that I’ll only mention in passing for now: (some of these are a bit ‘advanced’, don’t worry if they’re a little vague — we’ll come back to them later)

  • The above diagram is a contribution to the electron-positron scattering process that we considered above. You should be able to convince yourself that there are in fact an infinite number of contributions to each interaction between given initial and final state particles, obtained by drawing more loops in creative ways. This sounds weird, but remember that there were also an infinite number of paths between any two points when we studied the “infinite-slit” experiment.
  • For those of you with some calculus background: what we’re actually doing is a Taylor expansion. What is our expansion parameter? The electromagnetic coupling e (in the equation we wrote above). In other words, we are expanding in the number of vertices. Each vertex contributes a factor of e (which is a small number), so the full result is actually very well approximated by only taking into account tree diagrams.
  • In light of our comment about momentum conservation, you should convince yourself that the “loop” particles (which are completely virtual) can have any arbitrarily large momentum. This is in contrast to intermediate particles in tree diagrams whose momentum is constrained by the external momenta. This is actually rather interesting: this means that the loops are sensitive to physics at higher energy scales.
  • In light of our understanding of quantum mechanics, we see that even for a single loop diagram we have to sum over an infinite number of possible loop momenta. This can be a problem: a sum over an infinite set of numbers can itself be infinite. Thus we worry that the calculation of our quantum mechanical probabilities may end up giving nonsensical results (what does it mean for a probability to be infinite?). In fact, it turns out that this is very deeply related to the idea that the loops are sensitive to physics at higher energies. We will discuss all of this more thoroughly when we get to the Higgs boson.
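The expansion-in-e remark above can be made quantitative with one well-known number: in natural units e²/(4π) is the fine-structure constant, roughly 1/137. A quick sketch (the variable names are mine; only the value of the fine-structure constant is a physical input):

```python
import math

# The fine-structure constant, approximately 1/137, fixes the size of e.
alpha = 1.0 / 137.036
e = math.sqrt(4 * math.pi * alpha)

# Each vertex contributes a factor of e. Adding a loop to a diagram adds
# two vertices, so the loop's contribution is suppressed by roughly e^2.
tree = e**2          # leading diagrams for e+ e- scattering: 2 vertices
one_loop = e**4      # one extra loop: 2 more vertices
print(one_loop / tree)  # ~0.092: loop diagrams are small corrections
```

This is why, even with infinitely many diagrams, the first few are usually enough.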

Loop diagrams can be very tedious to calculate. In fact, they’re the bane of most graduate students’ lives when they first learn quantum field theory. Fortunately we’re not going to calculate any of them and, for now, will just marvel at their ability to complicate things.

Action-at-a-distance: Attractive and Repulsive Forces

Our discussion has become a little technical, so let’s take a step back and see how some of these pieces come together. We recall that photons are not only particles of light, but also the intermediate ‘force particles’ that mediate the electric force between electrons (and positrons). The cartoon picture of these force particles is that charged particles “toss” photons back and forth when they interact. One can imagine, as pictured in the Particle Adventure, two electrons as basketball players tossing a ball between them while standing on ice. The momentum of the ball being tossed back and forth translates into the particles moving away from each other.

This always raises the question: how the heck are we supposed to understand forces that cause particles to attract? (For example, an electron and a positron.) The cartoon picture doesn’t make sense anymore!

There are a few ways to answer this question. First of all, “forces” are classical descriptions of quantum phenomena. In order to properly derive a force, one should find a way to construct the potential energy of a system and see what kind of particle motion causes it to decrease. There is a way to do this from the quantum perspective, and it turns out to give exactly the correct behavior. For those with some background in quantum mechanics, I would suggest the first 30 or so pages of Zee’s textbook.

However, I promised you no calculations. So let me try to motivate this more heuristically. We should recall that classically, in the presence of a force field, momentum is not conserved (the force causes acceleration). Our Feynman rules, however, explicitly require momentum conservation. So we could cast our question in a different way: how can our quantum theory give us any kind of force?

To go from a quantum (virtual particle) picture to a classical (force) picture, we have to somehow include the effect of many quantum particles to generate a macroscopic phenomenon. What actually happens when an electron and a positron are attracted to each other over long distances is something like this:


This almost looks like one of the diagrams we considered above, except now there are several final state photons. The electron and the positron move towards each other by shedding photons off into space. This is precisely what a stranded astronaut would do to get back to her space shuttle: throw a wrench in the opposite direction and let conservation of momentum do its job.

Now you say: “That’s crazy! My physics textbook says that oppositely charged particles attract and that’s it — there’s no mention of a bunch of extra photons.” Well, your physics textbook also says that there is something else: the electromagnetic field. The extra photons in the quantum mechanical picture precisely set up the electromagnetic field in the classical picture! This sounds weird, but this is what we mean when we say that the photon is the quantum carrier of the electromagnetic force: it is the quantum of the electric field.

Now, I have dodged the question of how a force ‘knows’ whether it should be attractive or repulsive. Thus far I’ve only explained why this could happen. The question now is: how do the photons know to be emitted in such a way that the particles attract or repel? I don’t have a simple explanation for this; the most straightforward way to determine it from first principles is to actually do the calculation — which we’ve promised not to do. What we will do is motivate why electron-electron scattering (repulsive force) should be different from electron-positron scattering (attractive force). The simplest way to see that these two processes should behave differently is that they are described by different Feynman diagrams!

Electron-positron scattering is mediated by the diagrams we discussed above:


Whereas electron-electron scattering is described by a different pair of graphs:


To be sure, the diagrams look similar (especially the second in each pair), but the actual calculation (which we won’t do!) gives different results. To get the classical behavior one has to include the emission of photons that become the electromagnetic field. The key result is that the quantum probability amplitude for electron-electron scattering prefers to emit photons in such a way that the particles repel, while the quantum probability amplitude for electron-positron scattering prefers them to attract.

Next time…

Phew! We said quite a lot about QED in this post. Next time we’ll back off of the details and get back to the big picture. We’ll expand our model to include some new particles and see what our Feynman diagrams can tell us. In future posts we’ll work our way (in baby steps) towards the Feynman rules for the Standard Model and we’ll start to see what kind of phenomena physicists hope to observe at the LHC.

Flip, on behalf of the US/LHC blog.

