[Note: the latter part of this post touches on rather technical ideas, though I’ve done my best to make them as transparent as possible to all of our readers. For semi-experts, I’ve included a few references to reviews where I am aware of such literature.]
This week at Cornell we have a very special guest: Nima Arkani-Hamed of the Institute for Advanced Study in Princeton. Nima is one of the eminent theoretical physicists of his generation. This fame has leaked into the popular press: in 2006 Popular Science included him in its fifth annual “Brilliant 10,” in 2007 CNN named him one of the “geniuses who will change your life,” and he’s been featured in articles from The New Yorker to Esquire. His research has touched on many of the possibilities for new physics at the LHC: supersymmetry, extra dimensions, and so-called “Little Higgs” models.
As this semester’s Messenger Lecturer, Nima is giving a series of public talks titled “The Future of Fundamental Physics.” I encourage anyone in the Ithaca area to attend the talks; they are free and open to the public. The public lectures will eventually be available online via CornellCast. The Messenger lectures span all disciplines, but we’ve been lucky to have very well-respected physicists in the recent past: Steven Weinberg in 2007 and Sir Martin Rees in 2005. Further, the most famous set of Messenger Lectures was given by the most famous American physicist of all time: Richard Feynman, in 1964. These were originally recorded by the BBC and subsequently purchased by Microsoft and made public on an interactive website.
In addition to the Messenger Lectures, Nima is also spending the mornings giving talks to the particle theory group about his recent work on the calculation of scattering amplitudes. Since this research program has relevance to the LHC, I wanted to take some time to qualitatively explain what all the fuss is about. (As with many topics that I end up talking about, I’m not an expert on this, so I will try to be conservative in what I say.) Most of this is based on an informal talk that Nima gave this morning, but any errors are purely my own! [I apologize that I will be unable to give proper attribution to all of the players involved; see the cited literature for more complete references.]
One of the things that I’ve been trying to explain in my posts is how to understand the physics of the LHC using Feynman diagrams. This is very nice because it is intuitive and it is how grad students today learn quantum field theory. These diagrams encode rules for calculating quantum probabilities for physical processes. Nima’s approach—and the approach of his colleagues and predecessors—is to look for an alternate way to calculate the probabilities for gluon-gluon interactions.
At the level that we’ve discussed Feynman diagrams on this blog, it is not obvious why we would want or need an alternate method. Recall, however, that to actually calculate a process—not just draw diagrams that tell us what can happen—we have to draw all possible Feynman diagrams, associate a [complex] number to each of them, and then sum these numbers.
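Schematically, and sweeping all of the kinematic details under the rug, the rule looks like this: each diagram contributes a complex number $\mathcal{M}_i$, and the probability for the process comes from squaring the magnitude of the sum,

$$P(\text{in} \to \text{out}) \;\propto\; \Big| \sum_{\text{diagrams } i} \mathcal{M}_i \Big|^2 .$$

This is why the diagrams can interfere with one another, and why every single diagram has to be computed before anything can be squared.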
The process of calculating all of these Feynman diagrams can get very thorny when there are large numbers of particles involved. Consider, for example, the diagrams involving three gluons and two quarks: (these are from Mangano and Parke) where one should also include permutations of the 1, 2, 3 labels. In fact, when you go up to six gluons you end up with 220 diagrams and the calculation contains tens of thousands of algebraic terms! These sorts of diagrams aren’t just theoretical exercises, either: they represent actual backgrounds to processes at the LHC that need to be calculated. Fortunately, many noble theorists took up the cause of finding efficient ways to calculate these so-called scattering amplitudes, and they developed a fantastic toolbox of tricks to make such calculations tractable. (Semi-experts can learn more about the older tricks here and here.) Nima explained that these tricks were originally just a way to get through the honorable tedium of doing such difficult calculations; the fact that they might lead to something bigger (as we will explain below) is an example of how “Nature leaves no good deed unrewarded.”
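To give a flavor of how dramatic the simplification can be, the most famous example is the Parke–Taylor formula. For the special (“maximally helicity violating”) configuration in which gluons $i$ and $j$ have negative helicity and all the others have positive helicity, what would naively be hundreds of diagrams collapses into a single term,

$$A(1^+ \cdots i^- \cdots j^- \cdots n^+) \;\propto\; \frac{\langle i\,j\rangle^4}{\langle 1\,2\rangle \langle 2\,3\rangle \cdots \langle n\,1\rangle},$$

where $\langle a\,b \rangle$ is a “spinor product” built out of the momenta of gluons $a$ and $b$. (This is the schematic color-stripped form of the amplitude; I’m suppressing the coupling and the gauge group factors.)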
Many of these tricks were based on the idea that the Feynman diagram expansion carries a lot of redundancy—for those familiar with field theory, this redundancy is just what we call gauge invariance. (Nima insists that a more accurate term is gauge redundancy.) We briefly mentioned gauge invariance in a previous post. Suffice it to say that this gauge invariance is a key part of our understanding of quantum theory: it tells us how we get forces. Practically, however, when we calculate scattering amplitudes using Feynman diagrams, we end up with a bunch of diagrams that are not individually gauge invariant, but that “miraculously” sum to something which is gauge invariant (as it had to be).
I should say that there are good reasons why all particle physics grad students learn to calculate Feynman diagrams, despite their redundancy:
- For small numbers of external particles (such as one would deal with in a typical first-year grad homework set), Feynman diagram calculations are perfectly tractable with pen and paper (maybe a lot of paper).
- Feynman diagrams connect to our intuition about quantum mechanics: they are manifestly local and manifestly unitary. “Local” means that fundamental particles have point-like interactions with one another. This is important because non-local interactions would violate causality: if we boost into a different reference frame using special relativity, then it looks like we mess up cause and effect relations. “Unitary” means that we conserve probability; we’re happy to deal with quantum probabilities, but those probabilities are meaningless if the probability for something to occur is greater than 100%.
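For the semi-experts: unitarity is the statement that the S-matrix, the big table of amplitudes connecting every possible “in” state to every possible “out” state, satisfies

$$S^\dagger S = 1,$$

which is just the formal way of saying that the probabilities of all possible outcomes add up to 100%.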
As we said above, the trade-off for this manifest locality and unitarity is that each diagram is not gauge invariant, i.e. gauge invariance is a property that “pops out” of doing a redundant calculation. (Historically, people used gauge invariance as a check that their long calculations were correct.) Trying to tease out this gauge invariance is what allowed people calculating scattering amplitudes to develop tricks to simplify their work.
Sometimes, however, calculational tricks can provide deeper insights. In 2004–2005, Britto, Cachazo, Feng, and Witten (BCFW) proved a set of very powerful ‘tricks’ that relate the scattering amplitude for a process with n external states to amplitudes with fewer than n external states. This is really powerful: it allows one to relate more complicated calculations to easier ones. What was really, really interesting about the BCFW relations, however, was that they could be proved using fairly sophisticated twistor methods from topological string theory. Eventually people found “down to earth” proofs that only invoked well-known field theory methods, but Nima suggests that perhaps there’s more to the fancy twistor techniques. (For an introduction for physicists, see these notes from a Cambridge lecture course. For non-experts who want to know what the heck a twistor is, I refer you to the popular literature by Sir Roger Penrose.)
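Very roughly, the BCFW relations work by deforming a pair of external momenta by a complex parameter and using the magic of complex analysis to break the amplitude into smaller on-shell pieces sewn together by a propagator:

$$A_n = \sum_{\text{factorizations}} \hat{A}_L \,\frac{1}{P^2}\, \hat{A}_R,$$

where the hats denote lower-point amplitudes evaluated at the (complex) shifted momenta and $P$ is the total momentum flowing between the two halves. The details don’t matter here; the point is that everything on the right-hand side is a smaller amplitude that is already known.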
The BCFW relations express amplitudes in a very simple way. Instead of tens of thousands of terms that miraculously simplify, the BCFW procedure gives a handful of terms (say, 3) which are all manifestly gauge invariant. The cost, however, is that these terms are no longer local! Of course the terms sum up to something which is local, but now locality is something that “pops out” of the BCFW calculation, just as gauge invariance “popped out” of the Feynman diagram calculation. As a bonus, the BCFW method makes complicated scattering amplitudes much, much easier to calculate.
What is even more interesting about this method—which, I should say, is really a combination of the BCFW relations with other “maximally helicity violating” (MHV) techniques—is what happens when we look at a particular theory called N=4 super Yang-Mills (SYM). This is a type of gauge theory that does not itself describe the real world, but it ends up being much simpler to work with because of its symmetry. In N=4 SYM, the BCFW terms in a scattering amplitude exhibit an additional symmetry: Yangian symmetry. This is a combination of conformal symmetry and dual conformal symmetry (never mind what these are!) that one cannot see using traditional field theory methods. Further, when cast in these seemingly “unnecessarily fancy” twistor variables, these scattering amplitudes seem to sidestep the idea of “space-time” altogether. This is not to say that spacetime doesn’t exist, but it’s something that “pops out” of the theory in the same way that locality popped out.
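For the curious: the seed of all of these methods is the fact that the momentum $p$ of a massless particle, which satisfies $p^2 = 0$, can be factorized into a pair of two-component spinors,

$$p^{\alpha\dot\alpha} = \lambda^\alpha \tilde\lambda^{\dot\alpha},$$

and twistor variables repackage this spinor data one step further. In these variables an amplitude makes no explicit reference to positions in spacetime, which is what I mean by “sidestepping” space-time.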
Now we get to what Nima proposes as a grand motivation for this entire program. Forget everything about scattering amplitudes; let’s talk about one of the big, big, big questions in theoretical physics. We know that there are deep theoretical problems when we try to combine general relativity and quantum theory; these are beyond the scope of this post, but part of the problem has to do with not having honest-to-goodness observables. Scattering amplitudes represent honest-to-goodness observables when we don’t take general relativity into account (i.e. observables in flat space). Nima’s hope is that learning how to calculate scattering amplitudes in novel ways may provide hints for the big question of how we might complete our theory to incorporate both general relativity and quantum field theory.
There is a precedent for this. Nima suggests thinking about physics before quantum mechanics. Well before we knew about quantum theory, we had a perfectly good theory of classical, deterministic Newtonian mechanics. Even in Newtonian mechanics, however, there were some very difficult things to calculate. Looking for a way to solve these problems, the classical physicists of the 1800s developed a new formulation of Newtonian mechanics that was totally equivalent but took a completely different form: the principle of least action. A good example of a problem that is easily set up with least action—but that is very awkward using Newton’s laws—is the double pendulum.
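To see why, here is the entire input for the double pendulum (masses $m_1, m_2$ on rods of length $\ell_1, \ell_2$, with angles $\theta_1, \theta_2$ measured from the vertical): one writes down the kinetic energy minus the potential energy,

$$L = \tfrac{1}{2}(m_1{+}m_2)\ell_1^2\dot\theta_1^2 + \tfrac{1}{2}m_2\ell_2^2\dot\theta_2^2 + m_2\ell_1\ell_2\dot\theta_1\dot\theta_2\cos(\theta_1{-}\theta_2) + (m_1{+}m_2)g\ell_1\cos\theta_1 + m_2 g\ell_2\cos\theta_2,$$

and the equations of motion follow by turning a crank. Doing the same with Newton’s laws means carefully tracking the constraint forces along each rod. (To be precise, the equations of motion are easy to derive this way; actually solving them still requires a computer, since the double pendulum is chaotic.)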
The principle of least action, or Lagrangian/Hamiltonian mechanics, is now a staple of every undergraduate physics education. The neat thing about it is that, unlike Newton’s laws, it is not manifestly deterministic—determinism is something that “pops out.” This happens because the principle of least action is based on the idea that one should look at all possible paths that a system can take (even the non-deterministic ones!) and select the one that extremizes (usually minimizes) a certain quantity called the action. This is a totally classical formulation that is nothing more and nothing less than Newton’s laws… and yet it sounds eerily like quantum mechanics! Indeed, the principle of least action is the easiest way to “promote” classical mechanics to quantum mechanics. It’s not the same thing as quantum mechanics—the theory is still equivalent to Newton’s laws—but it’s a formulation of Newtonian mechanics that uses the language, and highlights the salient aspects, of quantum theory. Of course, this isn’t how things worked out historically—such things are sometimes only clear in hindsight.
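In equations: one assigns to every conceivable path $q(t)$ a number called the action, $S[q] = \int L\, dt$, and the classical path is the one for which the action is stationary, $\delta S = 0$. Feynman’s version of quantum mechanics then says that the quantum amplitude is a sum over all paths,

$$\mathcal{A} \;\propto\; \sum_{\text{paths } q(t)} e^{i S[q]/\hbar},$$

and when the action is huge compared to $\hbar$, the wildly spinning phases cancel everywhere except near the stationary path. That is exactly how determinism “pops out” in the classical limit. The lesson is this: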
Sometimes having a completely equivalent reformulation of a given and accepted theory can provide hints for the “more complete” theory that subsumes it.
The reason for this is clear: in some limit quantum mechanics had to reproduce Newtonian mechanics, so one can hope to reformulate Newtonian mechanics in such a way that one is [luckily] already working with the classical limit of quantum theory. With this “classical limit of quantum theory” in hand, one can then make the leap to a fully non-deterministic quantum theory at the point where all of the language and all of the physics makes the transition as easy as possible.
Nima hopes that a similar process is happening with scattering amplitudes. These novel techniques might point to a radical reformulation of how we calculate scattering amplitudes, in such a way that we get hints about what kind of theory joins general relativity with quantum mechanics. One reason to be hopeful is that locality is precisely the kind of thing that we expect to break down in such a theory. Thus, just as determinism was “emergent” in the least-action formulation of classical mechanics and ended up being discarded in quantum mechanics, we might suspect that locality is only “emergent” in general relativity and quantum theory, and will actually be discarded in whatever theory subsumes them.
If (and this is a big if) this program is indeed working in the correct direction, then we are still quite a way from understanding the big picture of how to make the next step. But in that case, there is certainly a revolution somewhere on the horizon. On the other hand, as Nima himself will say, there’s no guarantee that this scattering amplitude program (despite its hints) is the correct direction. But even in that case, these techniques are still valuable for the calculational power they provide.
All I can say is that—as is often the case with topics that Nima works on—he’s certainly piqued my interest, and I’m very happy to have this week to learn directly from one of the most influential theoretical physicists of our time.