Here’s another dispatch from the intensity frontier—that is, the ongoing Fundamental Physics of the Intensity Frontier workshop in Rockville, MD… and Twitter. This one ties in to our exploration of Feynman diagrams, too.
Today’s “charged lepton working group” session featured an excellent experimental overview talk by Chris Polly of Fermilab on the prospects for “muon g-2” (“g minus 2”) experiments. You can find the pdf here and the rest of the charged lepton agenda here. After explaining what g-2 is, I’d like to discuss one of Chris’ especially nice slides, where he summarized the history of the heroic g-2 calculation.
Unless otherwise noted, all images from this post are from Chris’ talk (and see further references therein!) with his tacit permission.
You may remember from high school physics that moving electric charges (i.e. currents) generate a magnetic field. Further, recall that electrons spin. Even though we think of electrons as point-like and even though this “spin” is completely quantum mechanical, this also generates a magnetic field. This means that fundamental charged particles like electrons are kind of like little bar magnets. More importantly, electrons in a magnetic field will wobble (“precess”) just like a gyroscope. So while my image of an electron is this:
Chris wants us to think about it more like this:
Don’t worry about the details; the point is that this is an object with quantum spin that behaves precisely as one would expect classically. Now the big question: so what?
The response of our “electron gyroscope” to a uniform external magnetic field is for it to wobble. The technical term is precession. The sensitivity of the electron to a magnetic field is given by something called the g-factor (related to its ‘magnetic moment’), which just happens to have the distinction of being the most accurately verified prediction in the history of physics. Just for fun, the number is something like,
g = 2.0023193043617.
Look at all those significant figures! That’s the Standard Model showing off. I suspect g stands for gyromagnetic.
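As a numerical aside (this is my own back-of-the-envelope illustration, not something from the talk): the precession rate is set by g. For an electron in a magnetic field B, the spin-precession angular frequency is ω = g·eB/(2mₑ), which works out to roughly 28 GHz per tesla:

```python
import math

# Spin-precession ("Larmor") frequency of an electron in a magnetic field:
#   omega = g * e * B / (2 * m_e)
# Constants are CODATA values; B = 1 tesla is just an illustrative field.
g = 2.0023193043617        # electron g-factor quoted above
e = 1.602176634e-19        # elementary charge (C)
m_e = 9.1093837015e-31     # electron mass (kg)
B = 1.0                    # magnetic field strength (T)

omega = g * e * B / (2 * m_e)   # angular frequency (rad/s)
f = omega / (2 * math.pi)       # ordinary frequency (Hz)
print(f"precession frequency: {f / 1e9:.1f} GHz")  # about 28 GHz per tesla
```

Measuring this wobble frequency very precisely is, in essence, how g-2 experiments work.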
Precession in a magnetic field is exhibited by all spinning charged particles, including the electron’s heavier sibling, the muon. The g-factor of the muon is slightly different from that of the electron due to quantum effects. The experimentally measured value for the muon g is
g = 2.00233184178
This observed muon g-factor matches the theoretical calculation to roughly ten decimal places. Chris had a very nice slide in which he dissected the history of the heroic calculation effort behind this ten-decimal-place theory prediction.
The first part of the story comes from Dirac, one of the fathers of quantum mechanics, who predicted the leading factor of 2. This value is “almost classical,” and the two zeros after the decimal point represent the smallness of the quantum corrections.
After this brief “desert” beyond Dirac’s prediction, the first correction comes from Schwinger, who calculated the leading contribution from quantum electrodynamics. It is represented by a Feynman diagram that corrects the usual electron-electron-photon vertex:
(Pop quiz! You should have expected that the relevant diagram has something to do with the photon coupling to the charged leptons since the photon is the force particle for the electromagnetic field.)
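For the record (this is the standard textbook QED result, not spelled out in the slide itself), Schwinger’s celebrated one-loop correction is:

```latex
a \;\equiv\; \frac{g-2}{2} \;=\; \frac{\alpha}{2\pi} \;\approx\; \frac{1}{2\pi \cdot 137.04} \;\approx\; 0.00116
```

which already accounts for the leading “0.00116…” appearing in both g-factors quoted above.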
The next advance comes from Tom Kinoshita, who calculated higher order corrections within quantum electrodynamics. In fact, he continues to work on such calculations at tenth order in the electric coupling—at this level there are 12,672 different Feynman diagrams contributing!
The real difficulty comes from quantum corrections that involve intermediate hadrons. Such diagrams come from fluctuations in which a virtual photon splits into a virtual quark/anti-quark pair, which may then turn into a meson/anti-meson pair before annihilating back into a virtual photon. Recall that at low energies the theory of quarks and gluons is very non-perturbative; thus the contribution from these virtual hadrons is actually the dominant source of theoretical uncertainty in the calculation of this quantity.
Finally, there are corrections from the exchange of virtual heavy gauge bosons, the W and Z. Because these particles are much heavier than the characteristic energy scale of the process (the muon mass), their quantum effects are highly suppressed.
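To see roughly how suppressed (a back-of-the-envelope estimate of my own, using the standard particle masses): heavy-particle contributions to a scale like the squared ratio of the muon mass to the heavy mass,

```latex
\left(\frac{m_\mu}{m_W}\right)^2 \;\approx\; \left(\frac{0.106\ \text{GeV}}{80.4\ \text{GeV}}\right)^2 \;\approx\; 2\times 10^{-6}
```

which is why these electroweak pieces only matter once the calculation reaches such exquisite precision.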
Okay, great. This is a very well calculated object. So what? Here’s the exciting part. If we rewrite this in terms of the quantity a = (g − 2)/2, the “anomalous magnetic moment” (which contains the same information as g), we find:
The Standard Model prediction does not agree with the experimental observation. Of course, the relevant question is: by how much? The answer turns out to be around 3.6 standard deviations! In other words, if you’re the type of person who gets excited about things quickly, this is something which seems very intriguing. This has been a well-known result for some time, and people would like to keep checking it with even more precise experimental measurements and theoretical calculations. If the discrepancy persists, it starts to look like a very strong hint of new physics from the intensity (low energy) frontier!
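To make the comparison concrete, here is a minimal numerical sketch. The extraction of a from g uses the measured muon g-factor quoted above; the Standard Model central value and the uncertainties are representative BNL-era numbers that I am supplying myself, not figures taken from Chris’ slides:

```python
import math

# Extract the anomaly a = (g - 2) / 2 from the measured muon g-factor above.
g_exp = 2.00233184178
a_exp = (g_exp - 2) / 2          # the "anomalous" part of the magnetic moment
print(f"a_mu = {a_exp:.11f}")    # prints a_mu = 0.00116592089

# Compare with the Standard Model prediction.  Central values and
# uncertainties below, in units of 1e-11, are representative BNL-era
# numbers from the literature (my own insertion, not from the talk).
a_exp_e11, err_exp = 116_592_089, 63   # experiment
a_sm_e11, err_sm = 116_591_802, 49     # Standard Model

delta = a_exp_e11 - a_sm_e11           # difference, units of 1e-11
sigma = math.hypot(err_exp, err_sm)    # combine uncertainties in quadrature
print(f"discrepancy: {delta / sigma:.1f} sigma")  # about 3.6 sigma
```

The punchline is that the combined uncertainty is small enough that the difference stands out at the several-sigma level.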
Chris opened his talk with the questions he was asked by his family over Thanksgiving:
- So what are y’all doing up at that lab [Fermilab]?
- Why would ya do that?
That’s exactly how Chris phrased it in his talk, mentioning his Missouri heritage. Being an excellent science communicator as well as an excellent scientist, Chris explained that his collaboration is working on a more precise measurement of the (g-2) value of the muon, which is related to its gyromagnetic ratio. When he explained, however, that this is already the most accurately measured physical quantity of all time, his family would again scratch their heads and wonder why it is worth measuring yet again.
This really gets to the heart of the intensity frontier: by measuring very precisely known quantities down to the level of their theoretical precision, we can look for the quantum (virtual!) effects of new physics. The point isn’t that we’re pushing from ten to eleven decimal places of precision, but rather that the next decimal place will go a long way toward confirming (or refuting) that the observed discrepancy is indeed a signal of new physics.
My thanks to Chris Polly for sharing his slides and for an excellent talk. All credit for the information herein goes to him… except for any mistakes, which are solely the fault of the blogger. 🙂