Flip Tanedo | USLHC | USA

Nima Arkani-Hamed’s Messenger Lectures

Wednesday, October 20th, 2010

Hi everyone! I just wanted to point out that Nima Arkani-Hamed’s Messenger Lectures on “The Future of Fundamental Physics” are now available online from Cornell. I blogged a little about Nima some time ago to discuss the big picture of his current research program, but the Messenger Lectures are intended for a general audience. I think the readers of this blog will particularly enjoy it; I know that many of the physicists in the audience also gleaned a lot from his particular presentation.

(I’m shamelessly using the same flyer image I used before—the grad students here spent a lot of time putting up these flyers all over the university and town.)

I should note that at the moment Lecture 2 is not available online; I’m told that there were some problems with the audio recording which they are trying to fix. (Unfortunately this might be the lecture that readers of the blog would particularly enjoy since Nima spends a little time talking about Feynman diagrams and the rules for the Standard Model!) Hopefully it will be put up soon. [28 Oct: Lecture 2 is now posted, though there are still some audio problems in the beginning of the lecture. The CornellCast team is continuing to work on this and will try to post a better version in a few weeks.]

Nima also gave a series of technical lectures on the current status of his research on the calculation of scattering amplitudes. The grad students here recorded these and we’re currently processing the files. The lectures were particularly nice because they were a compact introduction to the field and we hope to eventually make these available as well. In the meanwhile, Nima and Freddy Cachazo gave a similar set of technical lectures at the Perimeter Institute and those are already available online through the PIRSA archive.

Enjoy the lectures!

Scattering Amplitudes and beyond

Monday, October 4th, 2010

[Note: the latter part of this post touches on rather technical ideas, though I’ve done my best to make them as transparent as possible to all of our readers. For semi-experts, I’ve included a few references to reviews where I am aware of such literature.]

This week at Cornell we have a very special guest: Nima Arkani-Hamed of the Institute for Advanced Study in Princeton. Nima is one of the eminent theoretical physicists of his generation. This fame has leaked into the popular press: in 2006 Popular Science named him to its fifth annual “Brilliant 10” list, in 2007 CNN named him one of the “geniuses who will change your life,” and he’s been featured in articles from The New Yorker to Esquire. His research has touched on many of the possibilities of new physics at the LHC: supersymmetry, extra dimensions, and so-called “Little Higgs” models.

As this semester’s Messenger Lecturer, Nima is giving a series of public talks titled “The Future of Fundamental Physics.” I encourage anyone in the Ithaca area to attend the talks; they are free and open to the public. The public lectures will eventually be available online via CornellCast. The Messenger lectures span all disciplines, but we’ve been lucky to have very well respected physicists in the recent past: Steven Weinberg in 2007 and Sir Martin Rees in 2005. Further, the most famous set of Messenger Lectures was given by the most famous American physicist of all time: Richard Feynman in 1964; these were originally recorded by the BBC and subsequently purchased by Microsoft and made public on an interactive website.

In addition to the Messenger Lectures, Nima is also spending the mornings giving talks to the particle theory group about his recent work on the calculation of scattering amplitudes. Since this research program has relevance to the LHC, I wanted to take some time to qualitatively explain what all the fuss is about. (As with many topics that I end up talking about, I’m not an expert on this, so will try to be conservative in what I say.) Most of this is based on an informal talk that Nima gave this morning, but any errors are purely my own! [I apologize that I will be unable to give proper attribution to all of the players involved, see the cited literature for more complete references.]

One of the things that I’ve been trying to explain in my posts is how to understand the physics of the LHC using Feynman diagrams. This is very nice because it is intuitive and it is how grad students today learn quantum field theory. These diagrams encode rules for calculating quantum probabilities for physical processes. Nima’s approach—and the approach of his colleagues and predecessors—is to look for an alternate way to calculate the probabilities for gluon-gluon interactions.

At the level that we’ve discussed Feynman diagrams in this blog, it is not obvious why we would want or need an alternate method. Recall, however, that to actually calculate a process—not just draw diagrams that tell us what can happen—we have to draw all possible Feynman diagrams, associate to each of them a [complex] number, and then sum these numbers. The process of calculating all such Feynman diagrams can get very thorny when there are large numbers of particles involved. Consider, for example, the diagrams involving three gluons and two quarks:

(these are from Mangano and Parke) where one also should include permutations over the 1,2,3 labels. In fact, when you go up to a diagram with six gluons you end up with 220 diagrams and the calculation contains tens of thousands of algebraic terms! These sorts of diagrams aren’t just theoretical exercises, either: they represent actual backgrounds to processes at the LHC that need to be calculated. Fortunately, many noble theorists took up the cause of finding efficient ways to calculate these so-called scattering amplitudes and they developed a fantastic toolbox of tricks to make such calculations tractable. (Semi-experts can learn more about the older tricks here and here.) Nima explained that these tricks were originally just a way to get through the honorable tedium of doing such difficult calculations; the fact that they might lead to something bigger (as we will explain below) is an example of how “Nature leaves no good deed unrewarded.”
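
To give a flavor of where these tricks lead (this example is my own aside, not something from Nima’s talk or this post): the classic result in this business is the Parke-Taylor formula, which packages the sum of all those tree-level gluon diagrams, for the so-called “maximally helicity violating” configuration of gluon spins, into a single compact term built from “spinor products” ⟨ij⟩ of the gluon momenta:

```latex
% Parke-Taylor formula: the color-ordered tree amplitude for n gluons where
% gluons i and j carry negative helicity and all the others carry positive helicity
% (stripped of coupling constants and the overall momentum-conserving delta function)
A_n\bigl(1^+,\dots,i^-,\dots,j^-,\dots,n^+\bigr)
  \;=\;
  \frac{\langle i\,j\rangle^{4}}
       {\langle 1\,2\rangle \langle 2\,3\rangle \cdots \langle n\,1\rangle}
```

One line, no matter how many hundreds of diagrams went into the sum; this kind of simplicity is completely invisible when you stare at the diagrams one at a time.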

Many of these tricks were based on the idea that the Feynman diagram expansion carries a lot of redundancy—for those familiar with field theory, this redundancy is just what we call gauge invariance. (Nima insists that a more accurate term is gauge redundancy.) We briefly mentioned gauge invariance in a previous post. Suffice it to say that this gauge invariance is a key part of our understanding of quantum theory: it tells us how we get forces. Practically, however, when we calculate scattering amplitudes using Feynman diagrams, we end up with a bunch of diagrams that are not individually gauge invariant, but that “miraculously” sum to something which is gauge invariant (as it had to be).

I should say that there are good reasons why all particle physics grad students learn to calculate Feynman diagrams, despite their redundancy:

  1. For small numbers of external particles (such as one would deal with in a typical first year grad homework), Feynman diagram calculations are perfectly tractable using pen and paper (maybe a lot of paper).
  2. Feynman diagrams connect to our intuition about quantum mechanics: they are manifestly local and manifestly unitary. “Local” means that fundamental particles have point-like interactions with one another. This is important because non-local interactions would violate causality: if we boost into a different reference frame using special relativity, then it looks like we mess up cause and effect relations. “Unitary” means that we conserve probability; we’re happy to deal with quantum probabilities, but those probabilities are meaningless if the probability for something to occur is greater than 100%.

As we said above, the trade off for this manifest locality and unitarity is that each diagram is not gauge invariant, i.e. gauge invariance is a property that “pops out” of doing a redundant calculation. (Historically people used gauge invariance as a check that their long calculations were correct.) Trying to tease out this gauge invariance allowed people calculating scattering amplitudes to develop tricks to simplify their work.

(more…)

World of Glue

Sunday, October 3rd, 2010

I’m a bit overdue for my next post introducing the Standard Model through Feynman diagrams, so let’s continue our discussion of the theory of the subnuclear “strong” force, quantum chromodynamics, or QCD. This is the force that binds quarks into protons and neutrons, and the source of many theoretical and experimental headaches at the LHC.

For a listing of all the posts in this series, see the original post here.

Last time we met the quarks and “color,” a kind of charge that can take three values: red, green, and blue. Just as the electromagnetic force (which has only one kind of charge) pulls negatively charged electrons to positively charged protons to form electrically neutral atoms, QCD forces the quarks to be confined into color-neutral bound states:

  • Three quarks, one of each color. These are called baryons. Common examples are the proton and neutron.
  • A quark and an anti-quark carrying a color and its corresponding anti-color (e.g. a red quark and an anti-red anti-quark). These are called mesons.

Collectively baryons and mesons are called hadrons. Because the strong force is so strong, these color-neutral bound states are formed very quickly and they are all we see in our detectors. Also like the electromagnetic force, the strong force is mediated by a particle: a boson called the gluon, represented in plush form below (courtesy of the Particle Zoo)

Gluons are so named because they glue together mesons and baryons into bound states. In Feynman diagrams we draw gluons as curly lines. Here’s our prototypical gluon-quark vertex:

We see that the gluon takes an incoming quark of one color and turns it into an outgoing quark of another color. According to the rules associated with Feynman diagrams, we can move lines around (always keeping the orientation of arrows relative to the vertex the same!) and interpret this vertex as

  • A red quark and a blue anti-quark (with color anti-blue) annihilate into a gluon
  • A red quark emits a gluon and turns into a blue quark
  • A red quark absorbs a gluon and turns into a blue quark
  • A gluon decays into a blue quark and a red anti-quark

As a simple homework, you can come up with the two interpretations that I’ve left out.

Let us make the following caveats:

  1. The quarks needn’t have different colors, but one arrow has to be pointing in while the other is pointing out. (The quarks carry electric charge, and remember that one way to understand the arrows is as the flow of electric charge.)
  2. The quarks involved in the vertex can have any flavor (up, down, strange, charm, bottom, top), but both must have the same flavor. This is because QCD is “flavor blind,” it treats all of the flavors equally and doesn’t mix them up. (Compare this to the W boson!)
  3. The interpretations for the vertex above are all correct, except that none of them are allowed kinematically. This is because the gluon is massless. In other words, conservation of energy and momentum are violated if you consider those processes. This is for precisely the same reason that we couldn’t have single-vertex photon diagrams in QED.

Homework: up and down quark scattering by gluons. Draw all the diagrams for the following processes that are mediated by a single gluon (i.e. only contain a single internal gluon line):

  1. uu → uu (one diagram)
  2. u anti-u → u anti-u (two diagrams)
  3. u anti-u → d anti-d (one diagram)
  4. u anti-d → u anti-d (one diagram)

You may assign colors as necessary (explain why it matters or does not matter). Why is it impossible to draw a u anti-d → d anti-u diagram (note that this process is allowed if you replace the gluon by a W)? [Hint: you might want to review some of our discussions about QED and muons; the diagrams are all very similar with photons replaced by gluons!]

We can continue to make analogies to QED. We explained that the QED vertex had to be charge neutral: an arrow pointing inwards carries some electric charge, while the arrow pointing outward must carry the same electric charge. The gluon vertex above is electrically neutral in this sense, but does not seem to be color neutral! It brings in some red-charge while splitting out some blue charge.

The resolution is that gluons themselves carry color charge! This is now very different from QED. It’s a little bit like the W boson, which carried electric charge and so could interact with photons. We can see from the vertex above that the gluon must, in fact, carry two charges: in that example the gluon carries an incoming blue charge and an outgoing red charge; or, in other words, charge blue and charge anti-red. These are the charges that it must carry in order for the vertex to be color neutral.

Thus there are actually many types of gluons which we can classify according to the color and anti-color which they can carry. Since there are three colors (and correspondingly three anti-colors), we expect there to be nine types of gluons. However, for somewhat mathematical reasons, it turns out that there are only eight.

The mathematical reason is that the gluons are in the adjoint representation of SU(3) and so number only 3² − 1 = 8. They are associated with the space of traceless Hermitian 3×3 matrices. The “missing” gluon is the quantum superposition of “red/anti-red + blue/anti-blue + green/anti-green.” If that’s all gibberish to you, then that’s okay—these are little details that we won’t need.
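
If you’d like to see that counting explicitly, here is a quick numerical check (a sketch of my own, using the standard Gell-Mann matrices as the basis of traceless Hermitian 3×3 matrices; none of this is needed for the rest of the post):

```python
import numpy as np

i = 1j
# The eight standard Gell-Mann matrices: a basis for traceless Hermitian 3x3 matrices.
gell_mann = [
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, -i, 0], [i, 0, 0], [0, 0, 0]]),
    np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex),
    np.array([[0, 0, -i], [0, 0, 0], [i, 0, 0]]),
    np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, -i], [0, i, 0]]),
    np.array([[1, 0, 0], [0, 1, 0], [0, 0, -2]], dtype=complex) / np.sqrt(3),
]

for m in gell_mann:
    assert np.allclose(m, m.conj().T)   # Hermitian
    assert abs(np.trace(m)) < 1e-12     # traceless

# A Hermitian 3x3 matrix has 9 real parameters; demanding zero trace removes one,
# leaving 3^2 - 1 = 8 independent gluons.
print(len(gell_mann))  # 8

# The "missing" ninth combination, red/anti-red + blue/anti-blue + green/anti-green,
# is proportional to the identity matrix, which is not traceless:
print(np.trace(np.eye(3)))  # 3.0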

The fact that gluons themselves carry color charge means something very important: gluons feel the strong force, which means that gluons interact with other gluons! In other words, we can draw three- and four- gluon vertices:

There are no five or higher vertices, but as homework you can convince yourself that from these vertices you can draw diagrams with any number of external gluons. In fancy schmancy mathematical language, we say that QCD is non-Abelian because the force mediators interact with themselves. (In fact, the weak force is also non-Abelian, but its story is one of broken dreams which we will get to when we meet the Higgs.)

Now, you might wonder: if the strong force is so strong that gluons bind quarks together, and if gluons also interact with themselves, is it possible for gluons to bind each other into some kind of bound state? The answer is yes, though we have yet to confirm this experimentally. The bound states are called glueballs and can be pretty complicated objects. Theoretically we have good reasons to believe that they should exist (and eventually decay into mesons and baryons), and very sophisticated simulations of QCD have also suggested that they should exist… but experimentally they are very hard to see in a detector and we have yet to confirm any glueball signature. Very recently some theorists have suggested that there might have been hints at the BES collider in Beijing.

Glueball hunting, however, is something of a lower-energy frontier since our predictions for the lightest glueball masses are around 1.7 GeV; so don’t expect anything from the LHC on this. It’s worth remarking, on the other hand, about a mathematical issue regarding a world of glue. (This is a bad pun on a computer game that I like.) The question is: if the universe had no matter particles and only gluons—which would then form glueball bound states—are there any massless particles observable in nature? Sure, the gluons themselves are massless, but they’re not observable states; only glueballs are observable. Everything we know about QCD—which isn’t the whole story—suggests that glueballs always have some non-zero mass, but we don’t know how to prove this. This question, in fact, is one of the Clay Mathematics Millennium Prize Problems, making it literally a million dollar question.

Next time: the wonderful world of hadrons and what we can actually detect at the LHC.

Cheers,
Flip, US LHC blog

The W mass from Fermilab

Thursday, September 16th, 2010

With the LHC running and experimentalists busy taking real data, one of the things left for theory grad students like me is to learn how to interpret the plots that we hope will be ready next summer.

A side remark: the LHC will keep taking physics data into next year, but in 2012 it will shut down for a year to make the adjustments necessary to ramp up to its full 7 TeV/beam energy. Optimistic theorists are hoping that the summer conferences before the shutdown will share plots that offer some clues about the new physics we hope to see in the subsequent years.

The most basic feature we can hope for is a resonance, as we described when we met the Z boson. The second most basic signal of a new particle is a little more subtle: it’s a bump in the transverse mass distribution. I was reminded of this because of a new conference proceedings paper that appeared on the arXiv last night (1009.2903) presenting the most recent fit to the W boson mass from the CDF and D0 collaborations at the Tevatron.

The result isn’t “earth shattering” by any stretch. We’ve known that the W mass is around 80 GeV for quite some time. The combined result with the most recent data is really an update on the precision with which we measure the value, which matters because the W mass is so important for determining other Standard Model relations.

Here’s the plot:

It’s not the prettiest looking plot, but let’s see what we can learn before going into any physics. In fact, we won’t go into very much physics in this post. The art of understanding plots can be subtle and is worth a discussion in itself.

  • On the x-axis is some quantity called mT. The plot tells us that it is measured in GeV, which is a unit of energy (and hence mass). So somehow the x axis is telling us about the mass of the W.
  • On the y-axis is “Events per 0.5 GeV.” This tells us how many events they measured with a given mT value.
  • What is the significance of the “per 0.5 GeV” on the y-axis? This means, for example, that they count the number of events with mT between 70 GeV and 70.5 GeV and plot that number on the graph. This is called “binning” because it sets how many “bins” you divide your data set into. If you have very small bins then you end up with more data points along the x axis, but far fewer events per bin (worse statistics per bin). On the other hand, if you have very large bins you end up with lots of data per bin, but less ability to determine the overall shape of the plot. (There’s a tiny toy example of this trade-off right after this list.)
  • The label for the plot tells us that we are looking at events where a W boson is produced and decays into a muon and a neutrino. This means (I assume?) that the experimentalists have already subtracted off the “background” events that mimic the signature of a muon and a neutrino in the detector. (In general this is a highly non-trivial step.)
  • The blue crosses are data: the center of the cross is the measured value and the length of the bars gives the error.
  • The values under the plot give us the summary of the statistical fit: it tells us that the W mass is 80.35 GeV and that the χ²/dof is reasonable. This latter value is a measure of how consistent the data is with your theory. Any value near 1 is pretty good, so this is indeed a good fit.
  • The red line is the expected simulated data using the statistical fit parameters. We can see visually that the fit is very good. You might wonder why it is necessary to simulate data—can’t the clever theorists just do the calculation and give an explicit plot? In general it is necessary to simulate data because of QCD which leads to effects that are intractable to calculate from first principles, but this is a [very interesting] story for another time.
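
Here is the binning trade-off in a tiny toy example (the Gaussian fake data and the numbers are invented purely for illustration; this is not the actual CDF/D0 data):

```python
import numpy as np

rng = np.random.default_rng(42)
fake_mT = rng.normal(loc=80.4, scale=5.0, size=2000)  # pretend mT values, in GeV

for width in (0.5, 5.0):  # bin width in GeV
    bins = np.arange(60.0, 100.0 + width, width)
    counts, _ = np.histogram(fake_mT, bins=bins)
    print(f"bin width {width} GeV: {len(counts)} bins, "
          f"about {counts.mean():.0f} events per bin on average")

# Narrow bins trace the shape of the distribution better, but each bin holds fewer
# events and so has a larger statistical error; wide bins have better statistics per
# bin but blur out the shape.
```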

Now for the relevant question: what exactly are we plotting? In order to answer this, we should start by thinking about the big picture. We smash together some particles, somehow produce a W boson, which decays into a muon and a neutrino. We would like to measure the mass of the W boson from the “final states” of the decay. The primary quantities we need to reconstruct the W mass are the energies and momenta of the muon and neutrino. Then we can use energy and momentum conservation to figure out the W‘s rest mass. (There’s some special relativity involved in here which I won’t get into.)

Homework: for those of you with some background in high school or college physics, think about how you would solve for the W mass if you had a measurement for the muon energy and momentum. For this “toy calculation” you don’t need special relativity, just use E = (rest mass energy) + (kinetic energy) and assume that the neutrino is massless. [The discussion below isn’t too technical, but it will help if you think about this problem a little before reading on.]

The first point is that we cannot measure the neutrino: it’s so weakly interacting that it just shoots out of our detector without any direct signals… like a ninja. That’s okay! Conservation of energy and momentum tells us that it is sufficient to determine the energy and momentum of the muon. We know that the neutrino momentum has to be ‘equal and opposite’ and from this we can reconstruct its energy (knowing that it has negligible mass).

… except that this too is a little simplified. This would be absolutely true if the W boson were produced at rest, such as at electron-positron colliders like LEP or SLAC. However, at the Tevatron we’re colliding protons and antiprotons…. which means we’re accelerating protons and antiprotons to equal energies in opposite directions, but the things that actually collide are quarks, which each carry an unknown fraction of the proton’s energy and momentum! Thus the W boson could end up having some nonzero momentum along the axis of the beam and this spoils our ability to use a simple calculation based on energy/momentum conservation to determine the W mass.

This is where things get slick—but I’ll have to be heuristic because the kinematics involved would be more trouble than they’re worth. The idea is to ignore the momentum along the beam direction: it’s worthless information because we don’t know what the initial momentum in that direction was. We only look at the transverse momenta, which we know should be conserved and was initially zero.

If we use conservation of energy/momentum on only the transverse information, we can extract a “fake” mass. Let us call this the transverse mass, mT. (Technically this is not yet the “transverse mass,” but since we’re not giving rigorous mathematical definitions, it won’t matter.) This fake mass is exactly equal to the real mass when the W has no initial longitudinal momentum. This is a problem: we have no way to know the initial longitudinal momentum for any particular event… we just know that sometimes it is close to zero and other times it’s not.

The trick, then, is to take a bunch of events. Up to this point, in principle you didn’t need more than one event to determine the W mass as long as you knew that the one event had zero longitudinal momentum. Now that we don’t know this, we can plot a distribution of events. For the events where the longitudinal momentum of the W is zero, we expect that our transverse mass measurements are close to the true W mass. For the events with a non-negligible longitudinal momentum, part of the “energy” of the W goes into the longitudinal direction which we’re ignoring, and thus we end up measuring a transverse mass which is less (never greater!) than the true W mass.

Thus we have a strategy: if we can measure a bunch of events, we can look at the distribution and the largest possible value that we measure should represent those events with the smallest longitudinal momentum, and hence should give the correct W mass.
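
Here’s a little toy Monte Carlo (a sketch of my own, not the actual experimental analysis) that shows why this works. It simulates W → μν decays in the W rest frame; since a boost along the beam leaves the transverse momenta untouched, the transverse mass computed below would come out the same no matter what the unknown longitudinal momentum was, and it never exceeds the true W mass:

```python
import numpy as np

m_w = 80.4  # the "true" W mass used in this toy, in GeV (just an assumed input)
rng = np.random.default_rng(1)

# Isotropic decay in the W rest frame (ignoring the muon mass and spin correlations):
cos_theta = rng.uniform(-1, 1, 200_000)
sin_theta = np.sqrt(1 - cos_theta**2)

# The muon and neutrino are back-to-back with |p| = m_W/2, so they share the same
# transverse momentum and are 180 degrees apart in azimuth.
pt = (m_w / 2) * sin_theta
m_T = np.sqrt(2 * pt * pt * (1 - np.cos(np.pi)))  # = 2*pt = m_W * sin(theta)

print(m_T.max())  # never exceeds 80.4
counts, edges = np.histogram(m_T, bins=np.arange(0, 86, 1))
print(counts[75:82])  # counts rise toward a sharp edge at m_W and vanish above it
```

The real distribution in the CDF/D0 plot is smeared by the detector and by the extra transverse kicks described in the next paragraph, which is why the measured edge is “sharp-ish” rather than perfectly sharp.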

This is almost right. It turns out that there are a few quantum effects that spoil this. During the production of the W, nature can conspire to pollute even the transverse momentum data: the W might emit a photon that shifts its transverse momentum a little, or the quarks and gluons might produce some hadrons that also give the W some transverse momentum kick. This ends up smearing out the distribution. It turns out that these can be taken into account in a very clever—but essentially mathematical—way, and the result is the plot above. You can see that the distribution is still smeared out a little bit towards the tail, but that there is a sharp-ish edge at the true W boson mass. This is what experimentalists look for to fit their data and extract the W mass. (For more discussion on the W mass and a CMS perspective, see this post by Tommaso a few months ago.)

I really like this story—there’s a lot of intuition and physics that goes into the actual calculations. It turns out, however, that for the LHC things can get a lot more complicated. Instead of single W bosons, we hope to produce pairs of exotic particles. These can each end up decaying into things that are visible and invisible, just like the muon–neutrino system that the W decays into. However, now that there are two such decays, the kinematics ends up becoming much trickier.

Recently some very clever theorists from Cambridge, Korea, and Florida have made lots of progress on this problem and have developed an industry for so-called “transverse mass” variables. For those interested in the technical details, there’s now an excellent review article (arXiv:1004.2732). [These sorts of analyses will probably not be very important until after the LHC 2012 shutdown when more data can be collected, but they offer a lot of promise for how we can connect models of new physics to data from experiments.]

Cheers,
Flip

Meet the quarks

Tuesday, September 14th, 2010

One of the most important experiments in the history of physics was the Rutherford experiment where “alpha particles” were shot at a sheet of gold foil. The way that the particles scattered off the foil was a tell-tale signature that atoms contained a dense nucleus of positive charge. This is one of the guiding principles of high-energy experiments:

If you smash things together at high enough energies, you probe the substructure of those particles.

When people say that the LHC is a machine colliding protons at 14 TeV, what they really mean is that it’s a quark/gluon collider since these are the subnuclear particles which make up protons. In this post we’ll begin a discussion about what these subatomic particles are and why they’re so different from any of the other particles we’ve met.

(Regina mentioned QCD in her last post—I think “subtracting the effects of QCD,” loosely phrased, is one of the ‘problems’ that both theorists and experimentalists often struggle with.)

This post is part of a series introducing the Standard Model through Feynman diagrams. An index of these entries can be found on the original post. In this post we’ll just go over the matter particles in QCD. (I’m experimenting with more frequent—but shorter—posts.)

A (partial) periodic table for QCD

The theory that describes quarks and gluons is called Quantum Chromodynamics, or QCD for short. QCD is a part of the Standard Model, but for this post we’ll focus on just QCD by itself. Quarks are the fermions—the matter particles—of the theory. There are six quarks, which come in three “families” (columns in the table below):

The quarks have cute names: the up, down, charm, strange, top, and bottom. Historically the t and b quarks have also been called “truth” and “beauty,” but—for reasons I don’t quite understand—those names have fallen out of use, thus sparing what would have been an endless parade of puns in academic paper titles.

The top row (u,c,t) is composed of particles with +2/3 electric charge while the bottom row is composed of particles of -1/3 charge. These are funny values since we’re used to protons and electrons with charges +1 and -1 respectively. On the one hand this is a historical effect: if we measured the quark charges first we would have said that

  • the down quark has charge -1
  • the up quark has charge +2
  • the electron has charge -3
  • and the proton has charge +3

It’s just a matter of how we define one “unit” of charge. However, the fact that the quark and lepton charges have these particular ratios is a numerical curiosity, since it is suggestive (for reasons we won’t go into here) of something called grand unification. (It’s not really as “grand” as it sounds.)

One quark, two quark, red quark, blue quark?

I drew the above diagram very suggestively: there are actually three quarks for each letter above. We name these quarks according to colors: thus we can have a red up quark, a blue up quark, and a green up quark, and similarly for each of the other five quarks. Let me stress that quarks are not actually “colored” in the conventional sense! These are just names that physicists use.

The ‘colors’ are really a kind of “chromodynamic” charge. What does this mean? Recall in QED (usual electromagnetism) that the electron’s electric charge means that it can couple to photons. In other words, you can draw Feynman diagrams where photons and electrons interact. This is precisely what we did in my first post on the subject. In QED we just had two kinds of charge: positive and negative. When you bring a positive and negative charge together, they become neutral. In QCD we generalize this notion by having three kinds of charge, and bringing all three charges together gives you something neutral. (Weird!)

The naming of different kinds of quarks according to colors is actually very clever and is based on the way that colored light mixes. In particular, we know that equal parts of red + green + blue = white. We interpret “white” as “color neutral,” meaning having no “color charge.”

There’s a second way to get something color neutral: you can add something of one color to its “anti-color.” (You can formalize these in color theory, but this would take us a bit off course.) For example, the “anti-color” of red is cyan. So we could have red + “anti-red” (cyan) = color neutral.

If we don’t see them, are quarks real?

The point of all of these “color mixing” analogies is that [at low energies], QCD is a strongly coupled force. In fact, we often just call it the strong force. It’s responsible for holding together protons and neutrons. In fact, QCD is so strong that it forces all “color-charged” states to find each other and become color neutral. We’ll get into some details about this in follow up posts when we introduce the QCD force particles, the gluons. For now, you should believe (with a hint of scientific skepticism) that there is no such thing as a “free quark.” Nobody has ever picked up a quark and examined it to determine its properties. As far as you, me, the LHC, and everyone else is concerned, quarks are always tied up in bound states.

There are two kinds of bound states:

  • Bound states of 3 quarks: these are called baryons. You already know two: the proton and the neutron. The proton is a combination (uud) while the neutron is a combination (ddu). For homework, check that the electric charges add up to be +1 and 0. Because these have to be color neutral, we know that the quark colors have to sum according to red + green + blue.
  • Bound states of a quark and an anti-quark: these are called mesons. These are color-neutral because you have a color + its anti-color. The lightest mesons are called pions and are composed of up and down quarks. For example, the π+ meson looks something like (u anti-d). (Check to make sure you agree that it has +1 electric charge; there’s a quick code check of these charges right after this list.)
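
If you’d like to check those charges by brute force, here’s a tiny script (my own illustration, nothing official) that just adds up the quark charges:

```python
from fractions import Fraction

charge = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

def total_charge(constituents):
    # a leading "anti-" flips the sign of that quark's electric charge
    return sum(-charge[c.removeprefix("anti-")] if c.startswith("anti-") else charge[c]
               for c in constituents)

print(total_charge(["u", "u", "d"]))   # proton (uud):     1
print(total_charge(["d", "d", "u"]))   # neutron (ddu):    0
print(total_charge(["u", "anti-d"]))   # pi+ (u anti-d):   1
```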

Collectively these bound states are called hadrons. In the real world (i.e. in our particle detectors) we only see hadrons because any free quarks automatically get paired up with either anti-quarks or two other quarks. (Where do these quarks come from? We’ll discuss that soon!)

This seems to lead to an almost philosophical question: if quarks are always tied up in hadrons, how do we know they really exist?

A neat historical fact: Murray Gell-Mann and Yuval Ne’eman, progenitors of the quark model, originally proposed quarks as a mathematical tool to understand the properties of hadrons; largely because we’d found lots of hadrons, but no isolated quarks. For a period in the 60s people would do calculations with quarks as abstract objects with no physical relevance.

Why we believe that quarks are real

So let’s return to that almost philosophical question: if quarks are always tied up in hadrons, how do we know they really exist? Fortunately, we are physicists, not philosophers. Just as Rutherford first probed the structure of the atomic nucleus by smashing high energy alpha particles (which were themselves nuclei), the deep inelastic scattering experiments at the Stanford Linear Accelerator Center (joint with MIT and Caltech) in the 60s collided electrons into liquid hydrogen/deuterium targets and revealed the quark substructure of the proton.

A discussion of deep inelastic scattering could easily span several blog posts by itself. (Indeed, it could span several weeks in a graduate quantum field theory course!) I hope to get back to this in the future, since it was truly one of the important discoveries of the second half of the twentieth century. To whet your appetites, I’ll only draw the Feynman diagram for the process:

This is unlabeled, but by now you should see what’s going on. The particle on top is the electron that interacts with the proton, which is drawn as the three quark lines on the bottom left. The circle (technically called a “blob” in the literature) represents some QCD interactions between the three quarks (holding them together). The electron interacts with a quark through some kind of force particle, the wiggly line. For simplicity you can assume that it is a photon (for homework, think about what is different if it’s a W). We’ve drawn the quark that interacts as the isolated line coming out of the blob.

This quark is somewhat special because it’s the particle that the electron recoils against. This means that it gets a big kick in energy, which can knock it out of the proton. As I mentioned above, this quark is now “free” — but not for long! It has to hadronize into more complicated QCD objects, mesons or baryons. The spectrum of outgoing particles gives clues about what actually happened inside the diagram.

We’ve just glossed over the surface of this diagram: there is a lot of very deep (no pun intended) physics involved here. (These sorts of processes are also a notorious pain in the you-know-where to calculate the first time one meets them in graduate courses.)

(By the way: the typical interactions of interest at the LHC are similar to the diagram above, only with two protons interacting!)

A hint of group theory and unification

I would be negligent not to mention some of the symmetry of the matter content of the Standard Model. Let’s take a look at all of the fermions that we’ve met so far:

There are all sorts of fantastic patterns that one can glean from things that we’ve learned in these blog posts alone!

The top two rows are quarks (each with three different colors), while the bottom two rows are leptons. Each row has a different electric charge. Each column carries the same properties, except that each successive column is heavier than the previous one. We learned that the W boson mediates decays between the columns, and since heavy things decay into lighter things, most of our universe is made up of exclusively the first column.

There are other patterns we can see. For example:

  • When we first met QED, we only needed one type of particle, say the electron. We knew that electrons and anti-electrons (positrons) could interact with a photon.
  • When we met the weak force (the W boson), we needed to introduce another type of particle: the neutrino. An electron and an anti-neutrino could interact with a W boson.
  • Now we’ve met the strong force, QCD. In our next post we’ll meet the force particle, the gluon. What I’ve already told you, though, is that there are three kinds of particles that interact with QCD: red, green, and blue. In order to form something neutral, you need all three color charges to cancel.

There’s a very deep mathematical reason why we get this one-two-three kind of counting: it comes from the underlying “gauge symmetry” of the Standard Model. The mathematical field of group theory is (a rough definition) the study of how symmetries can manifest themselves. Each type of force in the Standard Model is associated with a particular “symmetry group.” Without knowing what these names mean, it should not surprise you if I told you that the symmetry group of the Standard Model is U(1) × SU(2) × SU(3). There’s that one-two-three counting!
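
As a small aside of my own (not something spelled out in these posts): the same groups also fix how many force-carrying bosons there are, using the same N² − 1 counting we used for the eight gluons:

```python
# U(1) has 1 generator; SU(N) has N^2 - 1. Each generator corresponds to a force carrier.
def n_force_carriers(group: str) -> int:
    if group == "U(1)":
        return 1
    n = int(group[3:-1])  # e.g. "SU(3)" -> 3
    return n * n - 1

for g in ("U(1)", "SU(2)", "SU(3)"):
    print(g, n_force_carriers(g))
# -> 1, 3, 8: loosely speaking, the photon, the W+/W-/Z, and the eight gluons.
```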

It turns out that this is also very suggestive of grand unification. The main thrust of the idea is that all three forces actually fit together in a nice way into a single force which is represented by a single “symmetry group,” say, SU(5). In such a scheme, each column in the “periodic table” above can actually be “derived” from the mathematical properties of the GUT (grand unified theory) group. So in the same way that QCD told us we needed three colors, the GUT group would tell us that matter must come in sets composed of quarks with three colors, a charged lepton, and a neutrino; all together!

By the way, while they sound similar, don’t confuse “grand unified theories” with a “theory of everything.” The former are theories of particle physics, while the latter try to unify particle physics with gravity (e.g. string theory). Grand unified theories are actually fairly mundane and I think most physicists suspect that whatever completes the Standard Model should somehow eventually unify (though there has been no direct experimental evidence yet). “Theories of everything” are far more speculative by comparison.

Where we’ll go from here

I seem to have failed in my attempt to write shorter blog posts, but this has been a quick intro to QCD. Hopefully I can write up a few more posts describing gluons, confinement, and hadrons.

For all of you LHC fans out there: QCD is really important. (For all of you LHC scientists out there, you already know that the correct phrase is, “QCD is really annoying.”) When we say that SLAC/Brookhaven discovered the charm quark or that Fermilab discovered the top quark, nobody actually bottled up a quark and presented it to the Nobel Prize committee. Our detectors see hadrons, and the properties of particular processes like deep inelastic scattering allow us to learn somewhat indirectly about the substructure of these hadrons to learn about the existence of quarks. This, in general, is really, really, really hard—both experimentally and theoretically.

Thanks everyone,
Flip, US LHC Blogs

(By the way, if there are particle physics topics that people want to hear about, feel free to leave suggestions in the comments of the blog. I can’t promise that I’ll be able to discuss all of them, but I do appreciate feedback and suggestions. Don’t worry, I’ll get to the Higgs boson eventually… first I want to discuss the particles that we have discovered!)

Welcome to college, future colleagues (advice for science undergrads)

Sunday, September 12th, 2010

It’s the beginning of another academic year, which means another generation of young people entering universities across the US. As a grad student one feels a sense of nostalgia when realizing that some fraction of these students will be following in your footsteps to become your future colleagues at the scientific frontier. Of course, the nostalgia fades away when you realize that those students are hidden among the other hundred or so who are taking the class you’re teaching only because it’s a requirement for their major and just want to figure out how to pass the final exam. 🙂

With slightly more seriousness, a warm “welcome to college” to all of the freshmen out there, and a special ‘hello’ to all of the future scientists among you. I have a strong belief that one of the roles of the graduate student community is to provide mentors for undergraduates, especially those who are interested in pursuing academic careers. To that extent, there are a few things that I always thought I would have appreciated knowing when I was a freshman and that I’d like to share with the blogosphere.

(Random factoid: the first mention of my name on the blogosphere came from a Cosmic Variance post about applying to grad schools. I really appreciated that post and write this in the same spirit.)

These tend to be physics/science-oriented, but I hope at least some of it is broadly-applicable:

1. Figure out what you want. There’s no single definition for success in college; this all depends on what you want to get out of your undergraduate years. Depending on whether you pursue an academic career, go into industry, professional school, politics, start the next great rock band—whatever it is you end up following, you will be judged on different criteria. College is a time of self discovery—and that can take time—but it helps if you discover yourself sooner rather than later since that gives you more time to prepare for the next step. Once you find what it is you want to do, dive into it with enthusiasm.

2. Find mentors. No matter what you end up doing (but especially if you want to pursue scientific research), find people who can provide guidance and inspiration. For scientists this usually means faculty and grad students. They’ve been where you are and they’re bursting with advice about how they navigated their path through academia. Make a point to talk to them! Go to office hours and ask about their research. Invite them to lunch. Be pro-active about this.

3. Do research. Undergraduate research experience is practically an unstated prerequisite for a strong grad school application. Research is where your coursework comes to life and you find out what it’s like to work on open-ended questions. It’s also a chance to try out different fields: not sure if you’d enjoy particle physics? Spend a summer (or better: a year) as an undergrad research assistant!

Actually doing research will help you figure out what you really want to pursue (theory or experiment? condensed matter or high energy?). Even if you end up deciding that you never want to set foot in a condensed matter lab ever again in your life, at the very least you’ve learned something new and valuable about yourself; maybe you’ll find you are more drawn to the nuances of developing theoretical models than to the ingenuity required to construct experiments. That’s great! And if you get a few publications and a nice letter of recommendation from a respected professor out of it as well—then all the better.

Keep an eye out for undergraduate research opportunities in your department. Talking to faculty is really important here. If there aren’t many options at your university, look for summer research opportunities at other universities or national labs.

4. Be lopsided. Forget the idea of a “well rounded” university student, be “well lopsided!” Find the things that you are really passionate about and devote yourself to them. You don’t have to join every single club, be on every single committee, and juggle three majors. Your passions don’t have to be all academic things, either: even if all you do is party and research (with a reasonable division between the two), then you’ll still be a better researcher than someone who is split ten ways between several extracurricular (even academic) activities.

As a caveat: while you’re being lopsided, try to keep your balance. You might want to do nothing more than eat, breathe, and physics. This isn’t enough. Make time to socialize, exercise, and otherwise challenge yourself in ways that you wouldn’t be able to outside of college.

5. Do your problem sets. I cannot over-emphasize how important it is to practice your science. This is especially true in mathematics and physics. Problem sets are more than just ways to make sure you do your reading; they force your brain to apply what you’ve learned to new problems. This is—at a very fundamental level—what scientific research is all about. I will go so far as to say that you only learn something meaningfully when you’ve used it to generate new (at least to you) ideas, such as when you solve a hard homework problem. (In grad school this becomes: “… when you’ve used it to write an academic paper.”)

While we’re on the topic: do your problem sets with friends. You should always be able to write up a solution on your own, but it is good to discuss with others to learn how to generate solutions. Again: this is how real science is done, through collaboration and communication. Anyway, it’s always good to find friends with similar goals: it forms a kind of “support group” to encourage one another. (It’s also important to make friends with people who are doing completely different things from you!)

6. Learn to communicate. This holds universally no matter what you want to do, but somehow this ends up being understated for scientists. A big part of science is communicating your work to a broad range of people. Whether it’s a colleague whom you are working with on a particular problem, a funding agency that needs to be convinced that your line of research is a good investment, or the general public (whose tax dollars fund research, from whom future scientists emerge, and who really do want to learn about the frontiers of human knowledge), you need to be able to explain your work. Be comfortable discussing, presenting, and writing about ideas in your field.

7. Learn to think. This is a little more abstract, but I think it’s important in a very general way. This generation of college students grew up with Wikipedia at their fingertips. Information is cheap and readily available. You don’t need to spend tens of thousands of dollars in tuition to learn facts. The value of being at a university is to learn how to generate and use those facts. This is the “transformative” nature of education; you need to be able to parse information and generate meaning. The professors giving your lectures aren’t trying to make you memorize facts from their textbook; they want you to interact with those facts: question them, generalize them to principles, apply them elsewhere, cross-reference against accepted dogma, etc.

8. Develop tools. The other thing you should get out of your classes is a set of tools that will be valuable in the future. If you’re going to be a physicist, then you will certainly need to be well versed in quantum mechanics, for example. One often under-appreciated skill: programming. Also, for those who will be working in physics and mathematics, learn how to use the LaTeX typesetting system. (For particle theorists in particular: differential geometry, complex analysis, and group theory!)

9. Go to academic talks and read academic papers. You don’t magically learn how to read papers and listen to talks when you become a grad student. These are skills that you have to develop. Challenge yourself—even if you only understand the first five minutes of a talk, you’ll at least begin to familiarize yourself with words and ideas. Start with what’s accessible: departments usually have colloquia which are meant to be accessible to a broad audience within the department, and look for “review articles” which are meant to be pedagogical introductions to current research. If you’re just starting out, read the American Journal of Physics (which has lots of undergraduate-level discussions) or Physics Today. (Everyone who reads this blog should follow Symmetry Magazine.) As you learn more, start attending seminars in the fields that you’re interested in and start to peruse current research on the arXiv.

10. Have fun. This is an amazing time in your life where you have professors who will teach all sorts of things to you, a vibrant community of young people around you, and no responsibilities other than to make the most of your time. Do it!

Physics by Poets

Monday, September 6th, 2010

Research is in full swing so I’ve been spending a lot of late nights in the office (and have been a bit slow to blog—sorry about that!) … here’s a photo out of my office window taken at the beginning of another long evening:

Yeah, those are some Feynman diagrams that I didn’t want to forget—I drew them on my window using a chalk marker. Actually, this picture is meant to be a bit of a joke: diagrams of this type are called Penguin diagrams, so the picture above is a bunch of flying penguins over Ithaca’s Cayuga Lake. (If you’re keeping up with my posts about Feynman diagrams I’ll eventually have a lot to say about penguins and why they’re so interesting.) Anyway, my calling in life is in physics and not poetry but—that being said—I think it’s cute.

I was reminded about the interplay between physics and poetry since I usually listen to something in the background while doing calculations; today it was This American Life. I should explain that after dinner time there are two kinds of physics that I do:

  1. The kind where I’m trying to figure out something that I didn’t understand properly during the day—in which case I’m usually listening to jazz or classical music to help me concentrate, or
  2. The kind where I’m just churning through a tedious calculation or typing up some code—in which case I usually listen to podcasts where I can half-listen to a narrative while doing something that’s otherwise kind of boring.

Tonight was a calculation night, and this week’s This American Life podcast was a rerun that I hadn’t heard in a while titled, “Family Physics.” The idea was that they’d tell stories whose overarching theme is the application of principles of physics to human interactions. I really enjoyed the episode, but as they mention in the introduction, physicists groan when popular writers do this (New Yorker, I’m looking at you).

In the 80s and 90s there were several popular books that tried to tie together themes in quantum physics with themes in eastern mysticism. Unfortunately for physicists, part of the effect of these books was to create this image that theoretical physics was somehow “mystical” and “philosophical” in a way that scientists tend to abhor. There’s nothing inherently wrong with identifying common themes between unrelated ideas—that’s poetry—but it’s important to note that physics is a science and is based on rigid scientific principles of rationalism and backed by the scientific method. I’m not saying the books weren’t any good—Fritjof Capra’s (a former theoretical physicist) The Turning Point was adapted into a nice movie that became one of my favorites in high school—but they weren’t actually books about science.

Anyway, one offshoot of this was countless popular-level accounts of how Heisenberg’s uncertainty principle is supposed to tell us something deep about human existence. There’s something very charming and—indeed—poetic about this, since nature is something that is independent of humanity, and so “truths” coming from nature must somehow be “deeper” than those written by people who don’t invoke fancy words like the cosmological principle.

Of course, this is wrong; using nature as an analogy for abstract human ideas makes them no more “true” than using human analogies to describe abstract ideas in nature. The analogies can be cute, they can even be insightful, but in the end the analogies themselves are not science. (Nor is quantum physics actually telling you how you should break up with your girlfriend, etc.)

The bottom line, of course, is that sometimes these analogies are so elegant that they become enjoyable and valuable in themselves. This is what I consider poetry. (Though I concede that more cultured readers may scoff at this as a simplistic definition! Like I said, I’m a scientist and not a poet.) Along these lines, there are three works—each in different media—that really stick out for me, and that I recently found myself thinking about while listening to This American Life. (I had to stop working on my calculation for a while. 🙂 )

  1. Gödel, Escher, Bach, by Douglas Hofstadter, who is the son of Nobel Laureate physicist Robert Hofstadter. The book, however, is not about physics, but rather the common themes between Gödel’s incompleteness theorem in mathematics, M.C. Escher’s graphic arts, and Bach’s music. The book is an absolute pleasure to read, though I admit that I have yet to finish it because it requires some attention to properly digest.
  2. The photography of Naglaa Walker collected in on physics. These are more along the lines of the 90s New Yorker articles invoking the Uncertainty Principle in that they make superficial connections between physics ideas and her photographs, but it’s done without any pretense of depth and I enjoy Naglaa’s wit.
  3. Finally, Hypermusic Prologue, an opera by Harvard theoretical physicist Lisa Randall that purports to draw from Randall’s seminal work on warped extra dimensions, something near and dear to my research. I haven’t seen the opera (I missed the adaptation at the Guggenheim in January), but am really intrigued by the idea. I think scientists need to have a facility with explaining their research to a broad audience, but the choice of medium here is—for many—just as esoteric as the physics behind it. Because of this I am curious to see what kind of interesting new analogies Randall and composer Hector Parra were able to develop.

In fact, this is the reason why physicists (or maybe it’s just me?) often get so annoyed when people make very glib or uninspired analogies to “deep ideas” in science: it’s because there’s so much more that one can make out of these analogies!

A final remark: one of my favorite magazines, Symmetry Magazine, is an excellent particle physics outreach publication and has a regular section where they feature science-inspired artists. There’s been a lot of fun and interesting graphic art over the past year which I encourage you to check out in their back-issues. I should explain that I view these as being rather different from analogies based on physics; instead, they are inspired by the aesthetics of physics itself (a common example is the shape of the ATLAS detector). This week’s artist, Kate Nichols, takes a more active role in the science of her work.

Anyway, maybe the conclusion is that I’m better off doing physics than being an art critic. 🙂 [Stick to your day job, Flip!]

Cheers,
Flip

[Some of you have said that you’re waiting for more Feynman diagram posts—there are a few that I’m working on, I promise!]

Solar neutrinos, astronaut ice cream, and flavor physics

Monday, August 2nd, 2010

I’ve been thinking of a good way to introduce flavor physics—a subject which can be surprisingly subtle—to a general audience. Here’s my best shot at it.

An invitation: the solar neutrino problem.

By the 1960s physicists thought they had a pretty good understanding of the nuclear reactions that caused the sun to shine. One of the many predictions of their model is the number of neutrinos emitted by the sun.

A scientific model is only as good as the experimental verification of its predictions, so the next step was to actually count these solar neutrinos. Of course, our readers already know that neutrinos are very weakly interacting and this makes them very hard to detect.

Well, when a couple of enterprising astrophysicists set up such an experiment at the Homestake Mine in the 60s (which is still a template for modern neutrino detectors), they were shocked to find that they counted only a third of the expected neutrino flux.

At this point a good scientist will go back and check their experiment, look for systematic errors, and then go back to check the assumptions of the underlying model. Let’s gloss over these rather important steps and just state that this discrepancy could not be explained by any known effect and would be referred to as the solar neutrino problem.

What gives??

From solar neutrinos to astronaut ice cream

Before answering this puzzle, let’s take a detour and fast forward many decades to my first visit to the National Air and Space Museum when I was about 10. I remember thinking that the big airplane displays were pretty cool… but nothing compared to a discovery I made in the gift shop: astronaut ice cream. (Sometimes I look back and wonder how I ever became a scientist.)

For those who aren’t familiar, astronaut ice cream is just freeze-dried ice cream that has a uniquely chalky texture. My favorite variety was Neapolitan, which was a combination of strawberry, chocolate, and vanilla. The bars looked something like this:

The Neapolitan astronaut ice cream bar will be a very useful analogy in what follows, so bear with me. Ordinarily one would expect the bars to come in a single flavor: strawberry, chocolate, or vanilla. Instead, a Neapolitan bar is a mixture of all three.

In fact, to properly set up the analogy, we should imagine that there are three types of Neapolitan bars so that if we took one of each bar, we would have the same amount of each flavor as we would if we had one of each single-flavor bar. Thus the three Neapolitan bars are just a mixture of the three single-flavor bars.

Now here’s the crux of the matter: even though the Neapolitan bar is packaged as a mix of three flavors, when you bite into it you only get to taste one flavor at a time.

Okay, maybe you can mix two flavors if you take a bite along the seam—but let’s forget about those cases because they break the careful analogy I’m trying to put together. 🙂

What this all has to do with neutrinos

Now let's connect this to the solar neutrino problem. The incorrect assumption turned out to be that neutrinos are single-flavor bars; in fact, they are more like Neapolitan bars. The "flavor" in question is the identity of the neutrino as either electron-like, muon-like, or tau-like.

In other words, the “pre-packaged” neutrinos that propagate between the sun and Earth are a mixture of electron/muon/tau-like neutrinos. What we mean by this is that they are a quantum superposition of these three different flavors, in precisely the same way that Schrödinger's cat is a superposition of different corporeal states.

Now here’s the neat part: even though the neutrinos propagate as Neapolitan bars, they only interact as definite flavors (electron, muon, or tau). In other words, when the neutrinos are produced in the sun, they are produced with a definite flavor. They are also detected on Earth with a definite flavor. But everywhere in between when they’re propagating on their own, they are a mixture of all three flavors.

Physicists will say that there is an “interaction basis” (electron, muon, tau neutrinos) and a “mass basis” (propagating superpositions).
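For readers who like symbols, here's a minimal sketch of what these two bases mean. The 3×3 matrix U below is the leptonic mixing matrix (often called the PMNS matrix); conventions differ on exactly where complex conjugates go, so treat this as schematic:

```latex
% Each flavor (interaction-basis) neutrino is a superposition of the three
% mass-basis neutrinos, with the mixing matrix U doing the bookkeeping:
\left| \nu_\alpha \right\rangle \;=\; \sum_{i=1}^{3} U_{\alpha i}\, \left| \nu_i \right\rangle ,
\qquad \alpha = e, \mu, \tau , \qquad U U^\dagger = \mathbb{1} .
```

The unitarity condition at the end is just the statement that no ice cream is created or destroyed when you repackage single flavors into Neapolitan bars.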

We can now work out the resolution of the solar neutrino problem. The nuclear reactions in the sun involve electrons (not muons or taus) and so produce electron-neutrinos. Similarly, the detectors on Earth only detect electron-neutrinos, since they are made of molecules containing electrons. In between, however, the neutrinos travel a long enough distance that they get all mixed up into Neapolitan admixtures of all three flavors. Thus when the solar neutrinos reach the detectors, only one third of them are detectable, explaining the deficit of neutrino counts!

Actually, this explanation for the factor of 1/3 is a big fat lie… it’s just a cute numerical coincidence. The point is that mixing causes one to only observe a fraction of the total neutrinos, but the specific fraction depends on many things. We’ll discuss this below.

Neapolitan Neutrinos and their relation to mass

Of course, this resolution came from decades of progress in theory and experiment, including many red-herring directions which we won't discuss (but which are a key part of doing real science!). One fact from our understanding of quantum field theory is particularly important:

Particles which propagate through any appreciable distance are states of definite mass.

For more advanced readers, the reason for this is that the mass term is part of the quadratic part of the action, which can be explicitly solved and about which we perform perturbation theory.

The reason why neutrinos propagate as Neapolitan mixtures is that those are the mixtures that have definite mass. A purely electron-flavored neutrino turns out not to have a definite mass, but rather a ‘quantum superposition’ of masses. Conservation of energy requires that only a single mass state should be allowed to travel over long (i.e. non-quantum) distances.
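For more advanced readers, here is the usual back-of-the-envelope way to see why the masses matter during propagation (natural units, ultra-relativistic approximation; factors and conventions vary between textbooks):

```latex
% Over a distance L, a mass eigenstate of mass m_i and energy E ~ p
% picks up a quantum phase that depends on its mass:
\left| \nu_i(L) \right\rangle \;\approx\; e^{- i\, m_i^2 L / 2E}\, \left| \nu_i(0) \right\rangle .
```

Because the three masses differ, the three phases drift out of step as the neutrino travels, and the flavor content of the superposition slowly changes. This relative phase is what we call neutrino oscillation.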

Thus the discovery of neutrino mixing (and hence the resolution of the solar neutrino problem) only came hand-in-hand with the discovery that neutrinos have tiny but non-zero masses in 1998. This discovery, at the joint US/Japan Super-Kamiokande detector in Japan, is a great science story for another day.

Update (3 Aug 2010): as a commenter pointed out, the definitive solution to the solar neutrino problem actually only came with data from the joint US/UK/Canada Sudbury Neutrino Observatory (SNO) in Ontario. In 2001, SNO detected 1/3 of the expected solar neutrinos while Super-K detected 1/2. The difference between the two experiments is that SNO is sensitive only to electron-neutrinos, while Super-K also has some sensitivity to muon- and tau-neutrinos. By combining the information from the two experiments, SNO researchers were able to extrapolate the total number of neutrinos (of all flavors) and found that this number matched the total neutrino flux expected from the sun. These solar neutrinos were all produced as electron-neutrinos, but "oscillated" into other flavors while propagating as mass-states. For a more detailed but accessible account of this story written by one of its heroes, see John Bahcall's contribution to the Nobel eMuseum.

Revised Feynman Rules

Recall that the W boson mediates flavor-changing effects. In that previous post, readers mori and Stephen correctly pointed out that I was being a little misleading about the W interactions. This was a deliberate choice to avoid this "flavor vs. mass" state issue. Now that we're familiar with the difference between neutrino flavor states (electron, muon, tau) and neutrino mass states (Neapolitan mixtures which we'll just call 1, 2, and 3), however, we can revise our W boson Feynman rules to be more accurate.

Let’s start in the flavor basis. For clarity I will associate electron-neutrinos with strawberry ice cream. These single-flavor states are the actual states that interact with other particles. In particular, electrons will only interact with electron-neutrinos. In terms of these interacting-states, the Feynman rules are simple:

We’re only drawing the electron interactions. There are also interactions with muons which only interact with muon-neutrinos (chocolate flavored), and similarly for taus (vanilla). However, although the Feynman rules are simple, the flavor basis isn’t so useful since these states only exist at the instant of interaction. The moment the neutrino flies off, it settles into one of three mass states, which we will call neutrino-1, neutrino-2, and neutrino-3. We’ll represent these as Neapolitan ice cream bars.

Let us draw the Feynman rules in terms of these mass states. In other words, we’re drawing the Feynman rules with the assumption that the particles are given a chance to travel some distance. Now an electron can interact with any of the three mass states:

The reason for this is that the electron only interacts with electron-neutrinos, i.e. strawberry flavor; but each of the three mass states (ν1, ν2, ν3) contain some electron (strawberry). This is where flavor mixing really shows up in the W interactions: the e doesn’t only interact with ν1, but all of the mass eigenstate neutrinos.
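In symbols, and using the same mixing matrix U from above (Lorentz structure and coupling details suppressed), the strength of each vertex is just an entry of U. In other words, the "amount of strawberry" in each bar is literally the coupling of the electron to that bar:

```latex
% Schematic strength of the electron--nu_i--W vertex:
\text{vertex}(e, \nu_i, W) \;\propto\; U_{e i}, \qquad i = 1, 2, 3 .
```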

How much mixing?

There’s no reason to believe that the mass-state neutrinos all have an equal amount of each flavor. In fact, the particular mixtures look something more like this:

These ratios are set by the neutrino mixing angles, fundamental parameters that, like the neutrino masses, have to be measured. (A sketch of the corresponding mixing pattern appears just after the list below.)

  • ν1 is about 2/3 electron-neutrino and 1/6 each of muon/tau-neutrino
  • ν2 is about an equal mixture of all three
  • ν3 is mostly an even split between muon and tau neutrinos
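For readers who want numbers: the pattern in the list above is (approximately) the tri-bimaximal pattern that shows up again in the closing remarks. Written as the squared magnitudes of the mixing-matrix entries, with rows labeling flavors (e, μ, τ) and columns labeling mass states (1, 2, 3), it looks roughly like:

```latex
% Approximate flavor content of each mass state (tri-bimaximal pattern)
% rows: e, mu, tau;  columns: nu_1, nu_2, nu_3
|U|^2 \;\approx\;
\begin{pmatrix}
 2/3 & 1/3 & 0   \\
 1/6 & 1/3 & 1/2 \\
 1/6 & 1/3 & 1/2
\end{pmatrix} .
```

Each row and each column sums to one, which is just the statement that all the ice cream has to go somewhere. (The measured values are close to this pattern, but not exactly equal to it.)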

This may lead you to wonder why the original Homestake experiment detected 1/3 of the expected neutrinos, since that is the value we would expect if each mass state contained an equal fraction of each flavor. The answer: it's a coincidence!

The particular fraction of the total number of detected neutrinos depends on a lot of factors in a rather involved equation. These factors include:

  • The differences between the neutrino masses
  • The distance between the Earth and the sun
  • The energy (or rather the energy spectrum) of neutrinos emitted by the sun
  • How the neutrinos interact within the sun
  • The range of energies to which our neutrino detectors are sensitive

Different solar neutrino detection experiments have found a range of different values for the number of detected neutrinos, but once these effects are taken into account, they are all consistent and shed light on the fundamental parameters that govern the neutrino sector.
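To make the dependence on these factors concrete, here's a toy numerical sketch in Python. It uses the standard two-flavor vacuum oscillation formula with roughly solar-sized parameters; the default numbers are approximate, and real solar neutrinos also feel matter effects inside the sun, which this toy ignores entirely:

```python
import math

def p_survive(L_km, E_GeV, dm2_eV2=7.5e-5, sin2_2theta=0.85):
    """Two-flavor *vacuum* electron-neutrino survival probability.

    Standard formula: P = 1 - sin^2(2 theta) * sin^2(1.267 dm^2 L / E),
    with dm^2 in eV^2, L in km, and E in GeV.  The default parameters
    are rough, solar-sized values.
    """
    phase = 1.267 * dm2_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# A reactor-style setup: ~100 km baseline, few-MeV neutrinos.
print(p_survive(L_km=100.0, E_GeV=0.004))

# Over the Earth-sun distance the oscillation is so rapid that any spread
# in energy averages sin^2 to 1/2, so this vacuum-only toy predicts roughly
# P ~ 1 - sin^2(2 theta)/2 ~ 0.57.  (The measured solar deficit is closer
# to 1/3 because of the matter effects we are ignoring here.)
energies = [0.004 + 0.00001 * k for k in range(1000)]  # roughly 4-14 MeV, in GeV
avg = sum(p_survive(1.496e8, E) for E in energies) / len(energies)
print(avg)
```

Playing with the baseline, the energy, and the energy window is exactly the point of the list above: the detected fraction is not some universal number like 1/3.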

Analogy to quark mixing

I haven't yet properly introduced the Feynman rules for quarks, but it turns out that you can obtain the interactions of the quarks with the photon, W, and Z by simply taking our lepton Feynman rules and replacing charged leptons with up-type quarks and neutrinos with down-type quarks.

In particular, there are three up/down-type flavor pairs:

  • up quark and down quark
  • charm quark and strange quark
  • top quark and bottom quark

The W boson again causes mixing between these families, while all other interactions only stay within an up/down pair. It turns out that the mixing between quarks is not as dramatic as that between leptons, but because of hadronic effects (i.e. the strong force) measurements of quark flavor can be notoriously difficult. (For experts: See this post at Resonaances for an update on a recent interesting quark flavor storyline at the D0 detector in Fermilab and this post from ICHEP by the same author for a broader status report.)
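To put a rough number on "not as dramatic": the quark analogue of the neutrino mixing matrix is called the CKM matrix, and the magnitudes of its entries are close to the identity matrix. Very roughly (the values below are approximate):

```latex
% Approximate magnitudes of the CKM matrix elements
% rows: u, c, t;  columns: d, s, b
|V_{\text{CKM}}| \;\approx\;
\begin{pmatrix}
 0.97  & 0.23 & 0.004 \\
 0.23  & 0.97 & 0.04  \\
 0.009 & 0.04 & 1.0
\end{pmatrix} .
```

Compare this to the neutrino pattern sketched earlier, where the off-diagonal entries are just as large as the diagonal ones.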

Closing Remarks

  • This pattern of neutrino mixing has a fancy name, tri-bimaximal mixing, and one interesting line of research is to understand where this structure comes from. (It seems to be related to the symmetries of the tetrahedron.)
  • Because the amount of detected mixing depends on so many experimental parameters, there are many different neutrino experiments that differ by baseline (distance between source and observer). Since we can’t change the distance between the sun and the Earth, a good alternative is to detect neutrinos coming from nuclear reactors by setting up detectors at fixed distances.
  • Yet another source of neutrinos comes from the atmosphere, when cosmic rays interact with molecules in the upper atmosphere (some at LHC energies!) and send a shower of particles down to Earth.
  • Here’s a really, really good question that may even stump a few physicists: why is it that the neutrinos mix while the charged leptons don’t? (Alternately, why do down quarks mix but not up quarks?) Shouldn’t they somehow behave similarly? The answer turns out to be somewhat technical, but the punchline is that they do, but the time scales involved make the effect irrelevant. I refer those with a technical background to arXiv:0706.1216.

That’s all for now!
Flip, US/LHC


The W boson: mixing things up

Friday, July 2nd, 2010

For those of you who have been following our foray into the particle content of the Standard Model, this is where things become exciting. We now introduce the W boson and present a nearly-complete picture of what we know about leptons.

We’re picking up right where we left off, so if you need a refresher, please refer to previous installments where we introduce Feynman rules and several particles: Part 1, Part 2, Part 3, Part 4, Part 5

The W is actually two particles: one with positive charge and one with negative charge. This is similar to every electron having a positron anti-partner. Here’s the Particle Zoo’s depiction of the W boson:

Together with the Z boson, the Ws mediate the weak [nuclear] force. You might remember this force from chemistry: it is responsible for the radioactive decay of heavy nuclei into lighter nuclei. We’ll draw the Feynman diagram for β-decay below. First we need Feynman rules.


Feynman Rules for the W: Interactions with leptons

Here are the Feynman rules for how the W interacts with the leptons. Recall that there are three charged leptons (electron, muon, tau) and three neutrinos (one for each charged lepton).

In addition, there are also the same rules with the arrows pointing in opposite directions, for a total of 18 vertices. Note that we’ve written plus-or-minus for the W, but we always use the W with the correct charge to satisfy charge conservation.

Quick exercise: remind yourself why the rules above are different from those with arrows pointing in the opposite direction. Hint: think of these as simple Feynman diagrams that we read from left to right. Think about particles and anti-particles.

In words: the W connects any charged lepton to any neutrino. As shorthand, we can write these rules as:

Here we’ve written a curly-L to mean “[charged] lepton” and a νi to mean a neutrino of the ith type, where i can be electron/muon/tau.

Exercise: What are the symmetries of the theory? In other words, what are the conserved quantities? Compare this to our previous theory of leptons without the W.

Answer: Electric charge is conserved, as we should expect. However, we no longer individually conserve the number of electrons. Similarly, we no longer conserve the number of muons, taus, electron-neutrinos, etc. However, the total lepton number is still conserved: the number of leptons (electrons, muons, neutrinos, etc.) minus the number of anti-leptons stays the same before and after any interaction.

Really neat fact #1: The W can mix up electron-like things (electrons and electron-neutrinos) with not-electron-like things (e.g. muons, tau-neutrinos). The W is special in the Standard Model because it can mix different kinds of particles. The “electron-ness” or “muon-neutrino-ness” (and so forth) of a particle is often called its flavor. We say that the W mediates flavor-changing processes. Flavor physics (of quarks) is the focus of the LHCb experiment at CERN.

Exercise: Draw a few diagrams that violate electron number. [If it’s not clear, convince yourself that you cannot have such effects without a W in your theory.]

Answer: here’s one example: a muon decaying into an electron and a neutrino-antineutrino pair. (Bonus question: what is the charge of the W?)
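As a quick bookkeeping check of the "total lepton number" rule from the earlier exercise: whichever flavors the neutrino and antineutrino happen to be, the tally works out (leptons count +1, anti-leptons count −1):

```latex
\mu^- \;\to\; e^- + \nu + \bar{\nu} : \qquad (+1) \;\to\; (+1) + (+1) + (-1) \;=\; +1 .
```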

Remark (update 7 July): In the comments below Mori and Stephen point out that in the 'vanilla' Standard Model, leptons don't have flavor-changing couplings to the W as I've drawn above. This is technically true, at least before one includes the phenomenon of neutrino oscillations (only definitively confirmed in 1998). In the presentation here I am assuming that such interactions take place, which is a small modification of the "most minimal" Standard Model. Such effects must take place because of the neutrino oscillation phenomenon. We will discuss this in a future post on neutrino-less double beta decay.

Feynman Rules for the W: Interactions with other force particles

There are additional Feynman rules. In fact, you should have already guessed one of them: because the W is electrically charged, it interacts with the photon! Thus we have the additional Feynman rule:

[Update, Aug 9: note that for these vertices I've used the convention that all of the bosons are in-coming. Thus these are not Feynman diagrams representing physical processes, they're just vertices which we can convert into diagrams or pieces of diagrams. For example, the above vertex has an incoming photon, incoming W+, and an incoming W-. If we wanted the diagram for a W+ emitting a photon (W+ -> W+ photon), then we would swap the incoming W- for an outgoing W+ (they're sort of antiparticles).]

This turns out to only be the tip of the iceberg. We can replace the photon with a Z (as one would expect since the Z is a heavy cousin of the photon) to get another three-force-particle vertex:

Finally, we can even construct four force-particle vertices. Note that each of these satisfies charge conservation!

These four-force-particle vertices are usually smaller than any of the previous vertices, so we won’t spend too much time thinking about them.

Really neat fact #2: We see that the W introduces a whole new kind of Feynman rule: force particles interacting with other force particles without any matter particles! (In fancy words: gauge bosons interacting with other gauge bosons without any fermions.)

Remarks

  1. The most interesting feature of the W is that it can change fermion flavors, i.e. it can not only connect a lepton and a neutrino, but it can connect a lepton of one type with a neutrino of a different type. One very strong experimental constraint on flavor physics comes from the decay μ→eγ (muon decaying to electron and photon). As an exercise, draw a Feynman diagram contributing to this process. (Hint: you'll need to have a W boson and you'll end up with a closed loop.)
  2. It is worth noting, however, that these flavor-changing effects tend to be smaller than flavor-conserving effects. In other words, a W is more likely to decay into an electron and an electron-neutrino rather than an electron and a tau-neutrino. We’ll discuss how much smaller these effects are later.
  3. W bosons are rather heavy—around 80 GeV, slightly lighter than the Z but still much heavier than any of the leptons. Thus, as we learned from the Z, it decays before it can be directly observed in a detector.
  4. The W was discovered at the UA1 and UA2 experiments at CERN in the 80s. Their discovery was a real experimental triumph: as you now know from the Feynman rules above, the W decays into a lepton and a neutrino—the latter of which cannot be directly detected! This prevents experimentalists from observing a nice resonance as they did for the Z boson a few months later. They used a slightly modified technique based on a quantity called "transverse mass" (sketched just after this list) to search for a smeared-out resonance using only the information about the observed lepton. Generalizations of this technique are still being developed today to search for supersymmetry! (For experts: see this recent review article on LHC kinematics.)
  5. The W boson only talks to left-handed particles. This is a remarkable fact that turns out to be related to the difference between matter and antimatter. For a proper introduction, check out this slightly-more-detailed post.
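For the curious, here is the quantity mentioned in remark 4. This is the usual definition for a W decaying to a charged lepton plus an invisible neutrino, built only out of momentum components transverse to the beam (lepton mass neglected); Δφ is the azimuthal angle between the charged lepton and the missing transverse momentum:

```latex
% Transverse mass for W -> lepton + neutrino:
m_T^2 \;=\; 2\, p_T^{\ell}\, p_T^{\text{miss}} \left( 1 - \cos\Delta\phi \right) .
```

The distribution of m_T has a (smeared) edge near the W mass, which is the "smeared-out resonance" referred to above.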

Exercise: Now that we’ve developed quite a bit of formalism with Feynman rules, try drawing diagrams corresponding to W boson production at a lepton collider. Assume the initial particles are an electron and positron. Draw a few diagrams that produce W bosons. “Finish” each diagram by allowing any heavy bosons (Z, W) to decay into leptons.

What is the simplest diagram that includes a W boson? Is the final state observable in a detector? (Remember: neutrinos aren’t directly observable.) What general properties do you notice in diagrams that both (1) include a W boson and (2) have a detectable final state (at least one charged lepton)?

Can you draw diagrams where the W boson is produced in pairs? Can you draw diagrams where the W boson is produced by itself?

Hints: You should have at least one diagram where the W is the only intermediate particle. You should also play with diagrams with both the fermion-fermion-boson vertices and the three-boson vertices. You may also use the four-boson vertices, but note that these are smaller effects.

Remark: Try this exercise; you'll really start to get a handle on drawing diagrams for more complicated processes. Plus, this is precisely the thought process when physicists think about how to detect new particles. As an additional remark, this is not quite how the W was discovered—CERN used proton-antiproton collisions, which we'll get to when we discuss quantum chromodynamics.

Relating this to chemistry

Before closing our introduction to the W boson, let’s remark on its role in chemistry and simultaneously give a preview for the weak interactions of quarks. You’ll recall that in chemistry one could have β decay:

neutron → proton + electron + anti-neutrino

This converts one atom into an isotope of another atom. Let’s see how this works at the level of subatomic particles.

Protons and neutrons are made out of up and down type quarks. Up quarks (u) have electric charge +2/3 and down quarks (d) have electric charge -1/3. As we will see when we properly introduce the quarks, up and down quarks have the same relationship as electron-neutrinos and electrons. Thus we can expect a coupling between the up, down, and W boson.

A neutron is composed of two down quarks and an up quark (ddu) while a proton is composed of two up quarks and a down quark (uud). [Check that the electric charges add up to what you expect!] The diagram that converts a neutron to a proton is then:

Update: As reader Cris pointed out to me in an e-mail, the W should have negative charge and should decay into an electron and anti-neutrino!
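Here is the charge bookkeeping behind the bracketed exercise and the update above, in units of the proton charge:

```latex
% Neutron and proton charges from their quark content:
n\,(ddu): \; -\tfrac{1}{3} - \tfrac{1}{3} + \tfrac{2}{3} = 0 , \qquad
p\,(uud): \; +\tfrac{2}{3} + \tfrac{2}{3} - \tfrac{1}{3} = +1 .
% The underlying quark-level process and its charge check:
d \;\to\; u + W^- , \qquad W^- \;\to\; e^- + \bar{\nu} :
\qquad -\tfrac{1}{3} = +\tfrac{2}{3} + (-1) , \qquad -1 = -1 + 0 .
```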

Because the W boson is much heavier than the up and down quarks—in fact, it's much heavier than the entire proton—it is necessarily a virtual particle that can only exist for a short time. One can imagine that the system has to 'borrow' energy to create the W, so the Heisenberg uncertainty principle tells us that it has to give back the energy very quickly. Thus the W can't travel very far before decaying, and we say that it mediates a "short range force." This is also why the weak force is sometimes called the weak nuclear force. Compare this to photons, which have no mass and hence mediate a "long range force."

[We now know, however, that it is not intrinsically a nuclear force (in our theory above we never mentioned quarks or nuclei), and further its ‘weakness’ is related to the mass of the W making it a short-range force.]
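For readers who like numbers, here's how short "short range" is. The uncertainty-principle argument above translates into a range of order ħ/(Mc) for a force carried by a particle of mass M, so for the W:

```latex
% Rough range of a force mediated by a particle of mass M_W ~ 80 GeV:
R \;\sim\; \frac{\hbar c}{M_W c^2}
\;\approx\; \frac{197\ \text{MeV}\cdot\text{fm}}{8 \times 10^{4}\ \text{MeV}}
\;\approx\; 2 \times 10^{-3}\ \text{fm} ,
```

a few hundred times smaller than a proton. This is why, at nuclear energies, the weak interaction looks essentially like a point-like contact interaction.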

Cheers!
Flip (USLHC)


New Q&A websites for physics

Wednesday, June 23rd, 2010

I’m always intrigued by new ways to use the Internet to improve the way we do and share physics. It was something of a coincidence that within a week of each other I received two e-mails introducing new question-and-answer websites of interest to the high energy physics community and the general public interested in physics.

  • A proposal for a High-Energy Physics Q&A site based on the popular Stack Overflow framework. This is still in the “definition” phase where it’s looking to gather a critical number of followers and model questions to demonstrate the viability of the project. A shining example of this sort of site in a related field is Math Overflow.
  • Quora, a similar website built on a slightly different architecture. Quora is a Q&A site for any kind of question (not just science) and is tied into social networking; it requires a Facebook or Twitter account to join. Quora already has High-Energy Physics and Particle Physics sections. (I don’t actually understand the difference between the two categories.)

Both sites show a lot of promise and I look forward to seeing how they progress. These are the Web 2.0 progeny of newsgroups (like sci.physics.research) and forums (e.g. Physics Forums) that really piqued my interest in physics as a teenager.

I guess at this point I should make an obligatory reference to CERN’s role in the history of the Internet.

Anyway, I encourage people to check out the proposed HEP-overflow (my own made up name) and Quora. HEP-overflow, in particular, needs community support to move on so I especially encourage researchers to take a look at it.

Finally, as always, we’re still very happy to try to answer any questions that you leave in the comments of our blog! 🙂

Cheers,
Flip (US LHC blogs)
