Flip Tanedo | USLHC | USA


An Idiosyncratic Introduction to the Higgs

Friday, March 25th, 2011

A different presentation of the Higgs

There have been several very clever attempts to explain the Higgs to a general audience using analogies; one of my favorites is a CERN comic based on an explanation by David Miller. Science-by-analogy, however, is a notoriously delicate tightrope to traverse. Instead, we’ll take a different approach and jump straight into the physics. We can do this because we’ve already laid the groundwork to use Feynman diagrams to describe particle interactions.

In the next few posts we’ll proceed as we did with the other particles of the Standard Model and learn how to draw diagrams involving the Higgs. We’ll see what makes the Higgs special from the diagrammatic point of view, and then gradually unpack the deeper ideas associated with it. The approach will be idiosyncratic, but I think it is closer to the way particle physicists really think about some of the big ideas in our field.

In this first post we’ll start very innocently. We’ll present simplified Feynman rules for the Higgs and then use them to discuss how we expect to produce the Higgs at the LHC. In follow-up posts we’ll refine our Feynman rules to learn more about the nature of mass and the phenomenon called electroweak symmetry breaking.

Feynman Rules (simplified)

First off, a dashed line represents the propagation of a Higgs boson:

You can already guess that there’s something different going on since we haven’t seen this kind of line before. Previously, we drew matter particles (fermions) as solid lines with arrows and force particles (gauge bosons) as wiggly lines. The Higgs is indeed a boson, but it’s different from the gauge bosons that we’ve already met: the photon, W, Z, and gluon. To understand this difference, let’s go into a little more depth on this:

  • Gauge bosons, the particles which mediate “fundamental” forces, carry angular momentum, or spin. They carry one unit of spin; roughly, this means that if you rotate a photon by 360 degrees, it returns to the same quantum mechanical state.
  • Fermions, matter particles, also carry angular momentum. However, unlike gauge bosons, they carry only half a unit of spin: you have to rotate an electron by 720 degrees to get the same quantum state. (Weird!)
  • The Higgs boson is a scalar boson, which means it has no spin. You can rotate it by any amount and it will be the same state. All scalar particles are bosons, but they don’t mediate “fundamental” forces in the way that gauge bosons do.

This notion of spin is completely quantum mechanical, and it is a theorem that any particle with whole number spin is a boson (“force particle”) and any particle with fractional spin is a fermion (“matter particle”). It’s not worth dwelling too much on what kind of ‘force’ the Higgs mediates—it turns out that there are much more interesting things afoot.
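
For the mathematically inclined, here is a sketch of the textbook quantum mechanics behind those rotation statements (nothing specific to the Higgs): a state with spin projection m along the rotation axis simply picks up a phase.

```latex
\[
  R(\theta)\,\lvert m \rangle \;=\; e^{-i m \theta}\,\lvert m \rangle
\]
% photon   (m = \pm 1):   e^{\mp 2\pi i} = +1  after a 360-degree turn
% electron (m = \pm 1/2): e^{\mp \pi i}  = -1  after 360 degrees; back to +1 only after 720
% Higgs    (m = 0):       e^{0}          = +1  for any rotation angle
```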

Now let’s ask how the Higgs interacts with other particles. There are two Feynman rules that we can write down right away:

Here we see that the Higgs can interact with either a pair of fermions or a pair of gauge bosons. This means, for example, that a Higgs can decay into an electron/positron pair (or, more likely, a quark/anti-quark pair). For reasons that will become clear later, let’s say that the Higgs can interact with any Standard Model particle with mass. Thus it does not interact with the photon or gluon, and for argument’s sake we can ignore the interaction with the neutrino.

The interaction with fermions is something that we’re used to: it looks just like every other fermion vertex we’ve written down: one fermion coming in, one fermion coming out, and some kind of boson. This reflects the conservation of fermion number. We’ll see later that because the Higgs is a scalar, there’s actually something sneaky happening here.

Finally, the Higgs also interacts with itself via a four-point interaction: (This is similar to the four-gluon vertex of QCD.)

There are actually lots of subtleties that we’ve not mentioned and a few more Feynman rules to throw in, but we’ll get to these in the next post, when we’ll see what happens when the Higgs gets a “vacuum expectation value”. Please, no comments yet about how I’m totally missing the point… we’ll get to it all gradually, I promise.

Higgs Production

Thus far all we’ve been doing is laying the groundwork in preparation for a discussion of the neat things that make the Higgs special. Even before we get into that stuff, though, we can use what we’ve already learned to talk about how we hope to produce the Higgs at the LHC. This is an exercise in drawing Feynman diagrams. (Review the old Feynman diagram posts if necessary!)

The general problem is this: at the LHC, we’re smashing protons into one another. The protons are each made up of a goop of quarks, antiquarks, and gluons. This is important: the protons are more than just three quarks! As we mentioned before, protons are terribly non-perturbative objects. Virtual (anti-)quarks and gluons are being produced and reabsorbed all over the place. It turns out that the main processes that produce Higgs bosons in proton collisions come from the interaction of these virtual particles!

One of the main “production channels” at the LHC is the following gluon fusion diagram:

This is kind of a funny diagram because there’s a closed loop in the middle. (This makes it a very quantum effect… and somewhat more tricky to actually calculate.) What’s happening is that a gluon from one proton and a gluon from the other proton interact to form a Higgs. However, because the gluons don’t directly interact with a Higgs, they have to do so through quarks. It turns out that the top quark—which is heaviest—has the strongest interaction with the Higgs, so the virtual quarks here are tops.
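
To get a feel for why the top dominates, here’s a minimal numerical sketch. It assumes the Standard Model relation between a fermion’s coupling to the Higgs (its Yukawa coupling) and its mass, y = √2·m/v with v ≈ 246 GeV; the masses below are rough values used only for illustration.

```python
# Minimal sketch: Standard Model Yukawa couplings, y_f = sqrt(2) * m_f / v.
# Masses are approximate values in GeV, just to show the hierarchy.
from math import sqrt

v = 246.0  # Higgs vacuum expectation value in GeV
masses = {"electron": 0.000511, "bottom": 4.2, "top": 172.0}

for name, m in masses.items():
    y = sqrt(2) * m / v
    print(f"{name:8s}  m = {m:8.4f} GeV   y = {y:.2e}")

# The top coupling is order one while the others are tiny, which is why the
# quark loop in gluon fusion is dominated by virtual top quarks.
```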

Another way to get a Higgs is associated production with a top pair. The diagram looks like this:

Here gluons again produce a Higgs through top quarks. This time, however, a top quark and an anti-top quark are also produced along with the Higgs. We can draw a similar diagram without the gluons:

This is called vector boson fusion, because virtual W or Z bosons fuse to produce a Higgs. Note that we have two quarks being produced as well.

Finally, there is associated production with a W or Z. As homework you can fill in the particle labels assuming the final gauge boson is either W or Z:

There are other ways of producing a Higgs out of a proton-proton collision, but these are the dominant processes. While we know a lot about the properties of a Standard Model Higgs, we still don’t know its mass. It turns out that the relative rates of these processes depend on the Higgs mass, as can be seen in the plot below (from the “Tevatron-for-LHC” report):

The horizontal axis is the hypothetical Higgs mass, while the vertical axis measures the cross section for Higgs production by the various labeled processes. For our purposes, the cross section is basically the rate at which these processes occur. (Experimentally, we know that a Standard Model Higgs should have a mass between about 115 GeV and 200 GeV.) We can see that gg → h is the dominant production mechanism throughout the range of possible Higgs masses—but this is only half of the story.
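
To make “the cross section is basically the rate” concrete, here is a back-of-the-envelope sketch (not an actual LHC analysis): the expected number of Higgs bosons produced is the cross section times the integrated luminosity, N = σ × L. The numbers below are round, invented values used only for illustration.

```python
# Rough event-count estimate: N = cross section x integrated luminosity.
# The numbers are illustrative placeholders, not official LHC values.

sigma_pb = 20.0    # hypothetical gg -> h cross section, in picobarns
lumi_fb = 1.0      # hypothetical integrated luminosity, in inverse femtobarns

lumi_pb = lumi_fb * 1000.0     # 1 fb^-1 = 1000 pb^-1
n_higgs = sigma_pb * lumi_pb
print(f"Expected Higgs bosons produced: {n_higgs:.0f}")   # 20000
```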

We don’t actually directly measure the Higgs in our detectors because it decays into lighter Standard Model particles. The rates at which it decays to different final states (“branching ratios”) are plotted above, image from CDF. This means we have to tell our detectors to look for the decay products of the Higgs in addition to the extra stuff that comes out of producing the Higgs in the first place. For example, in associated production with a top pair, we have gg → tth. Each of the tops decays into a b quark, a lepton, and a neutrino (can you draw the diagram showing this?), while the Higgs also decays—say, into a pair of b quarks. (For now I’m not distinguishing quarks and anti-quarks.) This means that one channel we have to look for is the rather cumbersome decay,

gg → tth → blν blν bb

Not only is this a lot of junk to look for in the final state (each of the b quarks hadronizes into a jet), but there are all sorts of other Standard Model processes which give the same final state! Thus if we simply counted the number of “four jets, two leptons, and missing energy (neutrinos)” events, we wouldn’t only be counting Higgs production events, but also a bunch of other background events which have nothing to do with the Higgs. One has to predict the rate of these background events and subtract them from the experimental count. (Not to mention the task of dealing with experimental uncertainties and possible mis-measurements!)

The punchline is that it can be very tricky to search for the Higgs and that this search is very dependent on the Higgs mass. This is why we may have to wait a few years before the LHC has enough data to say something definitive about the Higgs boson. (I’ve been somewhat terse here, but my main point is to give a flavor of the Higgs search at the LHC rather than explain it in any detail.)

As a single concrete example, consider the gluon fusion production channel, gg → h. This seems nice since there are no extra particles in the production process. However, from the plot above, we can see that for relatively light masses (less than 140 GeV) the Higgs will want to decay into b quarks. This is no good experimentally since the signal for this has a hopelessly large background from non-Higgs events.

In fact, rather counterintuitively, one of the best ways to use gluon fusion to search for a light Higgs is to look for instances where it decays into a pair of photons! This is really weird since the Higgs doesn’t interact directly with photons, so this process must occur through virtual quarks, just like the Higgs-gluon coupling above. As the branching ratio chart above shows, this is a very rare process: the Higgs doesn’t want to decay into photons very often. However, the upshot is that there aren’t many things in the Standard Model which can mimic this “two photon” signal, so there is very little background. You can see that this stops working if the Higgs is too heavy, since the decay rate into photons shrinks very quickly.
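
Here is a cartoon of that trade-off. A rough figure of merit for a counting search is the signal count divided by the square root of the background count, S/√B; the numbers below are invented purely to show why a rare-but-clean channel can beat a common-but-messy one.

```python
# Cartoon of search sensitivity, S / sqrt(B), for two hypothetical channels.
# All numbers are invented for illustration; they are not measured rates.
from math import sqrt

channels = {
    # name: (expected signal events, expected background events)
    "h -> bb          (large rate, huge QCD background)": (1000.0, 1.0e6),
    "h -> gamma gamma (tiny rate, small background)":     (50.0, 400.0),
}

for name, (s, b) in channels.items():
    print(f"{name:55s}  S/sqrt(B) = {s / sqrt(b):.1f}")

# Even though the diphoton signal is 20 times smaller, its far smaller
# background makes it the more sensitive channel in this toy example.
```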

Next time

In our next post we’ll introduce a completely new type of Feynman rule representing the Higgs “vacuum expectation value.” In doing so we’ll sort out what we really mean when we say that a particle has mass and continue our march towards the fascinating topic of electroweak symmetry breaking (“the Higgs mechanism”).


A couple of lectures: the science of nuclear meltdowns & Coleman’s QFT

Thursday, March 24th, 2011

Lecture on Fukushima radiation

Forgive me for digressing a bit from the LHC focus of this blog, but I wanted to take time to share a timely and accessible public lecture by UCSB particle physicist Benjamin Monreal about the science of the Fukushima reactor meltdowns in Japan. I strongly recommend it for those who want to be able to make sense of the news regarding radiation in Japan and elsewhere.

One of our jobs as scientists is to be there to inform the public when something like this happens, and Benjamin rises to the occasion with exceptional clarity. As he mentions towards the end of his talk, it is often the case that misinformation is one of the biggest dangers after an event like this, and he goes a long way to explain what’s actually going on. I learned quite a lot from the presentation and it has helped me provide a proper scientific context for the news about the region.

It’s always very difficult to cope with the aftermath of a natural disaster on the scale of the Tohoku earthquake two weeks ago. The particle physics community is especially international and the news of the disaster hit quite close to home for many of us with friends and colleagues in Japan. Our hearts go out to everyone affected.

Coleman’s QFT Lectures

Sidney Coleman. Image from L. Motl.

Now to change gears quite a bit, I’d like to share another link that has been making a splash in particle physics circles: a typed up version of Sidney Coleman’s 1985-1986 Physics 253a quantum field theory course at Harvard, thanks to the heroic typesetting efforts of Bryan Gin-ge Chen and Ting Yuan Sen. (See also the videos of the lectures from 1975-76.) The link is perhaps most useful for young physicists who are learning field theory (or older physicists who are teaching field theory), but as a concession for the non-physicists reading this blog, here’s a link to Coleman’s well known seminar, “Quantum Mechanics in your face.”

Let me provide some background. Sidney Coleman is one of the towering figures of theoretical physics in our time and one of the true masters of quantum field theory. While he doesn’t have the same popular image as Richard Feynman, his unique charm and wit as well as his dexterity as a teacher are nothing short of legendary in the physics community.

Coleman’s life and work were commemorated at Harvard in 2005 at “Sidneyfest.” The list of famous presenters and speakers speaks volumes about Coleman’s influence. Sadly, Coleman passed away in 2007. He left behind an indelible mark on the history of quantum field theory as well as several lectures (most notably his Erice lectures, published in Aspects of Symmetry) which continue to educate generation after generation of particle physicists.


My “Workbench” (2011 edition)

Monday, March 7th, 2011

Hi everyone! Today I wanted to share something that is less about physics, but more about what it’s like to be a physicist. For those who have been asking for more “Physics through Feynman Diagrams” posts, don’t worry: I’ve been spending a lot of time thinking about how to explain the Higgs mechanism, and this is something I’m looking forward to typing up.

One of my first blog posts on US LHC was titled “My Workbench,” following the style of the regular column in Seed magazine. This semester our group finally moved to a new building, so I wanted to give a snapshot of my research environment. So, without further ado, here’s my annotated office.

Before going into details… yes, it’s an office. I don’t have a lab, I don’t wear a lab coat, I don’t even wear closed-toed shoes. (When indoors I’m usually shuffling around in comfy Birkenstocks.) This is partly because I’m a theorist and my experimental colleagues wouldn’t let my clumsy hands anywhere near lab equipment—but actually most experimentalists have very similar offices where they do much of their analysis work (Christine’s rappelling onto the ALICE detector notwithstanding!).

1. One important feature is a nice window that gets natural lighting (it looks into an atrium with a skylight, which is why the immediate view is the building next door). It may sound a bit superficial, but we often work long hours and getting some sunlight makes a big difference. My officemates and I also have several plants, which brighten the atmosphere a bit. (You can tell that my plant has grown since my original workbench post!)

2. Penguins. This is a bit of an inside joke, but two of my first projects as a graduate student had to do with the calculation of a particular process called a ‘penguin diagram.’ I seem to have collected various penguin-themed posters and stuffed animals from friends who were amused by my odd paper and talk titles (most recently, “Warped Penguins”).

3. Headphones. It’s nice to have some ambient background noise when concentrating. Some of the other students in my group swear by pink noise, but I’m usually listening to something silly like this spoof from PhD comics. These headphones are also very helpful when having Skype discussions with collaborators who are far away.

4. Here you can see my basketball shoes. I haven’t played basketball in a while (these days I spend more time swimming), but as a student it is really important to maintain some balance in one’s life; otherwise it’s easy to go off the deep end. Other common recreational activities in our group include foosball and ping pong. I’ve found that many of my most interesting physics discussions have happened during non-physics recreation with other physicists.

5. This is my trusty messenger bag, which I carry with me everywhere I go like Linus and his blanket. I usually carry a notepad with the current research idea I’m obsessing over, my laptop, and several journal articles which I’m supposed to read. Most of the latter get skimmed over on the bus in the mornings. During the winter (which seems to last forever here in upstate New York) I usually have an extra pair of gloves and a pull-over packed in case of inclement weather.

6. This is my messy desk. Usually various bits of scratch paper, print-outs, and reference books find their way strewn about. I’m a relatively tidy person and clean up every other night or so, but my style of working is to sprawl things out as I use them. (I’m sure my officemates get annoyed by this, but thus far they’ve been very accommodating.)

7. In my upper storage units I have several reference books. You can also see my ping pong paddle peeking through. When I’m confused with my work I’ll start pulling down books to try to sort things out… and when that doesn’t work, I’ll try to work through the problem with a colleague over ping pong. 🙂

8. This is my officemate’s desk. There’s a total of three of us in the office, which I think is a good balance of having company while not becoming too distracting. It’s nice to be able to have other people around when you have “stupid questions” about physics… especially since more often than not these “stupid questions” can have surprisingly profound answers. You can see that my neighbor is much neater than I am. 🙂

Here’s a close-up of my desk:

9. This is an old t-shirt from the SLAC National Accelerator Laboratory. I fell in love with particle physics as a undergraduate at Stanford and got my first taste of research at SLAC as a summer student. I highly recommend the experience to all undergraduates interested in particle physics.

10. Unlike some of the other students, I don’t usually have lots of work on the tack board directly in front of me. Instead, I’ve put up several photos of other physicists, mostly graduate students whom I have gotten to know over the past few years. (If you’ve spent any appreciable time doing particle physics with me, I probably have your photo up here.) This is more than just nostalgia; a large part of doing physics is collaboration. The best way to generate and test new ideas is to bounce them off of colleagues and work with people with complementary skill sets. As such, there is a strong sense of camaraderie in the particle physics community. The people whom you get to know in grad school tend to be the same people you keep bumping into and working with for the rest of your career.

11. This is my laptop. I spend a lot of time on it. These days we access all of the latest research papers directly from the Internet, we communicate with collaborators using video conferencing, we correspond via e-mail, we run simulations over computing farms… all through our computers. That being said, I’ve also been working on knowing when to turn off my computer (and its associated distractions) when I need to hunker down and do an old-fashioned pen-and-paper calculation.

12. Here are some of my most commonly used books. Apparently you can tell a lot about a physicist by what kinds of books are on his or her bookshelf… so for those of you who care, here’s a short summary of what I keep at arm’s reach:

  • The QFT books by Peskin, Ryder, Weinberg, Zee, and Srednicki
  • Aspects of Symmetry by Coleman
  • Current Algebra and Anomalies (an old volume of reprints)
  • The particle physics books by Cheng & Li and Mohapatra
  • The SUSY texts by Terning, Bailin & Love, Wess & Bagger, and Binetruy
  • Some mathematically-oriented reading: Anomalies in QFT by Bertlmann, the text by Nakahara, and the monograph by Göckeler and Schücker
  • Perfectly Reasonable Deviations from the Beaten Path, a collection of Richard Feynman’s letters. This isn’t a physics book, but I find it inspiring to skim through it when I’m having a rough research day.
  • Just for fun: on my desk I also have two volumes of PhD comics, the “Scientific Progress goes Boink” Calvin and Hobbes collection, and Our Dumb World by the Onion

I have a bunch of other books… but they’re hidden behind the plush versions of the Standard Model, via the Particle Zoo:

13. This is a plush dog that I picked up during a Guy Fawkes carnival at Cambridge University. It’s followed me through my postgraduate studies. (There’s a lot of silliness in my office; would you believe me if I said that it is balanced by a very serious focus on my research? Well, that’s what I tell my adviser, anyway…)

14. When I finish up a project I have several sheets of hand-written notes and calculations. I’ve collected these into large binders, initially as “trophies” of past work, but I’ve since found that there are times when one has to refer back to a subtle detail long after the project is done.

15. I should note that this sheep is not part of the Standard Model. I think I picked it up during a wine tasting trip with one of our seminar speakers. 🙂 Below are random fridge magnets which I’ve stuck to a large filing cabinet containing several more papers: mainly saved documents, important references, notes for old talks, and material related to teaching.

Before I sign out, there is one very conspicuous omission: my office doesn’t yet have any chalkboards! (These are in the process of being installed.) Chalkboards are really useful since so much of our daily work involves explaining ideas to one another; though there’s a bit of a divide in the particle physics community between theorists who love chalkboards and experimentalists who (for reasons I don’t really understand) tend to prefer whiteboards.

Well, that’s my office! I don’t expect MTV to visit for an episode of Cribs any time soon, but at least now you know where I am when I’m typing up blog posts every couple of weeks. 🙂


Effective Theories: Dancing with the Quarks

Thursday, February 17th, 2011

Last time I posted, we looked at the “Eightfold way” classification of mesons. We argued that this is based only on symmetry and allowed physicists in the 60s to make meaningful predictions about mesons even though mesons are ultimately complicated “non-perturbative” objects where quarks and anti-quarks perform an intricate subatomic ‘dance’ (more on this below!).

Historical models of mesons

In fact, physicists even developed theories of mesons as fundamental particles—rather than bound states of quarks—which accurately described the observed light meson masses and interactions. These theories were known as “phenomenological” models, chiral perturbation theory, or nonlinear sigma models. These are all fancy names for the same idea.

The non-linear sigma model is a useful tool even in modern particle physics, as evidenced by the so-called little Higgs models. In these models the  Higgs boson is relatively light due to a mechanism called collective symmetry breaking in which multiple symmetries must be broken to generate a Higgs mass. (For  technical introductions for physicists, see here and here.) This idea that light particles come from broken symmetries has its origin in “phenomenological” models of mesons via the Goldstone mechanism.

From a formal point of view these models suffered a theoretical sickness: while they agreed well with experiment at low energies, they didn’t seem to make much sense if you used them to calculate predictions for high energies. It’s not that the predictions didn’t match with experiments, it’s that the theory seemed to make no predictions at all! (Alternately, its predictions were nonsense.) The technical name for this illness is non-renormalizability, and it was American Nobel Laureate Ken Wilson who really clarified the correct way to understand these theories.

Ken Wilson (b. 1936) may not have the same public fame as Richard Feynman or Robert Oppenheimer, but he is without a doubt one of the great American theoretical physicists of the century. His research focused on the theoretical framework of quantum field theory and its applications to both particle physics and condensed matter physics. He was one of the great thinkers of our field who really understood the “big idea,” and I think he is nothing short of a hero of modern physics.

Rather than going into the precise sense in which a non-renormalizable theory is a ‘sick’ theory, let’s emphasize Wilson’s key insight: these sick theories are fine as long as we are careful to ask the right questions. Wilson made this statement in a much more mathematically rigorous and elegant way—but in this post we’ll focus on getting the intuition correct.

Effective theories

The point is that these “non-renormalizable” theories are just approximations for the behavior of a more fundamental theory; we call the approximation an effective theory (here’s a very old post on the big idea). These approximations get the “rough behavior” correct, but don’t sweat the details. If you ask the approximate theory about the details that it neglects, it gives you a gibberish response. Wilson taught us how to understand the gibberish as the theory saying, “I’m not sophisticated enough to answer that!”

Here’s a concrete example. One of my previous posts presented a pixelated image of the Mona Lisa to demonstrate “lattice QCD.” (This is actually exactly the effective theory that Ken Wilson was working on.)

The pixelated Mona Lisa is an “effective” image with details blurred out compared to the “fundamental” image. Even with these details removed, from far away the images look the same. In fact, the effective image is sufficient to answer questions like

  • What is the overall color of the image or of different patches of the image? (Beige/brown)
  • How many figures are in the image? (One… but keep this in mind for later)

On the other hand, the effective Mona Lisa is completely unequipped to answer more subtle questions like

  • Where is the Mona Lisa looking?
  • Is the Mona Lisa happy or sad?

Okay, arguably even art historians can’t come up with answers to those questions. But the point is that the pixelated image can’t even begin to try to answer them—the questions ask about details that were left out of the “effective” image. Such questions are outside of the domain of validity of the effective image.

Now here’s a very important lesson in particle physics:

Models of particle physics also have a domain of validity, beyond which they are ill equipped to make sensible predictions.

For some models, like the effective theories of mesons, asking questions outside of the model’s domain of validity leads to nonsense answers. On the other hand, within the domain of validity the models are perfectly predictive. In fact, different “effective models” have to agree when their domains of validity overlap. Here’s an example from an old post where classical electromagnetism is an effective theory for quantum electrodynamics, as manifested by the formula for the electric field.

Dancing with the quarks

Now let’s get back to mesons, albeit through an analogy. We know that a pion is really a quark–anti-quark pair caught up in a subatomic dance. They spin about one another, exchange gluons, and can even interact with other particles as a joint entity. Here’s a rough picture:

But here’s the thing: that’s the picture that we see only if we can really look very closely and observe the quarks directly. This requires having front row seats at “Dancing with the Quarks” (or at least an HDTV). For someone who can only watch the broadcast at low resolution, the dance looks very different: everything is blurred out:

In fact, this is now just like the case of the pixelated Mona Lisa. Note that because the quarks are so meticulously coordinated, the blurry picture looks like there’s only one object dancing! We call that object a pion and we can make careful measurements of how it spins and interacts… all without knowing that if we only had better resolution we would actually see two quarks dancing in unison rather than one pion.

This brings us back to the state of particle physics in the 1960s. We can create an entire effective theory to describe the pion, but we have to accept that we’ve put on our fuzzy glasses and can’t make out any details. We can’t ask our effective theory something like “how many hands are in the picture above?” Well, it looks like two… but it’s hard to be sure. I could ask an even more difficult question: what is the gender of the dancers in the picture above? Now the effective theory completely falls apart. Any answer that it can give must be manifestly wrong because it doesn’t even know that there are two dancers, much less the particular gender of either. In the same way, the effective theories of mesons seemed to fall apart when you asked questions about energies higher than their regime of validity.

Modern Effective Theories

Let me end by remarking that even though the underlying goal of high energy physics is to probe nature at a fundamental level, effective theories are still incredibly useful tools.

  1. Matching theories to low-energy experiments. It is often the case that theories of exotic new particles at high energies are constrained by experiments that are conducted at much lower energies. For example, many models of new physics are limited by how they would affect the physics of ordinary W and Z bosons.  By writing an effective theory of W and Z bosons that parameterizes the effect of new physics, we can provide robust bounds on the properties of whatever new particles appear at high energies. (For experts: these are the electroweak precision constraints, see hep-ph/0405040, hep-ph/0412166, hep-ph/0604111) The analogy to the dancing quarks is to use the blurry picture to tell us that, “I don’t know how many hands there are, but if there are more than two, then they have to be pretty close to one another.” (For experts: this approach has recently been applied to direct detection of dark matter.)
  2. “Phenomenological models.” In the previous case we simplify a calculation of a fundamental theory by working with an effective theory; this is a top-down approach. We can also consider the bottom-up approach where we write down a model that describes known low-energy physics and figure out at what energy it breaks down. We can then predict that there should be some new physics not encapsulated in our model appearing at those energies. This is where we are with particle physics: we have observed a bunch of neat particles and measured their properties—but the entire framework breaks down somewhere around the TeV scale unless we have something like the Higgs boson appearing. (A schematic version of this estimate is sketched just after this list.)
  3. Strong coupling and duality. This brings us back to mesons. Recall that our effective meson theory was a way for 1960s physicists to describe the particles coming out of early colliders without ever having to worry about the horrible non-perturbative QCD substructure that we now know is actually there. In some cases, there is a much stronger relation between the fundamental and effective theories, and the two theories are said to be dual to one another. The 1990s were revolutionary for the development of formal dualities between seemingly unrelated theories: Witten’s web of dualities in M-theory, Seiberg duality in supersymmetric gauge theories, and gauge/gravity dualities like the AdS/CFT correspondence proposed by Maldacena. (For theoretical physics fans: those are some really big names in the field; each one of them is a MacArthur “Genius” fellow!)
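
Here is the schematic version of the “breaks down around the TeV scale” statement from item 2. It is only an order-of-magnitude estimate, but it captures why the Higgs (or something playing its role) is expected near LHC energies.

```latex
% Without a Higgs, the amplitude for scattering longitudinally polarized
% W bosons grows with the collision energy E,
\[
  \mathcal{A}\,(W_L W_L \to W_L W_L) \;\sim\; \frac{E^2}{v^2},
  \qquad v \simeq 246~\mathrm{GeV},
\]
% while conservation of probability (unitarity) caps the amplitude at an
% order-one number. Capping E^2/v^2 at its allowed value (a number of order
% ten once the factors of pi are included) gives E roughly of order a TeV:
% the scale where the Higgs-less effective theory must be replaced by
% something new.
```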

Anyway, there’s a surprising amount of “deep” physics that one can glean from thinking about mesons… even if they are somewhat “boring” particles that aren’t even fundamental. The notion of effective field theory is one of the central pillars of particle physics (as well as statistical physics), and in fact perhaps provides the most solid intuition about the entire field of high energy physics.


No love for low scale supersymmetry at the LHC

Monday, February 14th, 2011

Happy Valentine’s Day everyone… well, unless you were expecting hints for supersymmetry (SUSY) at the LHC. Last night the ATLAS collaboration posted the results for one of its supersymmetry searches to the arXiv. They corroborate last month’s results from CMS on a similar type of search. (The CDF site has an excellent public summary that should be at the right level for physics enthusiasts with no formal background.)

What is supersymmetry?

Supersymmetry is an extension of the Standard Model in which every particle and anti-particle has a superpartner particle with a silly name, such as “gluinos” as the partners of gluons and “squarks” as the partners of quarks. The neat thing about supersymmetry is that the partner of a matter particle is a force particle (with a prefix s-), while the partner of a force particle is a matter particle (with a suffix -ino). SUSY does a lot of great stuff for us theoretically, but it must be broken so that the Standard Model particles and the SUSY particles are split up and have different masses. Because this is Valentine’s day, let’s leave the details of this splitting up to another post.

What is the LHC telling us?

Here’s one of the key plots from the ATLAS paper (which includes the CMS result):

I’ll not get into the details here and will keep the discussion as accessible as possible. The axes of the plot are parameters in a particular supersymmetric model. The horizontal axis is the “universal scalar mass” m0 (related to the mass of the squarks) while the vertical axis is the “universal gaugino mass” (related to the mass of the gluino and its cousins). The area inside the curves (lighter masses) is ruled out. The red line is the ATLAS result, the black line is the recent CMS result, and the other lines are various exclusions from older experiments.

These parameters aren’t quite the same as the masses of the superpartners, but they are related by some formulae which experts in the field have memorized. A good estimate for the stringency of the bounds on the actual superpartner masses comes from the conclusion of the paper:

For a chosen set of parameters within MSUGRA/CMSSM, and for equal squark and gluino masses, gluino masses below 700 GeV are excluded at 95% CL.

Some translations:

  • MSUGRA/CMSSM: These stand for “minimal supergravity” and “constrained minimal supersymmetric Standard Model.” The most general supersymmetric version of the Standard Model has over 115 free parameters… this would be a nightmare to plot. For simplicity, experimentalists typically plot their results against simplified reference models with much smaller parameter spaces.
  • Squark and gluino masses: squarks are the partners of quarks and gluinos are the partners of gluons. The experiment is setting a lower bound on these masses. (Recall: heavier things are harder to produce.) The 700 GeV lower bound on the squark/gluino mass (in the case where they’re equal) is much heavier than any particle in the Standard Model—recall that the top quark is ‘only’ 172 GeV.
  • 95% CL. This is a confidence level that quantifies the statistical strength of the bound. Roughly it answers the question, “based on the data, how sure are you of the statement you’re making?” Here’s a great explanation. (A toy numerical version of such an exclusion is sketched just after this list.)
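
As a toy version of what “excluded at 95% CL” means in a simple counting experiment (this is not the actual ATLAS statistical machinery, which is considerably more sophisticated): a hypothetical signal rate s is excluded if, assuming background b plus signal s, the chance of seeing as few events as were actually observed drops below 5%. The background and observed counts below are made up.

```python
# Toy counting-experiment exclusion; not the actual ATLAS procedure.
# A signal hypothesis s is "excluded at 95% CL" here if, given an expected
# background b plus signal s, the probability of observing n_obs or fewer
# events is less than 5%.
from scipy.stats import poisson

b = 10.0      # expected background events (made-up number)
n_obs = 9     # observed events (made-up number)

for s in [1, 5, 10, 20, 40]:
    p = poisson.cdf(n_obs, b + s)   # P(N <= n_obs | mean = b + s)
    verdict = "excluded" if p < 0.05 else "allowed"
    print(f"signal = {s:3d}   P(N <= {n_obs}) = {p:.3f}   -> {verdict}")
```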

What’s actually happening at the LHC?

The general idea is that a common feature of most SUSY models is that when a supersymmetric partner is produced at a collider, it will eventually decay into familiar stuff and a particle which escapes undetected. This escaping particle is called the lightest supersymmetric particle (LSP) and is a natural dark matter candidate, but its presence is only inferred experimentally because the measured momenta of all the familiar stuff don’t balance. Thus a good way to search for the presence of supersymmetric partners is to look for:

  1. High energy “normal” particles (typically QCD “jets”)
  2. Large “missing energy,” i.e. momentum that doesn’t add up

The high energies are important to tell us that something heavy (like a new particle) may have been involved, and the missing energy is important to tell us that something escaped undetected. By looking for decays of this type, ATLAS and CMS are able to constrain the existence of supersymmetric partners up to a certain mass. In fact, the reason why the LHC has been able to greatly improve the bounds on SUSY—even at such an early stage of running—is that the previous constraints from the Tevatron were limited not by how much data they could take, but by the energy scale of the collision.

Here’s an example, another plot from the ATLAS paper:

This plot shows the number of events in a particular range of “effective mass,” a kind of kinematic variable which characterizes the energy of an event. Here’s what’s happening:

  1. ATLAS recorded a bunch of data over the past year or so. For each recorded particle collision (“event”), ATLAS stores information about what its detectors see (the “signal”).
  2. Physicists go through this data when they want to search for new particles. The set of physicists who worked on this search focused only on the events whose signals included a lepton (e or μ), QCD jets (quarks and gluons), and missing energy.
  3. They then plot the number of events whose “effective mass” is in a certain energy range (a toy definition of this variable is sketched just after this list). This gives the data points on the plot above.
  4. In order to compare to the Standard Model, they run a “Monte Carlo” simulation of the kind of signal that known physics would produce in this particular channel. These are all of the different colored pieces of the histogram—they represent events that we expect to be counted even if there is no new physics in these events.
  5. If the data points line up with the sum of expected events, then we conclude (up to a certain statistical significance) that there was no new physics observed.
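
For concreteness, here is a sketch of one common definition of the “effective mass” used in searches of this type; the exact definition varies from analysis to analysis, so treat this as illustrative.

```python
# Sketch of a typical "effective mass" variable (definitions vary by analysis):
# the scalar sum of jet and lepton transverse momenta plus the missing energy.

def effective_mass(jet_pts, lepton_pt, missing_et):
    """All inputs in GeV; returns m_eff in GeV."""
    return sum(jet_pts) + lepton_pt + missing_et

# A made-up event: three hard jets, one lepton, and large missing energy.
m_eff = effective_mass(jet_pts=[180.0, 120.0, 75.0],
                       lepton_pt=40.0,
                       missing_et=150.0)
print(f"m_eff = {m_eff:.0f} GeV")   # 565 GeV
```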

For reference, the dotted line is the expected contribution from one particular choice of SUSY parameters. That line would have to be added to the Standard Model sum (shown as a thin red line); clearly the data points do not show this excess.

What does this mean for supersymmetry?

This isn’t great news for supersymmetry. One of the appealing features of supersymmetry is that it can solve the hierarchy problem of the Higgs mass. This problem is only really solved, however, if the SUSY particles are not that much heavier than their Standard Model partners. Thus the more we push up the lower bound on the super partner masses, the more trouble we have explaining the Higgs paradigm within the Standard Model.

I think I am not yet enough of an expert to comment on how severe the recent ATLAS/CMS results are in terms of current favorite models of supersymmetry. However, I will note that the particular model that was used to make these bounds represents a very narrow subset of possible supersymmetric extensions of the Standard Model. As explained above, this is by necessity: a plot over a 115-dimensional parameter space is simply not possible. Most of these parameters are related in plausible ways and the bounds from ATLAS and CMS are probably fairly robust over huge swaths of parameter space, but in principle there is a lot of freedom to tweak a parameter here or there to try to evade particular experimental bounds. [For experts: last I heard there was some nit-picking about the tan-β dependence of these results?]

This is actually a fairly important point. For the past two decades theorists have worked hard to come up with clever supersymmetric models which can either give novel experimental signatures or which are otherwise “generic” in a way that is not captured by the usual models used to experimentally constrain SUSY. With the advent of the LHC era, however, more thought has gone into better interfacing with our experimental colleagues to connect the results of the LHC to a more robust set of SUSY parameters. (This is part of a larger shift in the particle physics community over the past decade to have better communication between our theoretical and experimental practitioners.)

Anyway, there’s one thing that’s for sure: the Standard Model particles will be without super partners once again on Valentine’s day.

PS — [from Cosmic Variance] apparently the White House is also due to release its FY2012 budget request this Valentine’s day. Given the push towards spending cuts, it’s not looking like fundamental science will get much love… but I’m crossing my fingers anyway. (I don’t want to get political, but fundamental research is an investment in the American science and engineering infrastructure and the future of the American economy.)


Mysteries of Mesons: The Eightfold Way

Saturday, January 29th, 2011

Hi everyone! I’m going to make a brief digression from my Feynman diagram posts because there are a few important ideas that I wanted to explain before I get to the Higgs and more speculative scenarios. I’ve been meaning to explore some of these ideas in the context of meson physics for some time, but my draft post ended up getting longer and longer until I decided to cut it up into shorter bite-sized pieces; this is the first piece.

Recall that mesons are bound states of a quark and an antiquark (a kind of quark ‘atom’).  They are interesting because they capture a lot of “known unknowns.” Quantum chromodynamics can, in principle, tell us everything we would want to know about the meson system, but it’s very difficult (in many cases practically impossible) to calculate anything from these first principles. We already know why: non-perturbativity.

But here’s the funny thing: we’ve known about mesons for a very long time, much longer than we’ve known about the fundamental quarks and gluons that make up a meson. Instead of discovering the “fundamental” objects first and then observing the complicated dynamics that the “fundamental” theory (QCD) generates, physicists at early colliders found a plethora of these funny particles and had no idea where they came from and why there were so many of them. They knew that these particles could interact with one another, for example by looking at bubble chamber tracks (image from BNL):

In these posts we’ll explore a little about how past generations of physicists developed some theories of mesons. Even though this type of physics is more than half a century old, it represents a fantastic time when new particles were being discovered every month. There are lessons from that time that will carry over to the interpretation of new results from the LHC. Further, we’ll see how some of the theoretical ideas developed at the time have continued to develop in surprising new ways.

The Eightfold Way

One of the first things that physicists wanted when they found all of these new particles was to find a way to classify them.

Jim’s inaugural post gave a nice example of the “Eightfold Way,” a sort of periodic table of hadrons originally developed by Murray Gell-Mann (and independently by Yuval Ne’eman) in the 60s. Jim showed the baryon table showing the proton, neutron, and some of their more exotic cousins. Here is the analogous meson table:

Before explaining what’s going on here, we can learn a few things just by staring at this picture.

  1. Each dot represents a meson. There are three types of particle names: the pions (π), the kaons (K), and the etas (η).
  2. Evidently there’s some meaning to the placement of each particle relative to the others.
  3. The mesons each have an electric charge: +, -, or neutral (0).
  4. It looks like opposite points of the hexagon are antiparticles of one another since we expect antiparticles to have the opposite charge. (This is indeed the case.)

So we’ve met our first nine mesons. These turn out to be the lightest mesons, and in fact the pions are the very lightest mesons. There are actually many, many, many mesons out there, but for now let’s focus on the lightest ones. The pions are all made up of up and down quarks, the kaons contain a strange quark, and the etas are quantum superpositions of up–anti-up, down–anti-down, and strange–anti-strange quarks.

Just like the periodic table of chemistry, however, the peculiar arrangement of this diagram is also trying to teach us something. You might think that it would be useful to arrange these mesons according to the quark content. There are two problems with this:

  1. The eightfold way was developed before quarks were experimentally discovered. (Actually, the eightfold way provided an important part of the theoretical structure that led people to suspect that quarks might be real!)
  2. As we saw for the etas, some mesons are not well defined in terms of individual quark/anti-quark pairs but rather as quantum superpositions of several types of quark/anti-quark pairs. In fact, this is true for the neutral pions and kaons as well.

So the Eightfold Way is not quite organized according to quark content, at least not directly. The structure of the diagram is actually based on the symmetries of the mesons. The branch of mathematics that describes symmetries is called group theory (in particular, representation theory) and is now a staple in the education of every particle physicist. Back in the 1960s, however, the field was not so well known to physicists and Murray Gell-Mann essentially re-invented the relevant mathematics for himself. (Historically this has happened fairly often between mathematicians and physicists.)

On the horizontal axis of the diagram is (the third component of) something called isospin, I. On the vertical axis is something else called hypercharge, Y. For now all that matters is that the usual electric charge is given by Q = I + Y/2 (Edit 31 Jan: thanks to reader Stan for pointing out the factor of 1/2 that I originally missed!). This is indeed the pattern that we see: mesons that are higher and further right tend to be positively charged, while mesons that are lower and to the left tend to be negatively charged. By the way, at this point we don’t need to know very “deeply” what these things mean, but they are properties which particles have. Just as we can describe a circle simply by specifying its radius, we can describe particles by listing some set of properties that include isospin and hypercharge.
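
Here is a quick check of that charge formula on the nine mesons in the diagram, using their standard isospin and hypercharge assignments (the isospin value here is the third component, the quantity plotted on the horizontal axis).

```python
# Check Q = I + Y/2 for the lightest pseudoscalar mesons, where I is the
# third component of isospin (the horizontal axis of the eightfold-way plot).

mesons = {
    # name: (I, Y) -- standard eightfold-way assignments
    "pi+":   (+1.0,  0.0),  "pi0":   ( 0.0,  0.0),  "pi-": (-1.0,  0.0),
    "K+":    (+0.5, +1.0),  "K0":    (-0.5, +1.0),
    "K-":    (-0.5, -1.0),  "K0bar": (+0.5, -1.0),
    "eta":   ( 0.0,  0.0),  "eta'":  ( 0.0,  0.0),
}

for name, (i, y) in mesons.items():
    q = i + y / 2
    print(f"{name:6s}  I = {i:+.1f}  Y = {y:+.1f}  ->  Q = {q:+.1f}")
```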

I should say that the diagram above shows what is called the pseudoscalar nonet (or octet + singlet) because it describes nine particles. (“Pseudoscalar” tells us about the angular momentum of the particle.) These are mesons which do not have any intrinsic spin. There are also heavier versions of each of those particles, for example the vector nonet of spin-1 particles. This is analogous to the component quark and anti-quark having some angular momentum, just like the excited states of electrons in the hydrogen atom.

You can see that the spin-1 pions are called rhos (ρ), the spin-1 kaons are called K-stars (K*), and the spin-1 versions of the etas are called the phi (φ) and omega (ω). In fact, there are even higher spin copies of these guys, not to mention analogous mesons formed out of the heavier quarks. Indeed, now you can see why the 1960s were “boom” years in experimental physics where new particles were being discovered almost weekly.

Relation to modern ideas

This has an interesting relation to very modern ideas for physics beyond the Standard Model. Models of extra dimensions predict an analogous “tower” of copies of known particles, the so-called Kaluza-Klein tower. This KK tower looks just like the tower of mesons. We understand that the meson tower comes from the fact that mesons are composite particles, so it looks like theories of extra dimensions can mimic theories of composite particles like mesons. This is one of the key observations underlying the so-called holographic principle or gauge/gravity correspondence, in which theories of extra dimensions are “dual” to strongly coupled theories.
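
For the simplest case, a single flat extra dimension of size R, the Kaluza-Klein copies of a particle of mass m have masses given by the textbook formula below; the point is just the evenly spaced “tower” structure, in rough analogy with the tower of increasingly heavy mesons built from the same quarks.

```latex
\[
  m_n^2 \;=\; m^2 + \frac{n^2}{R^2}, \qquad n = 0, 1, 2, \ldots
\]
```
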
In a broader sense, the discussion above represents a deep theme in particle physics where symmetry became the central principle for how we understand nature (I’ve mentioned this before!). These days one of the fundamental tools of a theoretical physicist is group theory (the mathematical description of symmetries) and models of new physics aren’t described so much by the individual particles but by the symmetry content of the theory.

New Year tidbits: CERN-TH Christmas play, translations, and a goodbye to Katie

Friday, January 7th, 2011

Hi everyone! Happy New Year to all of our readers. Here are a few random tidbits, silliness, and a special thank you to our editor. 🙂

Feynman Diagram Posts in Japanese

First of all, I had a very nice and unexpected Christmas gift this year. Mr. Ken Yokoyama, a physics & mathematics enthusiast in Japan who reads our blog, has (with our permission) been translating some of our posts into Japanese so that they may reach a wider audience. Imagine my surprise and delight when I received an e-mail from him with a pdf compilation of his translations of my ongoing series on Feynman diagrams! Not only did Ken compile all of the translated posts, but he also included cross-references and arranged them very nicely (it also includes a few of Christine’s recent posts). This takes a lot of time and I’m very grateful to Ken for all of his efforts to share these posts with a broader audience. For now the pdf file can be found here.

What made this particularly enjoyable for me is that I’ve been toying with the idea of eventually trying to compile and revise these posts into a small book about particle physics.  Ken has organized many of the ideas in quite the same way that I had been thinking.

By the way, don’t hold your breath on the book idea. I’m plenty busy at the moment trying to get research done for my PhD thesis! (On the other hand, I’m always looking for experimentalists and illustrators who would be interested in co-authoring such an endeavor at some point in the indeterminate future. 🙂 )

Holiday silliness at particle physics labs

Christmas was also an eventful time for the theorists at CERN who participated in the traditional CERN-TH Christmas play, which is now online. This year’s lightheartedness has a Harry Potter theme and pokes fun at a few current events with a few “in” jokes about life at CERN. (Contrary to popular belief, I’m not actually stationed at CERN… so some of the jokes were over my head.) It seems like the Fermilab theory group also staged a Christmas play, though I could not find a full length recording. (By the way, I’m not stationed at Fermilab, either! 🙂 )

Postdoc season in theoretical physics

Today is January 7th, which is the deadline for many theoretical physics postdoc applicants to accept first round offers from research institutions. Postdocs, or postdoctoral researchers, are the rough equivalent of residency in medicine. It’s a stage between a PhD and a faculty position where scientists can spread their wings without the obligations (or pay scale 🙂 ) of a faculty job. Here’s a summary of the academic career path by Katherine two years ago (my how time passes on the US LHC blog!). This year several colleagues of mine are graduating and have landed great postdoc positions, so many congratulations to them!

Mike, the US LHC blog alumnus, recently explained to me that postdoc positions are a little different in experimental particle physics—so I’ll leave that to the experimentalists on the blog. 🙂 Mike did share this somewhat dispiriting article from Miller-McCune magazine (a similar article recently showed up in The Economist) which likens the academic research establishment to something of a Ponzi scheme. Personally, I think this is a bit of an exaggeration—most physicists that I’ve talked to are acutely aware that there will be far fewer hires than candidates at each successive step in an academic career, but the points that the columnists bring up indeed reflect times that can test the patience and resolve of young academics.

Moving to a new building

Now for a bit of slightly personal news, I’ll be moving (along with the rest of the Cornell High Energy Theory group) to a new building in a week. Here’s a snapshot I took a while ago when the building had just finished:

I’m putting up the picture partly to elicit a response from fellow US LHC blogger Ken, who did his PhD at Cornell. It’s the end of an era, Ken! From now on, particle physics grad students won’t spend their sleepless nights in the old Newman Lab, but rather in the new “Physical Sciences Building.” I assume it will pick up a more interesting name in the near future. Our next task is to figure out a good place to put our group’s foosball table.

The move to the new building also required me to pack up all my stuff in my old office. Since I figured that I’d be the last one to use one of the top-floor cubicles with no air conditioning, I decided to leave something memorable on my chalkboard:

Thank you Katie!

Finally, I’d like to send a special thank you from all of the US LHC bloggers to our editor, Katie Yurkewicz. After four years as the US LHC communicator at CERN, Katie is stepping down to return to the United States to become the new head of the Fermilab Office of Communication. Katie has done a great job making sure everything runs smoothly for us as bloggers and I very much appreciate all of her behind-the-scenes work. Katie has also been a regular contributor to the Symmetry Breaking blog, and you may remember that her husband, Adam, was a long-time US LHC blogger. As a reminder of her journey to CERN, here’s an old montage from Symmetry magazine. Best wishes, Katie and Adam!

Katie will be replaced by Kathryn Grimm, who is also the editor of Symmetry Breaking. Welcome Kathryn!


When Feynman Diagrams Fail

Saturday, December 11th, 2010

We’ve gone pretty far with our series of posts about learning particle physics through Feynman diagrams. In our last post we summarized the Feynman rules for all of the known particles of the Standard Model. Now it’s time to fess up a little about the shortcomings of the Feynman diagram approach to calculations; in doing so, we’ll learn a little more about what Feynman diagrams actually represent as well as the kinds of physics that we must work with at a machine like the LHC.

When one diagram isn’t enough

Recall that mesons are bound states of quarks and anti-quarks which are confined by the strong force. This binding force is very non-perturbative; in other words, the math behind our Feynman diagrams is not the right tool to analyze it. Let’s go into more detail about what this means. Consider the simplest Feynman diagram one might draw to describe the gluon-mediated interaction between a quark and an anti-quark:

Easy, right? Well, one thing that we have glossed over in our discussions of Feynman diagrams so far is that we can also draw much more complicated diagrams. For example, using the QCD Feynman rules we can draw something much uglier:

This is another physical contribution to the interaction between a quark and an anti-quark. It should be clear that one can draw arbitrarily many diagrams of this form, each more and more complicated than the last. What does this all mean?

Each Feynman diagram represents a term in a mathematical expression. The sum of these terms gives the complete probability amplitude for the process to occur. The really complicated diagrams usually give a much smaller contribution than the simple diagrams. For example, each additional internal photon line (edit Dec 11, thanks ChriSp and Lubos) gives a factor of roughly α = 1/137 to the diagram’s contribution to the overall probability. (There are some subtleties here that are mentioned in the comments.) Thus it is usually fine to just take the simplest diagrams and calculate those. The contributions from more complicated diagrams are then very small corrections that are only important to calculate when experiments reach that level of precision. For those with some calculus background, this should sound familiar: it is simply a Taylor expansion. (In fact, most of physics is about making the right Taylor expansion.)

However, QCD defies this approximation. It turns out that the simplest diagrams do not give the dominant contribution! In fact, the simple diagram and the complicated diagram above give roughly the same contribution. One has to include many complicated diagrams to obtain a good approximate calculation. And by "many," I mean almost all of them… and "almost all" of an infinite number of diagrams is quite a lot. For various reasons, these complicated diagrams are very difficult to calculate, and at the moment our normal approach is useless.
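To get a feel for why small couplings save us (and why QCD ruins everything), here's a toy numerical sketch, emphatically not a real QFT calculation, where I pretend that each extra level of diagram complexity simply multiplies the contribution by the coupling strength:

```python
# Toy illustration (not a real QFT calculation): pretend each extra level of
# diagram complexity multiplies the contribution by the coupling strength.
# With a QED-like coupling (~1/137) the series settles down almost immediately;
# with a QCD-like strong coupling (~1) every order matters.

def partial_sums(coupling, orders=10):
    """Partial sums of a toy series sum_n coupling**n (all coefficients set to 1)."""
    total, sums = 0.0, []
    for n in range(orders):
        total += coupling**n
        sums.append(total)
    return sums

qed_like = partial_sums(1.0 / 137.0)   # converges after the first couple of terms
qcd_like = partial_sums(1.0)           # never settles down: all "diagrams" matter

print("QED-like partial sums:", [round(s, 5) for s in qed_like[:4]])
print("QCD-like partial sums:", qcd_like[:4])
```

With the QED-like coupling, the partial sums stop changing after a term or two, which is why keeping only the simplest diagrams is usually good enough; with a coupling of order one, every extra term matters just as much as the first.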

There's a lot of current research pushing in this direction (e.g. so-called holographic techniques and recent progress on scattering amplitudes), but let's move on to what we can do.

QCD and the lattice

`Surely,’ said I, `surely that is something at my window lattice;
Let me see then, what thereat is, and this mystery explore –
— Edgar Allan Poe, "The Raven"

A different tool that we can use is called Lattice QCD. I can't go into much detail about this since it's rather far from my area of expertise, but the idea is that instead of using Feynman diagrams to calculate processes perturbatively—i.e. only taking the simplest diagrams—we can use computers to numerically solve for a related quantity. This related quantity is called the partition function and is a mathematical object from which one can straightforwardly calculate probability amplitudes. (I only mention the fancy name because it is completely analogous to an object of the same name that one meets in thermodynamics.)

The point is that the lattice techniques are non-perturbative in the sense that we don't calculate individual diagrams; we simultaneously calculate all of them at once. The trade-off is that one has to put spacetime on a lattice so that the calculations are actually done on a four-dimensional hyper-cube. The accuracy of this approximation depends on the lattice size and spacing relative to the physics that you want to study. (Engineers will be familiar with this idea from the use of Fourier transforms.) As usual, a picture is worth a thousand words; suppose we wanted to study the Mona Lisa:

The first image is the original. The second image comes from putting the image on a lattice; you can see that we lose details about small things. Because things with small wavelengths have high energies, we call this an ultraviolet (UV) cutoff. The third image comes from having a smaller canvas size, so that we cannot see the big picture of the entire image. Because things with big wavelengths have low energies, we call this an IR cutoff. The final image is meant to convey the limitations imposed by the combination of the UV and IR cutoffs; in other words, the restrictions from using a lattice of finite size and finite lattice spacing.

If you're interested in only the broad features of the Mona Lisa's face, then the lattice depiction above isn't so bad. Of course, if you are a fine art critic, then the loss of small and large scale information is unforgivable. Currently, lattice techniques have a UV cutoff of around 3 GeV and an IR cutoff of about 30 MeV; this makes them very useful for calculating information about transitions between charm quarks (mass = 1.2 GeV) and strange quarks (mass = 100 MeV).
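Just to attach rough numbers to those cutoffs: a back-of-the-envelope sketch, using hbar*c ≈ 0.197 GeV·fm to convert energies into distances (the cutoff values are the ballpark ones quoted above), gives the corresponding lattice spacing and box size:

```python
# Rough conversion of the quoted lattice cutoffs into distances,
# using hbar*c ~ 0.197 GeV*fm. Numbers are order-of-magnitude only.
HBARC_GEV_FM = 0.1973

uv_cutoff_gev = 3.0      # highest resolvable energy -> shortest resolvable distance
ir_cutoff_gev = 0.030    # lowest accessible energy  -> largest accessible wavelength

lattice_spacing_fm = HBARC_GEV_FM / uv_cutoff_gev      # ~ 0.07 fm
box_size_fm        = HBARC_GEV_FM / ir_cutoff_gev      # ~ 7 fm
sites_per_side     = box_size_fm / lattice_spacing_fm  # ~ 100

print(f"lattice spacing ~ {lattice_spacing_fm:.3f} fm")
print(f"box size        ~ {box_size_fm:.1f} fm")
print(f"sites per side  ~ {sites_per_side:.0f}")
```

So the "canvas" is a box a few fermi on a side, divided into roughly a hundred pixels per side, which is why physics much smaller or much larger than a hadron gets blurred out.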

Translating from theory to experiment (and back)

When I was an undergraduate, I was always flummoxed that theorists would draw these deceptively simple looking Feynman diagrams on their chalkboards, while experimentalists had very complicated plots and graphs to represent the same physics. Indeed, you can tell whether a scientific paper or talk has been written by a theorist or an experimentalist based on whether it includes more Feynman diagrams or histograms. (This seems to be changing a bit as the theory community has made a concerted effort over the past decade to learn the lingo of the LHC. As Seth pointed out, this is an ongoing process.)

There's a reason for this: analyzing experimental data is very different from writing down new models of particle interactions. I encourage you to go check out the sample event displays from CMS and ATLAS on the Symmetry Breaking blog for a fantastic and accessible discussion of what it all means. I can imagine fellow bloggers Jim and Burton spending a lot of time looking at similar event displays! (Or maybe not; I suspect that an actual analysis focuses more on accumulated data over many events rather than individual events.) As a theorist, on the other hand, I seem to be left with my chalkboard connecting squiggly lines to one another. 🙂

Once again, part of the reason why we speak such different languages is non-perturbativity. One cannot take the straightforward Feynman diagram approach and use it when there is all sorts of strongly-coupled gunk flying around. For example, here’s a diagram for electron–positron scattering from Dieter Zeppenfeld’s PiTP 2005 lectures:

The part in black, which is labeled "hard scattering," is what a theorist would draw. As a test of your Feynman diagram skills, see if you can "read" the following: this diagram represents an electron and positron annihilating into a Z boson, which then decays into a top–anti-top pair. The brown lines also show the subsequent decay of each top into a W and (anti-)bottom quark.

Great, that much we've learned from our previous posts. The big question is: what's all that other junk?! That, my friend, is the result of QCD. You can see that the pink lines are gluons which are emitted from the final state quarks. These gluons can sprout off other gluons or quark–anti-quark pairs. All of these quarks and gluons must then hadronize into color-neutral hadron states, mostly mesons. These are shown as the grey blobs. These hadrons can in turn decay into other hadrons, depicted by yellow blobs. Almost all of this happens before any of the particles reach the detector. Needless to say, there are many, many similar diagrams which should all be calculated to give an accurate prediction.

In fact, for the LHC it's even more complicated since even the initial states are colored and so they also spit off gluons ("hadronic junk"). Here's a picture just to show how ridiculous these processes look at a particle-by-particle level:

Let me just remark that the two dark gray blobs are the incoming protons. The big red blob represents all of the gluons that these protons emit. Note that the actual “hard interaction,” i.e. the “core process” is gluon-gluon scattering. This is a bit of a subtle point, but at very high energies, the actual point-like objects which are interacting are gluons, not the quarks that make up the proton!

All of this hadronic junk ends up being sprayed through the experiments' detectors. If some of the hadronic junk originates from a high-energy colored particle (e.g. a quark that came from the decay of a new heavy TeV-scale particle), then it is collimated into a cone of particles pointing in roughly the same direction, called a jet (image from Gavin Salam's 2010 lectures at Cargese):

Some terminology: parton refers to either a quark or gluon, LO means “leading-order”, NLO means “next-to-leading order.” The parton shower is the stage in which partons can radiate more low-energy partons, which then confine into hadrons. Now one can start to see how to connect our simple Feynman diagrams to the neat looking event reconstructions at the LHC: (image from Gavin Salam’s lectures again)

Everything except for the black lines is an example of what one would actually read off of an event display. This is meant to be a cross-section of the interaction point of the beamline. The blue lines come from a tracking chamber, basically layers of silicon chips that detect the passage of charged particles. The yellow and pink bars are readings from the calorimeters, which tell how much energy is deposited into chunks of dense material. Note how 'messy' the event looks experimentally: all of those hadrons obscure the so-called hard scattering (edit Dec 11, thanks to ChriSp), which is what we draw with Feynman diagrams.

So here's the situation: theorists can calculate the "hard scattering" or "underlying event" (black lines in the two diagrams above), but all of the QCD-induced stuff that happens after the hard scattering is beyond our Feynman diagram techniques and cannot be calculated from first principles. Fortunately, most of the non-perturbative effects can again be accounted for using computers. The real question is: given an underlying event (a Feynman diagram), how often will the final state particles turn into each of the many possible hadron configurations? This time one uses Monte Carlo techniques: instead of calculating the probabilities of each hadronic final state, the computer randomly generates these final states according to some pre-defined probability distribution. If we run such a simulation over and over again, then we end up with a simulated distribution of events which should match experiments relatively well.
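Here's a cartoon of that Monte Carlo idea in code. The hadron species and their relative probabilities below are completely made up for illustration (real event generators use far more sophisticated models tuned to data), but the logic of "randomly generate final states, repeat many times, histogram the results" is the same:

```python
import random
from collections import Counter

# Cartoon of the Monte Carlo idea: the hadron list and relative probabilities
# below are invented for illustration; real generators use models tuned to data.
hadron_weights = {"pi+": 0.35, "pi-": 0.35, "pi0": 0.20, "K+": 0.05, "K-": 0.05}

def generate_event(n_hadrons=10):
    """Randomly pick which hadrons come out of one simulated event."""
    names = list(hadron_weights)
    weights = list(hadron_weights.values())
    return random.choices(names, weights=weights, k=n_hadrons)

# Simulate many events and look at the accumulated distribution,
# which is what gets compared to histograms from the experiment.
counts = Counter()
for _ in range(10_000):
    counts.update(generate_event())

print(counts.most_common())
```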

One might wonder why this technique should work. It seems like we're cheating—where did these "pre-defined" probability distributions come from? Aren't these what we want to calculate in the first place? The answer is that these probability distributions come from experiments themselves. This isn't cheating since the experiments reflect data about low-energy physics. This is well-known territory that we really understand. In fact, everything in this business of hadronic junk is low-energy physics. The whole point is that the only missing information is the high-energy hard scattering (ed. Dec 11)—but fortunately that's the part that we can calculate! The fact that this works is a straightforward result of "decoupling," or the idea that physics at different scales shouldn't affect one another. (In this case physicists often say that the hadronic part of the calculation "factorizes.")

To summarize: theorists can calculate the hard scattering (ed. Dec 11) for their favorite pet models of new physics. This is not the whole story, since it doesn't reflect what's actually observed at a hadron collider. It's not possible to calculate what happens next from first principles, but fortunately this isn't necessary: we can just use well-known probability distributions to simulate many events and see what the model of new physics would predict in a large data set from an actual experiment. Now that we're working our way into the LHC era, clever theorists and experimentalists are working on new ways to go the other way around and take the experimental signatures to try to recreate the underlying model.

As a kid I remember learning over and over again how a bill becomes a law. What we’ve shown here is how a model of physics (a bunch of Feynman rules) becomes a prediction at a hadron collider! (And along the way we’ve hopefully learned a lot about what Feynman diagrams are and how we deal with physics that can’t be described by them.)


“Known knowns” of the Standard Model

Wednesday, December 8th, 2010
This is the tenth (or so) post about Feynman diagrams; there's an index to the entire series in the first post.

There is a famous quote by former Secretary of Defense Donald Rumsfeld that really applies to particle physicists:

There are known knowns.
These are things we know that we know.
There are known unknowns.
That is to say, there are things that we know we don’t know.
But there are also unknown unknowns.
There are things we don’t know we don’t know.

Ignoring its originally intended context, this statement describes not only the current status of the Standard Model, but also accurately captures all of our hopes and dreams about the LHC.

  • We have “known knowns” for which our theories have remarkable agreement with experiment. In this post I’d like to summarize some of these in the language of Feynman diagrams.
  • There are also “known unknowns” where our theories break down and we need something new. This is what most of my research focuses on and what I’d like to write about in the near future.
  • Finally, what’s most exciting for us is the chance to trek into unexplored territory and find something completely unexpected—“unknown unknowns.”

Today let’s focus on the “known knowns,” the things that we’re pretty sure we understand. There’s a very important caveat that we need to make regarding what we mean by “pretty sure,” but we’ll get to that at the bottom. The “known knowns” are what we call the Standard Model of particle physics*, a name that says much about its repeated experimental confirmations.

* — a small caveat: there's actually one "known unknown" that is assumed to be part of the Standard Model: the Higgs boson. The Higgs is currently one of the most famous yet-to-be-discovered particles and will be the focus of a future post. In the meantime, Burton managed to take a few charming photos of the elusive boson in his recent post.

First, let’s start by reviewing the matter particles of the Standard Model. These are called fermions and they are the “nouns” of our story.

Matter particles: the fermions

We can arrange these in a handy little chart, something like a periodic table for particle physics:

Let’s focus on only the highlighted first column. This contains all of the ‘normal’ matter particles that make up nearly all matter in the universe and whose interactions explain everything we need to know about chemistry (and arguably everything built on it).

The top two particles are the up and down quarks. These are the guys which make up the proton (uud) and neutron (udd). As indicated in the chart, both the up and down quarks come in three “colors.” These aren’t literally colors of the electromagnetic spectrum, but a handy mnemonic for different copies of the quarks.

Below the up and down we have the electron and the electron-neutrino (νe); these are collectively known as leptons. The electron is the usual particle whose "cloud" surrounds an atom and whose interactions are largely responsible for most of chemistry. The electron-neutrino is the electron's ghostly cousin; it only interacts very weakly and is nearly massless.

As we said, this first column (u, d, e, and νe) is enough to explain just about all atomic phenomena. It's something of a surprise, then, that we have two more columns of particles with nearly identical properties to their horizontal neighbors. The only difference is that as you move to the right on the chart above, the particles become heavier. Thus the charm quark (c) is a copy of the up quark that turns out to be 500 times heavier. The top quark (t) is heavier still; weighing in at over 172 GeV, it is the heaviest known elementary particle. The siblings of the down quark are the strange (s) and bottom (b) quarks; these have historically played a key role in flavor physics, a field which will soon benefit from the LHCb experiment. Each of these quarks comes in three colors, for a total of 2 types x 3 colors x 3 columns = 18 fundamental quarks. Finally, the electrons and neutrinos come with copies named the muon (μ) and tau (τ). It's worth remarking that we don't yet know if the muon and tau neutrinos are heavier than the electron-neutrino. (Neutrino physics has become one of Fermilab's major research areas.)
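If you like keeping your bookkeeping in code, here's a small sketch (with ASCII names standing in for the symbols) that just tallies up the particle content described above:

```python
# Bookkeeping sketch of the fermion content described above:
# 3 "columns" (generations), each with an up-type and a down-type quark
# in 3 colors, plus a charged lepton and a neutrino.
generations = [
    {"up-type": "u", "down-type": "d", "charged lepton": "e",   "neutrino": "nu_e"},
    {"up-type": "c", "down-type": "s", "charged lepton": "mu",  "neutrino": "nu_mu"},
    {"up-type": "t", "down-type": "b", "charged lepton": "tau", "neutrino": "nu_tau"},
]

n_colors  = 3
n_quarks  = len(generations) * 2 * n_colors   # 2 quark types x 3 colors x 3 columns = 18
n_leptons = len(generations) * 2              # charged lepton + neutrino per column = 6

print(n_quarks, "quarks and", n_leptons, "leptons (not counting antiparticles)")
```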

So those are all of the particles. As we mentioned in our first post, we can draw these as solid lines with an arrow going through them.  You can see that there are two types of leptons (e.g. electron-like and neutrino) and two types of quarks (up-like and down-like), as well as several copies of these particles. In addition, each particle comes with an antiparticle of opposite electric charge. I won’t go into details about antimatter, but see this previous post for a very thorough (but hopefully still accessible) description.

You can think of them as nouns. We now want to give them verbs to describe how they can interact with one another. To do this, we introduce force particles (bosons) and provide the Feynman rules that describe how the particles interact with one another. By stringing together particle lines at the vertices describing interactions, we end up with a Feynman diagram that tells the story of a particle interaction. (This is the "sentence" formed from the fermion nouns and the boson verbs.)

We will refer to these forces by the ‘theories’ that describe them, but they are all part of the larger Standard Model framework.

Quantum Electrodynamics

The simplest force to describe is QED, the theory of electricity and magnetism as mediated by the photon. (Yes, this is just the “particle” of light!) Like all force particles, we draw the photon as a wiggly line. We drew the fundamental vertex describing the coupling of an electron to the photon in one of our earliest Feynman diagram posts,

For historical reasons, physicists often write the photon as a gamma, γ. Photons are massless, which means they can travel long distances and large numbers of them can set up macroscopic electromagnetic fields. As we described in our first post, you can move the endpoints of the vertex around freely. At the end of the day, however, you must have one arrowed line coming into the vertex and one arrowed line coming out. This is just electric charge conservation.

In addition to the electron, however, all charged particles interact with the photon through the same vertex. This means that all of the particles above, except the neutrinos, have this vertex. For example, we can have a "uuγ" vertex where we just replace the e's above by u's.

QED is responsible for electricity and magnetism and all of the good stuff that comes along with it (like… electronics, computers, and the US LHC blog).

Quantum Flavordynamics

This is a somewhat antiquated name for the weak force, which is responsible for radioactivity (among other things). There are two types of force particle associated with the weak force: the Z boson and the W boson. Z bosons are heavier copies of photons, so we can just take the Feynman rule above and change the γ to a Z. Unlike photons, however, the Z boson can also interact with neutrinos. The presence of the Z plays an important role in the mathematical consistency of the Standard Model, but for our present purposes they're a little bit boring since they seem like chubby photon wanna-be's.

On the other hand, the W boson is something different. The W carries electric charge and will connect particles of different types (in such a way that conserves overall charge at each vertex). We can draw the lepton vertices as:

We have written a curly-L to mean a charged lepton (e, μ, τ) and νi to mean any neutrino (νe, νμ, ντ). An explicit set of rules can be found here. In addition to these, the quarks also couple to the W in precisely the same way: just replace the charged lepton and neutrino by an up-type quark and a down-type quark respectively. The different copies of the up, down, electron, and electron-neutrino are called flavors. The W boson is special because it mediates interactions between different particle flavors. Note that it does not mix quarks with leptons.
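As a little sketch of the "charge must balance at every vertex" rule (the function and particle names below are my own invention, with charges in units of the proton charge), one can check proposed vertices like this:

```python
# Sketch of the "charge must balance at every vertex" rule.
# Charges are in units of the proton charge.
charge = {
    "e-": -1, "nu_e": 0, "u": 2/3, "d": -1/3,
    "photon": 0, "Z": 0, "W+": +1, "W-": -1,
}

def vertex_conserves_charge(incoming, outgoing):
    """True if the total charge flowing in equals the total charge flowing out."""
    q_in  = sum(charge[p] for p in incoming)
    q_out = sum(charge[p] for p in outgoing)
    return abs(q_in - q_out) < 1e-9

# QED vertex: an electron emits a photon (charge obviously balances).
print(vertex_conserves_charge(["e-"], ["e-", "photon"]))      # True

# Charged-current vertex: an electron turns into a neutrino by emitting a W-.
print(vertex_conserves_charge(["e-"], ["nu_e", "W-"]))        # True

# Not allowed: an electron turning into a neutrino plus a photon.
print(vertex_conserves_charge(["e-"], ["nu_e", "photon"]))    # False
```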

Because the W is charged, it also couples to photons:

It also couples to the Z, since the Z just wants to be a copy-cat photon:

Finally, the W also participates in some four-boson interactions (which will not be so important to us):

Quantum Chromodynamics

Finally, we arrive at QCD: the theory of the "strong force." QCD is responsible for binding quarks together into baryons (e.g. protons and neutrons) and mesons (quark–anti-quark pairs). The strong force is mediated by gluons, which we draw as curly lines. Gluons couple to particles with color, so they only interact with the quarks. The fundamental quark-gluon interaction takes the form

The quarks must be of the same flavor (e.g. the vertex may look like up-up-gluon but not up-down-gluon) but may be of different colors. Just as the photon vertex had to be charge-neutral, the gluon vertex must also be color-neutral. Thus we say that the gluon carries a color and an anti-color, e.g. red/anti-blue. For reasons related to group theory, there are a total of eight gluons rather than the nine that one might expect. Further, because gluons carry color, they interact with themselves:
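As an aside, the naive counting behind that "eight rather than nine" statement is easy to reproduce; the actual group theory is what removes the one completely color-neutral combination. A quick sketch:

```python
from itertools import product

# Naive counting of gluon color labels: each gluon carries a color and an anti-color.
colors = ["red", "green", "blue"]
anticolors = ["anti-" + c for c in colors]

combos = list(product(colors, anticolors))   # ("red", "anti-blue"), etc. -- 9 in total

# Group theory removes one completely color-neutral combination
# (the equal mixture of red/anti-red + green/anti-green + blue/anti-blue),
# leaving the eight physical gluons mentioned above.
print(len(combos), "naive combinations ->", len(combos) - 1, "physical gluons")
```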

QCD—besides holding matter together and being a rich topic in itself—is responsible for all sorts of headaches for both theoretical and experimental particle physicists. On the experimental side it means that individual quarks and gluons appear as complicated hadronic jets in particle colliders (see, e.g. Jim's latest post). On the theoretical side the issue of strong coupling (and the related idea of confinement) means that the usual 'perturbative' techniques to actually calculate the rate for a process quickly become messy and intractable. Fortunately, there are clever techniques on both fronts that we can use to make progress.

The missing piece: The Higgs Boson

Everything we've reviewed so far is a "known known": these are parts of our theory that have been tested and retested and give good agreement with all known experiments. There are a few unknown parameters such as the precise masses of the neutrinos, but these are essentially just numbers that have to be measured and plugged into the existing theory.

There's one missing piece that we know must either show up, or something like it must show up: the Higgs boson. I'd like to dedicate an entire post to the Higgs later, so suffice it to say for now that the Higgs is an integral part of the Standard Model. In fact, it is intimately related to the weak sector. The importance of the Higgs boson is something called electroweak symmetry breaking. This is a process that explains why particles have the masses that they do and why the W, Z, and photon should be so interwoven. More importantly, the entire structure of the Standard Model breaks down unless something like the Higgs boson exists to induce electroweak symmetry breaking: the mathematical machinery behind these diagrams ends up giving nonsensical results like probabilities that are larger than 100%. Incidentally, this catastrophic nonsensical behavior begins at roughly the TeV scale—precisely the reason why this is the energy scale that the LHC is probing, and precisely the reason why we expect it to find something.
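Just to see where "roughly the TeV scale" comes from, here's a back-of-the-envelope estimate. Without a Higgs, the amplitude for scattering longitudinally polarized W bosons grows roughly like G_F × E²; demanding that it stay of order one (probabilities below 100%) gives an energy around a TeV. The precise numerical factor varies between conventions, so treat this as an order-of-magnitude sketch only:

```python
import math

# Back-of-the-envelope estimate of where the Higgs-less Standard Model breaks down.
# Assumption: the troublesome amplitude grows roughly like G_F * E^2, and trouble
# starts when it becomes of order one. The factor 4*pi*sqrt(2) is convention-dependent;
# only the overall scale (around a TeV) is the point.
G_F = 1.166e-5   # Fermi constant in GeV^-2

E_breakdown = math.sqrt(4 * math.pi * math.sqrt(2) / G_F)
print(f"unitarity trouble sets in around {E_breakdown / 1000:.1f} TeV")
```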

A fancy way of describing the Standard Model is that there are actually four Higgs bosons, but three of them are “eaten” by the W and Z bosons when they become massive. (This is called the Goldstone mechanism, but you can think of it as the Grimm’s Fairy Tale of particle physics.) This has led snarky physicists to say things like, “Higgs boson? We’ve already found three of them!”

Theories and Effective Theories

By specifying the above particles and stating how the Higgs induces electroweak symmetry breaking, one specifies everything about the theory up to particular numbers that just have to be measured. This is not actually that much information; the structure of quantum mechanics and special relativity fixes everything else: how to write down predictions for different kinds of processes between these particles.

But now something seems weird: we’ve been able to check and cross-check the Standard Model in several different ways. Now, however, I’m telling you that there’s this one last missing piece—the Higgs boson—which is really really important… but we haven’t found it yet. If that’s true, how the heck can we be so sure about our tests of the Standard Model? How can these be “known knowns” when we’re missing the most important part of the theory?

More generally, it should seem funny to say that we “know” anything with any certainty in science! After all, part of the excitement of the LHC is the hope that the data will contradict the Standard Model and force us to search for a more fundamental description of Nature. The basis of the scientific method is that a theory is only as good as the last experiment which checked it, and there are good reasons to believe that the Standard Model breaks down at some scale. If this is the case, then how can we actually “know” anything within the soon-to-be-overthrown Standard Model paradigm?

The key point here is that the Standard Model is something called an effective theory. It captures almost everything we need to know about physics below, say, 200 GeV, but doesn’t necessarily make any promises about what is above that scale. In fact, the sicknesses that the Standard Model suffers from when we remove the Higgs boson (or something like it) are just the theory’s way of telling us, “hey, I’m no longer valid here!”

This is not as weird as one might think. Consider the classical electromagnetic field of a point particle: it is a well-known curiosity to any high school student that the potential at the exact location of the point source is infinite. Does that mean that an electron has infinite energy? No! In fact, this seemingly nonsensical prediction is classical electromagnetism telling us that something new has to fix it. That something new is quantum mechanics and the existence of antiparticles, as we previously discussed.
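For the curious, here's the textbook version of that infinity (standard classical electrostatics, nothing specific to this post): the energy stored in the Coulomb field of a charge q outside a radius a is

```latex
E_{\text{field}}(a)
  = \int_a^{\infty} \frac{\epsilon_0}{2}
    \left(\frac{q}{4\pi\epsilon_0 r^2}\right)^{2} 4\pi r^2 \, dr
  = \frac{q^2}{8\pi\epsilon_0\, a}
  \;\longrightarrow\; \infty
  \quad \text{as } a \to 0 ,
```

so shrinking the charge down to a true point costs an infinite amount of field energy. That divergence is the classical theory's way of flagging its own limit.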

This doesn't mean that the effective theory is no good; it only means that it breaks down above its region of validity. Despite the existence of quantum mechanics, the lessons we learn from high school physics were still enough for us to navigate space probes to explore the solar system. We just shouldn't expect to trust Newtonian mechanics when describing subatomic particles. There's actually a rather precise sense in which a quantum field theory is "effective," but that's a technical matter that shouldn't obfuscate the physical intuition presented here.

For physicists: the theory of the Standard Model without a Higgs is a type of non-linear sigma model (NLσM). This accurately describes a theory of massive vector bosons but suffers from a breakdown of unitarity. The Higgs is the linear completion of the NLσM that increases the theory's cutoff. In fact, this makes the theory manifestly unitary, but does not address the hierarchy problem. For an excellent pedagogical discussion, see Nima Arkani-Hamed's PiTP 2010 lectures.

Where we go from here

The particles and interactions we’ve described here (except the Higgs) are objects and processes that we have actually produced and observed in the lab. We have a theory that describes all of it in a nice and compact way, and that theory requires something like the Higgs boson to make sense at high energies.

That doesn't mean that there aren't lots of open questions. We said that the Higgs is related to something called "electroweak symmetry breaking." It is still unknown why this happens. Further, we have good reason to expect the Higgs to appear in the 115 – 200 GeV range, but theoretically it takes a "natural" value at the Planck mass (10^19 GeV!). Why should the Higgs be so much lighter than its "natural" value? What particle explains dark matter? Why is there more matter than anti-matter in the universe?

While the Higgs might be the last piece of the Standard Model, discovering the Higgs (or something like it!) is just the beginning of an even longer and more exciting story. This is at the heart of my own research interests, and involves really neat-sounding ideas like supersymmetry and extra dimensions.


QCD and Confinement

Friday, October 22nd, 2010

Now that we've met quarks and gluons, what I should do is describe how they interact with the other sectors of the Standard Model: how do they talk to the leptons and gauge bosons (photon, W, Z) that we met in the rest of this series on Feynman diagrams? I'll have to put this off a little bit longer, since there's still quite a lot to be said about the "fundamental problem" of QCD:

The high energy degrees of freedom (quarks and gluons) are not what we see at low energies (hadrons).

Colliders like the LHC smash protons together at high energies so that the point-like interactions are between quarks and gluons. By the time these quarks and gluons scatter into the LHC detectors, however, they have now “dressed” themselves into hadronic bound states. This is the phenomenon of confinement.

As a very rough starting point, we can think about how protons and electrons are bound into the hydrogen atom. Here the electric potential attracts the proton and electron to one another. We can draw the electric field lines something like this:

These are just like the patterns of iron filings near a bar magnet. The field lines are, of course, just a macroscopic effect set up by lots and lots of photons, but we’re in a regime where we’re justified in taking a “semi-classical” approximation. In fact, we could have drawn the same field lines for gravity. They are all a manifestation of the radially symmetric 1/r potential. We can try to extend this analogy to QCD. Instead of a proton and electron attracted by the electric force, let’s draw an up quark and a down quark attracted by the color (chromodynamic) force.

This looks exactly the same as the electric picture above, but instead of photons setting up a classical field, we imagine a macroscopic configuration of gluons. But wait a second! There’s no such thing as a macroscopic configuration of gluons! We never talk about long range classical chromodynamic forces.

Something is wrong with this picture. We could guess that maybe the chromodynamic force law takes a different form than the usual V(r) ~ 1/r potential for electricity and gravity. This is indeed a step in the right direction. In fact, the chromodynamic potential is linear: V(r) ~ r. But what does this all mean?

By the way, the form of the potential is often referred to as the phase of the theory. The “usual” 1/r potential that we’re used to in classical physics is known as the Coulomb phase. Here we’ll explain what it means that QCD is in the confining phase. Just for fun, let me mention another type of phase called the Higgs phase, which describes the weak force and is related to the generation of fermion masses.

Okay, so I've just alluded to a bunch of physics jargon. We can do better. The main question we want to answer is: how is QCD different from the electric force? Well, the thing about electricity is that I can pull an electron off of its proton. Similarly, a satellite orbiting Earth can turn on its thrusters and escape out of the solar system. This is the key difference between electricity (and gravity) and QCD. As we pull the electron far away from the proton, the field lines near the proton "forget" about the electron altogether. (Eventually, the field lines all reach the electron, but they're weak.)

QCD is different. As we pull apart the quarks, the energy stored in the gluon field gets larger. The potential difference gets larger and it takes more energy to keep those quarks separated, something like a spring. So we can imagine pulling the quarks apart further and further. You should imagine the look of anguish on my face as I'm putting all of my strength into trying to pull these two quarks apart—every centimeter I pull…

… stores more and more energy in the gluon field. (This is the opposite of QED, where the energy decreases as I pull the electron from the proton! Errata: 10/23, this statement is incorrect! See the comments below. Thanks to readers Josh, Leon, Tim, and Heisenberg for pointing this out!) Think of those springy "expander" chest exercise machines. Sometimes we call this long, narrow set of field lines a flux tube. If we continued this way and kept pulling, then classical physics would tell us that we could generate arbitrarily large energy! Something has to give. Classically, we simply cannot pull two quarks apart.

Errata (10/22): Many thanks to Andreas Kronfeld for pointing out an embarrassing error: as I pull the quarks apart the force doesn’t increase—since the potential is linear V(r) ~ r, the force is constant, F(r) ~ -V'(r) ~ constant. Physicists often make this mistake when speaking to the public because in the back of their minds they’re thinking of a quantum mechanical property of QCD called asymptotic freedom in which the coupling “constant” of QCD actually increases as one goes to large distances (so it’s not much of a constant). As Andreas notes, this phenomenon isn’t the relevant physics in the confining phase so we’ll leave it for another time, since a proper explanation would require another post entirely. I’ve corrected my incorrect sentences above. Thanks, Andreas!
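To make the two "phases" concrete, here's a tiny sketch comparing a Coulomb-like potential with a linear confining one (toy units, made-up constants); the point is just the r-dependence of the force:

```python
# Quick comparison of the two "phases" mentioned above, in toy units.
# Coulomb-like potential: V(r) = -k/r      ->  force falls off like 1/r^2.
# Confining potential:    V(r) = sigma * r ->  force is constant in r.
k, sigma = 1.0, 1.0   # arbitrary toy constants, just to show the r-dependence

def coulomb_force(r):
    return k / r**2          # magnitude of F = -dV/dr for V = -k/r

def confining_force(r):
    return sigma             # magnitude of F = -dV/dr for V = sigma*r

for r in [0.5, 1.0, 2.0, 4.0]:
    print(f"r = {r}: Coulomb-like force ~ {coulomb_force(r):.2f}, "
          f"confining force ~ {confining_force(r):.2f}")
```

The Coulomb-like pull dies away as the charges separate; the confining pull never lets up, which is exactly why the stored energy just keeps growing as you stretch the flux tube.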

What actually happens is that quantum mechanics steps in. At some point, as I’m pulling these quarks apart, the energy in the gluon field becomes larger than the mass energy of a quark anti-quark pair. Thus it is energetically favorable for the gluons to produce a quark–anti-quark pair:

From the sketch above, this pair production reduces the energy in the gluon field. In other words, we turned one long flux tube into two shorter flux tubes. Yet another way to say this is to think of the virtual (quantum mechanical) quark/anti-quark pairs popping in and out of the vacuum, spontaneously appearing and then annihilating. When the energy in the gluon field gets very large, though, the gluons are able to pull apart the quark/anti-quark pair before they can annihilate, thus making the virtual quarks physical.
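As a very rough estimate of when the tube snaps, here are some round numbers: a string tension of roughly a GeV per fermi and very roughly a GeV to create a light quark/anti-quark pair. Treat this as order-of-magnitude only:

```python
# Order-of-magnitude estimate of when the flux tube "snaps".
# Assumed round numbers (not precise values):
#   string tension                          ~ 1 GeV per fm of separation
#   cost of creating a light quark/anti-quark pair ~ 1 GeV
string_tension_gev_per_fm = 1.0
pair_creation_energy_gev  = 1.0

# The stored energy grows linearly with separation: E ~ tension * r.
# Pair creation becomes favorable once that energy exceeds the pair cost.
breaking_distance_fm = pair_creation_energy_gev / string_tension_gev_per_fm
print(f"flux tube breaks after roughly {breaking_distance_fm:.1f} fm of stretching")
```

That comes out in the ballpark of a typical hadronic size, so you never get the quarks much more than about a fermi apart before the meson splits in two.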

This is remarkably different behavior from QED, where we could just pull off an electron and send it far away. In QCD, you can start with a meson (quark–anti-quark pair) and try to pull apart its constituents. Instead of being able to do this, however, you inadvertently break the meson not into two quarks, but into two mesons. Because of this, at low energies one cannot observe individual quarks; they immediately confine (or hadronize) into hadronic bound states.

Some context

This idea of confinement is what made the quark model so hard to swallow when it was first proposed: what is the use of such a model if one of the predictions is that we can’t observe the constituents? Indeed, for a long time people thought of the quark model as just a mathematical trick to determine relations between hadrons—but that “quarks” themselves were not physical.

On the other hand, imagine how bizarre this confinement phenomenon must have seemed without the quark model. As you try to pull apart a meson, instead of observing "smaller" objects, you end up pulling out two versions of the same type of object! How could it be that inside one meson are two mesons? This would be like a Russian matryoshka doll where the smaller dolls are the same size as the larger ones—how can they fit? (Part of the failure here is classical intuition.) This sort of confusion led to the S-matrix or "bootstrap" program in the 60s, where people thought to replace quantum field theory with something in which the distinction between "composite" and "elementary" particles was dropped. The rise of QCD showed that this was the wrong direction for the problem and that the "conservative" approach of keeping quantum theory was able to give a very accurate description of the underlying physics.

In some sense the S-matrix program is a famous “red herring” in the history of particle physics. However, it is a curious historical note—and more and more so a curious scientific note—that this ‘red herring’ ended up planting some of the seeds for the development of string theory, which was originally developed to try to explain hadrons! The “flux tubes” above were associated with the “strings” in this proto-string theory. With the advent of QCD, people realized that string theory doesn’t describe the strong force, but seemed to have some of the ingredients for one of the “holy grails” of theoretical physics, a theory of quantum gravity.

These days string theory as a "theory of everything" is still up in the air, as it turns out that there are some deep and difficult-to-answer questions about string theory's predictions. On the other hand, the theory has made some very remarkable progress in directions other than the "fundamental theory of everything." In particular, one idea called the AdS/CFT correspondence has had profound impacts on the structure of quantum field theories independent of whether or not string theory is the "final theory." (We won't describe what the AdS/CFT correspondence is in this post, but part of it has to do with the distinction between elementary and composite states.) One of the things we hope to extract from the AdS/CFT idea is a way to describe theories which are strongly coupled, which is a fancy phrase for confining. In this way, some branches of stringy research are finding their way back to their hadronic origins.

Even more remarkable, there has been a return to ideas similar to the S-matrix program in recent research directions involving the calculation of scattering amplitudes. While the original aim of this research was to solve problems within quantum field theory—namely calculations in QCD—some people have started to think about it again as a framework beyond quantum field theory.

High scale, low scale, and something in-between

This is an issue of energy scales. At high energies, we are probing short distance physics, so the actual "hard collisions" at the LHC aren't between protons but between quarks and gluons. On the other hand, at low energies these "fundamental" particles always confine into "composite" particles like mesons, and these are the stable states. Indeed, we can smash quarks and gluons together at high energies, but the QCD stuff that reaches the outer parts of the experimental detectors consists of things like mesons.

In fact, there's an intermediate energy scale that is even more important. What is happening between the picture of the "high energy" quark and the "low energy" meson? As the quark barrels through the inner parts of the detector, it can radiate energy by emitting gluons.

… These gluons can produce quark/anti-quark pairs
… which themselves can produce gluons
… etc., etc.

At each step, the energy of the quarks and gluons decreases, but the number of particles increases. Eventually the energy is such that the "free quarks" cannot prevent the inevitable and they must hadronize. Because there are so many, however, there are a lot of mesons barreling through the detector. The detector is essentially a block of dense material which can measure the energy deposited into it, and what it 'sees' is a "shower" of energy in a particular direction. This is what we call a jet, and it is the signature of a high energy quark or gluon that shot off in a particular direction and eventually hadronized. Here's a picture that I borrowed from a CDF talk:

Read the picture from the bottom up:

  1. First, two protons collide… by which we really mean that the quarks and gluons inside the protons interact.
  2. High energy quarks and gluons spit off other quarks and gluons and increase in number
  3. Doing this reduces their energy so that eventually the quarks and gluons must confine (hadronize) into mesons
  4. … which eventually deposit most of their energy into the detector (calorimeter)
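Here's a cartoon of steps 2 and 3 in code. Real parton showers split probabilistically using QCD splitting functions; this toy version just splits every parton 50/50 until it drops below a made-up hadronization scale, only to show how the energy per particle falls while the multiplicity grows:

```python
# Toy cascade: each parton repeatedly splits in two, sharing its energy,
# until it drops below a (made-up) hadronization scale. Real showers split
# probabilistically; this is only a cartoon of "energy down, multiplicity up".

HADRONIZATION_SCALE_GEV = 1.0

def shower(energy_gev):
    """Return the list of final parton energies produced by one initial parton."""
    if energy_gev < HADRONIZATION_SCALE_GEV:
        return [energy_gev]            # too soft to split: this parton hadronizes
    half = energy_gev / 2.0            # toy 50/50 energy sharing at each splitting
    return shower(half) + shower(half)

final_partons = shower(100.0)          # a 100 GeV quark coming out of the hard collision
print(len(final_partons), "soft partons, carrying", sum(final_partons), "GeV in total")
```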

Jets are important signatures at high energy colliders and are a primary handle for understanding the high energy interactions that we seek to better understand at the LHC. In order to measure the energy and momentum of the initial high energy quark, for example, one must be able to measure all of the energy and momentum from the spray of particles in the jet, while taking into account the small cracks between detecting materials as well as any sneaky mesons which may have escaped the detector. (This is the hadronic analog of the electromagnetic calorimeter that Christine recently described.)
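At its most basic, that measurement is just four-vector addition: add up the energy and momentum of everything assigned to the jet to estimate the original quark's four-momentum. The particle list below is invented for illustration, and real reconstruction also has to correct for the cracks and escaping particles mentioned above:

```python
# Sketch of jet reconstruction at its most basic: sum the energy and momentum
# of every particle assigned to the jet to estimate the original quark's
# four-momentum. The particle list is invented for illustration only.
jet_particles = [
    # (E, px, py, pz) in GeV for each hadron in the cone
    (45.0, 30.0, 20.0, 25.0),
    (30.0, 21.0, 12.0, 16.0),
    (12.0,  8.0,  5.0,  7.0),
    ( 6.0,  4.0,  2.5,  3.5),
]

E  = sum(p[0] for p in jet_particles)
px = sum(p[1] for p in jet_particles)
py = sum(p[2] for p in jet_particles)
pz = sum(p[3] for p in jet_particles)

print(f"jet energy ~ {E:.0f} GeV, jet momentum ~ ({px:.0f}, {py:.0f}, {pz:.0f}) GeV")
```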

Now you can at least heuristically see why this information can be so hard to extract. First, the actual particles that are interacting at high energies are different from the particles that exist at low energies. Secondly, even individual high-energy quarks and gluons lead to big, messy experimental signatures that require careful analysis to extract even "basic" information about the original particle.
