Posts Tagged ‘discoveries’

I thought I might touch on the topic of a recent post by one of my fellow bloggers here at Quantum Diaries.  As was mentioned, researchers at Fermilab have recently discovered the Ξ0b (read Xi-0-b), a “heavy relative of the neutron.”  The significance of this discovery was found to be 6.8σ [1]!  The CDF Collaboration has prepared a brief article regarding this discovery, which is being submitted to Physical Review Letters (a peer-reviewed journal).  A pre-print has been made available on arXiv.
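
To get a feel for what 6.8σ means, here’s a quick back-of-the-envelope conversion of a Gaussian significance into a probability (a minimal sketch using scipy; the one-sided convention below is the usual particle physics choice):

```python
# Convert a Gaussian significance (in sigma) into a p-value: the probability
# that background alone fluctuates up to mimic a signal at least this large.
from scipy.stats import norm

significance = 6.8
p_value = norm.sf(significance)  # one-sided tail: P(X > 6.8) for X ~ N(0, 1)

print(f"{significance} sigma -> p ~ {p_value:.1e}")  # ~5e-12
```

In other words, the odds that this bump is just a background fluctuation are roughly one in two hundred billion.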

But rather than talk about what’s already been written, let’s discuss something new.  Namely, how on earth did physicists know to look for such a particle?

The answer to this question takes us back to the year 1961.  An American physicist (and now Nobel Laureate, 1969)  by the name of Murray Gell-Mann proposed a way to arrange baryons and mesons based on their properties.  In a sense, Gell-Mann did for particle physics what Dmitri Mendeleev did for chemistry.

Gell-Mann decided to use geometric shapes for arranging particles.  He placed each baryon/meson onto these geometric shapes, with a particle’s location determined by that particle’s properties.  All his diagrams, however, were incomplete: there were spots on the shapes where a particle should have gone, but the location was empty.  This was because Gell-Mann had a much smaller number of particles to work with; more have been discovered since, but we still have holes in the diagrams.

But to illustrate how Gell-Mann originally made these diagrams, I’ve shown an example using a triangle, which is part of a larger diagram that appeared in the previous post on this  subject.  I’ve also added three sets of colored lines to this diagram.

Let’s talk about the black set of lines first.  If you go along the direction indicated by each of these lines you’ll notice something interesting.  On the far right line (labeled Qe = +1, Up = 2), there is only one particle along this direction, the Σ+b.  This baryon is composed of two up quarks and a beauty quark, and has an electric charge of +1.

Let’s go to the second black line (labeled Qe = 0, Up = 1).  Here there are four particles (the Σ0b has yet to be discovered).  All four of these particles have one up quark and zero electric charge.

See the pattern?

But just to drive the point home, look at the orange lines.  Each line represents the number of strange quarks found in the particles along the line’s direction (0, 1 or 2 strange quarks!).  The blue lines do the same thing, only for the number of down quarks present in each particle. Also, for all the particles shown on this red triangle, each particle has one beauty quark present!
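
If you want to check this bookkeeping yourself, here is a tiny sketch (using the standard quark charges; the quark assignments are the ones quoted in this post):

```python
from fractions import Fraction

# Standard Model quark electric charges, in units of the elementary charge e.
QUARK_CHARGE = {
    "u": Fraction(2, 3), "c": Fraction(2, 3), "t": Fraction(2, 3),
    "d": Fraction(-1, 3), "s": Fraction(-1, 3), "b": Fraction(-1, 3),
}

def hadron_charge(quarks: str) -> Fraction:
    """Electric charge of a hadron is the sum of its constituent quark charges."""
    return sum(QUARK_CHARGE[q] for q in quarks)

print(hadron_charge("uub"))  # 1 -> the Sigma+b, charge +1 as promised
print(hadron_charge("usb"))  # 0 -> the Xi0b is electrically neutral
```

The Eight-Fold-Way diagrams are just this arithmetic drawn out geometrically.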

In fact, if you go back to the original post on the Ξ0b discovery, you’ll notice the diagram has three “levels.”  All the particles on the top level have two beauty quarks present.  Then the red triangle appears (that I’ve shown in detail above).  Then finally in the bottom level, all the particles have zero beauty quarks.

Also, if you spend some time, you can see the black, orange and blue lines I’ve drawn at right actually form planes in this 3D diagram.  And all the particles on one of these planes will have the properties of the plane (electric charge, quark content)!

So what’s the big deal about this anyway?

Well, when Gell-Mann first created the Eight-Fold-Way in the early 1960s, none of the shapes were “filled.”  But just like Dmitri Mendeleev, Gell-Mann  took this to mean that there were undiscovered particles that would go into the empty spots!!!!!

So this seemingly abstract ordering of particles onto geometric shapes (called the Eight-Fold-Way) gave Gell-Mann a way to theoretically predict the existence of new particles.  And just like Mendeleev’s periodic table, the Eight-Fold-Way went one step further, by immediately giving us knowledge on the properties these undiscovered particles would have!

If you’re not convinced, let’s come back to the experimental discovery of the Ξ0b, which is conveniently encompassed by the yellow star in the diagram above.  This particle was experimentally discovered just a few weeks ago.  But Murray Gell-Mann himself could have made the prediction that the Ξ0b existed decades earlier.  Gell-Mann would have even been able to tell us that it would have zero electric charge and be made of a u,s and b quark!!!

In fact, Gell-Mann’s Eight-Fold-Way tells high energy physicists that there is still one particle left to be discovered before this red triangle may be completed.  So, to all my colleagues in HEP, happy Σ0b hunting!

But in summary, it was the Eight-Fold-Way that gave physicists the clue that the Ξ0b was lurking out there in the void, just waiting to be discovered.

Until Next Time,

-Brian

References

[1] T. Aaltonen (CDF Collaboration), “Observation of the Xi_b^0 Baryon,” arXiv:1107.4015v1[hep-ex], http://arxiv.org/abs/1107.4015

As a high energy physicist, I’m often asked how we know that the elementary particles exist.  One might think such questions are absurd.  But if the scientific method is to stand for anything, then these questions must have merit (and be taken seriously).  After all, it is our duty as scientists to take an unbiased, skeptical viewpoint, and to report on what is actually observed in nature.

Trust me, I would find a world where Hogwarts Castle actually existed as a school of magic far more interesting. But alas, nature has no room for such things as wands or Horcruxes.

But I thought I’d try to discuss this week how the gluon was “discovered” decades ago.  The gluon is represented by the “g” in our “periodic table” of elementary particles:

Experimentally observed members of the Standard Model (Ref. 1)

The gluon is what’s called a “vector boson,” meaning it has spin 1 (in units of the reduced Planck constant, ℏ).  It is the mediator of the strong nuclear force: the force responsible for binding quarks into hadrons and keeping atomic nuclei together.  When I say the gluon is a mediator, I mean that when a quark interacts with another quark or anti-quark, it does so by exchanging gluons with the other quark/anti-quark.  In fact gluons themselves interact with other gluons by exchanging gluons!!!

But how exactly do the quarks/anti-quarks and gluons interact?  Well, quarks & gluons (whenever I say quarks, my statement also applies to anti-quarks) carry something called Color Charge.  Color is a type of charge (similar to electric charge) in physics.  It comes in three types, labelled as red, green & blue.  Whereas electric charge has a positive and a negative, color charge has a “color” (i.e. red charge) and an “anti-color” (i.e. anti-red charge).  It is this color charge that gives rise to the strong nuclear force, and is what is responsible for the interaction of quarks and gluons with each other.  The quantum theory associated with the interactions of quarks and gluons is known as Quantum Chromodynamics (QCD, “Chromo-“ for color!).

However, no particle with a color charge can be directly observed in the universe today.  This is due to something called “Color Confinement,” which causes colored particles to bind together into “white” (all colors present in equal parts) or “colorless” (net color is zero) states.  We sometimes call these states “color neutral” or “color singlet” states.  Flip Tanedo has written this nice post about Color Confinement if you’d like to know more.

So if an experimentalist cannot directly observe a gluon, how were they discovered?  One of the best answers to this question comes from electron-positron colliders, such as the LHC’s predecessor: the Large Electron-Positron Collider (LEP), and this is where our story takes us.

Jets in Electron-Positron Collisions

While electrons & positrons do not carry color charge, they can produce colored particles in a collision.  The Feynman Diagram for such a process is shown here:

Here an electron and a positron annihilate into a virtual photon, which then pair-produces a quark and an anti-quark (Image courtesy of Wikipedia, Ref. 2)

Since the quark & anti-quark produced carry color, they must hadronize, or bind together, to form color neutral states.  This hadronization process then gives rise to the formation of jets.

If the momenta of the colliding electron and positron are equal but opposite (the angle between them is 180 degrees), the two jets produced would appear to be “back-to-back,” meaning that the angle between them is also 180 degrees (for those of you counting, you must look in the center-of-momentum frame).

The reason for this is that momentum must be conserved.  If the electron comes in with Y momentum, and the positron comes in from the opposite direction with -Y momentum, then the total momentum of the collision is zero.  Then if I sum the momenta of all the particles produced in the collision (termed “outgoing” particles), this sum must also equal zero.  In this case there are only two outgoing particles, and the angle between them must be 180 degrees!
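
As a toy version of that arithmetic (with made-up numbers purely for illustration):

```python
import numpy as np

# If the incoming momenta cancel, the two outgoing jets must cancel too.
jet1 = np.array([40.0, 25.0, 0.0])  # invented jet momentum, in GeV
jet2 = -jet1                        # momentum conservation fixes the other jet

cos_angle = np.dot(jet1, jet2) / (np.linalg.norm(jet1) * np.linalg.norm(jet2))
angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
print(angle)  # 180.0 -> back-to-back
```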

We call such a collision event a “di-jet event,” because two jets are created.  Here’s an example of a di-jet event as seen by the CMS Detector; it looks essentially identical to what would be observed in an electron-positron collider.

Di-Jet Event within the CMS Detector, as seen looking down the beam-pipe in the xy-plane.

In the above image, the two protrusions of rectangles, together with the solid and dotted purple lines, represent the two jets.  The black lines represent each jet’s direction.  Notice how the angle between them is almost exactly 180 degrees.

Now suppose either the quark or the anti-quark in the above Feynman Diagram was very energetic, and radiated off another particle.  QCD tells us that this particle that is radiated is a gluon.  The Feynman Diagram for this “gluon radiation” would look like the above diagram, but with one additional “line,” as shown here:


Gluon radiation from an anti-quark in an electron-positron collision (Image courtesy of Wikipedia, Ref. 2)

We say this Feynman Diagram describes the process e+e− → qq̄g.  Here the anti-quark is shown as radiating a gluon, but the quark could have just as easily radiated a gluon.  If the radiated gluon is very energetic, the theory tells us it would have a different direction from the quark and the anti-quark.  Thus the gluon would make its own jet!

Now an experimentalist has something to look for! If gluons exist, we should see events in which we have not two, but three jets created in electron-positron collisions.  Due to momentum conservation, these three jets should also all lie in the same plane (called “the event plane”); and if the gluon has enough energy, the three jets should be “well separated,” or the angles between the jets are large.

Such electron-positron collision events were observed in the late 1970s/early 1980s at the Positron Electron Tandem Ring Accelerator (PETRA) at the Deutsches Elektronen Synchrotron (DESY).  Here are two examples of three jet events observed by the JADE detector (one of the four detectors on PETRA):

A Tri-Jet event observed in the JADE Detector, again looking down the beampipe (Ref. 3)

Another Tri-Jet event observed in the JADE detector (Ref. 4)

From these event displays you can see the grouping of charged & neutral tracks (the solid & dotted lines in the images) in three regions of the JADE detector.  Notice how the tracks are clustered; we say they are “collinear.”  The reason they appear collinear is that when a quark/gluon hadronizes, the hadronization process must conserve momentum.  The particles produced in hadronization must travel in the same direction as the original quark/gluon.  Because of this collinear behavior, the tracks group together to form jets.  Notice also how the jets are no longer back-to-back, but are well separated from each other (as expected).

While these images were first reported decades ago, we still observe three jet events today at the LHC and other colliders.  Here is an example of a three jet event as recorded by the CMS Detector:

A Tri-Jet event in CMS

But now let’s actually compare some theoretical predictions of QCD to the experimental data seen at PETRA and see if we can come up with a reason to believe in the gluon.

QCD Wins the Day

The MARK-J Collaboration (also one of the detectors at PETRA) decided to investigate three jet events based on two models of the day, the first of which was QCD [4], now a fully formalized theory, which interpreted three jet events as:

e+e− → qq̄g

In which a gluon is produced in the collision, in addition to the quark and anti-quark.  The second model they used was called the quark/anti-quark model, or phase-space model [4], which interpreted three jet events as simply:

e+e− → qq̄

In which only a quark and an anti-quark are produced.

To compare their theoretical predictions to the experimental data they looked at how energy was distributed in the detector.  They looked to see how well the two predictions matched what was observed by using something called a “normalized chi-squared test”  (a test which is still widely used today across all areas of research).

In a normalized chi-squared test, you perform a statistical test between two “data sets” (in this case one set is the experimental data, the other is the theoretical prediction); from this test you get a “chi-squared” value.  If the “chi-squared” value divided by the “number of degrees of freedom” (usually the number of data points available) is equal to one, then we say that the two data sets are well matched; i.e., the theoretical prediction has matched the experimental observation.  So if one of the two above models (QCD, and the “Phase-Space” model) has a normalized chi-squared value at or near one when compared with the data, then that is the model that matches nature!
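
Just to show the mechanics of the test (a generic sketch with invented numbers, not the MARK-J analysis itself):

```python
import numpy as np

def normalized_chi_squared(observed, predicted, sigma):
    """Chi-squared per degree of freedom between data and a prediction."""
    chi2 = np.sum(((observed - predicted) / sigma) ** 2)
    ndf = len(observed)  # simplest convention: one degree of freedom per point
    return chi2 / ndf

# Invented data points, model prediction, and measurement uncertainties:
data  = np.array([12.4,  9.7, 15.3,  8.5])
model = np.array([12.0, 10.0, 15.0,  8.0])
sigma = np.array([ 0.4,  0.3,  0.4,  0.4])

print(normalized_chi_squared(data, model, sigma))  # ~1.0 -> model matches data
```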

So to make their energy distributions, the MARK-J Collaboration decided to work in a coordinate system defined by three axes [4,5].  The first of these was called the “Thrust” axis, defined as the direction along which the “energy flow” is maximum [4,5].  This basically means the Thrust axis is taken as the direction of the most energetic jet in the event.

The second axis, the “Major” axis, is taken to be perpendicular to the Thrust axis, but with the requirement that the projected energy of the most energetic jet onto the Major axis is maximized [4,5].  Meaning, if I took the dot product between the Major axis and the direction of the most energetic jet, this dot product would always be maximum (while still keeping the Major axis and the Thrust axis perpendicular).  This additional requirement needs to be specified so that the Major axis is unique (there are an infinite number of directions perpendicular to a given direction).

The third axis, called the “Minor” axis, is then perpendicular to these two.  However, it turns out that energy flow along this direction is very close to the minimum energy flow along any axis [4,5].

But let’s not get bogged down in these definitions.  All this is doing is setting up a way for us to compare different events all at once; since no two events will have jets oriented in exactly the same way.  In addition, these definitions also identify the event plane for each collision event.
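
If you’d like to see the Thrust idea in action anyway, here is a brute-force 2D toy (invented momenta, and a simple scan over directions rather than the real MARK-J procedure):

```python
import numpy as np

# Toy 2D event: six particle momenta (GeV) forming three rough "jets".
momenta = np.array([[30.0,   3.0], [28.0,  -3.0],    # jet 1, along +x
                    [-18.0, 20.0], [-15.0, 17.0],    # jet 2
                    [-18.0, -20.0], [-15.0, -17.0]]) # jet 3

# Scan axis directions, keep the one maximizing the energy flow sum |p . n|.
angles = np.linspace(0.0, np.pi, 1800)  # an axis is a line: half a turn suffices
flows = [np.sum(np.abs(momenta @ np.array([np.cos(a), np.sin(a)]))) for a in angles]
thrust_angle = np.degrees(angles[int(np.argmax(flows))])

print(thrust_angle)  # 0.0 -> the Thrust axis lies along the most energetic jet
```

The Major axis would then be the perpendicular direction maximizing the remaining energy flow, and in three dimensions the Minor axis completes the set.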

So here’s what the energy distributions turned out looking like for all events considered:

Energy distributions in selected three jet events recorded by the MARK-J Collaboration. The black dots are the data points, the dotted line is the theoretical prediction, more details below (Ref. 5).

The angles in the above plots correspond to where the energy was deposited within the MARK-J Detector.  The distance from the black dots to the center of each graph is proportional to the amount of energy deposited in the detector in that direction [4,5].

Forty events in total were used to make the above distributions [4].  Each event’s jet topology was re-oriented to match the definitions of the Thrust, Major & Minor axes outlined above.  In the top diagram, labeled as the “Thrust-Major” plane, we see three “lobes,” or clusterings of data points.  This indicates the three jet structure, or topology, of these forty events.

By rotating the entire picture about the Thrust axis by 90 degrees we end up looking at the “Thrust-Minor” plane, the bottom diagram.  Notice how we now have only two clusterings of data points, or lobes.  This is because we are looking at the event plane edge-on.  Imagine looking at the Mercedes-Benz symbol.  The plane containing its three spokes is the “Thrust-Major” plane.  Then if I turn it so I can see only the metal rim of the Mercedes symbol, I’m looking in the “Thrust-Minor” plane.  So the bottom diagram illustrates that these events have all their jets lying in a plane, as expected from momentum conservation.

Now how well did the two theoretical predictions mentioned above match up to the experimental observations?

The “phase space” model (no gluons) was not plotted in the above diagrams.  However, the normalized chi-squared value between the experimental data and the “phase space” model was reported as 222/70 [4], which is nowhere near one!  Researchers took this to mean that this theoretical model does not do a good job of describing the observed behavior in nature (and is thus wrong, or missing something).

Now the QCD prediction (with gluons!) is shown as the dotted line in the above diagrams.  See how well it matches the experimental data?  In fact the normalized chi-squared value between the data and the predictions of QCD was 67/70 [4,5]; now this is close to one! So the predictions of QCD with three jet events being caused by the radiation of an energetic gluon has matched the experimental observation, and given us the proof we needed to believe in gluons!

However, the story of the gluon did not end there.  Much more needed to be done; for example, QCD predicts the gluon to have spin 1, and the measurements I have outlined in this post did not measure the spin of the gluon.  More work was needed for that; but it is safe to say that by the mid 1980s the gluon was well established as an elementary particle, and we have lived with this knowledge ever since.

Until next time,

-Brian

References

[1] Wikipedia, The Free Encyclopedia, “The Standard Model of Elementary Particles,” http://en.wikipedia.org/wiki/File:Standard_Model_of_Elementary_Particles.svg, July 5th, 2011.

[2] Wikipedia, The Free Encyclopedia, “Feynmann Diagram Gluon Radiation,” http://en.wikipedia.org/wiki/File:Feynmann_Diagram_Gluon_Radiation.svg, July 5th, 2011.

[3] P. Söding, “On the discovery of the gluon,” The European Physical Journal H, 35 (1), 3-28 (2010).

[4] P. Duinker, “Review of e+e- physics at PETRA,” Rev. Mod. Phys. 54 (2), 325-387 (1982).

[5] D.P. Barber et al., “Discovery of Three-Jet Events and a Test of Quantum Chromodynamics at PETRA,” Phys. Rev. Lett. 43 (12), 830-833 (1979).

Publish now?

Friday, February 18th, 2011

It’s a busy time. First, the LHC was closed up today for the first time this year, allowing the start of machine checkout and then, eventually, circulating beams. The beginning of the 2011 run is in sight, although we won’t have collisions for physics for a while yet. Also, we’re getting close to winter conference season. The Rencontres de Moriond meetings are traditionally a venue for the presentation of new experimental physics results, and you can be sure that all of the LHC experiments are readying some interesting stuff for that. I have previously discussed the internal review processes of experiments, which can take a while, so even though the conferences are a few weeks away, a lot of analyses are becoming finalized and starting to be reviewed right now. Whether you are a reviewer or reviewee (or both), it can take a lot of time. (Oh, and then there is the recent discussion of federal budget cuts in Washington, which has us all reading the newspapers pretty closely.) So we don’t lack for things to do.

But, meanwhile, here is something to consider. The ATLAS and CMS experiments are ultimately very similar — they both have similar goals (which are different from those of ALICE and LHCb, hence their absence from this discussion), and similar enough capabilities (although differing strengths and weaknesses), and they both record pretty much the same amount of data. So why don’t they publish the same measurements at the same time? Just as an example, the two experiments submitted publications on measurements of rates of W and Z bosons three months apart, with the later one analyzing ten times as much data as the first, and obtaining much more precise results. Please note, in an attempt to be neutral, I am not naming names here!! Let’s instead take this as an introduction to a broader question — given that the LHC will continue to pile up data over time, when do you stop and say, “OK, let’s publish with what we’ve got?” How much data is enough?

I’m not going to claim to have all the answers to this question, and for any given measurement there will be a unique set of circumstances. But here are a few possible considerations:

  • Is there a break in the action at the LHC? This is a totally pedestrian consideration, but if the LHC isn’t going to run for, say, three months, as is happening right now, for many measurements it might not be worth the wait for more data, so you should just publish with what you’ve got. There are going to be a lot of publications based on the data recorded in 2010. It’s true that in 2011, if the collision rates are as expected, the 2010 data will quickly be superseded, but why wait those few months, especially if you are doing a measurement in which additional statistics might not make a meaningful difference?
  • When can I make a scientific statement that has sufficient impact on the world? If you only have enough data to make a measurement that’s, for instance, ten times less accurate than the most accurate measurement of the same quantity that’s currently available, there’s no point in publishing. But if you are at least in the range of comparable to the best measurement (even if not yet the best), it might make sense to publish, because it’s accurate enough to make a difference in the world’s understanding. If you average two measurements of equal precision, the average will be a factor of sqrt(2) = 1.4 more accurate than either individual measurement (the uncertainty shrinks by 1/sqrt(2); see the short sketch after this list). Seems worth it, right?
  • Am I worried that someone else is going to beat me to something? Let’s face it, there is some glory to being first, especially if there is something new to report. If you are worried that competitors might get to it first, perhaps you will decide that you have to release your result, even if you know you might do a better job yet, either by recording more data or just having more time to work on it.
  • Then again, it’s better to be second than to be wrong. A wrong result would be embarrassing, for sure, so it’s better to do the work necessary to have greater confidence in the result.
  • If you really can do a much better job with not much more time or effort, why not just do that? If you do, then your measurement is going to be the one in the history books, even if you weren’t first.
  • Do I finally have enough data to report a statistically significant result? Well, this is what we’re all waiting for — at some point some new phenomenon is going to emerge from the data. At first, the statistical strength will be marginal, but as more data are analyzed, the signal will stand out more strongly. You can be sure that once any anomaly is observed, even at a low level, it will be tracked very carefully as additional data are recorded, and as soon as an effect reaches some level of statistical significance, it’s going to be published just as quickly as possible, without delay.
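
On the sqrt(2) point in the second bullet above, here is the standard error-propagation arithmetic in miniature:

```python
import numpy as np

# Two independent measurements of the same quantity, equal uncertainty sigma.
# The inverse-variance-weighted average has uncertainty sigma / sqrt(2).
sigma = 2.0
sigma_avg = 1.0 / np.sqrt(1.0 / sigma**2 + 1.0 / sigma**2)

print(sigma_avg)          # 1.414... = sigma / sqrt(2)
print(sigma / sigma_avg)  # 1.414... -> the combined result is ~1.4x more precise
```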

These are just a few of my own musings, dashed off quickly — I invite our readers to offer ideas of their own. (OK, and now I click on the “Publish” button on the right….)

Matter and anti-matter

Tuesday, May 18th, 2010

Recently the D0 collaboration at the Tevatron announced an interesting result.  Having come from the BaBar experiment and worked on CP violation, I found it exciting.  Our universe is dominated by matter.  It’s everywhere, and there is almost no anti-matter to be found.  Why this is so is one of the principal questions in our sub-atomic understanding of the universe.  The answer was put forward many years ago by Sakharov: there has to be CP violation, meaning that swapping a particle with its anti-particle and looking at the interaction in a mirror can’t give the same result as the original.

CP violation was discovered decades ago and has already won Nobel prizes.  The B-factories have measured many of the CP violating parameters of the Standard Model and come up with a rather coherent picture.  These measurements and constraints are embodied in the CKM triangle, where the height of the triangle is a measure of the amount of CP violation.


Recent CKM fitter result for the CKM triangle

It’s beautiful really.  Like a piece of fine art.


Kandinsky, Composition VIII

There is just one problem: the amount of CP violation is insufficient by about 10 orders of magnitude. This means that there has to be more CP violation out there that we don’t know anything about.

This is where D0 comes in. They have been looking at CP violation in decays that aren’t accessible at the B-factories, and they found something. The Standard Model says they shouldn’t find much at all, but they did. I think it’s exciting. This isn’t the first sign of stress on the Standard Model, and there will undoubtedly be more coming in the next few years from LHC and other experiments. I think it is an exciting time to be involved in fundamental science research since we will be revising and rewriting many long held theories in the coming decades.


The D0 asymmetry result is well separated from the Standard Model expectation, represented by the blue point.

It’s an exciting time for your humble LHC blogger. She may just have a thesis topic… So what does that mean? (I oftentimes wonder that myself).

With the recent success and in anticipation of high energy collisions (and therefore data), it’s time to figure out what can be found and what can’t, given the projected amount of data. (We’re going to be running at ~7 TeV for the first part of the year, then ~10 TeV the latter half). Now lots of people are doing cross section measurements – which is a different beast than searches (see below).  Cross section measurements take a particle that we know – Zs and Ws for example – and check to see if we measure what we predict. This is very important to do, and I’m oversimplifying, but that’s the basic idea. Despite its importance, I personally feel that if I’m working on the highest energy accelerator in the world, I’d at least like to try to do a particle search.

The Cross Section Beast

This isn’t a completely trivial question, because ever since the Tevatron turned on, theorists have been making predictions as to what was out of reach of current experiments.  So what makes for a good early search? Lots of things; I’ll list some here:

  • Of interest…

Maybe this goes without saying, but I’m going to go ahead and say it anyway. A search has to be well defined and predicted. One doesn’t just look for the Higgs, or SUSY, or Z’; one looks for specific decay products that could come from the predicted particle and cannot be explained by something else. Although we’re going to be at 3.5x and 5x higher energy than the Tevatron, there have been years of data collected at the Fermilab experiments. Now there are some particles that are simply outside of their reach. For example, just due to conservation of energy, nothing can be created above 2 TeV, but due to statistics (need that high fluctuation over the background again…) some limits are in the 100-200 GeV range. Increasing the energy will allow us, even with less data, to raise those limits.

  • High Signal:Background ratio

Since there is going to be a smaller data set (only 1 year of running), we simply won’t have enough statistics to say with confidence that we discovered certain particles. We need to say that the signal is an actual signal – not just a fluctuation of the background. I elaborate on this in my Higgs post. This also means that there should be a distinct signature, for example: something that would decay to 2 very high energy electrons and 2 very high energy jets. It could be di-boson production or W/Z+jets, but the electrons would come from a W/Z which was very far off mass shell – which is not impossible, but maybe not as probable.

  • Missing Energy (MET to be more specific)

This is a bit contentious, and maybe more a personal taste than anything. We won’t have a completely calibrated detector initially. The detector is calibrated by taking standard particles (Ws and Zs for example) and reconstructing them. We then convert the electrical signal out of the machine to energy and momentum. To do this, the more Z and W events the better – which like everything takes time. So the energy of the signal can be off. This isn’t a bad thing, but the way we calculate missing energy (say in the form of neutrinos) is by balancing the energy in the detector. For example if there is 40 GeV deposited in 1 part of the xy plane, then there has to be another 40 GeV in another part of the xy plane to balance it out. If we don’t really know if it’s 40 GeV or 45 GeV, then it’s hard to calculate missing energy. (I should also point out, it’s transverse energy, not just energy – which I can elaborate on if anyone is interested).
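
To make the balancing concrete, here is a toy missing-transverse-energy calculation (invented numbers, and ignoring all the real calibration subtleties discussed above):

```python
import numpy as np

# Transverse momentum components (px, py), in GeV, of everything we measured.
visible = np.array([[40.0,   5.0],   # a jet
                    [-25.0, 30.0],   # an electron
                    [10.0, -15.0]])  # another jet

# Whatever is needed to balance the event to zero is "missing" (e.g. neutrinos).
met_vector = -visible.sum(axis=0)
met = np.linalg.norm(met_vector)

print(met_vector)  # [-25. -20.]
print(met)         # ~32 GeV of missing transverse energy
```

If each jet energy is off by 10-15% because the calibration isn’t finished, that missing-energy estimate moves around by several GeV, which is exactly the worry raised above.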

So these requirements give us a whole range of particles to search for. I’m involved in a physics group called exotics. “Exotics” is a generic term for anything beyond the standard model that isn’t the Higgs or SUSY. This isn’t to say that Higgs/SUSY searches aren’t beyond the standard model… I guess they get their own groups since so many people are interested in them. It makes the exotics working group more intimate :-). My interests (and potential thesis) are in particles that would unite quarks and leptons (like how a W unites the family of quarks and the family of leptons). These are generically called leptoquarks.

So what’s wrong with the Higgs? To the high energy physics community, it’s like the captain of the high school football team and the head cheerleader all rolled into one particle. I don’t know… I’m just not that into it.

-Regina

What We Might Find

Sunday, June 28th, 2009

I have been promising for a long time to talk about what the LHC experiments are looking for, if not just the Higgs boson.  There’s a tremendous amount of material available on this, but I am not going to look up or link to any of it; this will give you, at least, a snapshot of what I know and how I think about it.  If any theoretical particle physicists read this and feel the urge to slap their foreheads in anguish, I invite them to consider this an interesting study in how much information experimentalists actually retain from classes and seminars.

Edit (June 29, 9:30 EDT): As you might expect given the above, I made some oversimplifications and at least one outright error, which have been kindly pointed out by theoretical physicists in the comments.  For one mistake — mixing up two different kinds of extra dimensions — I have made some corrections with strikethroughs and italics in the appropriate section of the text.

To discuss what the LHC experiments are looking for, we need to understand what problems there are in our current understanding of particle physics — in other words, what makes us believe there ought to be any new particles at all?  The strongest case is for the Higgs boson or something like it; it plays a critical role in the behavior of the weak force and the masses of the associated W and Z bosons, which we already know behave exactly the way we would expect if there were a Higgs boson.    You might ask what happens if the W and Z boson masses and interactions are just a coincidence, and they just look like a Higgs Boson is involved, but actually there’s no such thing — the answer is that the Standard Model of particle physics becomes mathematically inconsistent, and makes senseless predictions at energies the LHC will investigate!  So there has to be something to make the theory behave.  However, that doesn’t mean there’s exactly one “Standard” Higgs boson, an issue I’ll get back to.

The next best clue to new physics — or the next biggest problem with what we know now, if you’d rather think of it that way — is called the Hierarchy Problem.  This is expressed most easily as the question, “why is gravity so much weaker than other forces?”  However, because the strength of each force changes as the energy of interactions changes — at different rates for different forces — we particle physicists prefer to frame the problem in terms of this question: “why are the W, Z, and (apparent) Higgs boson mass energies so much smaller than the energy at which the gravitational force becomes strong?”  If we take the Standard Model as the complete picture as far as we can, so that we assume there’s nothing for the LHC to find except the Higgs Boson, then the “desert” between the Higgs boson mass energy and the energy where gravity becomes strong is a factor of about 10,000,000,000,000,000!  That’s aesthetically displeasing, but it’s actually worse than it sounds at first.  The reason is that the Standard Model has to include the effects of quantum fluctuations on the masses of particles — and the fluctuations have to be allowed to have any energy up to the energy where the Standard Model “breaks down.”  If the Standard Model works up until a theory of Quantum Gravity (for example, String Theory) kicks in, then we have to allow energies up to where gravity is strong — that factor of 10,000,000,000,000,000 really hurts, because the quantum fluctuations force the Higgs boson to have a much higher mass than it needs to for the theory to work!  Here are some solutions to this problem:

  1. The Higgs Boson has a “bare mass” — i.e. the mass it starts with before quantum fluctuations — that is very large and negative, and cancels out the quantum fluctuations almost perfectly.  This is allowed, but seems rather implausible.
  2. The quantum fluctuations get cancelled out because of new particles whose effects balance out the old ones.  This suggests Supersymmetry, in which every existing particle has a supersymmetric partner, and the pair’s effects on the Higgs Boson mass do indeed cancel.
  3. There isn’t really a Higgs boson.  Instead, there is a new force with a new set of particles that “pair up” to act like the Higgs boson at low energies.  These are called Technicolor theories, because the new force looks a lot like the “color charge” found in theories of the strong force.
  4. Gravity isn’t really as weak as it seems.  Instead, it appears weak because it spreads out in several extra spatial dimensions that are curled up on themselves.  These dimensions would be something like a millimeter in size at most, but are called “Large Extra Dimensions” because they’re pretty big compared to the size of most things in particle physics.  So gravity would spread out in these dimensions, making us think that it’s so weak that it only becomes as strong as the other forces at very high energy — but actually it would surprise us by being strong at much lower energies, maybe even LHC energies.  This would mean that the range of energies allowed for quantum fluctuations affecting the Higgs Boson mass would be greatly reduced — if we want them to be “small enough,” that strongly suggests that gravity becomes strong at energies the LHC can investigate, and we can expect all kinds of new particles and phenomena.

I know that reasoning is rather complicated, but hopefully you’ll retain at least this basic idea: starting just by asking why gravity is so weak, and following the reasoning of our current theories of particle physics, we get that something very strange is going on with the Higgs boson — and the only way to fix it is to appeal to an amazing numerical cancellation, or to make a change to our understanding of particle physics.  And most changes we can make turn out to add a bunch of new particles, at energies right around the mass of the W and Z bosons, or just above them — in other words, exactly the energies the Large Hadron Collider will explore!
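
Written schematically (a heuristic sketch only, with Λ the energy where the Standard Model stops working, g a generic coupling, and all the careful bookkeeping suppressed), the problem looks like this:

```latex
% Quantum fluctuations push the Higgs mass up toward the cutoff scale \Lambda:
m_H^2 \;=\; m_{\mathrm{bare}}^2 \;+\; \delta m_H^2,
\qquad
\delta m_H^2 \;\sim\; \frac{g^2}{16\pi^2}\,\Lambda^2
% If \Lambda is the scale where gravity becomes strong, \delta m_H^2 is
% enormous, and option 1 below amounts to m_bare^2 canceling it almost exactly.
```

The solutions listed below are all different ways of shrinking Λ or canceling the δm_H² term with new particles, instead of relying on that near-miraculous cancellation.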

Let’s look at these new ideas in more detail.

  • In Supersymmetry, we have a new particle for each existing fundamental particle.  We know they’re all as heavy or heavier than the particles we’ve seen before, because otherwise they would have shown up in previous experiments, but they also can’t be too heavy or they won’t cancel those quantum fluctuations properly.  So we’ll see some new particles decaying into particles we know — and maybe into a non-interacting Lightest Supersymmetric Particle, which might turn out to be dark matter (a nice bonus)!
  • If there are Large Extra Dimensions, then we would effectively see new particles also.  This is because an ordinary particle that was moving in a circle around such an extra dimension would appear, in our three dimensions, to have its energy of motion “acting like” mass energy.  Motion in the extra dimension would only be allowed at certain speeds — for essentially the same reason that Hydrogen atoms only have certain energy levels, if you remember that from chemistry — so we would see familiar particles, but with a certain amount of apparent extra mass, and then another “copy” with the same amount of extra mass added again, and so on.  This would actually be pretty hard to distinguish from Supersymmetry, except that where in Supersymmetry the new partners always have different spin from the original particle, in this case they’d have the same spin. All of that is right for a different kind of extra dimensions, but rather than get into that, let me just put down what we’d actually see for Large Extra Dimensions.  It’s fun too — basically, because gravity will be strong at the LHC, we’ll be able to directly explore whatever theory unifies gravity with the other forces.  This could result in some very dramatic objects, including microscopic black holes.  (To be reminded why we know that such black holes cannot be dangerous, whatever their properties might turn out to be, click here.)  Black holes would decay into all kinds of things, making spectacular events in our detectors, and could actually be one of the easiest things to find if they’re light enough!
  • If there’s technicolor instead of a Standard Model Higgs Boson, the LHC experiments might have a pretty big challenge.  The new particles might be too heavy to produce, and only through careful and detailed study of certain interactions would we get indirect clues about what was going on.

This is hardly a comprehensive look at all the issues worth thinking about that might require new theories and new particles, but it perhaps gives you a bit of an idea of the possibilities that are out there.  Personally, I wouldn’t bet on any particular theory — but there are common features to new theories that make some kind of new “zoo” of particles at LHC energies a very real possibility.  Of course, finding all those new particles would raise all kinds of new questions; first we’d ask what theory described the new particles, and then there are all sorts of questions to ask about the new theory.  (For example, in Supersymmetry, we’d have to ask why the new particles are so much heavier than the ordinary particles they’re paired with.)  But answering old questions, and finding new ones to ask, is a particle physicist’s idea of heaven — it helps us understand a little more of the universe, and gives us lots more work to do!

I’ve been thinking about it since yesterday, and I’ve finally decided to take the plunge: I’m going to say a few words about the blogosphere debate on the CDF “ghost muon” paper.  I know that, by the demanding standards of the Internet, this is old news; the posts that started the mess were an eternity ago, last week.  In my defense, I have been traveling for the entire time, to Berlin and a few cities in Poland, in what now seems a confused blur of night trains and buses.  And in any case, I think my comments are universal enough that they’re worth making even if the debate is starting to die down.

I have relatively little to say about the paper itself, which was submitted last week but is not yet published.  Very briefly, the paper discusses a series of particle collisions seen by the CDF detector at the Tevatron Collider at Fermilab that appear to possibly contain muons which decayed from a very long-lived unknown particle — or maybe there’s a less dramatic explanation, and nobody’s figured it out yet exactly.  If you haven’t heard about this at all, I strongly recommend you go to Cosmic Variance for a more substantial summary.   One very big debate on the paper is whether it ought to have been submitted for publication in its present form; many experts who I know personally say that CDF should have been more careful in investigating the possible sources of the signal before publishing, and much of the CDF collaboration (including my colleagues at Berkeley) chose to take their names off of the paper’s author list.  The counter-argument, which won the day in the collaboration’s final decision, is that everything that could be done had been done, and that it was time to send the work out to the wider particle physics community to see if the signal could be understood and duplicated by other experiments.

A second “debate” is much more disturbing, centering on speculation that a group of theorists had written a new theory based on inside information from the paper before it was published.  When the group denied this, Tommaso Dorigo (who works on CDF and CMS) accused them point-blank of lying.  The exchange, originally in blog comments, is summarized here by Dr. Dorigo.  Although he qualifies his accusation a bit, he seems to stand by it and even reiterates it in the process of apologizing.

This kind of in-your-face accusation goes beyond the appropriate boundaries of professional discourse.  It seems to stem from the bizarrely-prevalent idea that being really obnoxious in public is normal, as long as it’s on the Internet.  Would you, dear reader, put up a poster calling your boss an idiot, or give a newspaper interview in which you speculate that one of your coworkers is a liar?  No, you wouldn’t!  And nothing changes because our job happens to be physics, or the venue happens to be the World Wide Web.  Of course we all have the right to free speech, but what we choose to say has consequences; others have the right to choose whether or not to collaborate with me, whether at the personal level or the level of a large-scale experiment, and one thing they can and will think about is whether I’m going to publicly insult them.

One of the theory paper authors, Professor Nima Arkani-Hamed, wrote a several-part response to these accusations, but one part of his comment really struck me.  It was about the physics blogosphere as a whole: he called it “brown muck” and said that he has “a very dim view of the physics blogosphere, and avoid[s] interacting with it.”  Upon reflection, this is a fair comment.  Many — though by no means all — of the physics blogs seem to spend a disturbing amount of time on personal “clashes” between “epic” personalities.  The ultimate example of this is found in the insults exchanged between Peter Woit and Lubos Motl, each of whom command large opposing followings (at least on the Internet) in the so-called “String Wars.”  The problem is that their extreme viewpoints and aggressive tactics don’t reflect what most physicists think about the issues; their drama, like these latest accusations about the ghost muons, is largely manufactured for consumption by the blogosphere.

I would like to think that the US/LHC Blogs offer a different vision, one that falls outside of Dr. Arkani-Hamed’s criticism.  We are, first and foremost, an outreach site.  We seek to explain the excitement of our work — the wonder of the Laws of Nature we’re trying to investigate, and the fantastic machines that we use for that investigation.  Of course we tell you about our lives in the process, to give you an understanding of what our work really involves.  We want to explain what our work means to you and why it’s worth your tax dollars, and we want to get young people excited about learning and maybe getting into careers in science.  Of course we also have interpersonal conflicts, nasty suspicions, and hallway rumors — just like anybody does — but in my opinion we’re not here to tell you about that stuff for two reasons: first, because all that nonsense is not what’s essential or exciting about our work, and second, because we owe our colleagues (and potential colleagues) the courtesy of not being rude to them in public.

I hope those of you who read our blog are looking for the stories that we think are important to tell; if not, sadly, it appears that you have a wealth of alternatives to choose from.  But I have been wondering about something, and in the words of Tommaso Dorigo, “I should like to open a poll for those heroic readers who came to the bottom of this post.”  Do you think all this infighting is valuable to know about?  Does it help the overall cause of expanding interest in, and knowledge about, our work?  (In fairness, Dorigo, Motl, and Woit are also known for writing very informative posts about subjects within their expertise.)  Or does the partisan warfare and discourtesy simply serve to distract readers seeking real knowledge?

You know my opinion on those questions, but I’d like to hear yours.  Until then, I’ll leave you with the words of Nima Arkani-Hamed: “I’m sure you’ll agree that there is more critical physics to do than there are hours in the day to do it, and I for one would like to get back to work.”

In the recent mini-wave of “What Will We Find at the LHC” posts, no-one mentioned that the first actual measurements at the LHC will certainly not be of anything as exotic as the Higgs boson, supersymmetry, or large extra dimensions. This is not for any reason as prosaic as the fact that it’ll take time to get to the design energy and luminosity, which is true. If we define the very notion of “finding” something as being “measuring a quantity that we could not predict with current tools”, then the very first measurements at the LHC will count as discoveries of great interest to those not just focussed on what typically counts as exotic phenomena.


To boil it down to something concrete, consider the number of particles produced in a typical collision at the LHC. And to make things more straightforward, only consider the number of charged particles, the ones that leave curling tracks when moving through a magnetic field.
These are neat looking events, and they happen each and every time protons collide, at every energy, since the dawn of the accelerator era (you need a few GeV to even make a bunch of pions!). Thus, these particles are the “grass” that one sees in lego plots showing two huge jets, or the steel wool amidst which the two high energy muons emerge after a Higgs particle decays. For most people, this part of the collision is a background that needs to be cut away to see the interesting physics.

Here’s the rub: while it’s moderately easy to count the number of particles in each event, no-one has ever managed to come up with a bottom-up theoretical scheme by which one can predict this number. This is mainly due to the somewhat-scandalous situation that we know what protons “do” when they get close to each other (and can propagate that information into very precise predictions for the production of high pT particles, etc — the bread and butter of the LHC), but we don’t really get “why” they do it. Thus, we don’t have a very solid means to extrapolate our current knowledge into the LHC era, even if the Tevatron is only a factor of 7 lower in energy.

Thus, I promise that the first things you will see coming out of the LHC program are a bunch of measurements pertaining to “minimum bias” events, i.e. the events 99% of the experimenters want to throw away so they can look for the needle in the haystack. Some of us (which include many of us directly interested in the heavy ion program at the LHC) want to see how the grass grows when those first collisions appear. We’ll count the particles emerging near 90 degrees, turn it into a “particle density” (by restricting the angle over which we count them), and put it on a plot with the rest of the data — probably with a few curves reflecting our favorite predictions. And everyone wants the first paper from the LHC, so it’ll be a real race, and the results will appear almost as soon as real collision data is written to tape (which may well be this fall!)

For entertainment value, here’s my take.

This is a pretty straightforward application of the ancient (and bizarre) Landau hydrodynamical model to p+p minimum bias collisions and heavy ion collisions. It assumes that the two protons dump all of their energy into the collision as they overlap, and the system expands collectively like a relativistic fluid after that (sound familiar?). It describes the multiplicity (linear with the entropy) weirdly well for heavy ions (the top curve) above 20 GeV or so, and predicts a very high density at the LHC. It is pretty scratchy for p+p (the bottom curve — which may or may not be related to trigger bias issues — we’ll have to discuss that soon, too) but at least predicts something quite a bit higher than most popular models. But if this model has anything non-trivial to say about proton-proton collisions (something suggested by Landau and Fermi in the 1950’s, but which became controversial and even “heretical” during the rise of QCD, something I wrote about a few years ago), then we may have to start to take seriously the possibility that even small systems have “medium”-like aspects similar to what people already say about the QGP at RHIC. And how fun would that be?
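
For a rough feel for the numbers, here is my own back-of-the-envelope version (assuming the textbook Landau result that the multiplicity grows like s^(1/4), i.e. like the square root of the center-of-mass energy √s):

```python
# Landau-model scaling: N_ch ~ s**0.25, i.e. ~ sqrt(sqrt_s).
# A back-of-the-envelope extrapolation, not a substitute for the real curves.
sqrt_s_tevatron = 1.96e3  # GeV
sqrt_s_lhc = 14.0e3       # GeV, design energy

ratio = (sqrt_s_lhc / sqrt_s_tevatron) ** 0.5
print(ratio)  # ~2.7x more charged particles per event than at the Tevatron
```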

It’s very exciting to be lucky enough to be here at CERN as the LHC is finally set to turn on. People first started thinking about the LHC over 30 years ago (about when I was born), and magnet R&D began in 1988 (when I was in middle school). The CERN council approved it in 1994 (as I was graduating High School). I have seen presentations on the physics of the LHC since I entered the field 10 years ago as a graduate student. After the long build-up, a lot of people are starting to guess what will happen when the first protons finally collide. Everyone is eagerly expecting big discoveries, and many expect they will come fast.

I have seen talks in the past few years asking how many WEEKS it will take to discover supersymmetry. Here is a blog post from the APS meeting about a more conservative timeline of possible discoveries, some as early as 2009, some as late as 2019. My own experience on start-ups comes from being a member of the D0 Experiment during the start-up of Run II at the TeVatron at Fermilab. If that is any guide, the LHC won’t discover anything on Day One, and probably not after Week One, Month One, and maybe even Year One.
The official start-up of Run II at the TeVatron was on March 1, 2001. The first physics paper wasn’t submitted until March, 2003 by the CDF collaboration.

Here are the number of Run II publications per year from the CDF Experiment (link):
2001: 0
2002: 0
2003: 3
2004: 4
2005: 28

and from the D0 Experiment (link):
2001: 0
2002: 0
2003: 0
2004: 2
2005: 27

So it really took 4 years for the papers to start pouring out. To give you an idea of the kind of “discoveries” we were trying to make on D0 at first, here is an excerpt of an email to the collaboration from April 3, 2001:

Dear colleagues,
We are getting collisions, though not all the detectors
are timed in. We expect most the detectors get timed in
within the 1×8 stores. … we agreed to give out five
bottles of wine, one bottle each, for finding the following
objects:

First two jet event
First photon
First reconstructed track in the SMT+CFT system
First electron
First muon

Event displays of these objects are necessary…
This competition will go on till all the bottles
are exhausted…

So with a new detector, it is first necessary to re-discover what you already know should be there, and then one can move on to the real discoveries. In my opinion, the ATLAS detector is in better shape to collect data than D0 was at that time, and in fact I have been looking at some cosmic-ray data we collected recently and the detector is performing pretty well already. So things might go quicker here.

Another big difference is that, with respect to discoveries, the LHC is a big leap in energy over previous accelerators, while the TeVatron in 2001 was only a slight upgrade in terms of energy over its previous run, so big discoveries were not expected quickly. A better analogy might be the turn-on of the SppS accelerator and the UA1 and UA2 detectors in 1981, which are often cited for making Nobel-prize-worthy discoveries of the W and Z bosons very quickly after turning on. How quickly? From here, I found an official CERN history:

In summer 1981 the first collision between protons and antiprotons was recorded. The first experiments began in November 1981. At the beginning of 1982 two accidents damaged the UA1 detector, so the experiment was stopped until summer 1982. UA1 and UA2 experiments started again in September 1982 until December 1982, when the accelerators were switched off for two months. During this time data were analyzed and physicists were convinced of having discovered the W boson. This was announced in a press conference held on 25 January 1983. The next step was the discovery of Z boson. The experiments on SPS began again on April 1983, and there were soon major results. On 1 June 1983 CERN formally announced the discovery of the Z boson.

So in this case, it still took almost 2 years from the first beam to the first discovery announcement. Although, really it only took 4 months from data taking to announcement of the W boson discovery after the UA1 detector was fixed, and then less than 6 months more until the Z boson discovery announcement.

So how quickly CERN makes headlines with a discovery depends on how smoothly the work of thousands of people comes together over the next months, as well as on what nature has in store for us. Since of course no one knows that, we will just have to wait and see. It should be fun.
