Flip Tanedo | USLHC | USA

No love for low scale supersymmetry at the LHC

Happy Valentine’s Day everyone… well, unless you were expecting hints for supersymmetry (SUSY) at the LHC. Last night the ATLAS collaboration posted the results for one of its supersymmetry searches to the arXiv. They corroborate last month’s results from CMS on a similar type of search. (The CDF site has an excellent public summary that should be at the right level for physics enthusiasts with no formal background.)

What is supersymmetry?

Supersymmetry is an extension of the Standard Model in which every particle and anti-particle has a superpartner particle with a silly name, such as “gluinos” as the partners of gluons and “squarks” as the partners of quarks. The neat thing about supersymmetry is that the partner of a matter particle is a force particle (with a prefix s-), while the partner of a force particle is a matter particle (with a suffix -ino). SUSY does a lot of great stuff for us theoretically, but it must be broken so that the Standard Model particles and the SUSY particles are split up and have different masses. Because this is Valentine’s day, let’s leave the details of this splitting up to another post.

What is the LHC telling us?

Here’s one of the key plots from the ATLAS paper (which includes the CMS result):

I’ll not get into the details here and will keep the discussion as accessible as possible. The axes of the plot are parameters in a particular supersymmetric model. The horizontal axis is the “universal scalar mass” m0 (related to the mass of the squarks), while the vertical axis is the “universal gaugino mass” (related to the masses of the gluino and its cousins). The area inside the curves (lighter masses) is ruled out. The red line is the ATLAS result, the black line is the recent CMS result, and the other lines are exclusions from older experiments.

These parameters aren’t quite the same as the masses of the superpartners, but they are related by formulae which experts in the field have memorized. A good estimate for the stringency of the bounds on the actual superpartner masses comes from the conclusion of the paper:

For a chosen set of parameters within MSUGRA/CMSSM, and for equal squark and gluino masses, gluino masses below 700 GeV are excluded at 95% CL.

Some translations:

  • MSUGRA/CMSSM: These stand for “minimal supergravity” and “constrained minimal supersymmetric Standard Model.” The most general supersymmetric version of the Standard Model has over 115 free parameters… this would be a nightmare to plot. For simplicity, experimentalists typically plot their results against simplified reference models with much smaller parameter spaces.
  • Squark and gluino masses: squarks are the partners of quarks and gluinos are the partners of gluons. The experiment is setting a lower bound on these masses. (Recall: heavier things are harder to produce.) The 700 GeV lower bound on the squark/gluino mass (in the case where they’re equal) is much heavier than any particle in the Standard Model—recall that the top quark is ‘only’ 172 GeV.
  • 95% CL. This is a confidence level that quantifies the statistical strength of the bound. Roughly, it answers the question, “based on the data, how sure are you of the statement you’re making?” Here’s a great explanation.
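
For intuition, here’s what a 95% CL exclusion looks like in the simplest possible setting: a counting experiment with a known background. This is a toy sketch with invented numbers, not the actual ATLAS/CMS statistical machinery (which is considerably more sophisticated):

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for a Poisson distribution with mean mu."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

def upper_limit_95(n_obs, background):
    """Smallest signal s such that seeing n_obs or fewer events would
    happen less than 5% of the time if the true mean were s + background.
    Any larger signal is then 'excluded at 95% CL'."""
    s = 0.0
    while poisson_cdf(n_obs, s + background) > 0.05:
        s += 0.01
    return s

# Toy experiment: 3 events observed, 2.5 expected from background alone
limit = upper_limit_95(3, 2.5)
print(f"signal excluded above ~{limit:.2f} events at 95% CL")
```

The excluded signal yield is then translated into excluded particle masses through the predicted production rates.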

What’s actually happening at the LHC?

The general idea is that a common feature of most SUSY models is that when a supersymmetric partner is produced at a collider, it eventually decays into familiar stuff plus a particle which escapes undetected. This escaping particle is called the lightest supersymmetric particle (LSP) and is a natural dark matter candidate, but its presence can only be inferred experimentally because the measured momenta of all the familiar stuff don’t balance. Thus a good way to search for the presence of supersymmetric partners is to look for:

  1. High energy “normal” particles (typically QCD “jets”)
  2. Large “missing energy,” i.e. momentum that doesn’t add up

The high energies are important to tell us that something heavy (like a new particle) may have been involved, and the missing energy is important to tell us that something escaped undetected. By looking for decays of this type, ATLAS and CMS are able to constrain the existence of supersymmetric partners up to a certain mass. In fact, the reason why the LHC has been able to greatly improve the bounds on SUSY—even at such an early stage of running—is that the previous constraints from the Tevatron were limited not by how much data they could take, but by the energy scale of the collision.
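
The momentum-balance idea can be sketched in a few lines of Python. This is a toy example with invented momenta; real analyses use only the transverse (perpendicular to the beam) components, since the colliding quarks’ momentum along the beam is unknown:

```python
import math

def missing_transverse_momentum(visible):
    """Return the (px, py) that would balance the visible particles,
    plus its magnitude (MET). Momentum conservation in the transverse
    plane means the components should sum to zero if nothing escaped."""
    px = sum(p[0] for p in visible)
    py = sum(p[1] for p in visible)
    return (-px, -py), math.hypot(px, py)

# Hypothetical event: two hard jets (px, py in GeV) that don't balance
jets = [(250.0, 40.0), (-180.0, 10.0)]
(met_x, met_y), met = missing_transverse_momentum(jets)
print(f"missing transverse momentum = {met:.1f} GeV")
```

A large imbalance like this is the experimental fingerprint of something invisible, whether a neutrino or, more excitingly, an LSP.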

Here’s an example, another plot from the ATLAS paper:

This plot shows the number of events in a particular range of “effective mass,” a kind of kinematic variable which characterizes the energy of an event. Here’s what’s happening:

  1. ATLAS records a bunch of data over the past year or so. For each recorded particle collision (“event”), ATLAS records information about what its detectors see (“signal”).
  2. Physicists go through this data when they want to search for new particles. The set of physicists who worked on this search focused only on the events whose signals included a lepton (e or μ), QCD jets (quarks and gluons), and missing energy.
  3. They then plot the number of events whose “effective mass” is in a certain energy range. This gives the data points on the plot above.
  4. In order to compare to the Standard Model, they run a “Monte Carlo” simulation of the kind of signal that known physics would produce in this particular channel. These are all of the different colored pieces of the histogram—they represent events that we expect to be counted even if there is no new physics in these events.
  5. If the data points line up with the sum of expected events, then we conclude (up to a certain statistical significance) that there was no new physics observed.

For reference, the dotted line is the expected contribution from one particular choice of SUSY parameters. That line would have to be added to the Standard Model sum (shown as a thin red line); clearly the data points do not show this excess.
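
As a rough illustration, the “effective mass” used in such searches is essentially a scalar sum of transverse energies in the event. The sketch below uses invented numbers, and the paper’s exact definition may differ in which objects it includes:

```python
def effective_mass(jet_pts, lepton_pt, met):
    """A common kinematic variable: the scalar sum of jet transverse
    momenta, the lepton pT, and the missing transverse energy (GeV).
    Heavy new particles tend to populate the high-m_eff tail."""
    return sum(jet_pts) + lepton_pt + met

# Invented event: three jets, one lepton, large missing energy (GeV)
m_eff = effective_mass([320.0, 210.0, 90.0], 45.0, 180.0)
print(m_eff)  # 845.0
```

The histogram in the paper is just the count of events in each bin of this variable, compared against the simulated Standard Model expectation.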

What does this mean for supersymmetry?

This isn’t great news for supersymmetry. One of the appealing features of supersymmetry is that it can solve the hierarchy problem of the Higgs mass. This problem is only really solved, however, if the SUSY particles are not that much heavier than their Standard Model partners. Thus the more we push up the lower bound on the super partner masses, the more trouble we have explaining the Higgs paradigm within the Standard Model.

I think I am not yet enough of an expert to comment on how severe the recent ATLAS/CMS results are for the current favorite models of supersymmetry. However, I will note that the particular model used to make these bounds represents a very narrow subset of possible supersymmetric extensions of the Standard Model. As explained above, this is by necessity: a plot over a 115-dimensional parameter space is simply not possible. Most of these parameters are related in plausible ways, and the bounds from ATLAS and CMS are probably fairly robust over huge swaths of parameter space, but in principle there is a lot of freedom to tweak a parameter here or there to try to evade particular experimental bounds. [For experts: last I heard there was some nitpicking about the tan-β dependence of these results?]

This is actually a fairly important point. For the past two decades theorists have worked hard to come up with clever supersymmetric models which can either give novel experimental signatures or which are otherwise “generic” in a way that is not captured by the usual models used to experimentally constrain SUSY. With the advent of the LHC era, however, more thought has gone into better interfacing with our experimental colleagues to connect the results of the LHC to a more robust set of SUSY parameters. (This is part of a larger shift in the particle physics community over the past decade to have better communication between our theoretical and experimental practitioners.)

Anyway, there’s one thing that’s for sure: the Standard Model particles will be without super partners once again on Valentine’s day.

PS — [from Cosmic Variance] apparently the White House is also due to release its FY2012 budget request this Valentine’s day. Given the push towards spending cuts, it’s not looking like fundamental science will get much love… but I’m crossing my fingers anyway. (I don’t want to get political, but fundamental research is an investment in the American science and engineering infrastructure and the future of the American economy.)

  • I should also say that this is only one of many searches for new physics (including many other SUSY searches).

    For an update on science in the White House FY2012 budget proposal, here’s a summary from Peter Woit: http://www.math.columbia.edu/~woit/wordpress/?p=3455

  • Dear Flip, a good text. Do you have a simple enough explanation why the same 35/pb collected by ATLAS gave you bounds that seem so substantially stronger than the bounds from the same amount of 35/pb collected by the CMS? Is that due to a difference in the detector, or the methods?

    If you want to see the parameters where one really waits for the LHC to make a verdict – and where it will start to bite the “real meat” – see e.g. the parameters of the surviving Indian supersymmetric island:


    Gauginos around 900 GeV. You are not too far from this point. The LHC could have discovered SUSY in the very early months. It hasn’t happened. But it’s still very far from falsifying the points that were likely based on the latest pre-LHC, including Fermilab, data. If and when you publish an upper limit that goes to a TeV, I will begin to be nervous.

  • Hi Lubos, thanks for the link. I don’t know much about comparing the ATLAS vs CMS bounds, but at some level we’re comparing apples and oranges, right? The ATLAS paper is looking at jets + MET + lepton, while the CMS paper is only looking at jets + MET. Should we be expecting a jet + MET paper from ATLAS soon?

    I know you’ve already seen (and commented) on Jester’s post, but for others: Resonaances has a very nice blog post on these results:



  • Thanks! I kind of saw that the sets of channels were different but wanted to hear the precise difference from a professional. 😉

  • Jonathan Clift

    Can I ask a question about the second plot?

    Does the meff[GeV] axis scale roughly linearly with collision energy? What I mean is, if the collision energy goes up to 14TeV does that then give you data right up to the 1200 to 1400 area? If so, it must have made the decision to stay at 7TeV for another year a really difficult one.

  • Hi Jonathan — that’s an *excellent* question. The most honest answer I can give you is “I don’t know.” To be even more honest, I’m probably not nearly as qualified as any of the other bloggers on this site to begin to answer this.

    I suspect the answer is no. Broadly speaking, there are two things we need to study new physics at high scales: (1) high energies, and (2) lots of data, i.e. “high luminosity.” I imagine that just taking more data will make a difference by decreasing statistical uncertainties.

    Of course, if the squarks/gluinos are really heavy, then our only shot is to make sure that we’re getting enough events at high enough energies. Since the actual “pointlike” interactions are between quarks which don’t necessarily carry the entire momentum of the proton, each proton/proton collision has less energy than 3.5 + 3.5 TeV. So having more luminosity at 7 TeV will increase the number of high energy events… but certainly not as much as having more luminosity at a higher center of mass energy.

    The relation between center of mass energy and sensitivity is not obvious to me, though. (It might be obvious… just not to me!) The background is different at different energy scales, so maybe it gets much better, maybe it gets worse.

    As Lubos says, though, when we start pushing the bounds to around a TeV scale or so, people will start sweating about the fate of low energy supersymmetry.

    Anyway, the summary is that I’m sorry I can’t answer this better. Perhaps the following links (and links therein) might have more to say:


  • I take it you’ve not yet seen the (more impressive?) ATLAS 0-lepton results, part of which were released in preliminary form at Aspen last Thursday.

    See slides 12, 21 and 23 of


  • Shamino

    The reason why ATLAS has a farther-reaching limit curve than CMS is that ATLAS relies on MC, and that is good enough, while CMS is being more conservative and relies on data-driven methods. The reaches are very comparable when similar types of methods are compared.

    It is unfortunately misleading.

  • Many thanks “Rutterbasher” and Shamino for the comments. I was not aware of the preliminary plot at Aspen, but was delighted that you directed me to it. So it seems like the “m(gaugino) = m(squark)” bound within mSUGRA is now closer to 800 GeV?

    I wasn’t aware that ATLAS and CMS used such different methods—when you say MC vs. data-driven, I’m assuming you mean for calculating the background? Is there a way to quantify how reliable/unreliable the Monte Carlo is relative to the data-driven technique?


  • It is not correct to suggest that the ATLAS result has a better reach “because it uses MC”.

    It is true that ATLAS and CMS have used totally different approaches, though, for their first papers. It is true that CMS, in its first paper, elected to use “alphaT”, a variable that it acknowledges has strong QCD rejection but only weak sensitivity to SUSY. This was (it seems to me) a decision based on a desire to create a robust analysis that could discover SUSY very quickly if it was very easy to see. Nothing wrong with that. You could call it a “conservative approach”.

    ATLAS, on the other hand, decided to go all out (from the beginning) with an analysis designed for something approaching maximum reach, right from the start. As a consequence, ATLAS used a totally different approach, eschewing alphaT in favour of variables like m_eff and m_T2 with much greater sensitivity to SUSY. Indeed, at ICHEP in Paris last year, CMS people suggested to me that CMS was likely to produce an m_eff or m_T2 based paper themselves on the first year’s data. Whether they still will do this, I do not know. If they do produce such a paper it will (most probably) have a very similar reach to that of ATLAS.

    So to conclude, the difference in the reach between ATLAS and CMS is nothing to do with “MC” versus “data-driven” BG estimates — it is to do with one having done an analysis using “Apples” and the other having used “Oranges”. AlphaT is just not optimised for SUSY discovery. CMS openly admits this in section 4.4 of their paper


    where they say

    “Both these variables [Meff and deltaPhi [a surrogate for m_T2]] exhibit differences between SUSY signal events and events from SM backgrounds and could, therefore, be used to improve the limits extracted in the following section. We have chosen not to do so because the current search has been optimized for the demonstration of a potential new signal, rather than for the extraction of the most stringent limits in the SUSY parameter space.”

  • Hello Rutterbasher! Thanks very much for these insights, it’s really helped me (and I imagine many other readers) put these results into some context.

  • Supersymmetry was suggested independently in 1971 by Juri Gol’fand and Evgeni Likhtman, in 1973 by Dmitri Volkov and V. Akulov, and in 1974 by Julius Wess and Bruno Zumino. In 1976 Peter van Nieuwenhuizen, Sergio Ferrara, Daniel Z. Freedman, Stanley Deser, and Bruno Zumino suggested a local supersymmetry called supergravity. In 1981 Edward Witten showed that supersymmetry can solve several shortcomings of Grand Unified Theories. In 1984 Michael Green and John Schwarz showed that string theory and supersymmetry can be combined; this is superstring theory. In 1995 Edward Witten showed that the membrane concept can reconcile 11-dimensional supergravity with the 10-dimensional superstring theory. Both theories are limiting cases of an 11-dimensional M-theory.

    Supersymmetric theories predicted that the elementary particles of the standard theory of particle physics (leptons, quarks, the photon, gluons, the W and Z bosons, the Higgs boson) have supersymmetric partners. These supersymmetric particles (called neutralinos, photinos, gluinos, winos, zinos, squarks, and sleptons) were all predicted to have rest masses between 50 and 300 GeV (billion electron volts).

    Now the ATLAS Collaboration of the LHC (Large Hadron Collider) presented data (arXiv: 1102.2357) which do not confirm the gluino. It would have been detected if its rest mass were less than 700 GeV.

    I am not so surprised that signs of light supersymmetric particles have not been detected. I predict that supersymmetry will not be confirmed. My arguments are the following.

    (1) The main reason for supersymmetry is that it can explain some shortcomings of minimal Grand Unified Theories, i. e. the mass-hierarchy problem (i. e. the fact that W- and Z-boson do not have rest masses of 10^15 GeV, although they should have “eaten” (coupled to) the Higgs bosons of Grand Unification) and the non-observation of the proton decay (lower limit: mean proton lifetime of 10^33 years).

    But this argument requires that there is Grand Unification.

    In 1997 I suggested (Modern Physics Letters A 12, 3153 – 3159 = hep-ph/9708394) a generalization of quantum electrodynamics, called quantum electromagnetodynamics. This theory is based on the gauge group U(1) x U’(1). In contrast to QED it describes electricity and magnetism as symmetrical as possible. Moreover it explains the quantization of electric charge. It includes electric and magnetic charges (Dirac magnetic monopoles) and two kinds of photon, the conventional Einstein electric photon and the hypothetical Salam magnetic photon. The electric-magnetic duality of this theory reads:

    electric charge — magnetic charge
    electric current — magnetic current
    electric conductivity — magnetic conductivity
    electric field strength — magnetic field strength
    electric four-potential — magnetic four-potential
    electric photon — magnetic photon
    electric field constant — magnetic field constant
    dielectricity number — magnetic permeability

    Because of the U(1) x U’(1) group structure and the Dirac quantization condition e * g = h (unit electric charge times unit magnetic charge equals the Planck constant), this theory is hard to reconcile with Grand Unification, although a group such as SU(5) x SU’(5) is in principle not impossible.

    (2) Another reason for supersymmetry is that it can explain the existence of (anti-symmetrical) fermions in an otherwise symmetrical theory (such as Special Relativity and General Relativity).

    However, it has long been known that a generalization of General Relativity which includes anti-symmetry is Einstein-Cartan theory. The affine connection of this theory includes not only the non-Lorentz invariant symmetrical Christoffel symbol but also the Lorentz invariant anti-symmetrical Torsion tensor.

    Within the framework of a quantum field theory, the Torsion tensor corresponds to a spin-three boson called tordion, which was introduced in 1976 by F. W. Hehl et al.: Reviews of Modern Physics 48 (1976) 393 – 416.

    In 1999 I discussed (International Journal of Modern Physics A 14, 2531-2535 = arXiv: gr-qc/9806026) the properties of the tordion. Moreover I suggested that the electric-magnetic duality is analogous to the mass-spin duality. This analogy reads:

    electric charge — magnetic charge — mass — spin

    electric field constant — magnetic field constant — gravitational constant — reduced Planck constant

    electric four-potential — magnetic four-potential — metric tensor — torsion tensor

    electric photon — magnetic photon — graviton — tordion

    (3) Supersymmetric theories including superstring and M theory have not much predictive power. For example, so far no one has shown that these theories predict the empirically obvious Naturkonstanten-Gleichung (fundamental equation of unified field theory, Modern Physics Letters A 14, 1917-1922 = arXiv: astro-ph/9908356):

    ln (kappa * c * H * M) = −1 / alpha

    where kappa is the Einstein field constant, c is the speed of light, H is the Hubble constant, M is the Planck mass, and alpha is the fine-structure constant. By using the WMAP−5 value

    H = (70.5 +/- 1.3) km / (s * Mpc)

    (E. Komatsu et al.: Astrophys. J. Suppl. Series 180 (2009) 330 – 376) the left-hand side yields

    ln (kappa * c * H * M) = – 137.025(19)

    which is within the error bars equal to

    – 1 / alpha = – 137.035 999 679(94)

    The Naturkonstanten-Gleichung predicts the Hubble constant to be

    H = 69.734(4) km / (s * Mpc)
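
For the curious, the numerical coincidence claimed above is straightforward to check with standard values of the constants. The sketch below only verifies the arithmetic of the comment, nothing more; the constant values are assumed (CODATA-era figures, with H taken from the quoted WMAP-5 number):

```python
import math

# Assumed SI values; H from the comment's WMAP-5 figure
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s
hbar = 1.0545718e-34     # reduced Planck constant, J s
H = 70.5e3 / 3.0857e22   # Hubble constant in s^-1 (70.5 km/s/Mpc)

kappa = 8 * math.pi * G / c**4   # Einstein field constant
M = math.sqrt(hbar * c / G)      # Planck mass, kg

lhs = math.log(kappa * c * H * M)
print(lhs)  # roughly -137, numerically near -1/alpha
```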

  • Thailand Surrogacy

    Supersymmetry is heady stuff to a layman! Thanks to all here, for writing this up and helping us “mere mortals” to get a grasp on such weighty matters. 😉