
Posts Tagged ‘first results’

The CUORE-0 collaboration just announced a result: a new limit of 2.7 × 10²⁴ years (90% C.L.) on the half-life of neutrinoless double beta decay in ¹³⁰Te. Or, if you combine it with the data from Cuoricino, 4.0 × 10²⁴ years. A paper has been posted to the arXiv preprint server and submitted to the journal Physical Review Letters.


Bottom: Energy spectrum of 0νββ decay candidates in CUORE-0 (data points) and the best-fit model from the UEML analysis (solid blue line). The peak at ∼2507 keV is attributed to ⁶⁰Co; the dotted black line shows the continuum background component of the best-fit model. Top: The normalized residuals of the best-fit model and the binned data. The vertical dot-dashed black line indicates the position of Qββ. From arXiv.

CUORE-0 is an intermediate step between the upcoming full CUORE detector and its prototype, Cuoricino. The limit from Cuoricino was 2.8 × 10²⁴ years**, but it was limited by background contamination in the detector, and it took a long time to reach. For CUORE, the collaboration developed new and better methods (which are described in detail in an upcoming detector paper) for keeping everything clean and uniform, and increased the amount of tellurium by a factor of 19. The results coming out now test and verify all of that except the increased mass: CUORE-0 uses all the same cleaning and assembly procedures as CUORE, but with only the first of the 19 towers of crystals. It took data while the rest of the towers were being built. We stopped taking CUORE-0 data once its sensitivity was slightly better than Cuoricino's, which required only about half the exposure of the Cuoricino run. The resulting background was 6 times lower in the continuum parts of the spectrum, and the energy resolutions (which were calibrated individually for each crystal each month) were more uniform. So this is a result to be proud of: even before the CUORE detector starts taking data, we have this result to herald its success.
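To get a feel for the scale of that limit, here's a rough back-of-the-envelope conversion (my own arithmetic, not a number from the paper): a half-life limit of 4.0 × 10²⁴ years corresponds to well under one decay per year in a full kilogram of ¹³⁰Te.

```python
import math

# A half-life limit T_half on N atoms implies a decay-rate limit:
#   rate < N * ln(2) / T_half
N_A = 6.022e23                        # Avogadro's number, atoms per mole
atoms_per_kg = 1000.0 / 130.0 * N_A   # atoms in 1 kg of Te-130 (~4.6e24)
T_half_years = 4.0e24                 # the combined CUORE-0 + Cuoricino limit

rate_limit = atoms_per_kg * math.log(2) / T_half_years
print(f"< {rate_limit:.2f} decays per year per kg of Te-130")
```

That's why these experiments need tonne-scale detectors, years of running, and obsessively low backgrounds.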


The energy spectra measured in both Cuoricino and CUORE-0, displaying the factor of 6 improvement in the background rates. From the seminar slides of L. Canonica.


The result was announced in the first seminar of a grand tour of talks about the new result. I got to see the announcement at Gran Sasso today; perhaps you, dear reader, can see one of the talks too! (And if not, there's video available from today's seminar.) Statistically speaking, of these presentations you're probably closest to the April APS meeting if you're reading this, but any of them would be worth the effort to see. There was also a press release today, with coverage in the Yale News and Berkeley Lab news, which is why I'm keeping this post pretty short.


The Upcoming Talks:

There are also two more papers in preparation, which I’ll post about when they’re submitted. One describes the background model, and the other describes the technical details of the detector. The most comprehensive coverage of this result will be in a handful of PhD theses that are currently being written.

(post has been revised to include links with the arXiv post number: 1504.02454)

**Comparing the two limits to each other is not as straightforward as one might hope, because different statistical methods were used to obtain them, which will be covered in detail in the papers. The two limits are roughly similar no matter how you look at them, and still the new result has better (= lower) backgrounds and took less time to achieve. A rigorous, apples-to-apples comparison of the two datasets would require me to quote internal collaboration numbers.


The summer conference season may be winding down, but that doesn't mean we are quite done yet.  Today was the first day of the Lepton Photon 2011 (LP2011) Conference, which is taking place in Mumbai, India all this week.  The proceedings of LP2011 are available via webcast from CERN (although Mumbai is ~10 hours ahead if you are in the Eastern Standard Time zone).  But if you're a bit of a night owl and wish to participate in the excitement, then this is the link for the webcast.

The complete schedule for the conference can be found here.

But what was shown today?  Today was a day of Higgs & QCD Physics.  I’ll try to point out some of the highlights of the day in this post.  So let’s get to it.

The Hunt for the Higgs

Today’s update on the CMS Collaboration’s search for the ever-elusive Higgs boson made use of ~110-170 trillion proton-proton collisions (1.1-1.7 fb⁻¹), covering eight separate decay channels and a Higgs mass range of 110-600 GeV.   The specific channels studied and the corresponding amount of data used for each are shown in the table at left.  Here ℓ represents a charged lepton and ν represents a neutrino.

The CMS Collaboration has not reported a significant excess of events in the 110-600 GeV range at LP2011.  However, the exclusion limits for the Higgs boson mass range were updated from our previously reported values at EPS2011.  By combining the results of the eight analyses mentioned above the CMS Collaboration produced the following plot summarizing the current state of Higgs exclusion (which I have taken from the Official CMS Press Release, Ref. 1; and CMS PAS HIG-11-022, Ref. 2.  Please see the PAS for full analysis details):


Standard Model Higgs boson combined confidence levels showing current exclusion regions, image courtesy of the CMS Collaboration (Ref 1 & 2).


But how do you interpret this plot?  Rather than re-inventing the wheel, I suggest you take a quick look at Aidan‘s nice set of instructions in this post here.

Now then, from the above plot we can see that the Standard Model Higgs boson has been excluded at the 95% confidence level (C.L.) in the ranges 145-216, 226-288 and 310-400 GeV [1,2].  At a lower C.L. of 90%, the Collaboration has excluded the SM Higgs boson for a mass window of 144-440 GeV [1,2].

These limits shown at LP2011 improve on the previous limits shown at EPS2011 (using 1.1 fb⁻¹).  The previous exclusion limits were 149-206 and 300-440 GeV at 95% C.L., or 145-480 GeV at 90% C.L.

While the LP2011 results did not show a Higgs discovery, the CMS Collaboration is removing places for this elusive boson to hide.

QCD Physics

Today’s other talks focused on quantum chromodynamics (QCD), with the CMS Collaboration showing results for a variety of QCD-related measurements.

One of the highlights of these results is the measurement of the inclusive jet production cross section.  The measurement was made for a jet transverse momentum over a range of ~20-1100 GeV.  The range in cross-section covers roughly ten orders of magnitude!

Measurement of the inclusive jet cross-section made with the CMS Collaboration, here data are the black points, the theoretical prediction is given by the red line. Image courtesy of the CMS Collaboration (Ref. 3).

In the plot above, each data series is “binned” by what is known as the jet’s rapidity (denoted by the letter y), or in this case the absolute value of the jet’s rapidity.  Rapidity is a measure of where a jet is located in space.

The CMS detector is a giant cylinder, with the collisions taking place in the center of the cylinder.  If I bisect the detector at the center with a plane (perpendicular to the cylinder’s axis), objects with lower rapidities make a small angle with this plane, whereas objects with higher rapidities make a large angle with it.
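If you'd like to compute rapidity yourself, the standard formula uses a jet's energy and its momentum component along the beam axis. A minimal sketch (my own illustration, not CMS code):

```python
import math

def rapidity(E, pz):
    """Rapidity y = 0.5 * ln((E + pz) / (E - pz)),
    where pz is the momentum component along the beam axis (GeV)."""
    return 0.5 * math.log((E + pz) / (E - pz))

# A jet going out perpendicular to the beam (pz = 0) has y = 0;
# a jet mostly along the beam direction has large |y|.
print(rapidity(100.0, 0.0))   # perpendicular to the beam: y = 0
print(rapidity(100.0, 76.2))  # closer to the beam: y ~ 1
```

So the |y| bins in the plot simply slice the detector into rings from "central" (small |y|) to "forward" (large |y|).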

As we can see from the above plot, the theoretical prediction of QCD matches the experimental data rather well.

Another highlight of the CMS Collaboration’s results shown at LP2011 is the measurement of the di-jet production cross-section.

Measurement of the dijet production cross-section made with the CMS Collaboration.  Again, data are the black points, the theoretical prediction is given by the red line.  Image courtesy of the CMS Collaboration (Ref. 3).

Here the CMS results shown cover an invariant dijet mass of up to ~4 TeV; that’s over half the center-of-mass collision energy!  Again, the theory is in good agreement with the experimental data!

And the last highlight I’d like to show is the production cross section of isolated photons as recorded by the CMS Detector (this is a conference about leptons and photons after all!).

Measurement of the isolated photon production cross-section made with the CMS Collaboration. Again, data are the black points, the theoretical prediction is given by the red line.  Image courtesy of the CMS Collaboration (Ref. 3).

What happens in isolated photon production is that a quark in one proton interacts with a gluon in the other proton.  This interaction is mediated by a quark propagator (which is a virtual quark).  The outgoing particles are a quark and a photon.  Essentially this process is a joining of QCD and QED; an example of the Feynman diagram for isolated photon production is shown below (with time running vertically):

From the above plot, the theoretical predictions for isolated photon production are, again, in good agreement with the experimental data!

These and other experimental tests of QCD shown at LP2011 (and other conferences) are illustrating that the theory is in good agreement with the data, even at the LHC’s unprecedented energy level.  Some tweaks are still needed, but the theorists really deserve a round of applause.



But I encourage anyone with the time or interest to tune into the live webcast all this week!  Perhaps I’ll be able to provide an update on the other talks/poster sessions in the coming days (If not check out the above links!).

Until Next Time,




[1] CMS Collaboration, “New CMS Higgs Search Results for the Lepton Photon 2011 Conference,” http://cms.web.cern.ch/cms/News/2011/LP11/, August 22nd 2011.

[2] CMS Collaboration, “Combination of Higgs Searches,” CMS Physics Analysis Summary, CMS-PAS-HIG-11-022, http://cdsweb.cern.ch/record/1376643/, August 22nd 2011.

[3] James Pilcher, “QCD Results from Hadron Colliders,” Proceedings of the Lepton Photon 2011 Conference, http://www.ino.tifr.res.in/MaKaC/contributionDisplay.py?contribId=122&sessionId=7&confId=79, August 22nd 2011.


For all our electro-weak enthusiasts, this past week has been a very exciting time.  The CMS Collaboration has just published our first study measuring the W⁺W⁻ production cross section at 7 TeV. This study, titled “Measurement of W⁺W⁻ Production and Search for the Higgs Boson in pp Collisions at √s = 7 TeV,” is available on arXiv.org and has been accepted for publication by Physics Letters B (a peer-reviewed journal, for those who are wondering).

But before we delve into the paper proper, let’s take a moment and ask ourselves: “Why study W⁺W⁻ production at all?”

For this answer, let’s take a page out of one of Flip Tanedo’s posts, “An Idiosyncratic Introduction to the Higgs,” and study the Higgs boson’s branching ratios (or the percent of Higgs particles decaying by method X out of all possible decays).  Now one question you might be asking is: “Do particles really decay in more than one way?”

The answer is, yes.  The name of the game is that heavy particles always decay into lighter particles, unless a conservation law prevents it.  And particles will decay by different methods based on probability; it’s all random and up to chance.  However, some decay methods for a particle are more likely than others.  Looking at the plot of the branching ratio for the Higgs boson (which I took from Flip), the curves that are above everything else in the plot represent the decay methods that are more likely than the others (so the branching ratio is also a statement about probability!).

The Higgs, being a theoretically massive particle (since we have yet to observe it in a collider), can decay in numerous ways.  In the plot below, the possible ways the Higgs can decay are:

  • A quark and an anti-quark (b and the b with a bar over it; c and the c with a bar over it).
  • Two W bosons (with opposite electric charges, because the Higgs has zero electric charge).
  • Two tau (ττ) leptons (with opposite electric charge).
  • Two gluons (the gg symbol).
  • Two Z bosons (the Z is also electrically neutral).
  • Two photons (γγ).
  • A Z and a photon.



But, since the Higgs hasn’t been found yet experimentally, physicists are unsure of its actual mass (and thus what it will decay into).  The theory does give us clues though (as do other experiments, more on this later).

But how do particle physicists look for a new particle?  One way to find them is to search for peaks in a mass distribution.

Now since energy and mass are related, the more energetic an object is, the more mass it has.  This doesn’t really matter in our everyday lives, because the increase is very, very small; but if you’re a sub-atomic particle (or an atomic-scale particle) it matters a lot!  As an example, when protons are accelerated in the LHC, they become several thousand times more massive than when they are at rest!
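A quick way to check the "several thousand times" claim, assuming the 3.5-TeV-per-proton running conditions of 2010-2011: the Lorentz factor γ = E/(mc²) tells you how much heavier a moving proton is compared to one at rest.

```python
# Lorentz factor for an LHC proton (2010-2011 conditions assumed):
proton_mass_GeV = 0.938272  # proton rest mass in GeV/c^2
beam_energy_GeV = 3500.0    # energy per proton at sqrt(s) = 7 TeV

gamma = beam_energy_GeV / proton_mass_GeV  # E = gamma * m * c^2
print(round(gamma))  # several thousand, as promised
```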

But how do you make a mass distribution?  Well, physicists take two objects (say a W⁺W⁻ pair, or a μ⁺μ⁻ pair) and look at the combined mass of the two.  On the x-axis you plot the mass of the pair, and on the y-axis you plot the number of times you found a pair with that mass value.  Here is an example of what you would get for a pair of muons (μ⁺μ⁻):
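The quantity being histogrammed is the pair's invariant mass, computed from the two particles' energies and momenta. A minimal sketch of one entry in such a histogram (my own illustration; the muon mass is the standard value):

```python
import math

MUON_MASS = 0.105658  # GeV/c^2

def invariant_mass(p1, p2, m=MUON_MASS):
    """Invariant mass of a two-particle pair.
    p1, p2 are 3-momenta (px, py, pz) in GeV; m is each particle's mass."""
    E1 = math.sqrt(m**2 + sum(c**2 for c in p1))
    E2 = math.sqrt(m**2 + sum(c**2 for c in p2))
    px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt((E1 + E2)**2 - (px**2 + py**2 + pz**2))

# Two back-to-back 5 GeV muons: the pair mass comes out close to 10 GeV
print(invariant_mass((5.0, 0.0, 0.0), (-5.0, 0.0, 0.0)))
```

Fill a histogram with this number for every pair in your data, and particles that decayed into the pair show up as peaks at their mass.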

Courtesy of the CMS Collaboration, arXiv:1012.5545v1, 26 Dec 2010


So in this plot of “Events” vs. “Muon Pair (μ⁺μ⁻) Mass” (read Y vs. X), we see three peaks!  These peaks correspond to three different mesons (a meson is a class of particles made up of a quark and an anti-quark); in order they are the Υ(1S), Υ(2S), and Υ(3S).  Sadly, in particle physics we have started to run out of symbols. It is common now to name particles based on how the quarks bind together to form the particle, hence the (1S), (2S) and (3S) after the symbol “Υ”.  These represent three different “bound states” (and thus three different particles) of the quarks making up the Upsilon (“Υ”).




Now back to our branching ratio plot above.  Notice how the W⁺W⁻ line is above all the other lines for masses greater than ~130 GeV/c².  This means that the Higgs has a higher probability of decaying into a W⁺W⁻ pair in this region (mass > 130 GeV/c²)!  Therefore, if you’re an experimentalist looking to find the Higgs, one of the best places to look is in events with a W⁺W⁻ pair coming out of the proton-proton collision!!!

Perhaps it’s also interesting to take a look at the current constraints placed on the Higgs mass.  The LHC’s ancestor, the Large Electron Positron (LEP) Collider, has placed a lower limit on the Standard Model (SM) Higgs boson mass of 114.4 GeV/c² at a 95% confidence level (C.L.).  Previous precision electroweak measurements have constrained the SM Higgs mass to be less than 185 GeV/c² (95% C.L.).  And the US’s Tevatron Collider has excluded the mass range of 158-175 GeV/c² (95% C.L.).  In summary, the current unexplored regions of the SM Higgs mass are 114.4-158 GeV/c² and 175-185 GeV/c². Or more precisely, if the SM Higgs boson does exist, then it will most likely have a mass between 114.4-158 GeV/c² or 175-185 GeV/c², and for some portions of these ranges the Higgs will decay over 90% of the time to a W⁺W⁻ pair!!!

As a note on bookkeeping, this study used all of the data collected by the CMS Detector in 2010!

So, what did my colleagues in the electro-weak sector of CMS look for?  Since a W will decay into a charged lepton and the corresponding lepton-neutrino (i.e., W± → ℓ±ν), CMS Physicists looked for events containing e⁺e⁻, μ⁺μ⁻, or e±μ∓ pairs which have a large component of their momentum in the plane perpendicular to the two proton beams.  In addition, CMS Physicists also looked for events containing large missing transverse energy.

Since two neutrinos are present in these W⁺W⁻ events, and the CMS Detector cannot detect neutrinos directly (they just interact too weakly!), physicists must infer their presence by looking for this “missing transverse energy”.

But what is missing transverse energy?  To measure missing transverse energy, we look at the energy coming out in all directions in the transverse plane, the plane perpendicular to the beam pipe where the protons collide.  If the energy going out in one direction does not balance the energy going out in the opposite direction, we know that a particle escaped detection.

Or more simply, Ta-da, a neutrino went that-a-way. This is also how we would detect other particles that do not interact with matter in an ordinary way.
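In code, the idea is just a vector sum: add up the transverse momenta of everything you did see, and whatever is needed to balance that sum is your missing transverse energy. A toy sketch (my own illustration, not the CMS reconstruction):

```python
import math

def missing_et(particles):
    """Missing transverse energy: magnitude of the negative vector sum
    of the visible particles' transverse momenta (px, py), in GeV."""
    sum_px = sum(px for px, py in particles)
    sum_py = sum(py for px, py in particles)
    return math.hypot(sum_px, sum_py)

# Two visible particles whose transverse momenta don't balance:
visible = [(40.0, 0.0), (-25.0, 0.0)]
print(missing_et(visible))  # 15.0 GeV went that-a-way
```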

Now that’s the basics of Event Selection, the full details can be found in section 3 of the paper (and if anyone has any questions I will try to answer them!), but let’s move on for now.

CMS Researchers found 13 events total in which a W⁺W⁻ pair was produced (this is in agreement with simulation, which predicted 13.5 events).  Now, let’s take a moment to ponder this.  Researchers looked at all of the data from 2010, and only found 13 events! This shows that W⁺W⁻ production is an incredibly rare process!


Now, for some results!  Experimentalists have found the W⁺W⁻ production cross-section at a center-of-mass energy of 7 TeV in proton-proton collisions to be σ = 41.1 ± 15.3 ± 5.8 ± 4.5 picobarns (pb; for an idea of what a barn is, see this post by Ken Bloom).  The uncertainties listed on this cross-section value are statistical, systematic, and luminosity-related, respectively.

So what!?  Well, this is in agreement with the theoretical prediction given by the Standard Model (SM) at Next-to-Leading Order (NLO).  The SM prediction was 43.0 ± 2.0 pb.

So our theory is correct!  It matches the experimental data!


Also, to reduce uncertainties, CMS Physicists took the ratio of the W⁺W⁻ to W± production cross sections.  In this case, the uncertainty in the proton beams’ luminosity cancels out.  The experimental ratio of these two cross sections was found to be (4.46 ± 1.66 ± 0.64) × 10⁻⁴ (uncertainties are again statistical & systematic, respectively), whereas the theoretical value of this ratio was given as (4.45 ± 0.30) × 10⁻⁴.  Now this is even better agreement! Which is why experimentalists choose to compare the ratio instead.
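To see why the luminosity cancels: each cross section is estimated as σ = N / (ε·L), so dividing one by the other removes L entirely. A sketch with made-up luminosity, efficiency, and W-count numbers (only the 13 W⁺W⁻ candidate events come from the paper):

```python
# Each cross section is estimated as sigma = N / (eff * L), so the
# integrated luminosity L divides out of the ratio exactly.
L = 36.0e3                  # integrated luminosity in nb^-1 (assumed value)
N_ww, eff_ww = 13, 0.009    # W+W- candidates and efficiency (eff. assumed)
N_w,  eff_w  = 8.1e5, 0.25  # W candidates and efficiency (both assumed)

sigma_ww = N_ww / (eff_ww * L)
sigma_w  = N_w / (eff_w * L)
ratio = sigma_ww / sigma_w  # equals (N_ww * eff_w) / (N_w * eff_ww): no L!

print(f"ratio = {ratio:.2e}")
```

Whatever the true luminosity was, it appears in both numerator and denominator, so its (roughly 10%) uncertainty drops out of the ratio.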


Now onto the “Glorious Higgs!”  The process we are now interested in is:

H → W⁺W⁻ → 2ℓ 2ν

where ℓ is a lepton and ν is the corresponding neutrino.

CMS Physicists modified the event selection slightly for this.  The theory tells us that when the Higgs decays into a W⁺W⁻ pair, the angle between the two outgoing, oppositely (electrically) charged leptons is very small (close to zero degrees), whereas when we are just looking at background processes (such as pure W⁺W⁻ production, top quark events, etc.) the angle is very large (close to 180 degrees).  So experimentalists measured the angle between these two leptons to get an idea of whether a Higgs boson had decayed into a W⁺W⁻ pair, shown here:

So in this plot we have our 13 selected W⁺W⁻ events; they are the black experimental data points (with their uncertainties), and the colored portions are the theoretical predictions given by the SM for various known processes.

Now for our Higgs search, W⁺W⁻ production is a background (shown in brown)! Our other backgrounds being:

  • Dark Blue: production of a Z boson plus jets (hadronic activity).
  • Pink: top quark pair production, or single top quark production with a W boson.
  • Green: di-boson production, like WZ, ZZ, or γZ, etc.
  • Light Blue: W plus jets (hadronic activity).


Now if we assume the Higgs boson has a mass of 160 GeV/c², then the theoretical prediction for the angle between the two charged leptons in our events is shown as the solid black line (which has a peak near zero; the angle between the outgoing leptons is small for Higgs production). So from this plot, we see no evidence that a Higgs boson with a mass of 160 GeV/c² has been produced (i.e., there are not a lot of data points near the peak in the black solid line).


But that was just one value for the Higgs mass.  What about the others?  As particle physicists we need to look at all possible ranges for the mass of the Higgs.  CMS Physicists decided to look at a large range, 120-600 GeV/c².  This is shown here:



So this is a very colorful plot, but what does it mean?


The Y-axis is the Higgs production cross section multiplied by the Higgs branching ratio to W⁺W⁻ pairs.  The X-axis is the Higgs mass.  The blue line is the experimental observation.  This is the region of the “phase-space” we were able to see with this study.  The “phase-space” is the possible ways something can happen.  When you’re playing Monopoly and you roll two dice, the most likely outcome for the sum of the dice roll is 7.  This has a large phase space: you can make 7 with (1,6), (2,5), or (3,4), in either order.  Whereas having the dice add to 12 has a very small phase space; this only happens when each die comes up 6.
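The dice example is easy to check by brute force; counting outcomes is exactly counting phase space:

```python
from collections import Counter

# Count how many ways each possible sum of two dice can occur.
counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))

print(counts[7])   # 6 ways: the largest "phase space"
print(counts[12])  # 1 way:  only (6, 6)
```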


So the blue line represents how much of the phase space we were able to see.  The green and yellow lines are the 95% C.L. bands on the blue line.


The solid red line near the bottom of the graph is the theoretical prediction given by the current Standard Model for how the Higgs boson’s “phase space” may behave.   Notice the experimental blue line and the theoretical red line are nowhere near each other! Since we didn’t see many data points in the Higgs (160 GeV/c²) region in the graph of the angle between our two charged leptons above, it shouldn’t shock us that the blue line is nowhere near the solid red line (at 160 GeV/c²).  But this doesn’t mean that there was anything wrong with the experiment.  On the contrary, what this means is that CMS Physicists did not yet have a statistically significant enough dataset to say conclusively whether or not the Higgs decays into a W⁺W⁻ pair.

So the current theory (the Standard Model) tells us that the Higgs can decay into a W⁺W⁻ pair; but with the current data CMS Physicists were unable to prove or disprove the Standard Model’s theoretical prediction.


But the final line is, I think, the most interesting.  This is the red portion with the criss-crossing pattern around it (the second item in the legend of the above graph).

Currently in the Standard Model there are three generations of quarks that have been experimentally confirmed.  Theorists have often wondered if this is the full story.   Meaning, could a possible 4th generation of quarks/leptons exist and we just haven’t seen them yet?  This criss-crossing red line gives the phase space for how the Higgs would decay if this 4th generation did exist.

Now notice the blue line is underneath the criss-crossed red line for a Higgs mass of 144-207 GeV/c² when you assume there is a 4th generation of quarks/leptons.

The fact that the blue line is under the criss-crossed red line means that we were conclusively able to probe this portion of the phase space for the 4th-generation hypothesis.  Since we did not see any conclusive evidence (again, see the angle between the charged leptons in the graph above) of the Higgs decaying into a W⁺W⁻ pair in the mass region of 144-207 GeV/c², we were able to make a definitive statement:


If the Standard Model has a 4th generation of quarks/leptons, and the Higgs boson has a mass between 144-207 GeV/c², then it does not decay to a W⁺W⁻ pair.


But the jury is still out on the current three generation cases.  We weren’t able to probe that region of the phase space (blue line nowhere near solid red line).  As is often the case in all fields of science, we need more data.

Until next time,



(I would like to thank Kathryn Grim for her helpful advice regarding the presentation of this material)


Very early this morning we got the first lead-lead collisions at the LHC!  I am all atwitter.  This is a very exciting time.  I just arrived at CERN today and I am very, very jet-lagged, so I’ll keep this short.

Pictures.  What you all want to see is pictures.

Here are some event displays with the first Pb+Pb collisions seen by ALICE.  This is an example:

These event displays only show information with the Inner Tracking System (ITS).  Our main tracking detector, the Time Projection Chamber (TPC), was off for these collisions.  The reason is that the beams were not perfectly stable for the first collisions and we did not want to damage our TPC.

And check out this video of an event display (the original video is here):

And now that we have lead-lead data, we have a lot of work to do.  Expect the first lead-lead paper soon.  It will be a multiplicity paper like ALICE’s first few proton-proton papers.  We will just measure the number of charged particles in an event.  This information alone will tell us a lot about heavy ion collisions – the first estimates for how many particles we should see in an event varied by a factor of 4, from 2000-8000 tracks.


Exciting new results from CMS

Tuesday, September 21st, 2010

I’m giddy today because CMS just came out with some very exciting results.  I don’t think we understand what they mean at all – and as a scientist, there is nothing I love better than shocking data, data that challenge what we think we understand.  (For the technical audience, the slides from the talk at CERN are here and the paper is here.)  I might be biased because this topic is very closely related to my doctoral thesis, but I think it’s safe to say this is the first surprising result from the LHC, something that changes our paradigm.

In heavy ion collisions at the Relativistic Heavy Ion Collider (RHIC) we observed something called the ridge (from this paper):

We more or less understand the peak – called the “jet-like correlation” – but we don’t understand the broad structure the peak is sitting on.  This broad structure is called the ridge.  What I mean when I say we don’t understand the ridge is that we haven’t settled in the field how this structure is formed, where it comes from.  We have a lot of models that can produce something similar, but they can’t describe the ridge quantitatively.

Here’s what CMS saw:

It’s a slightly different type of measurement – I’ve put a box around the part with the ridge.  We see the same peak as we saw before – again, we pretty much understand where this comes from.  But there’s a broad structure beneath this peak.  It’s smaller than what we saw in heavy ion collisions above, but it’s there – the fact that it’s there is surprising.

In the models we have from heavy ion collisions the ridge is from:

  • A high energy quark or gluon losing energy in the Quark Gluon Plasma,
  • Collective motion of particles in the Quark Gluon Plasma, or
  • Remnants of the initial state (meaning the incoming particles)

In our current understanding of what goes on in a proton-proton collision, there is no Quark Gluon Plasma – so the conservative interpretation of these data would mean that the ridge is somehow some remnant of the initial state. Even conservatively, this would severely constrain our models.  Some physicists, such as Mike Lisa at Ohio State University, have proposed that there may be collective motion of particles in proton-proton collisions, similar to what we see in heavy ion collisions.  This would imply that we also see a medium in proton-proton collisions.  That would be a huge discovery.  (Just to be clear, CMS is not making this claim, at least at this point.)  It will take a while for the community to debate the meaning of these data and come to a consensus on what they mean.  But these data are definitely very exciting – this is the most exciting day for me since the first collisions!


ICHEP: what to watch for

Wednesday, July 21st, 2010

At long last, the 35th International Conference on High Energy Physics begins tomorrow. It’s the largest particle-physics conference of the year, and the first major conference since the start of LHC operations at 7 TeV. If the US LHC blog has seemed to be a bit quiet lately, it might be because so many bloggers have been working hard to get results ready. Now, it’s highly unlikely that there will be any surprising LHC discoveries announced there; we just don’t have nearly enough data yet. But that doesn’t mean that this conference will be boring! Here are a few things that you might want to be watching for:

  • How well are the experiments keeping up with the LHC? The LHC has now delivered about 350 nb⁻¹ of integrated luminosity to the experiments. What fraction of that data will the experiments show? This is a measure of the operational efficiency of the experiments, and of their ability to get the data through reconstruction and analysis. If the experiments are able to show a large fraction of the delivered data, then we can be optimistic about how quickly results will come out as the collision rates rise.
  • How competitive is the LHC with the Tevatron? The Tevatron experiments have collected a huge amount of data over the past nine years, and have an excellent understanding of how their detectors work. They will still be in the lead on many, many physics topics. (Disclaimer: I also work on one of the Tevatron experiments.) However, because of the LHC’s higher collision energy, there might be a few measurements for which the LHC can produce stronger results, even with a tiny amount of data. Will there be any such results, and what will they be?
  • How competitive is the Tevatron with the LHC? Everyone is eager to hear the latest limits on the standard-model Higgs boson from the Tevatron. The excluded Higgs masses are the ones that would have been the easiest for the LHC to see too. How much harder will new Higgs limits make it to find a Higgs at the LHC?
  • Any surprises from elsewhere? Let’s not forget that this conference covers all of particle physics, and there’s a lot more going on out there than just the LHC!
  • How tired do the presenters look? A lot of that 350 nb⁻¹ came at the last minute — did everyone stay up all night to finish their data analysis?
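As an aside, that 350 nb⁻¹ converts directly into expected event yields via N = σ × L_int (before detector efficiency). The cross sections below are round, assumed numbers just to set the scale, not measurements:

```python
# Expected event counts N = sigma * L_int for 350 nb^-1 of data.
# All cross sections here are rough, assumed round numbers (in nb).
L_int_nb = 350.0  # integrated luminosity, nb^-1

processes = {
    "inclusive jets (high pT)": 1.0e3,  # ~1 microbarn = 1000 nb (assumed)
    "W -> l nu": 10.0,                  # ~10 nb (assumed)
    "top pair": 0.16,                   # ~160 pb = 0.16 nb (assumed)
}

for name, sigma_nb in processes.items():
    print(f"{name}: ~{sigma_nb * L_int_nb:.0f} events before efficiency")
```

This is why jet measurements come out almost immediately while top-quark results need patience: a few dozen produced events, before any selection, is not much to work with.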

I won’t be attending the conference, but I’ll try to provide some commentary from lovely Lincoln as events unfold. Good luck to all involved — this is going to be a lot of fun!


ALICE has just submitted its fourth paper, on the anti-proton to proton ratio in p+p collisions, to Physical Review Letters.  This is a really cool measurement because it is one way of quantifying how many of the particles we create in our collisions – as opposed to how many of the particles we see are remnants of the beam.

A proton has three valence quarks, two up quarks and one down quark.  The proton’s electric charge is +1.  An anti-proton has three valence anti-quarks, two anti-up quarks and one anti-down quark.  The anti-proton’s electric charge is -1.  The anti-proton is the proton’s anti-particle.  When a proton and an anti-proton come together, they annihilate.

A baryon has three valence quarks – examples are protons (two up quarks and a down quark) and neutrons (two down quarks and an up quark).  There are many more exotic baryons – my favorites are the Λ (an up quark, a down quark, and a strange quark) and the Ω (three strange quarks). A proton is a baryon, while an anti-proton is an anti-baryon.  Baryon number is the net number of baryons in a system and it is conserved in all processes we have observed in the laboratory.  In our p+p collisions, the baryon number is 2 because there are two incoming baryons.  Because the anti-proton is an anti-baryon, it had to be created in the collision.  Moreover, because there were no (net) anti-quarks in our incoming protons, all three anti-quarks in any anti-proton we see had to be created in the collision.  If we just look at protons, we can’t tell if they were created in the collision or if they are remnants of the beam.
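That bookkeeping can be written down explicitly: each quark carries baryon number +1/3 and each anti-quark −1/3, so an anti-proton can only appear alongside a newly created baryon. A toy sketch (my own illustration):

```python
from fractions import Fraction

QUARK_B = Fraction(1, 3)  # baryon number per quark; anti-quarks carry -1/3

def baryon_number(n_quarks, n_antiquarks):
    return QUARK_B * n_quarks - QUARK_B * n_antiquarks

proton = baryon_number(3, 0)      # +1
antiproton = baryon_number(0, 3)  # -1

# A p+p collision starts with baryon number 2; making an anti-proton
# forces a baryon to be created with it, e.g. p + p -> p + p + p + pbar:
final = 2 * proton + (proton + antiproton)
print(final)  # 2: baryon number is conserved
```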

Since anti-protons don’t exist prior to the collision, one way of quantifying how many particles were created in the collision, as opposed to how many are beam remnants, is the anti-proton to proton ratio.  If this ratio is near zero, most of the particles we observe are remnants of the beam.  If it is near one, most of the particles we see were created in the collision.  At low energies, the anti-proton to proton ratio is closer to zero, but we expect it to be almost one at LHC energies.  Here you can see the collision energy dependence of the anti-proton to proton ratio (Figure 4 of the new paper):

The y-axis is the anti-proton to proton ratio.  The upper x-axis is the center-of-mass energy of the collision.  The different data points are measurements from different experiments.  The line shows a fit to the data.  The lower x-axis is a little more complicated – I’ve put an explanation below, but you can skip it and just look at the top x-axis.  You can see that the anti-proton to proton ratio is very close to one at LHC energies.  But of course, we have to quantify how close it is to one.  Specifically, we measured it to be 0.957 ± 0.006 (statistical) ± 0.014 (systematic) at 0.9 TeV and 0.991 ± 0.005 (statistical) ± 0.014 (systematic) at 7 TeV.  Most of the work went into determining the uncertainty.  We could reduce the statistical uncertainty by just taking more data, but the systematic uncertainty is limited by the method and the experiment.
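To get a feel for how close those numbers are to one, you can combine the statistical and systematic uncertainties in quadrature (this is my own back-of-the-envelope arithmetic, not a calculation from the paper):

```python
import math

# Back-of-the-envelope check: how many combined standard deviations is
# each measured ratio below unity?  Uncertainties added in quadrature.
def sigmas_from_unity(ratio, stat, syst):
    total = math.hypot(stat, syst)   # quadrature sum of uncertainties
    return (1.0 - ratio) / total

print(sigmas_from_unity(0.991, 0.005, 0.014))  # 7 TeV: ~0.6 sigma from 1
print(sigmas_from_unity(0.957, 0.006, 0.014))  # 0.9 TeV: ~2.8 sigma from 1
```

So at 7 TeV the ratio is consistent with one, while at 0.9 TeV it is measurably below one, just as the energy dependence in the figure suggests.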

What do we learn from this measurement?  It helps us test and refine our understanding of baryon production in proton-proton collisions.  We can compare to models for proton and anti-proton production and this lets us constrain some models and exclude others.

To give a feel for how complicated it can be to do the measurement, I’ll explain one of the details that has to be considered to do this measurement right.  If we see an anti-proton, we’re pretty sure it was really created in the collision.  But we have billions and billions of protons in our detector.  A very fast particle created in the collision could knock a proton out of our detector.  If we measure a proton, how can we be sure that it didn’t come from our detector?  We have accurate enough charged particle tracking to see where the proton came from.  This figure (Figure 2 from the paper)

shows the distribution of the distance of closest approach (dca) of protons and anti-protons to the collision vertex.  Real protons and anti-protons created in the collision will mostly be close to the collision point (near a dca of 0), so this shows up as a peak around a dca of 0.  Our largest background is from protons knocked out of the beam pipe by a fast particle created in the collision.  These protons don’t get close to the collision vertex – their dca is larger.  This is why the proton peak on the left sits on top of a plateau.  But we can’t knock anti-protons out of the beam pipe – so we don’t see the same plateau under the anti-proton peak.  Protons knocked out of the beam pipe will also be slower on average than protons created in the collision.  This is why we see the plateau from protons knocked out of the beam pipe on the left (for protons with momentum p≈0.5 GeV/c) but we don’t see it on the right (for protons with roughly twice the momentum, p≈1.0 GeV/c).  To get an accurate anti-proton to proton ratio, we have to subtract off the protons knocked out of the beam pipe.  We can tell where particles travelling practically at the speed of light went to within a few mm – and we need to in order to do our measurements.
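The subtraction idea can be illustrated with a toy example (assumed numbers and a deliberately simplified method, not the real ALICE analysis): estimate the flat knock-out plateau from a sideband at large |dca| and subtract it from the counts under the peak near dca = 0.

```python
# Toy sideband subtraction (assumed numbers, not the real ALICE analysis).
def background_subtracted_yield(hist, peak_window, sideband):
    """hist maps dca bin center (cm) -> counts.

    Estimate a flat background per bin from the sideband region, then
    subtract it from the bins inside the peak window around dca = 0.
    """
    side = [c for dca, c in hist.items() if sideband[0] <= abs(dca) <= sideband[1]]
    plateau_per_bin = sum(side) / len(side)   # flat-background level
    peak = [c for dca, c in hist.items() if abs(dca) <= peak_window]
    return sum(peak) - plateau_per_bin * len(peak)

# Toy histogram: a peak at dca = 0 sitting on a flat plateau of 10 counts/bin.
toy = {-0.3: 10, -0.2: 10, -0.1: 110, 0.0: 210, 0.1: 110, 0.2: 10, 0.3: 10}
print(background_subtracted_yield(toy, peak_window=0.1, sideband=(0.2, 0.3)))  # 400.0
```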

Isn’t that cool?  ALICE is a wonderful detector!

Explanation of the lower x-axis of the anti-proton to proton ratio plot:

This is the difference between the beam rapidity, y, and the rapidity where the measurement is done (|y|<0.5).  You can calculate the beam rapidity using

y = 1/2 ln((E+pz)/(E-pz))

where pz is the momentum along the beam axis and E = √(pz2 + m2) is the total energy.  Each beam carries half the center-of-mass energy, so if you plug in the numbers (450 GeV per beam for 900 GeV collisions and 3.5 TeV per beam for 7 TeV collisions), you’ll see that the beam rapidity is about 6.9 for 900 GeV and about 8.9 for 7 TeV.  I have glossed over a detail, which is that it matters where we do the measurement.  If we look closer to the beam axis, we’ll see a lower anti-proton to proton ratio; we get the highest anti-proton to proton ratio at rapidities close to zero (roughly perpendicular to the beam axis).
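As a quick numerical check, here is a standalone Python sketch of the same formula, fed with the per-beam momentum (half the center-of-mass energy: 450 GeV and 3.5 TeV) and the proton mass:

```python
import math

M_PROTON = 0.938272  # proton mass in GeV/c^2

def beam_rapidity(pz_gev):
    """Rapidity y = (1/2) ln((E + pz)/(E - pz)) of a beam proton
    with longitudinal momentum pz (GeV/c), where E = sqrt(pz^2 + m^2)."""
    e = math.sqrt(pz_gev**2 + M_PROTON**2)   # total energy
    return 0.5 * math.log((e + pz_gev) / (e - pz_gev))

print(beam_rapidity(450.0))    # 900 GeV collisions: ~6.9
print(beam_rapidity(3500.0))   # 7 TeV collisions: ~8.9
```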


ALICE’s second paper!

Tuesday, April 20th, 2010

ALICE’s second paper has been submitted!  If you’ve been following carefully, you’ll have heard that it’ll take at least a couple of years to get enough statistics to see the Higgs (if it’s there) – but we don’t have to wait that long for other results.  This paper presents a measurement of the number of charged particles produced in proton-proton collisions at center-of-mass energies of 0.9 TeV and 2.36 TeV.  (CMS actually published their paper on the same subject first.)  Proton-proton collisions are actually pretty complicated and we still don’t understand everything about them.  Protons are made up of quarks and gluons, so when we slam them together we get a combination of quark-quark, quark-gluon, and gluon-gluon interactions.  We can describe the products of these interactions pretty well when both particles hit each other hard, but not if they barely interact.  They can also interact multiple times.  So proton-proton collisions are really difficult to model.  Theorists have come up with models that try to describe proton-proton collisions, but these models still need improvement.  Counting the number of particles produced in a collision is a relatively straightforward measurement.  (Not to say it’s easy – there’s still a lot of work that has to be done for this, but there’s even more work needed for other measurements.)  And this measurement gives us data we can compare to models.  The models seem to be underestimating the number of particles produced – only by a few percent, but they’re still not quite right.

ALICE and CMS’s measurements also agree.  This is very important.  These measurements are very complicated and many things can go wrong.  Since two experiments did the same measurement with very different detectors, different methods, different code, different people, etc. and still agree, this gives us greater confidence in the measurement.  This is one of the reasons for having multiple collaborations doing the same measurement.

You won’t read about these results in the newspapers because there have been no dramatic paradigm shifts in our understanding of proton-proton collisions, but these are very important basic measurements that improve our understanding incrementally and they have to be done before we can hope to discover new physics.

Update April 21 – the third ALICE paper, on the 7 TeV data, was submitted today!


Marathons and sprints

Sunday, March 28th, 2010

I thought it best to write a post now, as I won’t have a chance to during this Tuesday’s excitement — not because I’ll be so wrapped up in first 7 TeV collisions, but because it’s going to be the first day of Passover, which will take me partially offline. (Who exactly thought that this would be a good day for the big event? Well, it had to be on some day or another.) Just like last time, I plan on sleeping through the big event, as I thoroughly expect it to be uneventful.

For instance, don’t expect any radically new science to emerge from the first days of collisions. While it appears that the experiments are really in excellent shape, based on the work done with the December collisions, it will take a long time to accumulate and analyze enough data before we can definitively say that we have observed any new physics. The amount of data we expect to take in these next two years is enough to make the LHC experiments competitive in discovering new phenomena, or constraining what new phenomena might look like, but that’s still two years worth of data. So, as the old saying goes, this is a marathon, not a sprint, and we have to pace ourselves.

But on the other hand, everyone is motivated to get out some kind of result as soon as possible, to demonstrate that the experiments do work and that we’ve got what it takes to complete the marathon. The major milestone is the International Conference on High Energy Physics, which starts on July 22. By then, everyone is hoping to have a bunch of real physics results (even if they are merely confirmation of known phenomena rather than discoveries) that can set the baseline for the performance of the experiments. July 22 is sixteen weeks from this Thursday. To go from having no data at all to high-quality measurements in sixteen weeks is going to be quite a feat. Put on top of that the uncertainty of just how well the LHC will perform over this time — by ICHEP, we definitely expect to have a million times as much data as we recorded in December. But it could turn out to be ten million times as much! Whether any particular measurement is feasible could depend on which end of that range we end up on, and there might be many course corrections to make as we go along.

So even though the real LHC physics program is a marathon, on your marks, get set….



How much data, how soon?

Sunday, February 7th, 2010

First off, we should mention here that CMS’s first paper from collision data has now been accepted for publication by the Journal of High Energy Physics. It’s a measurement of the angular distribution and momentum spectrum of charged particles produced in proton collisions at 0.9 and 2.36 TeV, using about 50,000 collision events recorded in December. It is really wonderful that this result could be turned around so quickly! The first of many papers to come, we hope.

Meanwhile, as already mentioned here, we now have the news of the run plan for the LHC. CERN is preparing for the longest continuous accelerator run in its history, 18 to 24 months. The inverse femtobarn of data to be recorded in that time is a lot, and will give us an opportunity to make many interesting measurements. Whether any of them will be evidence of new physics, I for one am not going to speculate! But if nothing else, this plan sets out what our LHC life for the next ~three years is going to look like.

But a shorter-term question comes to mind — 1 fb-1 over 18 to 24 months is one thing. But what about just the next few months? There is a major international conference coming up in July. What sort of LHC results might be ready by then? That will depend in part on how many collisions are delivered. I’ve seen various estimates for that, but they vary by an order of magnitude depending on the level of optimism, so I’d rather not guess. It will also depend on the experiments’ performance. How efficiently can we record those collisions? How quickly can we process them? How soon will we understand various parts of the detectors well enough to make quality measurements? How smart and clever can we be throughout the entire process? How much sleep is everyone going to get?

Ask me again in July. Meanwhile, game on.