
The CMS Collaboration, of which I am a member, has submitted 335 papers to refereed journals since 2009, including 109 such papers in 2013. Each of these papers had about 2130 authors. That means that the author list alone runs 15 printed pages. In some cases, the author list takes up more space than the actual content of the paper!

One might wonder: How do 2130 people write a scientific paper for a journal? Through a confluence of circumstances, I’ve been directly involved in the preparation of several papers over the last few months, so I have been thinking a lot about how this gets done, and thought I might use this opportunity to shed some light on the publication process. What I will not discuss here is why a paper should have 2130 authors and not more (or fewer)—this is a very interesting topic, but for now we will work from the premise that there are 2130 authors who, by signing the paper, take scientific responsibility for the correctness of its contents. How can such a big group organize itself to submit a scientific paper at all, and how can it turn out 109 papers in a year?

Certainly, with this many authors and this many papers, a set of uniform procedures is needed, and some number of people must put in substantial effort to maintain and operate those procedures. Each collaboration does things a bit differently, but all have the same goal in mind: to submit papers that are first correct (in the scientific sense of “correct” as in “not wrong with a high level of confidence”), and that are also timely. Correct takes precedence over timely; it would be quite an embarrassment to produce a paper that was incorrect because the work was done quickly and not carefully. Fortunately, in my many years in particle physics, I can think of very few cases when a correction to a published paper had to be issued, and never have I seen a paper from an experiment I have worked on be retracted. This suggests that the publication procedures are indeed meeting their goals.

But even though being correct trumps everything, having an efficient publication process is still important. It would also be a shame to be scooped by a competitor on an interesting result because your paper was stuck inside your collaboration’s review process. So there is an important balance to be struck between being careful and being efficient.

One thing that would not be efficient would be for every one of the 2130 authors to scrutinize every publishable result in detail. If we were to try to do this, everyone would soon become consumed by reviewing data analyses, rather than working on the other necessary tasks of the experiment, from running the detector to processing the data to designing upgrades of the experiment. And it’s hard to imagine that, say, once 1000 people have examined a result carefully, another thousand would uncover a problem. That being said, everyone needs to understand that even if they decline to take part in the review of a particular paper, they are still responsible for it, in accordance with generally accepted guidelines for scientific authorship.

Instead, the review of each measurement or set of measurements destined for publication in a single paper is delegated by the collaboration to a smaller group of people. Different collaborations have different ways of forming these review committees—some create a new committee for a particular paper that dissolves when that paper is published, while others have standing panels that review multiple analyses within a certain topic area. These committees usually include several people with expertise in that particular area of particle physics or data analysis techniques, but also one or two who serve as interested outsiders, people who might look at the work in a different way and come up with new questions about it. The reviewers tend to be more senior physicists, but some collaborations have allowed graduate students to be reviewers too. (One good way to learn how to analyze data is to carefully study how other people are doing it!)

The scientists who are performing a particular measurement with the data are typically also responsible for producing a draft of the scientific paper that will be submitted to the journal. The review committee is then responsible for making sure that the paper accurately describes the work and will be understandable to physicists who are not experts on this particular topic. There can also be a fair amount of work at this stage to shape the message of the paper; measurements produce results in the form of numerical values of physical quantities, but scientific papers have to tell stories about the values and how they are measured, and expressing the meaning of a measurement in words can be a challenge.

Once the review committee members think that a paper is of sufficient quality to be submitted to a journal, it is circulated to the entire collaboration for comment. Many collaborations insert a “style review” step at this stage, in which a physicist who has a lot of experience in the matter checks that the paper conforms to the collaboration’s style guidelines. This ensures some level of uniformity in terminology across all of the collaboration’s papers, and it is also a good chance to check that the figures and tables are working as intended.

The circulation of a paper draft to the collaboration is a formal process that has potential scaling issues, given how many people might submit comments and suggestions. On relatively small collaborations such as those at the Tevatron (my Tevatron-era colleagues will find the use of the word “small” here ironic!), it was easy enough to take the comments by email, but the LHC collaborations have a more structured system for collecting and archiving comments. Collaborators are usually given about two weeks to read the draft paper and make comments. How many people send feedback can vary greatly with each paper; hotter topics might attract more attention. Some conscientious collaborators do in fact read every paper draft (as far as I can tell). To encourage participation, some collaborations do make explicit requests to a randomly-chosen set of institutes to scrutinize the paper, while some institutes have their own traditions of paper review. Comments on all aspects of the paper are typically welcome, from questions about the physics or the veracity of the analysis techniques, to suggestions on the organization of the paper and descriptions of data analysis, to matters like the placement of commas.

In any case, given the number of people who read the paper, the length of the comments can often exceed the length of the paper itself. The scientists who wrote the paper draft then have to address all of the comments. Some comments lead to changes in the paper to explain things better, or to additional cross-checks of the analysis to address a point that was raised. Many textual suggestions are implemented, while others are turned down with an explanation of why they are unnecessary or would harm the paper. The analysis review committee then verifies that all significant comments have been properly considered, and checks that the resulting revised paper draft is in good shape for submission.

Different collaborations have different final steps before the paper is actually submitted to a journal. Some have certain leaders of the collaboration, such as the spokespersons and/or physics coordinators, read the draft and make a final set of recommendations that are to be implemented before submission. Others have “publication committees” that organize public final readings of a paper that can lead to changes. At this stage the authors of the original draft very much hope that things go smoothly and that paper submission will be imminent.

And this whole process comes before the scientific tradition of independent, blind peer review! Journals have their own procedures for appointing referees who read the paper and give the journal editors advice on whether a paper should be published, and what changes or checks they might require before recommending publication. The interaction with the journal and its referees can also take quite some time, but almost always it ends with a positive result. The paper has gone through so many levels of scrutiny already that the output is really a high-quality scientific product that describes reproducible results, and that will ultimately stand the test of time.

A paper that describes a measurement in particle physics is the last step of a long journey: from the conception of the experiment, through the design and construction of the apparatus, its operation over the course of years to collect the data sample, and the processing of the data, to the analysis that leads to numerical values of physical quantities and their associated uncertainties. The actual writing of the papers, and the process of validating them and bringing 2130 physicists to agree that the paper tells the right story about that whole journey, is an important step in the creation of scientific knowledge.


In December, a result from the Large Underground Xenon (LUX) experiment was featured in Nature’s Year In Review as one of the most important scientific results of 2013. As a student who has spent the past four years working on this experiment, I will do my best to provide an introduction to LUX and hopefully answer the question: why all the hype over what turned out to be a null result?

The LUX detector, deployed into its water tank shield 4850 feet underground.

Direct Dark Matter Detection

Weakly Interacting Massive Particles (WIMPs), or particles that interact only through the weak nuclear force and gravity, are a particularly compelling solution to the dark matter problem because they arise naturally in many extensions to the Standard Model. Quantum Diaries did a wonderful series last summer on dark matter, located here, so I won’t get into too many details about dark matter or the WIMP “miracle”, but I would like to spend a bit of time talking about direct dark matter detection.

The Earth experiences a dark matter “wind”, or flux of dark matter passing through it, due to our motion through the dark matter halo of our galaxy. Using standard models for the density and velocity distribution of the dark matter halo, we can calculate that there are nearly 1 billion WIMPs per square meter per second passing through the Earth. In order to match observed relic abundances in the universe, we expect these WIMPs to have a small yet measurable interaction cross-section with ordinary nuclei.
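
(A quick back-of-envelope check of that number, for the curious: the flux is just the WIMP number density times the mean WIMP speed. The halo values below, a local density of about 0.3 GeV/cm³, a mean speed of about 230 km/s, and a benchmark WIMP mass of 100 GeV/c², are standard textbook assumptions of mine, not numbers taken from this post.)

```python
# Back-of-envelope WIMP flux estimate: flux = number density * mean speed.
# Assumed benchmark values (not from the post): rho = 0.3 GeV/cm^3,
# m_WIMP = 100 GeV/c^2, v = 230 km/s.

rho = 0.3          # local dark matter density, GeV / cm^3
m_wimp = 100.0     # WIMP mass, GeV / c^2
v = 230.0e5        # mean WIMP speed, cm / s  (230 km/s)

n = rho / m_wimp             # number density, WIMPs / cm^3
flux_cm2 = n * v             # WIMPs / cm^2 / s
flux_m2 = flux_cm2 * 1.0e4   # WIMPs / m^2 / s

print(f"{flux_m2:.1e} WIMPs per m^2 per s")  # ~7e8, i.e. nearly a billion
```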

In other words, there must be a small-but-finite probability of an incoming WIMP scattering off a target in a laboratory in such a way that we can detect it. The goal of direct detection experiments is therefore to look for these scattering events. These events are characterized by recoil energies of a few to tens of keV, which is quite small, but it is large enough to produce an observable signal.

So here’s the challenge: How do you build an experiment that can measure an extremely small, extremely rare signal with very high precision amid large amounts of background?

Why Xenon?

The signal from a recoil event inside a direct detection target typically takes one of three forms: scintillation light, ionization of an atom inside the target, or heat energy (phonons). Most direct detection experiments focus on one (or two) of these channels.

Xenon is a natural choice for a direct detection medium because it is easy to read out signals from two of these channels. Energy deposited in the scintillation channel is easily detectable because xenon is transparent to its own characteristic 175-nm scintillation light. Energy deposited in the ionization channel is likewise easily detectable, since ionization electrons under the influence of an applied electric field can drift through xenon for distances up to several meters. These electrons can then be read out by any of several charge-readout schemes.

Furthermore, the ratio of the energy deposited in these two channels is a powerful tool for discriminating between nuclear recoils, such as those from WIMPs and neutrons, which are our signal of interest, and electronic recoils, such as those from gamma rays, which are a major source of background.

Xenon is also particularly good for low-background science because of its self-shielding properties. That is, because liquid xenon is so dense, gammas and neutrons tend to attenuate within just a few cm of entering the target. Any particle that does happen to be energetic enough to reach the center of the target has a high probability of undergoing multiple scatters, which are easy to pick out and reject in software. This makes xenon ideal not just for dark matter searches, but also for other rare event searches such as neutrinoless double-beta decay.

The LUX Detector

The LUX experiment is located nearly a mile underground at the Sanford Underground Research Facility (SURF) in Lead, South Dakota. LUX rests on the 4850-foot level of the old Homestake gold mine, which was turned into a dedicated science facility in 2006.

Besides being a mining town and a center of Old West culture (the neighboring town, Deadwood, is famed as the location where Wild Bill Hickok met his demise in a poker game), Lead has a long legacy of physics. The same cavern where LUX resides once held Ray Davis’s famous solar neutrino experiment, which provided some of the first evidence for neutrino flavor oscillations and later won him the Nobel Prize.

A schematic of the LUX detector.

The detector itself is what is called a two-phase time projection chamber (TPC). It essentially consists of a 370-kg xenon target in a large titanium can. This xenon is cooled down to its condensation point (~165 K), so that the bulk of the xenon target is liquid, and there is a thin layer of gaseous xenon on top. LUX has 122 photomultiplier tubes (PMTs) in two different arrays, one array on the bottom looking up into the main volume of the detector, and one array on the top looking down. Just inside those arrays is a set of parallel wire grids that supply an electric field throughout the detector. A gate grid, located close to the liquid surface between the cathode and anode grids, allows the electric fields in the liquid and gas regions to be tuned separately.

When an incident particle interacts with a xenon atom inside the target, it excites or ionizes the atom. In a mechanism common to all noble elements, that atom will briefly bond with another nearby xenon atom. The subsequent decay of this “dimer” back into its two constituent atoms causes a photon to be emitted in the UV. In LUX, this flash of scintillation light, called primary scintillation light or S1, is immediately detected by the PMTs. Next, any ionization charge that is produced is drifted upwards by a strong electric field (~200 V/cm) before it can recombine. This charge cloud, once it reaches the liquid surface, is pulled into the gas phase and accelerated very rapidly by an even stronger electric field (several kV/cm), causing a secondary flash of scintillation called S2, which is also detected by the PMTs. A typical signal read out from an event in LUX therefore consists of a PMT trace with two tell-tale pulses. 

A typical event in LUX. The bottom plot shows the primary (S1) and secondary (S2) signals from each of the individual PMTs. The top two plots show the total size of the S1 and the S2 pulses.

As in any rare event search, controlling the backgrounds is of utmost importance. LUX employs a number of techniques to do so. By situating the detector nearly a mile underground, we reduce cosmic muon flux by a factor of 10⁷. Next, LUX is deployed into a 300-tonne water tank, which reduces gamma backgrounds by another factor of 10⁷ and neutrons by a factor of between 10³ and 10⁹, depending on their energy. Third, by carefully choosing a fiducial volume in the center of the detector, i.e., by cutting out events that happen near the edge of the target, we can reduce background by another factor of 10⁴. And finally, electronic recoils produce much more ionization than do the nuclear recoils that we are interested in, so by looking at the ratio S2/S1 we can achieve over 99% discrimination between gammas and potential WIMPs. All this taken into account, the estimated background for LUX is less than 1 WIMP-like event throughout 300 days of running, making it essentially a zero-background experiment. The center of LUX is in fact the quietest place in the world, radioactively speaking.
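
Those rejection factors compound multiplicatively, which is how the background ends up so tiny. Here is a rough illustration of the arithmetic; the neutron factor is energy dependent, so the 10⁶ below is a representative value I have picked from inside the quoted 10³–10⁹ range.

```python
# Rough compounding of the background-rejection factors quoted above.
# The neutron factor is energy dependent (10^3 to 10^9); 1e6 is an
# arbitrary representative value chosen for this illustration.

gamma_water   = 1e7   # gamma attenuation in the 300-tonne water tank
neutron_water = 1e6   # neutron attenuation in the water tank (assumed)
fiducial_cut  = 1e4   # rejection from keeping only the inner fiducial volume
er_discrim    = 1e2   # >99% electronic-recoil rejection from the S2/S1 ratio

gamma_rejection = gamma_water * fiducial_cut * er_discrim
neutron_rejection = neutron_water * fiducial_cut

print(f"gamma rejection   ~ {gamma_rejection:.0e}")    # ~1e13
print(f"neutron rejection ~ {neutron_rejection:.0e}")  # ~1e10
```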

Results From the First Science Run

From April to August 2013, LUX ran continuously, collecting 85.3 livedays of WIMP search data with a 118-kg fiducial mass, resulting in over ten thousand kg-days of data. A total of 83 million events were collected. Of these, only 6.5 million were single scatter events. After applying fiducial cuts and cutting on the energy region of interest, only 160 events were left. All of these 160 events were consistent with electronic recoils. Not a single WIMP was seen – the WIMP remains as elusive as the unicorn that has become the unofficial LUX mascot.
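
As a quick sanity check, the quoted numbers are self-consistent; the snippet below reproduces the exposure and the fraction of recorded events that survive all cuts.

```python
# Quick consistency check of the run statistics quoted above.

livedays = 85.3
fiducial_kg = 118.0
exposure = livedays * fiducial_kg
print(f"exposure: {exposure:.0f} kg-days")  # ~10065, "over ten thousand"

total_events = 83e6
final_candidates = 160
print(f"fraction surviving all cuts: {final_candidates / total_events:.1e}")
# ~2e-6: only about two in a million recorded events even become candidates
```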

So why is this exciting? The LUX limit is the lowest yet – it represents a factor of 2-3 increase in sensitivity over the previous best limit at high WIMP masses, and it is over 20 times more sensitive than the next best limit for low-mass WIMPs.

The 90% confidence upper limit on the spin independent WIMP-nucleon interaction cross section: LUX compared to previous experiments.

Over the past few years, experiments such as DAMA/LIBRA, CoGeNT, CRESST, and CDMS-II Si have each reported signals that are consistent with WIMPs of mass 5-10 GeV/c². This is in direct conflict with the null results from ZEPLIN, COUPP, and XENON100, to name a few, and was the source of a fair amount of controversy in the direct detection community.

The LUX result was able to fairly definitively close the door on this question.

If the low-mass WIMPs favored by DAMA/LIBRA, CoGeNT, CRESST, and CDMS-II Si do indeed exist, then statistically speaking LUX should have seen 1500 of them!

What’s Next?

Despite the conclusion of the 85-day science run, work on LUX carries on.

Just recently, there was a LUX talk presenting results from a calibration using low-energy neutrons as a proxy for WIMPs interacting within the detector, confirming the initial results from last autumn. Currently, LUX is gearing up for its next run, with the ultimate goal of collecting 300 livedays of WIMP-search data, which will extend the 2013 limit by a factor of five. And finally, a new detector called LZ is in the design stages, with a mass twenty times that of LUX and a sensitivity far greater.

***

For more details, the full LUX press release from October 2013 is located here:


A version of this article appeared in symmetry on April 14, 2014.

From accelerators unexpectedly beneath your feet to a ferret that once cleaned accelerator components, symmetry shares some lesser-known facts about particle accelerators. Image: Sandbox Studio, Chicago

The Large Hadron Collider at CERN laboratory has made its way into popular culture: Comedian Jon Stewart jokes about it on The Daily Show, character Sheldon Cooper dreams about it on The Big Bang Theory and fictional villains steal fictional antimatter from it in Angels & Demons.

Despite their uptick in popularity, particle accelerators still have secrets to share. With input from scientists at laboratories and institutions worldwide, symmetry has compiled a list of 10 things you might not know about particle accelerators.

There are more than 30,000 accelerators in operation around the world.

Accelerators are all over the place, doing a variety of jobs. They may be best known for their role in particle physics research, but their other talents include: creating tumor-destroying beams to fight cancer; killing bacteria to prevent food-borne illnesses; developing better materials to produce more effective diapers and shrink wrap; and helping scientists improve fuel injection to make more efficient vehicles.

One of the longest modern buildings in the world was built for a particle accelerator.

Linear accelerators, or linacs for short, are designed to hurl a beam of particles in a straight line. In general, the longer the linac, the more powerful the particle punch. The linear accelerator at SLAC National Accelerator Laboratory, near San Francisco, is the longest on the planet.

SLAC’s klystron gallery, a building that houses components that power the accelerator, sits atop the accelerator. It’s one of the world’s longest modern buildings. Overall, it’s a little less than 2 miles long, a feature that prompts laboratory employees to hold an annual footrace around its perimeter.

Particle accelerators are the closest things we have to time machines, according to Stephen Hawking.

In 2010, physicist Stephen Hawking wrote an article for the UK paper the Daily Mail explaining how it might be possible to travel through time. We would just need a particle accelerator large enough to accelerate humans the way we accelerate particles, he said.

A person-accelerator with the capabilities of the Large Hadron Collider would move its passengers at close to the speed of light. Because of the effects of special relativity, a period of time that would appear to someone outside the machine to last several years would seem to the accelerating passengers to last only a few days. By the time they stepped off the LHC ride, they would be younger than the rest of us.

Hawking wasn’t actually proposing we try to build such a machine. But he was pointing out a way that time travel already happens today. For example, particles called pi mesons are normally short-lived; they disintegrate after mere tens of nanoseconds. But when they are accelerated to nearly the speed of light, their lifetimes expand dramatically. It seems that these particles are traveling in time, or at least experiencing time more slowly relative to other particles.
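
The stretching Hawking describes is quantified by the relativistic factor γ = 1/√(1 − v²/c²). A minimal sketch, using the charged pion’s roughly 26-nanosecond rest-frame lifetime and an illustrative speed of 0.9999c (my numbers, not the article’s):

```python
import math

def gamma(beta: float) -> float:
    """Relativistic time-dilation factor for speed beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

tau_pion = 26e-9   # charged pion rest-frame lifetime, seconds
beta = 0.9999      # illustrative speed, as a fraction of c

g = gamma(beta)
print(f"gamma = {g:.1f}")                            # ~70.7
print(f"lab-frame lifetime = {g * tau_pion:.2e} s")  # ~1.8e-6 s
```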

The highest temperature recorded by a manmade device was achieved in a particle accelerator.

In 2012, Brookhaven National Laboratory’s Relativistic Heavy Ion Collider achieved a Guinness World Record for producing the world’s hottest manmade temperature, a blazing 7.2 trillion degrees Fahrenheit. But the Long Island-based lab did more than heat things up. It created a small amount of quark-gluon plasma, a state of matter thought to have dominated the universe’s earliest moments. This plasma is so hot that it causes elementary particles called quarks, which generally exist in nature only bound to other quarks, to break apart from one another.

Scientists at CERN have since also created quark-gluon plasma, at an even higher temperature, in the Large Hadron Collider.

The inside of the Large Hadron Collider is colder than outer space.

In order to conduct electricity without resistance, the Large Hadron Collider’s electromagnets are cooled down to cryogenic temperatures. The LHC is the largest cryogenic system in the world, and it operates at a frosty minus 456.3 degrees Fahrenheit. It is one of the coldest places on Earth, and it’s even a few degrees colder than outer space, which tends to rest at about minus 454.9 degrees Fahrenheit.
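
For reference, converting those Fahrenheit figures back to kelvin recovers the familiar numbers: the LHC magnets’ operating temperature of about 1.9 K, and the roughly 2.7 K of the cosmic microwave background that fills outer space.

```python
def fahrenheit_to_kelvin(f: float) -> float:
    return (f + 459.67) * 5.0 / 9.0

print(f"{fahrenheit_to_kelvin(-456.3):.2f} K")  # ~1.87 K: the LHC magnets
print(f"{fahrenheit_to_kelvin(-454.9):.2f} K")  # ~2.65 K: outer space (CMB)
```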

Nature produces particle accelerators much more powerful than anything made on Earth.

We can build some pretty impressive particle accelerators on Earth, but when it comes to achieving high energies, we’ve got nothing on particle accelerators that exist naturally in space.

The most energetic cosmic ray ever observed was a proton accelerated to an energy of 300 million trillion electronvolts. No known source within our galaxy is powerful enough to have caused such an acceleration. Even the shockwave from the explosion of a star, which can send particles flying much more forcefully than a manmade accelerator, doesn’t quite have enough oomph. Scientists are still investigating the source of such ultra-high-energy cosmic rays.
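
To put that in macroscopic terms: converting 300 million trillion electronvolts to joules gives about 48 joules, roughly the kinetic energy of a well-thrown baseball, packed into a single proton. (The baseball comparison is mine, not the article’s.)

```python
EV_TO_JOULES = 1.602e-19  # one electronvolt in joules

e_cosmic_ray = 300e6 * 1e12  # 300 million trillion eV = 3e20 eV
print(f"{e_cosmic_ray * EV_TO_JOULES:.0f} J")  # ~48 J in a single proton
```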

Particle accelerators don’t just accelerate particles; they also make them more massive.

As Einstein predicted in his theory of relativity, no particle that has mass can travel as fast as the speed of light—about 186,000 miles per second. No matter how much energy one adds to an object with mass, its speed cannot reach that limit.

In modern accelerators, particles are sped up to very nearly the speed of light. For example, the main injector at Fermi National Accelerator Laboratory accelerates protons to 0.99997 times the speed of light. As the speed of a particle gets closer and closer to the speed of light, an accelerator gives more and more of its boost to the particle’s kinetic energy.

Since, as Einstein told us, an object’s energy is equal to its mass times the speed of light squared (E=mc²), adding energy is, in effect, also increasing the particles’ mass. Said another way: Where there is more “E,” there must be more “m.” As an object with mass approaches, but never reaches, the speed of light, its effective mass gets larger and larger.
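
As a worked example of that statement, at the Main Injector speed quoted above, the factor γ by which a proton’s energy (and hence effective mass) exceeds its rest value is already about 129:

```python
import math

beta = 0.99997  # proton speed as a fraction of c, as quoted above
gamma = 1.0 / math.sqrt(1.0 - beta**2)

proton_rest_mass_gev = 0.938  # proton rest energy, GeV
print(f"gamma = {gamma:.0f}")                                          # ~129
print(f"effective mass = {gamma * proton_rest_mass_gev:.0f} GeV/c^2")  # ~121
```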

The diameter of the first circular accelerator was less than 5 inches; the diameter of the Large Hadron Collider is more than 5 miles.

In 1930, inspired by the ideas of Norwegian engineer Rolf Widerøe, 27-year-old physicist Ernest Lawrence created the first circular particle accelerator at the University of California, Berkeley, with graduate student M. Stanley Livingston. It accelerated hydrogen ions up to energies of 80,000 electronvolts within a chamber less than 5 inches across.

In 1931, Lawrence and Livingston set to work on an 11-inch accelerator. The machine managed to accelerate protons to just over 1 million electronvolts, a fact that Livingston reported to Lawrence by telegram with the added comment, “Whoopee!” Lawrence went on to build even larger accelerators—and to found Lawrence Berkeley and Lawrence Livermore laboratories.

Particle accelerators have come a long way since then, creating brighter beams of particles with greater energies than previously imagined possible. The Large Hadron Collider at CERN is more than 5 miles in diameter (17 miles in circumference). After this year’s upgrades, the LHC will be able to accelerate protons to 6.5 trillion electronvolts.

In the 1970s, scientists at Fermi National Accelerator Laboratory employed a ferret named Felicia to clean accelerator parts.

From 1971 until 1999, Fermilab’s Meson Laboratory was a key part of high-energy physics experiments at the laboratory. To learn more about the forces that hold our universe together, scientists there studied subatomic particles called mesons and protons. Operators would send beams of particles from an accelerator to the Meson Lab via a miles-long underground beam line.

To ensure hundreds of feet of vacuum piping were clear of debris before connecting them and turning on the particle beam, the laboratory enlisted the help of one Felicia the ferret.

Ferrets have an affinity for burrowing and clambering through holes, making them the perfect species for this job. Felicia’s task was to pull a rag dipped in cleaning solution on a string through long sections of pipe.

Although Felicia’s work was eventually taken over by a specially designed robot, she played a unique and vital role in the construction process—and in return asked only for a steady diet of chicken livers, fish heads and hamburger meat.

Particle accelerators show up in unlikely places.

Scientists tend to construct large particle accelerators underground. This protects them from being bumped and destabilized, but can also make them a little harder to find.

For example, motorists driving down Interstate 280 in northern California may not notice it, but the main accelerator at SLAC National Accelerator Laboratory runs underground just beneath their wheels.

Residents in villages in the Swiss-French countryside live atop the highest-energy particle collider in the world, the Large Hadron Collider.

And for decades, teams at Cornell University have played soccer, football and lacrosse on Robison Alumni Fields 40 feet above the Cornell Electron Storage Ring, or CESR. Scientists use the circular particle accelerator to study compact particle beams and to produce X-ray light for experiments in biology, materials science and physics.

Sarah Witman


Even before my departure to La Thuile in Italy, results from the Rencontres de Moriond conference were already flooding the news feeds. This year’s Electroweak session, from 15 to 22 March, started with the first “world measurement” of the top quark mass, from a combination of the measurements published so far by the Tevatron and LHC experiments. The week went on to include a spectacular CMS result on the Higgs width.

Although nearing its 50th anniversary, Moriond has kept its edge. Despite the growing number of must-attend conferences in high-energy physics, Moriond retains a prime spot in the community, partly for historic reasons: the conference has been around since 1966, making a name for itself as the place where theorists and experimentalists come to see and be seen. Let’s take a look at what the LHC experiments had in store for us this year…

New Results

Stealing the show at this year’s Moriond was, of course, the announcement of the best constraint yet on the Higgs width, < 17 MeV with 95% confidence, reported in both Moriond sessions by the CMS experiment. The new measurement, obtained with a new analysis method based on Higgs decays into two Z particles, is some 200 times more precise than previous results. Discussions of this constraint focused heavily on the new analysis method. What assumptions were needed? Could the same technique be applied to Higgs decays into two W bosons? How would this new width influence theoretical models for new physics? We will no doubt find out at next year’s Moriond…

The announcement of the first global combination of the top quark mass also generated a lot of excitement. Bringing together Tevatron and LHC data, the result is the world’s best value yet, at 173.34 ± 0.76 GeV/c². Before the dust had settled, CMS announced at the Moriond QCD session a new preliminary result based on the full data set collected at 7 and 8 TeV. The precision of this result alone rivals the world average, clearly demonstrating that we have yet to reach the ultimate attainable precision on the top quark mass.
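
At its core, a world combination like this is an inverse-variance weighted average of the individual measurements; the real combination also carefully accounts for correlations between systematic uncertainties, which I ignore here. A toy sketch of the mechanics, with made-up inputs rather than the actual ATLAS, CDF, CMS and D0 numbers:

```python
# Toy inverse-variance weighted average, the backbone of a "world
# combination". Inputs are illustrative placeholders, NOT the real
# ATLAS/CDF/CMS/D0 measurements, and correlations are ignored.

measurements = [  # (top mass in GeV/c^2, total uncertainty in GeV/c^2)
    (173.2, 0.9),
    (173.9, 1.1),
    (172.8, 1.0),
    (174.0, 1.4),
]

weights = [1.0 / sigma**2 for _, sigma in measurements]
mean = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
sigma_comb = (1.0 / sum(weights)) ** 0.5

print(f"combined: {mean:.2f} +/- {sigma_comb:.2f} GeV/c^2")
```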

This plot shows the four measurements of the top quark mass published by the ATLAS, CDF, CMS and D0 collaborations, together with the most precise measurement to date, obtained from the joint analysis.

Other top quark news, including new precision measurements of its spin and polarisation from the LHC, as well as new ATLAS results on the single top quark cross section in the t-channel, was presented by Kate Shaw on Tuesday 25 March. Run II of the LHC will further deepen our understanding of the subject.

A fundamental and delicate measurement probing the nature of electroweak symmetry breaking, as realised by the Brout-Englert-Higgs mechanism, is the scattering of two massive vector bosons. Such events are rare, but without the Higgs boson their rate would grow steeply with collision energy, eventually breaking the laws of physics. A hint of electroweak vector boson scattering was detected for the first time by ATLAS, in events with two same-charge leptons and two jets separated by a large rapidity gap.

Riding on the growing data volume and improved analyses, the LHC experiments are tackling rare and difficult multi-particle final states involving the Higgs boson. ATLAS presented an excellent example: a new result in the search for Higgs production in association with two top quarks, with the Higgs decaying to a pair of b quarks. With an expected limit of 2.6 times the Standard Model prediction for this channel alone, and an observed signal strength of 1.7 ± 1.4, hopes are high for the LHC’s upcoming high-energy run, in which the rate of this process will increase.

Meanwhile, in the world of heavy flavours, the LHCb experiment presented further analyses of the exotic state X(3872). The experiment confirmed unambiguously that its J^PC quantum numbers are 1⁺⁺ and demonstrated its decay to ψ(2S)γ.

The study of the quark-gluon plasma continues in the ALICE experiment, and discussions focused on the results of the LHC’s proton-lead (p-Pb) run. In particular, the newly observed “double ridge” in p-Pb collisions is being studied in detail, and analyses of its jet peak, mass distribution and charge dependence were presented.

New Explorations

Thanks to our new understanding of the Higgs boson, the LHC has entered the era of precision Higgs physics. Our knowledge of the Higgs properties, for example measurements of its spin and width, has improved, and precision measurements of the Higgs couplings and decays have also made good progress. Results on searches for physics beyond the Standard Model were presented as well, and the LHC experiments continue to invest heavily in the search for supersymmetry.

In the Higgs sector, many researchers hope to find the supersymmetric cousins of the Higgs and electroweak bosons, called neutralinos and charginos, through electroweak processes. ATLAS presented two new papers summarising multiple searches for these particles. The absence of a significant signal was used to set exclusion limits on charginos and neutralinos: 700 GeV, if they decay through intermediate supersymmetric partners of the leptons, and 420 GeV, when they decay only through Standard Model bosons.

In addition, for the first time, ATLAS carried out a search for the hardest-to-observe electroweak mode, the production of a pair of charginos decaying into W bosons. This mode resembles Standard Model W-pair production, whose currently measured rate appears slightly higher than predicted.

In this context, CMS presented new results in the search for the electroweak pair production of higgsinos through their decay into a Higgs (at 125 GeV) and a nearly massless gravitino. The final state shows a characteristic signature of four b-quark jets, compatible with double Higgs decay kinematics. A slight excess in the number of candidate events means the experiment cannot exclude a higgsino signal. Upper limits on the signal strength of about twice the theoretical prediction are set for higgsino masses between 350 and 450 GeV.

In several supersymmetry scenarios, charginos can be metastable and could potentially be detected as long-lived particles. CMS presented an innovative search for generic long-lived charged particles, carried out by mapping the detection efficiency as a function of the particle’s kinematics and energy loss in the tracker. This study not only sets stringent limits on a variety of supersymmetric models predicting chargino lifetimes (cτ) greater than 50 cm, but also provides the theory community with a powerful tool for independently testing new models predicting long-lived charged particles.

To be as general as possible in the search for supersymmetry, CMS also presented the results of new searches in which a large subset of the supersymmetry parameters, such as the gluino and squark masses, are tested for statistical compatibility with various experimental measurements. The outcome is a probability map in a 19-dimensional space, which shows, among other things, that models predicting gluino masses below 1.2 TeV and sbottom and stop masses below 700 GeV are strongly disfavoured.

But No New Physics

Despite all these meticulous searches, the refrain heard most often at Moriond was: “no excess observed”, “consistent with the Standard Model”. All hopes now rest on the LHC’s next run, at 13 TeV. If you would like to learn more about the prospects for the LHC’s second run, see this CERN Bulletin article: “Life is good at 13 TeV”.

In addition to the various results presented by the LHC experiments, news also came to Moriond from the Tevatron experiments, BICEP, RHIC and other experiments. For more information, see the conference websites: Moriond EW and Moriond QCD.


On the Shoulders of…

Monday, April 14th, 2014

My first physics class wasn’t really a class at all. One of my 8th grade teachers noticed me carrying a copy of Kip Thorne’s Black Holes and Time Warps, and invited me to join a free-form book discussion group on physics and math that he was holding with a few older students. His name was Art — and we called him by his first name because I was attending, for want of a concise term that’s more precise, a “hippie” school. It had written evaluations instead of grades and as few tests as possible; it spent class time on student governance; and teachers could spend time on things like, well, discussing books with a few students without worrying about whether it was in the curriculum or on the tests. Art, who sadly passed away some years ago, was perhaps best known for organizing the student cafe and its end-of-year trip, but he gave me a really great opportunity. I don’t remember learning anything too specific about physics from the book, or from the discussion group, but I remember being inspired by how wonderful and crazy the universe is.

My second physics class was combined physics and math, with Dan and Lewis. The idea was to put both subjects in context, and we spent a lot of time on working through how to approach problems that we didn’t know an equation for. The price of this was less time to learn the full breadth of the subjects; I didn’t really learn any electromagnetism in high school, for example.

When I switched to a new high school in 11th grade, the pace changed. There were a lot more things to learn, and a lot more tests. I memorized elements and compounds and reactions for chemistry. I learned calculus and studied a bit more physics on the side. In college, where the physics classes were broad and in depth at the same time, I needed to learn things fast and solve tricky problems too. By now, of course, I’ve learned all the physics I need to know — which is largely knowing who to ask or which books to look in for the things I need but don’t remember.

There are a lot of ways to run schools and to run classes. I really value knowledge, and I think it’s crucial in certain parts of your education to really buckle down and learn the facts and details. I’ve also seen the tremendous worth of taking the time to think about how you solve problems and why they’re interesting to solve in the first place. I’m not a high school teacher, so I don’t think I can tell the professionals how to balance all of those goods, which do sometimes conflict. What I’m sure of, though, is that enthusiasm, attention, and hard work from teachers is a key to success no matter what is being taught. The success of every physicist you will ever see on Quantum Diaries is built on the shoulders of the many people who took the time to teach and inspire them when they were young.


Major harvest of four-leaf clover

Wednesday, April 9th, 2014

The LHCb Collaboration at CERN has just confirmed the unambiguous observation of a very exotic state, something that looks strangely like a particle being made of four quarks. As exotic as it might be, this particle goes by the prosaic name Z(4430)⁻, which gives its mass at 4430 MeV, roughly four times heavier than a proton, and indicates that it has a negative electric charge. The letter Z shows that it belongs to a strange series of particles that are referred to as XYZ states.

So what’s so special about this state? The conventional and simple quark model states that there are six different quarks, each coming with its antiparticle. These quarks form bound states by combining in twos or threes. Protons and neutrons, for example, are made of three quarks. All states made of three quarks are called baryons. Other particles like pions and kaons, which are often found in the decays of heavier particles, are made of one quark and one antiquark. These form the meson category. Until 2003, the hundreds of particles observed had all been classified either as mesons or baryons.

And then came the big surprise: in 2003, the BELLE experiment found a state that looked like a bound state of four quarks. Many other exotic states have been observed since. These states often look like charmonium or bottomonium states, which contain a charm quark and a charm antiquark, or a bottom quark and a bottom antiquark. Last spring, the BESIII collaboration from Beijing confirmed the observation of the Zc(3900)⁺ state also seen by BELLE.

On April 8, the LHCb collaboration reported having found the Z(4430)⁻ with ten times more events than all other groups before. The data sample is so large that it enabled LHCb to measure some of its properties unambiguously. Determining the exact quantum numbers of a particle is like getting its fingerprints: it allows physicists to find out exactly what kind of particle it is. Hence, the Z(4430)⁻ state appears to be made of a charm quark, a charm antiquark, a down quark and an up antiquark. Their measurement rules out several other possibilities.


The squared-mass distribution for the 25,200 B meson decays to ψ′ π⁻ found by LHCb in their entire data set. The black points represent the data, with their statistical error bars shown as the small vertical lines attached to each point; the red curve is the result of the simulation when the Z(4430)⁻ state is included. The dashed light brown curve below shows that the simulation fails to reproduce the data if no contribution from the Z(4430)⁻ is included, establishing the presence of this particle at 13.9σ; that is, the signal is far too strong to be explained by statistical fluctuations alone.
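
For readers wondering what 13.9σ means in terms of probability: for a Gaussian distribution, the chance of a fluctuation at least that large is fantastically small. A one-line check with SciPy, using the one-sided convention common in particle physics:

```python
from scipy.stats import norm

# One-sided tail probability of a 13.9-sigma Gaussian fluctuation.
p = norm.sf(13.9)
print(f"p ~ {p:.1e}")  # ~3e-44: effectively impossible by chance
```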

Theorists are hard at work now trying to come up with a model to describe these new states. Is this a completely new tetraquark, a bound state of four quarks, or some strange combination of two charmed mesons (mesons containing at least one charm quark)? The question is still open.

Pauline Gagnon

To be alerted of new postings, follow me on Twitter: @GagnonPauline, or sign up on this mailing list to receive an e-mail notification.

For more information, see the LHCb website



A version of this article appeared in symmetry on April 8, 2014.

Physicist Aaron Chou keeps the Holometer experiment—which looks for a phenomenon whose implications border on the unreal—grounded in the realities of day-to-day operations. Photo: Reidar Hahn

The beauty of the small operation—the mom-and-pop restaurant or the do-it-yourself home repair—is that pragmatism begets creativity. The industrious individual who makes do with limited resources is compelled onto paths of ingenuity, inventing rather than following rules to address the project’s peculiarities.

As project manager for the Holometer experiment at Fermilab, physicist Aaron Chou runs a show that, though grandiose in goal, is remarkably humble in setup. Operated out of a trailer by a small team with a small budget, it has the feel more of a scrappy startup than of an undertaking that could make humanity completely rethink our universe.

The experiment is based on the proposition that our familiar, three-dimensional universe is a manifestation of a two-dimensional, digitized space-time. In other words, all that we see around us is no more than a hologram of a more fundamental, lower-dimensional reality.

If this were the case, then space-time would not be smooth; instead, if you zoomed in on it far enough, you would begin to see the smallest quantum bits—much as a digital photo eventually reveals its fundamental pixels.

In 2009, the GEO600 experiment, which searches for gravitational waves emanating from black holes, was plagued by unaccountable noise. This noise could, in theory, be a telltale sign of the universe’s smallest quantum bits. The Holometer experiment seeks to measure space-time with far more precision than any experiment before—and potentially observe effects from those fundamental bits.

Such an endeavor is thrilling—but also risky. Discovery would change the most basic assumptions we make about the universe. But there also might not be any holographic noise to find. So for Chou, managing the Holometer means building and operating the apparatus on the cheap—not shoddily, but with utmost economy.

Thus Chou and his team take every opportunity to make rather than purchase, to pick up rather than wait for delivery, to seize the opportunity and take that measurement when all the right people are available.

“It’s kind of like solving a Rubik’s cube,” Chou says. “You have an overview of every aspect of the measurement that you’re trying to make. You have to be able to tell the instant something doesn’t look right, and tell that it conflicts with some other assumption you had. And the instant you have a conflict, you have to figure out a way to resolve it. It’s a lot of fun.”

Chou is one of the experiment’s 1.5 full-time staff members; a complement of students rounds out a team of 10. Although Chou is essentially the overseer, he runs the experiment from down in the trenches.

Aaron Chou, project manager for Fermilab’s Holometer, tests the experiment’s instrumentation. Photo: Reidar Hahn

The Holometer experimental area, for example, is a couple of aboveground, dirt-covered tunnels whose walls don’t altogether keep out the water after a heavy rain. So any time the area needs the attention of a wet-dry vacuum, he and his team are down on the ground, cheerfully squeegeeing, mopping and vacuuming away.

“That’s why I wear such shabby clothes,” he says. “This is not the type of experiment where you sit behind the computer and analyze data or control things remotely all day long. It’s really crawling-around-on-the-floor kind of work, which I actually find to be kind of a relief, because I spent more than a decade sitting in front of a computer for more well-established experiments where the installation took 10 years and most of the resulting experiment is done from behind a keyboard.”

As a graduate student at Stanford University, Chou worked on the SLD experiment at SLAC National Accelerator Laboratory, writing software to help look for parity violation in Z bosons. As a Fermilab postdoc on the Pierre Auger experiment, he analyzed data on ultra-high-energy cosmic rays.

Now Chou and his team are down in the dirt, hunting for the universe’s quantum bits. In length terms, these bits are expected to be on the smallest scale of the universe, the Planck scale: 1.6 × 10⁻³⁵ meters. That’s roughly 10 trillion trillion times smaller than an atom; no existing instrument can directly probe objects that small. If humanity could build a particle collider the size of the Milky Way, we might be able to investigate Planck-scale bits directly.

The Holometer instead will look for a jitter arising from the cosmos’ minuscule quanta. In the experiment’s dimly lit tunnels, the team built two interferometers, L-shaped configurations of tubes. Beginning at the L’s vertex, a laser beam travels down each of the L’s 40-meter arms simultaneously, bounces off the mirrors at the ends and recombines at the starting point. Since the laser beam’s paths down each arm of the L are the same length, absent a holographic jitter, the beam should cancel itself out as it recombines. If it doesn’t, it could be evidence of the jitter, a disruption in the laser beam’s flight.
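
For a sense of scale, here is my own back-of-envelope estimate, not a number from the article: a commonly quoted order of magnitude for the conjectured jitter accumulated over a path of length L is √(l_P·L), the geometric mean of the Planck length and the arm length. For 40-meter arms that works out to a few times 10⁻¹⁷ meters, far smaller than an atom, but within reach of interferometry.

```python
import math

PLANCK_LENGTH = 1.6e-35  # metres
arm_length = 40.0        # metres, the Holometer's arm length

# Order-of-magnitude holographic jitter: geometric mean of the two scales.
# This sqrt(l_P * L) scaling is a rough heuristic, not the Holometer
# collaboration's detailed noise model.
jitter = math.sqrt(PLANCK_LENGTH * arm_length)
print(f"expected jitter scale ~ {jitter:.1e} m")  # ~2.5e-17 m
```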

And why are there two interferometers? If the jitter is the looked-for signal, the particular pattern of brightening and dimming of the two beam spots will match in both instruments.

“Real signals have to be in sync,” Chou says. “Random fluctuations won’t be heard by both instruments.”

Should the humble Holometer find a jitter when it looks for the signal—researchers will soon begin the initial search and expect results by 2015—the reward to physics would be extraordinarily high, especially given the scrimping behind the experiment and the fact that no one had to build an impossibly high-energy, Milky Way-sized collider. The data would support the idea that the universe we see around us is only a hologram. It would also help bring together the two thus-far-irreconcilable principles of quantum mechanics and relativity.

“Right now, so little experimental data exists about this high-energy scale that theorists are unable to construct any meaningful models other than those based on speculation,” Chou says. “Our experiment is really a mission of exploration—to obtain data about an extremely high-energy scale that is otherwise inaccessible.”

What’s more, when the Holometer is up and running, it will be able to look for other phenomena that manifest themselves in the form of high-frequency gravitational waves, including topological defects in our cosmos—areas of tension between large regions in space-time that were formed by the big bang.

“Whenever you design a new apparatus, what you’re doing is building something that’s more sensitive to some aspect of nature than anything that has ever been built before,” Chou says. “We may discover evidence of holographic jitter. But even if we don’t, if we’re smart about how we use our newly built apparatus, we may still be able to discover new aspects of our universe.”


I’ve just been watching the first couple of episodes of the new, reborn, perhaps rebooted, Cosmos. About 4 million people have watched each episode when broadcast, out of a US population of about 300 million. Said that way, it doesn’t sound like a huge success, but science has much less of a grip on the American public than science fiction (or at least folks in spandex hitting each other over the head) or comedy about scientists. Sagan’s original Cosmos is said to have been the most-watched PBS series worldwide, ever, and I have confidence that the new one, with current special effects and its hooks to the 2010s rather than the late 1970s, will be watched for many years to come.

Different times and different shows. It’s worth thinking about why the new Cosmos isn’t a PBS show today. And why are there still creationists around to poke holes in our schools?

Anyway, what I’ve seen so far, I’ve liked quite a bit. There are plenty of eloquent positive reviews out there, so let me highlight one thing of which I am not a fan. With the excellent special effects, along with the excellent astronomical images available, it’s not always clear in the show what is a real image and what is artwork. In Sagan’s Cosmos, we see visualizations and we see telescopic views, and we know which is which. With the current Cosmos, it’s a lot harder to tell. A third category, simulations, sits somewhere between the models and true imaging: they are based on the physics, and therefore “true” and “correct,” but they are not real images of objects in the sky. I’ve seen NASA artist renditions clearly marked as such in the corner. A similar label would be a nice addition to the show, not to justify the scientific validity but to clarify, to mark the boundaries between what we see, what we know, and what we conjecture. Three different parts of the science.


Even before my departure to La Thuile in Italy, results from the Rencontres de Moriond conference were already flooding the news feeds. This year’s Electroweak session, from 15 to 22 March, started with the first “world measurement” of the top quark mass, a combination of the measurements published by the Tevatron and LHC experiments so far. The week went on to include a spectacular CMS result on the Higgs width.

Although nearing its 50th anniversary, Moriond has kept its edge. Despite the growing number of must-attend HEP conferences, Moriond retains a prime spot in the community. This is in part due to historic reasons: it’s been around since 1966, making a name for itself as the place where theorists and experimentalists come to see and be seen. Let’s take a look at what the LHC experiments had in store for us this year…

New Results

Stealing the show at this year’s Moriond was, of course, the announcement of the best constraint yet on the Higgs width: < 17 MeV at 95% confidence, reported in both Moriond sessions by the CMS experiment. Using a new analysis method based on Higgs decays into two Z particles, the new measurement is some 200 times better than previous results. Discussions surrounding the constraint focussed heavily on the new methodology used in the analysis. What assumptions were needed? Could the same technique be applied to Higgs decays into W boson pairs? How would this new width influence theoretical models for New Physics? We’ll be sure to find out at next year’s Moriond…
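
For the technically inclined, the core of the trick can be written in two lines. This is only a schematic of the off-shell idea as I understand it, not the full CMS analysis: on the Higgs peak the rate depends on the couplings and the total width, while far off the peak it depends on the couplings alone, so their ratio isolates the width.

```latex
\sigma_{\mathrm{on\text{-}peak}} \propto \frac{g_{\mathrm{prod}}^{2}\, g_{\mathrm{decay}}^{2}}{\Gamma_H},
\qquad
\sigma_{\mathrm{off\text{-}peak}} \propto g_{\mathrm{prod}}^{2}\, g_{\mathrm{decay}}^{2}
\quad\Longrightarrow\quad
\Gamma_H = \Gamma_H^{\mathrm{SM}}\,
\frac{\mu_{\mathrm{off\text{-}peak}}}{\mu_{\mathrm{on\text{-}peak}}}
```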

The announcement of the first global combination of the top quark mass also generated a lot of buzz. Bringing together Tevatron and LHC data, the result is the world’s best value yet at 173.34 ± 0.76 GeV/c². Before the dust had settled, at the Moriond QCD session, CMS announced a new preliminary result based on the full data set collected at 7 and 8 TeV. The precision of this result alone rivals the world average, clearly demonstrating that we have yet to see the ultimate attainable precision on the top mass.
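
At its simplest, such a combination is an inverse-variance weighted average; the real one also has to untangle correlated systematic uncertainties between the experiments. A sketch in Python, with placeholder numbers that are not the actual inputs:

```python
import numpy as np

# Inverse-variance weighted average: each measurement counts in
# proportion to 1/uncertainty^2. Correlations are ignored here.
masses = np.array([173.2, 173.3, 173.5, 172.9])  # hypothetical m_top values, GeV/c^2
errors = np.array([0.9, 1.4, 1.1, 1.3])          # hypothetical total uncertainties

weights = 1.0 / errors**2
mean = np.sum(weights * masses) / np.sum(weights)
sigma = 1.0 / np.sqrt(np.sum(weights))
print(f"combined: {mean:.2f} +/- {sigma:.2f} GeV/c^2")
```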

This graphic shows the four individual top quark mass measurements published by the ATLAS, CDF, CMS and DZero collaborations, together with the most precise measurement obtained in a joint analysis.

Other news of the top quark included new LHC precision measurements of its spin and polarisation, as well as new ATLAS results on the single top-quark cross section in the t-channel, presented by Kate Shaw on Tuesday 25 March. Run II of the LHC is set to further improve our understanding of this fundamental particle.

A fundamental and challenging measurement that probes the nature of electroweak symmetry breaking mediated by the Brout–Englert–Higgs mechanism is the scattering of two massive vector bosons off each other. Although rare, in the absence of the Higgs boson the rate of this process would rise steeply with the collision energy, eventually violating unitarity: the probabilities of the possible outcomes would no longer add up. Evidence for electroweak vector boson scattering was detected for the first time by ATLAS in events with two leptons of the same charge and two jets separated by a large gap in rapidity.
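
Schematically (this is the textbook tree-level statement, not the ATLAS analysis itself), the amplitude for longitudinal W scattering without a Higgs grows with the collision energy squared, s:

```latex
\mathcal{A}(W_L W_L \to W_L W_L)\big|_{\mathrm{no\ Higgs}} \;\sim\; \frac{s}{v^{2}},
\qquad v \simeq 246\ \mathrm{GeV}
```

Without the Higgs exchange diagrams cancelling this growth, unitarity would fail near the TeV scale, which is why this measurement is such a direct probe of the mechanism.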

With growing statistics and an ever-better understanding of their data, the LHC experiments are attacking rare and difficult multi-body final states involving the Higgs boson. ATLAS presented a prime example of this: a new result in the search for Higgs production in association with two top quarks, with the Higgs decaying into a pair of b-quarks. With an expected limit of 2.6 times the Standard Model rate in this channel alone, and an observed relative signal strength of 1.7 ± 1.4, hopes are high for the forthcoming high-energy run of the LHC, where the rate of this process is enhanced.

Meanwhile, over in the heavy flavour world, the LHCb experiment presented further analyses of the unique exotic state X(3872). The experiment provided unambiguous confirmation of its quantum numbers Jᴾᶜ = 1⁺⁺, as well as evidence for its decay into ψ(2S)γ.

Explorations of the Quark-Gluon Plasma continue in the ALICE experiment, with results from the LHC’s proton-lead (p-Pb) run dominating discussions. In particular, the newly observed “double-ridge” in p-Pb is being studied in depth, with explorations of its jet peak, mass distribution and charge dependence presented.

New explorations

Taking advantage of our new understanding of the Higgs boson, the era of precision Higgs physics is now in full swing at the LHC. As well as improving our knowledge of Higgs properties – for example, measuring its spin and width – precise measurements of the Higgs’ interactions and decays are well underway. Results from searches for Beyond Standard Model (BSM) physics were also presented, as the LHC experiments continue to invest heavily in searches for Supersymmetry.

In the Higgs sector, many researchers hope to detect the supersymmetric cousins of the Higgs and electroweak bosons, so-called neutralinos and charginos, via electroweak processes. ATLAS presented two new papers summarising extensive searches for these particles. The absence of a significant signal was used to set limits excluding charginos and neutralinos up to a mass of 700 GeV – if they decay through intermediate supersymmetric partners of leptons – and up to a mass of 420 GeV – when decaying through Standard Model bosons only.

Furthermore, for the first time, a sensitive search for the most challenging electroweak mode producing pairs of charginos that decay through W bosons was conducted by ATLAS. Such a mode resembles that of Standard Model pair production of Ws, for which the currently measured rates appear a bit higher than expected.

In this context, CMS presented new results on the search for the electroweak pair production of higgsinos through their decay into a Higgs (at 125 GeV) and a nearly massless gravitino. The final state sports a distinctive signature of four b-quark jets compatible with double Higgs decay kinematics. A slight excess of candidate events means the experiment cannot exclude a higgsino signal. Upper limits on the signal strength at the level of twice the theoretical prediction are set for higgsino masses between 350 and 450 GeV.

In several Supersymmetry scenarios, charginos can be metastable and could potentially be detected as long-lived particles. CMS presented an innovative search for generic long-lived charged particles, mapping their detection efficiency as a function of the particle’s kinematics and energy loss in the tracking system. This study not only sets stringent limits for a variety of Supersymmetric models predicting a chargino proper decay length (cτ) greater than 50 cm, but also gives the theory community a powerful tool to independently test new models featuring long-lived charged particles.
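
The reusable product here is essentially a binned efficiency map. A minimal sketch of the idea follows; the bin edges, variables and pass rate are all invented for illustration and bear no relation to the actual CMS map:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical bins in particle momentum and ionization energy loss (dE/dx).
p_edges = np.array([50., 200., 500., 2000.])   # GeV
dedx_edges = np.array([3., 5., 8., 20.])       # MeV/cm

# Simulated particles and a stand-in for the real selection decision.
p = rng.uniform(50., 2000., 10_000)
dedx = rng.uniform(3., 20., 10_000)
passed = rng.random(10_000) < 0.6

total, _, _ = np.histogram2d(p, dedx, bins=[p_edges, dedx_edges])
accepted, _, _ = np.histogram2d(p[passed], dedx[passed], bins=[p_edges, dedx_edges])

# Efficiency per bin; theorists can fold such a map with their own models.
efficiency = np.divide(accepted, total, out=np.zeros_like(total), where=total > 0)
print(efficiency)
```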

In the quest to be as general as possible in the search for Supersymmetry, CMS also presented new results in which a large subset of the Supersymmetry parameters, such as the gluino and squark masses, are tested for their statistical compatibility with different experimental measurements. The outcome is a probability map in a 19-dimensional space. Notable observations in this map are that models predicting gluino masses below 1.2 TeV and sbottom and stop masses below 700 GeV are strongly disfavoured.

… but no New Physics

Despite careful searches, the most-heard phrase at Moriond was unquestionably: “No excess observed – consistent with the Standard Model”. Hope now lies with the next run of the LHC at 13 TeV. If you want to find out more about the possibilities of the LHC’s second run, check out the CERN Bulletin article: “Life is good at 13 TeV”.

In addition to the diverse LHC experiment results presented, Tevatron experiments, BICEP, RHIC and other experiments also reported their breaking news at Moriond. Visit the Moriond EW and Moriond QCD conference websites to find out more.

Katarina Anthony-Kittelsen
