
Posts Tagged ‘LHC’

The Large Hadron Collider (LHC) at CERN has already delivered more high-energy data than it did in all of 2015. To put this in numbers, the LHC has produced 4.8 fb⁻¹ so far, compared to 4.2 fb⁻¹ last year, where fb⁻¹ stands for one inverse femtobarn, the unit used to measure the size of a data sample. This was achieved in just one and a half months, compared to five months of operation last year.
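To get a feel for what an inverse femtobarn buys, here is a back-of-the-envelope illustration (the cross section value is invented for the example): the expected number of events for a given process is its cross section \(\sigma\) multiplied by the integrated luminosity \(\mathcal{L}\). For a hypothetical process with \(\sigma = 1\) pb, the 4.8 fb⁻¹ collected so far would yield

\[ N = \sigma \times \mathcal{L} = 1\ \mathrm{pb} \times 4800\ \mathrm{pb}^{-1} \approx 4800\ \mathrm{events}. \]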

With this data in hand, and with 20-30 fb⁻¹ projected by November, the ATLAS and CMS experiments can now explore new territory and, among other things, cross-check the intriguing events they reported at the end of 2015. If that effect is confirmed, it would reveal the presence of a new particle with a mass of 750 GeV, six times the mass of the Higgs boson. Unfortunately, there was not enough data in 2015 to get a clear answer. The LHC had a slow restart last year following two years of major improvements to raise its energy reach, but if the current performance continues, the discovery potential will increase tremendously. All this to say that everyone is keeping their fingers crossed.

If any new particle were found, it would open the doors to bright new horizons in particle physics. Unlike the discovery of the Higgs boson in 2012, the discovery of an anomaly or a new particle would bring a new understanding of the basic constituents of matter and how they interact. The Higgs boson was the last missing piece of the current theoretical model, called the Standard Model, and this model cannot accommodate new particles. Yet it has been known for decades that the model is flawed; so far, theorists have been unable to predict which theory should replace it, and experimentalists have failed to find the slightest concrete sign of a broader theory. We need new experimental evidence to move forward.

Although the new data is already being reconstructed and calibrated, it will remain “blinded” until a few days before August 3, the opening date of the International Conference on High Energy Physics. This means that until then, the mass region where the new particle could be remains masked, to avoid biasing the data reconstruction process. The same selection criteria that were used on last year's data will then be applied to the new data. If a similar excess is still observed at 750 GeV in the 2016 data, the presence of a new particle will be beyond doubt.

Even if this particular excess turns out to be just a statistical fluctuation, the bane of physicists’ existence, there will still be enough data to explore a wealth of possibilities. Meanwhile, you can follow the LHC activities live or watch CMS and ATLAS data samples grow. I will not be available to report on the news from the conference in August due to hiking duties, but if anything new is announced, even I expect to hear its echo reverberating in the Alps.

Pauline Gagnon

To find out more about particle physics, check out my book « Who Cares about Particle Physics: making sense of the Higgs boson, the Large Hadron Collider and CERN », which can already be ordered from Oxford University Press. In bookstores after 21 July. Easy to read: I understood everything!

The total amount of data delivered to the experiments by the LHC in 2016 at an energy of 13 TeV (blue curve) and recorded by CMS (yellow curve) as of 17 June. One fb⁻¹ of data is equivalent to 1000 pb⁻¹.



There has been a lot of press about the recent DØ result on the possible \(B_s \pi\) state. This was also covered on Ricky Nathvani’s blog. At Moriond QCD, Jeroen Van Tilburg showed a few plots from LHCb which showed no signal in the same mass region explored by DØ. Tomorrow, there will be a special LHC seminar on the LHCb search for the purported tetraquark, where we will get the full story from LHCb. I will be live blogging the seminar here! It kicks off at 11:50 CET, so tune in to this post for live updates.


Mar 22, 2016 – 12:23. Final answer: LHCb does not confirm the tetraquark. Waiting for CMS, ATLAS, and CDF.


Mar 22, 2016 – 12:24. How did you get the result out so fast? A lot of work by the collaboration to get MC produced and to expedite the process.


Mar 22, 2016 – 12:21. Is the \(p_T\) cut on the pion too tight? The fact that nothing has been seen anywhere else gives confidence that the cut is safe. Also, the cut is not relative to the \(B_s\).


Mar 22, 2016 – 12:18. Question: What are the fractions of multiple candidates which enter? Not larger than 1.2. Going back to the cuts: which selection killed the combinatoric background the most? The requirement that the \(\pi\) comes from the PV and the \(p_T\) cut on the pion kill the most. How strong is the PV cut? \(\chi^2\) less than 3.5 for the pion at the PV; the \(B_s\) and the pion are forced to come from the PV, and the \(B_s\) mass is constrained.


Mar 22, 2016 – 12:17. Can you go above the threshold? Yes.


Mar 22, 2016 – 12:16. Slide 9: Did you fit with a floating mass? Plan to do this for the paper.


Mar 22, 2016 – 12:15. Wouldn’t \(F_S\) be underestimated by 8%? Maybe, maybe not.


Mar 22, 2016 – 12:13. Question: Will LHCb publish? Most likely yes, but a bit of politics. The shape of the background in the \(B_s\pi\) mass is different in LHCb and DØ. At some level, you expect a peak from the turn-over. Also, CMS is looking.


Mar 22, 2016 – 12:08-12:12. Question: did you try the cone cut to try to generate a peak? Answer: Afraid that the cut can give a biased estimate of the significance. From the DØ seminar, it seems like this is the case. For DØ to answer. Vincenzo Vagnoni says that the DØ estimation of the significance is incorrect. We also don’t know if there’s something different between \(pp\) and \(p\bar{p}\).


Mar 22, 2016 – 12:08. No evidence of \(X(5568)\) state, set upper limit. “We look forward to hearing from ATLAS, CMS and CDF about \(X(5568)\)”


Mar 22, 2016 – 12:07. What if the production of the X were the same at LHCb? We should have seen a very large signal. Also, in many other spectroscopy analyses, e.g. \(B^*\), we look at “wrong sign” plots for the \(B\) meson. All the searches LHCb has already performed would have been sensitive to such a state.


Mar 22, 2016 – 12:04. Redo the analysis in bins of rapidity: no significant signal seen in any of them. The same holds for all \(p_T\) ranges of the \(B_s\).


Mar 22, 2016 – 12:03. Look at \(B^0\pi^+\) as a sanity check. If \(X(5568)\) is similar to \(B^{**}\), then we expect order 1000 events.


Mar 22, 2016 – 12:02. Upper limits on production are given.


Mar 22, 2016 – 12:02. Check for systematics: varying the mass and width within the DØ range, and the effect of the efficiency dependence on the signal shape, are the dominant sources of systematics. All measurements are dominated by statistics.


Mar 22, 2016 – 12:00. Results of the fits are all consistent with zero. The relative production is also consistent with zero.


Mar 22, 2016 – 11:59. Two fits, with and without signal components: no difference in the pulls. Repeated with a tighter cut on the transverse momentum of the \(B_s\): same story, no significant signal seen.


Mar 22, 2016 – 11:58. Fit model: S-wave Breit-Wigner, with mass and width fixed to the DØ result. Backgrounds come from two sources: true \(B_s^0\) paired with a random track, and fake \(B_s\).
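(Aside for readers following along: here is a minimal Python sketch of the S-wave Breit-Wigner signal shape mentioned in this entry, with the mass and width set near the values DØ reported. The numbers are illustrative placeholders; the exact parameterization LHCb uses may differ.)

```python
import numpy as np

def breit_wigner(m, m0=5568.0, gamma=21.9):
    """Nonrelativistic S-wave Breit-Wigner line shape (arbitrary normalization).

    m0 and gamma (in MeV) are set near the X(5568) parameters reported by DØ;
    treat them as illustrative placeholders, not the values used in the fit.
    """
    return (0.5 * gamma) ** 2 / ((m - m0) ** 2 + (0.5 * gamma) ** 2)

masses = np.linspace(5500.0, 5700.0, 401)  # scan of the B_s pi mass, in MeV
shape = breit_wigner(masses)
print(f"peak at {masses[np.argmax(shape)]:.0f} MeV")  # -> 5568 MeV
```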


Mar 22, 2016 – 11:56. No “cone cut” applied because it is highly correlated with the reconstructed mass.


Mar 22, 2016 – 11:55. LHCb strategy: perform three independent searches, confirm qualitative agreement, then move forward with a single approach on the Run 1 dataset. Cut-based selection to match the DØ strategy. Take-home point: the statistics are 20x larger and much cleaner.


Mar 22, 2016 – 11:52. Review of the DØ result. What could it be? The molecular model is disfavored. Diquark-antidiquark models are popular, but the state could not be fit into any model. It could also be feed-down from radiative decays. All predictions have large uncertainties.


Mar 22, 2016 – 11:49. LHCb-CONF-2016-004 posted at cds.cern.ch/record/2140095/


Mar 22, 2016 – 11:47. The session is transitioning to the speaker, Marco Pappagallo.


Mar 22, 2016 – 11:44. People have begun entering the auditorium for the talk, at the end of Basem Khanji’s seminar on \(\Delta m_d\).

 


Finding a five-leafed clover

Wednesday, July 15th, 2015
Photo Credit: Cathy Händel, Published on http://www.suttonelms.org.uk/olla12.html

Sometimes when you’re looking for something else, you happen across an even more exciting result. That’s what happened at LHCb, as illustrated in the paper “Observation of \(J/\psi p\) resonances consistent with pentaquark states in \(\Lambda_b^0\to J/\psi K^-p\) decays”, released on the arXiv on the 14th of July.

I say this is lucky because the analysts found these states while they were busy looking at another channel: they were measuring the branching fraction of \(B^0\to J/\psi K^+ K^-\). As one of the analysts, Sheldon Stone, recalled to me, during the review of the \(B^0\) analysis one reviewer asked if there could be a background from the decay \(\Lambda_b^0\to J/\psi K^- p\), where the proton is misidentified as a kaon. As this was a viable option, they checked the PDG to see if the mode had been measured, and found that it had not. With no firm knowledge of how large this contribution would be, the analysts looked. To their surprise, they found a rather large rate for the decay, allowing for a measurement of the lifetime of the \(\Lambda_b^0\). At the same time, they noticed a peak in the \(J/\psi p\) spectrum. After completing the above-mentioned \(B^0\) analysis, they returned to this channel.

It’s nice to put yourself in the analysts’ shoes and see the result for yourself. Let’s start by looking at the decay \(\Lambda_b^0\to J/\psi p K^-\). As this is a three-body decay, we can look at the Dalitz plots.

Dalitz plots from the decay \(\Lambda_b^0\to J/\psi K^- p\). Compiled from http://arxiv.org/abs/1507.03414

The above Dalitz plots show all combinations of possible axes to test. In the one on the left, around \(m^2=2.3\) GeV\(^2\), running vertically, we see the \(\Lambda(1520)\) resonance, which decays into a proton and a kaon. Running horizontally is a band which does not seem to correspond to any known resonance, but which would decay into a \(J/\psi\) and a proton. If this is a strong decay, the only option is a hadron whose minimum quark content is \(uudc\bar{c}\). The same band is seen in the middle plot as a vertical band, and in the far right plot as the sloping diagonal band. To know for sure, one must perform a complete amplitude analysis of the system.
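To make those axes concrete, here is a toy Python sketch of how each event becomes one point on a Dalitz plot. The four-momenta below are hypothetical values (chosen to be kinematically consistent), not numbers from the analysis; a real analysis fills such a plane with many thousands of events.

```python
import numpy as np

def inv_mass_sq(*p4s):
    """Invariant mass squared of a set of (E, px, py, pz) four-vectors in GeV."""
    tot = np.sum(p4s, axis=0)
    return tot[0] ** 2 - np.sum(tot[1:] ** 2)

# Hypothetical reconstructed four-momenta for one Lambda_b candidate (GeV):
p_jpsi = np.array([3.365, 0.5, 0.2, 1.2])    # J/psi, m ~ 3.097 GeV
p_kaon = np.array([1.074, -0.3, 0.1, 0.9])   # K-,    m ~ 0.494 GeV
p_prot = np.array([1.285, -0.2, -0.3, 0.8])  # p,     m ~ 0.938 GeV

# Each event gives one point in the (m^2(Kp), m^2(J/psi p)) plane;
# resonances appear as bands of enhanced event density.
m2_kp = inv_mass_sq(p_kaon, p_prot)     # Lambda* resonances band along here
m2_jpsip = inv_mass_sq(p_jpsi, p_prot)  # a pentaquark would form a band here
print(f"m2(Kp) = {m2_kp:.2f} GeV^2, m2(J/psi p) = {m2_jpsip:.2f} GeV^2")
```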

You might be saying to yourself “Who ordered that?” and think that nothing with five quarks had ever been postulated. This is not the case. Hadrons with quark content beyond the minimum were already contemplated by Gell-Mann and Zweig in 1964, quantitatively modeled for four quarks by Jaffe in 1977, and extended to five quarks by Strottman in 1979. I urge you to go look at the articles if you haven’t before.

It appears as though a resonance has been found, and in order to be sure, a full amplitude analysis of the decay was performed. The distribution is first modeled without any such state, shown in the figures below.

Projections of the fits of the \(\Lambda_b^0\to J/\psi K^- p\) spectrum without any additional components. Black is the data, and red is the fit. From http://arxiv.org/abs/1507.03414

Try as you might, the models are unable to explain the invariant mass distribution of the \(J/\psi p\). Without going into too much jargon, the analysts wrote down, from a theoretical standpoint, what effect a five-quark particle would have on the Dalitz plot, then put this into their model. As it turns out, they were unable to successfully model the distribution without the addition of two such pentaquark states. By adding these states, the fits look much better, as shown below.

Mass projection onto the \(J/\psi p\) axis of the total fit to the Dalitz plot. Again, black is data, red is the fit. The inset image is for the kinematic range \(m(Kp) > 2\ \mathrm{GeV}\).
From http://arxiv.org/abs/1507.03414

The states are called the \(P_c\) states. Now, as this is a full amplitude analysis, the fit also covers all angular information. This allows for a determination of the total angular momentum and parity of the states, defined by the quantity \(J^P\), with \(J\) being the total angular momentum and \(P\) the parity. All values from 1/2 to 7/2 are tried for both resonances, and the best fit is found with one resonance having \(J=3/2\) and the other \(J=5/2\), each having the opposite parity to the other. No concrete determination can yet be made of which state has which value.

Finally, the significance of the signal is evaluated under the assumption \(J^P=3/2^-, 5/2^+\) for the lower and higher mass states; the significances are 9 and 12 standard deviations, respectively.

The masses and widths turn out to be

\(m(P_c^+(4380)) = 4380 \pm 8 \pm 29\ \mathrm{MeV}\)

\(m(P_c^+(4450)) = 4449.8 \pm 1.7 \pm 2.5\ \mathrm{MeV}\)

with corresponding widths

\(\Gamma(P_c^+(4380)) = 205 \pm 18 \pm 86\ \mathrm{MeV}\)

\(\Gamma(P_c^+(4450)) = 39 \pm 5 \pm 19\ \mathrm{MeV}\)

Finally, we’ll look at the Argand Diagrams for the two resonances.

Argand diagrams for the two \(P_c\) states.
From http://arxiv.org/abs/1507.03414

 

Now you may be saying “hold your horses, that Argand diagram on the right doesn’t look so great”, and you’re right. I’m not going to defend the plot, only point out that the phase motion is in the correct direction, as indicated by the arrows.

As pointed out on the LHCb public page, one of the next steps will be to try to understand whether the observed states are tightly bound five-quark objects or rather loosely bound meson-baryon molecules. Even before that, though, we’ll see if any of the other experiments have something to say about these states.


This article appeared in Fermilab Today on June 22, 2015.

Steve Gould of the Fermilab Technical Division prepares a cold test of a short quadrupole coil. The coil is of the type that would go into the High-Luminosity LHC. Photo: Reidar Hahn

Last month, a group collaborating across four national laboratories completed the first successful tests of a superconducting coil in preparation for the future high-luminosity upgrade of the Large Hadron Collider, or HL-LHC. These tests indicate that the magnet design may be adequate for its intended use.

Physicists, engineers and technicians of the U.S. LHC Accelerator Research Program (LARP) are working to produce the powerful magnets that will become part of the HL-LHC, scheduled to start up around 2025. The plan for this upgrade is to increase the particle collision rate, or luminosity, by approximately a factor of 10, thus expanding the collider’s physics reach by producing 10 times more data.

“The upgrade will help us get closer to new physics. If we see something with the current run, we’ll need more data to get a clear picture. If we don’t find anything, more data may help us to see something new,” said Technical Division’s Giorgio Ambrosio, leader of the LARP magnet effort.

LARP is developing more advanced quadrupole magnets, which are used to focus particle beams. These magnets will have larger beam apertures and the ability to produce higher magnetic fields than those at the current LHC.

The Department of Energy established LARP in 2003 to contribute to LHC commissioning and prepare for upgrades. LARP includes Brookhaven National Laboratory, Fermilab, Lawrence Berkeley National Laboratory and SLAC. Its members began developing the technology for advanced large-aperture quadrupole magnets around 2004.

The superconducting magnets currently in use at the LHC are made from niobium titanium, which has proven to be a very effective material to date. However, niobium-titanium magnets will not be able to support the higher magnetic fields and larger apertures the collider needs to achieve higher luminosities. To push these limits, LARP scientists and engineers turned to a different material, niobium tin.

Niobium tin was discovered before niobium titanium. However, it has not yet been used in accelerators because, unlike niobium titanium, niobium tin is very brittle, making it susceptible to mechanical damage. Magnets in high-energy accelerators need to withstand large amounts of force, which makes magnets built from such a brittle conductor difficult to engineer.

LARP worked on this challenge for almost 10 years and went through a number of model magnets before it successfully started the fabrication of coils for 150-millimeter-aperture quadrupoles. Four coils are required for each quadrupole.

LARP and CERN collaborated closely on the design of the coils. After the first coil was built in the United States earlier this year, the LARP team successfully tested it in a magnetic mirror structure. The mirror structure makes it possible to test individual coils under magnetic field conditions similar to those of a quadrupole magnet. At 1.9 Kelvin, the coil exceeded 19 kiloamps, 15 percent above the operating current.

The team also demonstrated that the coil was protected from the stresses and heat generated during a quench, the rapid transition from superconducting to normal state.

“The fact that the very first test of the magnet was successful was based on the experience of many years,” said TD’s Guram Chlachidze, test coordinator for the magnets. “This knowledge and experience is well recognized by the magnet world.”

Over the next few months, LARP members plan to test the completed quadrupole magnet.

“This was a success for both the people building the magnets and the people testing the magnets,” said Fermilab scientist Giorgio Apollinari, head of LARP. “We still have a mountain to climb, but now we know we have all the right equipment at our disposal and that the first step was in the right direction.”

Diana Kwon


I know what you are thinking. The LHC is back in action, at the highest energies ever! Where are the results? Where are all the blog posts?

Back in action, yes, but restarting the LHC is a very measured process. For one thing, when running at the highest beam energies ever achieved, we have to be very careful about how we operate the machine, lest we inadvertently damage it with beams that are mis-steered for whatever reason. The intensity of the beams — how many particles are circulating — is being incrementally increased with successive fills of the machine. Remember that the beam is bunched — the proton beams aren’t continuous streams of protons, but collections that are just a few centimeters long, spaced out by at least 750 centimeters. The LHC started last week with only three proton bunches in each beam, only two of which were actually colliding at an interaction point. Since then, the LHC team has gone to 13 bunches per beam, and then 39 bunches per beam. Full-on operations will be more like 1380 bunches per beam. So at the moment, the beams are of very low intensity, meaning that there are not that many collisions happening, and not that much physics to do.
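To put those bunch counts in perspective, here is a back-of-the-envelope estimate (the arithmetic is mine, using the LHC's roughly 27 km circumference, not official machine parameters):

\[
f_{\mathrm{rev}} = \frac{c}{26.7\ \mathrm{km}} \approx 1.1 \times 10^{4}\ \mathrm{turns/s},
\qquad
\frac{750\ \mathrm{cm}}{c} \approx 25\ \mathrm{ns},
\]
\[
1380\ \mathrm{bunches} \times 1.1 \times 10^{4}\ \mathrm{turns/s} \approx 1.6 \times 10^{7}\ \mathrm{bunch\ crossings\ per\ second},
\]

so going from 3 bunches to 1380 means nearly three orders of magnitude more collision opportunities every second.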

What’s more, the experiments also have much to do to prepare for the higher collision rates. In particular, there is the matter of “timing in” all the detectors. Information coming from each individual component of a large experiment such as CMS takes some time to reach the data acquisition system, and it’s important to understand how long that time is and to get all of the components synchronized. If you don’t get this right, you might not be extracting the optimal information from each component, or worse still, you could end up mixing up information from different bunch crossings, which would be disastrous. This, along with other calibration work, is an important focus during this period of low-intensity beams.
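To illustrate the idea, here is a toy sketch in Python. The component names and latency values are made up, and real systems align clocks in hardware rather than offline, but the bookkeeping is the same: subtract each component's latency so its signals land in the right bunch crossing.

```python
# Toy illustration of "timing in" a detector (hypothetical names and numbers):
# each component delivers its data with a fixed latency, and the DAQ must
# subtract that latency to assign signals to the right 25 ns bunch crossing.
BUNCH_SPACING_NS = 25.0

# Per-component latencies, as would be measured in calibration runs (made up):
LATENCY_NS = {"tracker": 180.0, "ecal": 210.0, "muon": 260.0}

def bunch_crossing(component: str, arrival_time_ns: float) -> int:
    """Map a raw arrival time to a bunch-crossing index."""
    corrected = arrival_time_ns - LATENCY_NS[component]
    return round(corrected / BUNCH_SPACING_NS)

# Signals from the same collision arrive at different raw times, but once
# corrected they land in the same crossing; getting a latency wrong would
# silently combine data from different collisions.
for comp, t in [("tracker", 1180.0), ("ecal", 1210.0), ("muon", 1260.0)]:
    print(comp, bunch_crossing(comp, t))  # all report crossing 40
```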

But even if all these things were working right out of the box, we’d still have a long way to go until we had some scientific results. As noted already, the beam intensities have been low, so there aren’t that many collisions to examine. There is much work to do yet in understanding the basics in a revised detector operating at a higher beam energy, such as how to identify electrons and muons once again. And even once that’s done, it will take a while to make measurements and fully vet them before they could be made public in any way.

So, be patient, everyone! The accelerator scientists and the experimenters are hard at work to bring you a great LHC run! Next week, the LHC takes a break for maintenance work, and that will be followed by a “scrubbing run”, the goal of which is to improve the vacuum in the LHC beam pipe. That will allow higher-intensity beams, and position us to take data that will get the science moving once again.


[Image: LHC Page 1, the accelerator status display, 3 June 2015]

Today begins the second operation period of the Large Hadron Collider (LHC) at CERN. By declaring “stable beams”, the LHC operators signal to physicists that it is now safe to turn all their detectors on. After more than two years of intensive repair and consolidation work, the LHC now operates at higher energy. What do we hope to achieve?

The discovery of the Higgs boson in July 2012 completed the Standard Model of particle physics. This theoretical model describes all matter seen around us, both on Earth and in all stars and galaxies. But this is precisely the problem: this model only applies to what is visible in the Universe, namely 5% of its content in matter and energy. The rest consists of dark matter (27%) and dark energy (68%), two absolutely unknown substances. Hence the need for a more encompassing theory. But what is it and how can it be reached?

By operating the LHC at 13 TeV, we now have much more energy available to produce new particles than during the 2010-2012 period, when the proton collisions occurred at 8 TeV. Given that energy and mass are two forms of the same essence, the energy released during these collisions materialises, producing new particles. Having more energy means one can now produce heavier particles. It is as if one’s budget just went from 8,000 to 13,000 euros: we can now “afford” heavier particles, if they exist in Nature.

The Standard Model tells us that all matter is built from twelve basic particles, just like a construction set consisting of twelve basic building blocks and some “connectors” linking them together. These connectors are other particles associated with the fundamental forces. Since none of these particles has the properties of dark matter, there must still be undiscovered particles.

Which theory will allow us to go beyond the Standard Model? Will it be Supersymmetry, one of the numerous theoretical hypotheses currently under study? This theory would unify the particles of matter with the particles associated with the fundamental forces. But Supersymmetry implies the existence of numerous new particles, none of which has been found yet.

Will the LHC operating at 13 TeV allow us to produce some of these supersymmetric particles? Or will the entrance to the secret passage towards this “new physics” be revealed by meticulously studying a plethora of quantities, such as the properties of the Higgs boson? Will we discover that it establishes a link between ordinary matter (everything described by the Standard Model) and dark matter?

These are some of the many questions the LHC could clarify in the coming years. An experimental discovery would reveal the new physics. We might very well be on the verge of a huge scientific revolution.

For more information about particle physics and my book, see my website.


All those super low energy jets that the LHC cannot see? The LHC can still see them.

Hi Folks,

Particle colliders like the Large Hadron Collider (LHC) are, in a sense, very powerful microscopes: the higher the collision energy, the smaller the distances we can study. Using less than 0.01% of the total LHC energy (13 TeV), we see that the proton is really just a bag of smaller objects called quarks and gluons.

[Image: the proton pictured as a bag of quarks and gluons]

This means that when two protons collide, things are sprayed about and get very messy.

[Image: ATLAS event display of a proton-proton collision]

One of the most important processes that occurs in proton collisions is the Drell-Yan process. When a quark, e.g., a down quark d, from one proton and an antiquark, e.g., a down antiquark, from an oncoming proton collide, they can annihilate into a virtual photon (γ) or Z boson if the net electric charge is zero (or a W boson if the net electric charge is one). After briefly propagating, the photon/Z can split into a lepton and its antiparticle partner, for example into a muon and antimuon or an electron-positron pair! In pictures, quark-antiquark annihilation into a lepton-antilepton pair (the Drell-Yan process) looks like this:

[Figure: Feynman diagram of the Drell-Yan process]

By the conservation of momentum, the sum of the muon and antimuon momenta will add up to the photon/Z boson momentum. In experiments like ATLAS and CMS, this gives a very cool-looking distribution:

[Figure: CMS dimuon invariant mass spectrum at 7 TeV]

Plotted is the invariant mass distribution for any muon-antimuon pair produced in proton collisions at the 7 TeV LHC. The rightmost peak, at about 90 GeV (about 90 times the proton’s mass!), corresponds to the production of Z bosons. The other peaks represent the production of similarly well-known particles in the particle zoo that have decayed into a muon-antimuon pair. The clarity of each peak, and the fact that this plot uses only about 0.2% of the total data collected during the first LHC data collection period (Run I), means that the Drell-Yan process is very useful for calibrating the experiments. If the experiments are able to see the Z boson, the rho meson, etc., at their correct energies, then we have confidence that the experiments are working well enough to study nature at energies never before explored in a laboratory.
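As a concrete sketch of what “adding up the momenta” means in practice, here is a minimal Python example. The momentum values are hypothetical, chosen so that the pair reconstructs to roughly the Z mass; real analyses do this for millions of muon pairs to build the spectrum above.

```python
import numpy as np

MUON_MASS = 0.1057  # GeV

def dimuon_mass(p1, p2):
    """Invariant mass (GeV) of a muon pair from (px, py, pz) momenta in GeV."""
    e1 = np.sqrt(MUON_MASS ** 2 + p1 @ p1)  # relativistic energy of muon 1
    e2 = np.sqrt(MUON_MASS ** 2 + p2 @ p2)  # relativistic energy of muon 2
    p = p1 + p2                             # total three-momentum
    return np.sqrt((e1 + e2) ** 2 - p @ p)

# Hypothetical back-to-back muons, each carrying about 45.6 GeV:
mu_plus = np.array([31.0, 21.0, 26.0])
mu_minus = -mu_plus
print(f"{dimuon_mass(mu_plus, mu_minus):.1f} GeV")  # ~91 GeV: a Z boson
```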

However, in real life the Drell-Yan process is not as simple as drawn above. Real collisions include the remnants of the scattered protons. Remember: the proton is a bag filled with lots of quarks and gluons.

[Figure: Drell-Yan diagram with gluon radiation]

Gluons are what holds quarks together to make protons; they mediate the strong nuclear force, described by quantum chromodynamics (QCD). The strong force is so named because it requires a lot of energy and effort to overcome. Before annihilating, the quark and antiquark that participate in the Drell-Yan process will have radiated lots of gluons; it is very easy for objects that experience the strong force to radiate gluons. In fact, the antiquark in the Drell-Yan process originates from an energetic gluon that split into a quark-antiquark pair. Less commonly, two or even three energetic quarks or gluons (collectively called jets) will be produced alongside a Z boson.

[Figure: Drell-Yan diagram with three additional jets]

Here is a real-life Drell-Yan (Z boson) event with three very energetic jets. The blue lines are the muons. The red, orange and green “sprays” of particles are jets.

[Image: ATLAS event display of a Z → μμ candidate with three jets]

 

However likely or unlikely it may be for a Drell-Yan process to occur with additional energetic jets, the frequency at which it does occur appears to match our theoretical predictions very well. The plot below shows the likelihood (“production cross section”) of producing a W or Z boson with at least 0, 1, 2, 3, or 4(!) very energetic jets. The blue bars are the theoretical predictions and the red circles are data. Producing a W or Z boson with more energetic jets is less likely than producing one with fewer jets: the more jets identified, the smaller the production rate (“cross section”).

[Figure: measured W/Z + jets production cross sections versus jet multiplicity]

How about low energy jets? These are difficult to observe because experiments have high thresholds for any part of a collision to be recorded. The ATLAS and CMS experiments, for example, are insensitive to very low energy objects, so not every piece of an LHC proton collision will be recorded. In short: sometimes a jet or a photon is too “dim” for us to detect. But unlike high energy jets, it is very, very easy for Drell-Yan processes to be accompanied by low energy jets.

[Figure: Drell-Yan diagram with many soft gluon emissions]

There is a subtlety here. Our standard tools and tricks for calculating the probability of something happening in a proton collision (perturbation theory) assume that we are studying objects with much higher energies than the proton at rest. Radiation of very low energy gluons is a special situation where our usual calculation methods do not work. The solution is rather cool.

As we said, the Z boson produced in the quark-antiquark annihilation has much more energy than any of the low energy gluons that are radiated, so emitting a low energy gluon should not affect the system much. This is like a massive freight train pulling coal and dropping one or two pieces of coal: the train carries so much momentum, and the coal is so light, that dropping even a dozen pieces of coal will have only a negligible effect on the train’s motion. (Dropping all the coal, on the other hand, would not only drastically change the train’s motion but likely also be a terrible environmental hazard.) We can now make certain approximations in our calculation of radiating a low energy gluon, called “soft gluon factorization”. The result is remarkably simple, so simple that we can generalize it to an arbitrary number of gluon emissions. This procedure is called “soft gluon resummation” and was formulated in 1985 by Collins, Soper, and Sterman.
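Schematically (a sketch that suppresses the parton distributions, coupling factors, and overall normalization), the resummed transverse momentum (\(q_T\)) spectrum of the Z boson takes the form of a Fourier-Bessel transform of an exponentiated Sudakov factor:

\[
\frac{d\sigma}{dq_T^2} \;\propto\; \int_0^\infty db\; b\, J_0(q_T b)\, e^{-S(b,\,Q)},
\qquad
S(b,Q) \approx \int_{b_0^2/b^2}^{Q^2} \frac{d\mu^2}{\mu^2}
\left[ A\!\left(\alpha_s(\mu)\right) \ln\frac{Q^2}{\mu^2} + B\!\left(\alpha_s(\mu)\right) \right],
\]

where \(Q\) is the Z boson mass, \(b_0 \equiv 2e^{-\gamma_E}\), and the exponent \(S\) sums the leading contribution of any number of soft gluon emissions.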

Low energy gluons, even if they cannot be individually identified, still have an effect. They carry away energy, and by momentum conservation they slightly push and kick the system in different directions.

[Figure: annotated Drell-Yan diagram with many soft gluon emissions]
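A quick toy Monte Carlo makes the "push and kick" concrete. This Python sketch is not a physics generator and the kick sizes are made-up numbers; it only illustrates that many individually invisible emissions add up, by momentum conservation, to a measurable net transverse momentum for the Z.

```python
import numpy as np

rng = np.random.default_rng(7)

def z_recoil_pt(n_gluons=40, mean_kick=0.3):
    """Toy model: the Z recoils against many soft gluons, each giving a small
    transverse kick (GeV) in a random direction. All numbers are illustrative."""
    kicks = rng.exponential(mean_kick, n_gluons)    # soft: mostly tiny kicks
    phis = rng.uniform(0.0, 2.0 * np.pi, n_gluons)  # random azimuthal angles
    px = np.sum(kicks * np.cos(phis))
    py = np.sum(kicks * np.sin(phis))
    return np.hypot(px, py)                         # net transverse momentum

sample = np.array([z_recoil_pt() for _ in range(10_000)])
print(f"mean Z pT from soft recoil: {sample.mean():.2f} GeV")
```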

 

If we look at Z bosons with low momentum from the CDF and DZero experiments, we see that the data and theory agree very well! In fact, in the DZero (lower) plot, the “pQCD” (perturbative QCD) prediction curve, which does not include resummation, disagrees with the data. Thus soft gluon resummation, which accounts for the emission of an arbitrary number of low energy radiations, is both important and observable.

[Figures: Z boson transverse momentum spectra measured by CDF and DZero]

In summary, Drell-Yan processes are very important at high energy proton colliders like the Large Hadron Collider. They serve as a standard candle for the experiments as well as a test of high-precision predictions. The LHC Run II program has just begun, and you can count on lots of rich physics in need of studying.

Happy Colliding,

Richard (@bravelittlemuon)

 


This article appeared in Fermilab Today on April 3, 2015.

This magnet recently achieved an important milestone, reaching its design field of 11.5 Tesla. It is the first successful niobium-3-tin, twin-aperture accelerator magnet in the world. Photo: Sean Johnson

Last month, a new superconducting magnet developed and fabricated at Fermilab reached its design field of 11.5 Tesla at a temperature nearly as cold as outer space. It is the first successful twin-aperture accelerator magnet made of niobium-3-tin in the world.

The advancements in niobium-3-tin, or Nb3Sn, magnet technology and the ongoing U.S. collaboration with CERN on the development of these and other Nb3Sn magnets are enabling the use of this innovative technology for future upgrades of the Large Hadron Collider (LHC). They may also provide the cornerstone for future circular machines of interest to the worldwide high-energy physics community. Because of the exceptional challenges — Nb3Sn is brittle and requires high-temperature processing — this important milestone was achieved at Fermilab after decades of worldwide R&D efforts both in the Nb3Sn conductor itself and in associated magnet technologies.

Superconducting magnets are at the heart of most particle accelerators for fundamental science as well as other scientific and technological applications. Superconductivity is also being explored for use in biosensors and quantum computing.

Thanks to its stronger superconducting properties, Nb3Sn enables magnets with larger fields than any in current particle accelerators. As a comparison, the niobium-titanium dipole magnets built in the early 1980s for the Tevatron particle collider produced about 4 Tesla to bend the proton and antiproton beams around the ring. The most powerful niobium-titanium magnets used in the LHC operate at roughly 8 Tesla. The new niobium-3-tin magnet creates a significantly stronger field.

Because the Tevatron accelerated positively charged protons and negatively charged antiprotons, its magnets had only one aperture. By contrast, the LHC uses two proton beams. This requires two-aperture magnets with fields in opposite directions. And because the LHC collides beams at higher energies, it requires larger magnetic fields.

In the process of upgrading the LHC and in conceiving future particle accelerators and detectors, the high-energy physics community is investing as never before in high-field magnet technologies. This creative process involves the United States, Europe, Japan and other Asian countries. The latest strategic plan for U.S. high-energy physics, the 2014 report by the Particle Physics Project Prioritization Panel, endorses continued U.S. leadership in superconducting magnet technology for future particle physics programs. The U.S. LHC Accelerator Research Program (LARP), which comprises four DOE national laboratories — Berkeley Lab, Brookhaven Lab, Fermilab and SLAC — plays a key role in this strategy.

The 15-year investment in Nb3Sn technology places the Fermilab team led by scientist Alexander Zlobin at the forefront of this effort. The Fermilab High-Field Magnet Group, in collaboration with U.S. LARP and CERN, built the first reproducible series in the world of single-aperture 10- to 12-Tesla accelerator-quality dipoles and quadrupoles made of Nb3Sn, establishing a strong foundation for the LHC luminosity upgrade at CERN.

In parallel, the laboratory has consistently carried out an ambitious superconductor R&D program, which has been key to the magnets’ success. Coordination with industry and universities has been critical to improving the performance of the next generation of high-field accelerator magnets.

The next step is to develop 15-Tesla Nb3Sn accelerator magnets for a future very high-energy proton-proton collider. The use of high-temperature superconductors is also becoming a realistic prospect for generating even larger magnetic fields. An ultimate goal is to develop magnet technologies based on combining high- and low-temperature superconductors for accelerator magnets above 20 Tesla.

The robust and versatile infrastructure that was developed at Fermilab, together with the expertise acquired by the magnet scientists and engineers in design and analysis tools for superconducting materials and magnets, makes Fermilab an ideal setting to look to the future of high-field magnet research.

Emanuela Barzi


I don’t usually get to spill the beans on a big discovery like this, but this time, I DO!

CERN Had Dark Energy All Along!!

That’s right. That mysterious energy making up ~68% of the universe was being used all along at CERN! Being based at CERN now, I’ve had a first-hand glimpse into the dark underside of Dark Energy. It all starts at the Crafted Refilling of Empty Mugs Area (CREMA), pictured below.

One CREMA station at CERN

 

Researchers and personnel seem to stumble up to these stations at almost all hours of the day, looking very dreary and dazed. They place a single cup below the spouts, and out comes a dark and eerie looking substance, which is then consumed. Some add a bit of milk for flavor, but all seem perkier and refreshed after consumption. Then they disappear from whence they came. These CREMA stations seem to be everywhere, from control rooms to offices, and are often found with groups of people huddled around them. In fact, they seem to exert a force on all who use them, keeping them in stable orbits about the stations.

In order to find out a little bit more about this mysterious substance and its dispersion, I asked a graduating student, who wished to remain unnamed, a little bit about their experiences:

Q. How much of this dark stuff do you consume on a daily basis?

A. At least one cup in the morning to fuel up, I don’t think I could manage to get to lunchtime without that one. Then multiple other cups distributed over the day, depending on the workload. It always feels like they help my thinking.

Q. Do you know where it comes from?

A. We have a machine in our office which takes capsules. I’m not 100% sure where those capsules are coming from, but they seem to restock automatically, so no one ever asked.

Q. Have you been hiding this from the world on purpose?

A. Well our stock is important to our group, if we would just share it with everyone around we could run out. And no one of us can make it through the day without. We tried alternatives, but none are so effective.

Q. Do you remember the first time you tried it?

A. Yes, they hooked me on it in university. From then on nothing worked without!

Q. Where does CERN get so much of it?

A. I never thought about this question. I think I’m just happy that there is enough for everyone here, and physicist need quite a lot of it to work.

In order to gauge just how much of this Dark Energy is being consumed, I studied the flux of people leaving the cafeteria with cups of Dark Energy as a function of time. I’ve compiled the results into the Dark Energy Consumption As Flux (DECAF) plot below.

Dark Energy Consumption as Flux plot. Taken March 31, 2015. Time is given in 24h time. Errors are statistical.

 

As the DECAF plot shows, there is a large spike in consumption, particularly after lunch. There is a clear peak at times after 12:20 and before 13:10. Whether there is an even larger peak hiding above 13:10 is not known, as the study stopped due to my advisor asking “shouldn’t you be doing actual work?”

There is an irreducible background of Light Energy in the cups used for Dark Energy, particularly of the herbal variety. Fortunately, there is often a dangly tag hanging off the cup to indicate to others that its bearer is not drawing on the precious Dark Energy supply, providing a clear signal this study used to eliminate the background.

While illuminating, this study still does not uncover the exact nature of Dark Energy, though it is clear that it is fueling research here and beyond.
