Archive for August, 2011

Fermilab distributed this press release Aug. 25.

This graph demonstrates that the new MINOS antineutrino result (blue) is more precise than last year’s result (red), as reflected by the smaller oval, and that the new result is in better agreement with the mass range of the 2010 neutrino result (black), as reflected by the overlap of the blue and black ovals. The ovals represent the 90 percent statistical confidence levels for each result. A 90 percent confidence level means that if scientists were to repeat the measurement many times, they would expect to obtain a result that lies within the contour 90 percent of the time. The points inside the ovals show the best, or most likely, value for each of the three measurements. The best value for the 2011 measurement of the squared mass difference for the antineutrinos is 2.62 × 10⁻³ eV².

The physics community got a jolt last year when results showed for the first time that neutrinos and their antimatter counterparts, antineutrinos, might be the odd ones out in the particle world and have different masses. This idea went against the most commonly accepted theories of how the subatomic world works.

A result released today (August 25) from the MINOS experiment at the Department of Energy’s Fermi National Accelerator Laboratory appears to quell concerns raised by a MINOS result in June 2010 and brings neutrino and antineutrino masses more closely in sync.

By bringing measurements of neutrinos and antineutrinos closer together, this new MINOS result allows physicists to lessen the potential ramifications of this specific neutrino imbalance. These ramifications include: a new way neutrinos interact with other particles, unseen interactions between neutrinos and matter in the earth and the need to rethink everything known about how the universe works at the tiniest levels.

“This more precise measurement shows us that these particles and their antimatter partners are very likely not as different as indicated earlier. Within our current range of vision it now seems more likely that the universe is behaving the way most people think it does,” said Rob Plunkett, Fermilab scientist and co-spokesman of MINOS. “This new, additional information on antineutrino parameters helps put limits on new physics, which will continue to be searched for by future planned experiments.”

University College London Physics Professor and MINOS co-spokesperson Jenny Thomas presented this new result – the world’s best measurement of muon neutrino and antineutrino mass comparisons — at the International Symposium on Lepton Photon Interactions at High Energies in Mumbai, India.

MINOS has nearly doubled its data set since its June 2010 result, from 100 antineutrino events to 197. While the new result is only about one standard deviation away from the previous one, the combination rules out concerns that the previous result could have been caused by detector or calculation errors. Instead, the combined results point to a statistical fluctuation that has lessened as more data are taken.

Physicists measured the difference between the squared masses of two types of neutrinos, a quantity called delta m squared (Δm²), and compared it with the corresponding quantity for two types of antineutrinos. The 2010 result found that this quantity differed by about 40 percent between neutrinos and antineutrinos, while the new result finds a difference of about 16 percent.

“The previous results left a 2 percent chance that the neutrino and antineutrino masses were the same. This disagrees with what theories of how neutrinos operate predicted,” Thomas said. “So we have spent almost a year looking for some instrumental effect that could have caused the difference. It is comforting to know that statistics were the culprit.”

Because several neutrino experiments operating and planned across the globe rely in their calculations on neutrino and antineutrino measurements being the same, the new MINOS result removes a potential hurdle for them.

Fermilab’s accelerator complex is capable of producing intense beams of either muon antineutrinos or muon neutrinos to send to the two MINOS detectors, one at Fermilab and one in Minnesota. This capability allows the experimenters to measure the mass difference parameters. The measurement also relies on the unique characteristics of the MINOS far detector, particularly its magnetic field, which allows the detector to separate the positively and negatively charged muons resulting from interactions of antineutrinos and neutrinos, respectively.

The antineutrinos’ extremely rare interactions with matter allow most of them to pass through the Earth unperturbed. A small number, however, interact in the MINOS detector, located 735 km away from Fermilab in Soudan, Minnesota. During their journey, which lasts 2.5 milliseconds, the particles oscillate in a process governed by a difference between their mass states.

Further data from the upcoming Fermilab neutrino experiments NOvA and MINOS+ will be needed to narrow the mass-difference measurement even more. Both experiments will use an upgraded accelerator beam generated at Fermilab that will deliver more than double the number of neutrinos. This upgraded beam is expected to start operating in 2013.

The MINOS experiment involves more than 140 scientists, engineers, technical specialists and students from 30 institutions, including universities and national laboratories, in five countries: Brazil, Greece, Poland, the United Kingdom and the United States. Funding comes from the Department of Energy’s Office of Science and the National Science Foundation in the U.S.; the Science and Technology Facilities Council in the U.K.; the University of Minnesota in the U.S.; the University of Athens in Greece; and Brazil’s Foundation for Research Support of the State of São Paulo (FAPESP) and National Council of Scientific and Technological Development (CNPq).

The 1,000-ton MINOS near detector sits 350 feet underground at Fermilab. The detector consists of 282 octagonal-shaped detector planes, each weighing more than a pickup truck. Scientists use the near detector to verify the intensity and purity of the muon neutrino beam leaving the Fermilab site. Photo: Peter Ginter.

Fermilab is a national laboratory supported by the Office of Science of the U.S. Department of Energy, operated under contract by Fermi Research Alliance, LLC.

The DOE Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov.



LHCb, one of the Large Hadron Collider (LHC) experiments, was designed specifically to study charge-parity (or CP) violation. In simple words, its goal is to explain why more matter than anti-matter was produced when the Universe slowly cooled down after the Big Bang, leading to a world predominantly composed of matter. This is quite puzzling since in laboratory experiments, the measured preference for the creation of matter over antimatter is too small to explain why we only see matter around us today in the Universe. So why did the Universe evolve this way?

One of the best ways to study this phenomenon is with b quarks. Since they are heavy, they can decay (i.e., break down into lighter particles) in many different ways, but they are light enough for us to produce in copious amounts (unlike the heaviest quark, the top quark). In addition, theorists can make very precise predictions of their decay rates using the Standard Model, the theoretical framework we have to describe most phenomena observed to this day. Once we have predictions of how often b quarks should decay into one mode or another, we can compare them with what is measured with the LHCb detector and see if there are any deviations from the Standard Model predictions. Such deviations would indicate that this model is incomplete, as every physicist suspects, even though we have neither been able to define the nature of the more complete theory that must be hiding beyond it nor measured anything in contradiction with the Standard Model.

Here is how LHCb wants to do it: by studying rare decays with a precision never achieved before.

When electrons or protons collide in large accelerators, b quarks are produced, but they do not come alone. They are typically accompanied by one other quark (mostly u, d or s) to form composite particles called B mesons. Such mesons have been produced at several colliders, most abundantly in b-factories in the US and Japan, but also at the Tevatron, an accelerator similar to the LHC and located near Chicago in the US.

Physicists from b-factories studied the decays of B mesons in great detail for more than ten years, but nothing disproving the Standard Model has been uncovered so far, even after scrutinizing more than 470 million pairs of B mesons! All decay modes inspected behaved according to the Standard Model predictions. This means we now need to study even rarer decay modes, the ones the Standard Model predicts will occur only once in a billion times. To do so, we need to look at several billion decays to detect the slightest deviation. It is in these small details that we hope to uncover new physics going beyond the Standard Model.

Recently, the Tevatron experiments, D0 and CDF, took the lead by measuring very rare decays, namely Bs → μμ, where a Bs (a meson made of an anti-b and an s quark) decays to a pair of muons (denoted μ), a particle very similar to the electron, only heavier. CDF saw a small excess of events with respect to Standard Model expectations. And when they look at the angular distributions of Bs → J/ψ φ decays, that is, when the Bs meson decays into two other mesons, J/ψ and φ, they can measure a parameter called φs, which is supposed to be zero according to the Standard Model. Both D0 and CDF obtained a non-zero result, but this measurement is not quite accurate enough to really challenge the Standard Model.

And that’s where LHCb, the new kid on the b-physics block, comes into play. With the LHC delivering data at a fast and furious pace, LHCb can already surpass the precision reached at the Tevatron. Already in July, LHCb (and CMS, another LHC experiment) contradicted the CDF claim of an anomalous number of Bs → μμ events. They might do it again with the release of their first measurement of φs, which is expected to be much more precise than the Tevatron result.

Will φs be equal to zero as predicted by the Standard Model? LHCb will announce this on Saturday at the Lepton-Photon conference in Mumbai. Could LHCb be the first experiment to crack the Standard Model? With the level of precision they are already reaching, even if it’s not now, they will be in the best position to do it in the near future.

Stay tuned. The new results will be added here on Monday.

————————————————————————————————————————-
Addition:
At the Mumbai Lepton Photon conference on Saturday, LHCb presented their new measurement of the decay Bs → J/ψ φ. They measure the parameter φs to be near zero, as predicted by the Standard Model. Being more precise than the CDF and D0 measurements announced earlier this year, this new measurement shows that the Standard Model holds true even when tested with this unprecedented precision.

However, there is still room for new and unexpected phenomena as the LHCb precision increases as new data are being analysed. LHCb should have about three times more data available by the end of the year, putting the Standard Model under even more rigorous tests.

LHCb result

The colored circles show the LHCb result at different degrees of precision. The theoretical prediction is shown in black with its own uncertainty. At present, the two are in fair agreement. With more data analysed, the uncertainty in the experimental measurement will decrease, allowing for an even more stringent test of the current prediction. (The extra set of circles corresponds to the other solution of the equation.)

Pauline Gagnon

To be alerted of new postings, follow me on Twitter: @GagnonPauline


– By Adam DeAbreu, TRIUMF High School Fellowship Student

“Oh my god”, “it was so crazy”, “it was nuts” – that’s about all I was able to say during my interview with the North Shore News after I found out I had won the TRIUMF Fellowship. Even now [at the end of the work-term], those same emotions still stand – I’m still amazed that I was able to work at TRIUMF, work with particle physicists, work with data from the ATLAS detector – that after reading the sign saying, “TRIUMF employees only past this point”, I could walk right past it.

Before I even knew what had happened, my first day had come. I arrived at TRIUMF, sat down in the lobby, and just about died from anticipation and anxiety. That first day I knew I was working with Dr. Oliver Stelzer-Chilton and the ATLAS group, but I didn’t have a clue as to what my project was going to be. The constant whirlwind of butterflies in my stomach calmed as Oliver and I talked about possible projects, areas of research, and tools that I would be using. The decision came down to working with data from the ATLAS detector or using the program Pythia to simulate collisions of particles. The answer came easily to me: if I had the chance to work with data from ATLAS, from the LHC, from CERN, then there wouldn’t be much that would persuade me to choose otherwise.

And so my learning/research project/adventure began. The data consisted of pairs of muons recorded by the ATLAS detector, coming from the decays of Z bosons. Simulated events are run through a detector-resolution smearing equation, and the thousands of events have to be sorted depending on where in the detector the muons went. This equation uses two parameters, S1 and S2, whose ranges the program steps through incrementally to create templates spanning a range of resolutions, from very high (a narrow histogram) to low (a wider histogram). These templates are then used as a structure to fit the data from ATLAS.
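
The actual analysis code is not reproduced here, but a minimal sketch of this kind of smearing, written in C++ like the original code, might look as follows. The resolution form σ(pT)/pT = √(S1² + (S2·pT)²) and the parameter values are illustrative assumptions for this post, not the real ATLAS calibration:

    #include <cmath>
    #include <cstdio>
    #include <initializer_list>
    #include <random>

    // Smear one muon transverse momentum with a Gaussian resolution built
    // from two terms with different momentum dependence, added in
    // quadrature: sigma(pT)/pT = sqrt(S1^2 + (S2*pT)^2).
    double smear_pt(double pt, double s1, double s2, std::mt19937& rng) {
        double rel_sigma = std::sqrt(s1 * s1 + (s2 * pt) * (s2 * pt));
        std::normal_distribution<double> gauss(pt, rel_sigma * pt);
        return gauss(rng);
    }

    int main() {
        std::mt19937 rng(42);
        // Placeholder resolution parameters; a template scan would step S1
        // and S2 through a grid and fill one histogram per grid point.
        const double s1 = 0.02, s2 = 0.0002;
        for (double pt : {30.0, 45.0, 60.0}) {  // GeV, typical muons from Z
            std::printf("true pT = %5.1f GeV -> smeared pT = %6.2f GeV\n",
                        pt, smear_pt(pt, s1, s2, rng));
        }
        return 0;
    }

Each (S1, S2) grid point would fill its own Z-mass histogram, and the template that best matches the ATLAS data picks out the measured resolution parameters.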

The first part of my project was to rewrite the code with only one free resolution parameter, S1, keeping fixed the second resolution parameter, S2, which has a different momentum dependence. By removing the S2 variable, we could see how much additional scaling was needed for the fits. We were also able to measure the Z boson mass and compare it to the world average value of 91.187 ± 0.002 GeV.

I’d like to say that I just jumped right into the project and finished it within a day. However, when I looked at Oliver’s code it looked as though it was in a completely different language, and to an extent, it was. I had to learn the C++ it was written in and the usage of the program ROOT, which created and manipulated all the histograms and data. I am not the most tech-savvy person and getting my head into the programming was hard; however, it helped that I had a goal: my programming was helping Oliver with his paper, “Search for High-Mass Dilepton Resonances in pp Collisions at √s = 7 TeV”. Just knowing that was what my work was related to would have kept me going for months. I also took solace in something Oliver had said during the first days of my fellowship: “Use the code, and the programs and the language as tools. I became a physicist, and if I’m not careful with all the coding around me I may end up a computer scientist.”

Finally I began what would inevitably be my last project of the fellowship. I had to create another program that would take the same data set but include the second resolution term, S2, and produce templates of one dimension higher. We split the data and simulation according to the muons’ momenta. In the detector-resolution equation, the S1 and S2 terms depend differently on momentum, so it is important to split the data in this fashion. The motivation was that, with the Z boson mass distribution split according to momentum, we would be able to constrain S1 and S2 simultaneously. If both could be constrained, this would allow an independent measurement of S2, which currently can only be obtained from an external input. It would also allow both resolution parameters to be measured from the Z boson sample, an important calibration sample when searching for new particles at high mass.

Despite really enjoying all that I was learning, I was still thankful that the entirety of the six weeks wasn’t constant coding and compiling. No, there was a lot more to the six weeks than just that; I met great people and attended lectures, seminars and workshops. I got a very strange feeling when hearing all these people talk about their work, during lectures or in the lunchroom or the office, with such passion and insight and knowledge, and knowing that not long ago they were in my position: gearing up to take the first step in a physics education. In just the six short weeks I’ve been at TRIUMF my comprehension of everything related to particle physics has grown so much. And it was a great feeling to see that it wouldn’t be long before I would be neck deep in the physics.

Then there was the BBQ. I won’t say that I was surprised, but I was definitely pleased to see so many physicists able to put their work aside to relax and have a great time. To see someone who one day was talking about the applications of particle physics now desperately trying to bite a donut hanging from a string was definitely a great way to take a break, laugh and relax.

In the end I went from being completely overwhelmed by just the thought of working at TRIUMF – let alone actually working with particle physicists and using the same tools and data that they use – to having a handle on ROOT, C++, and the manipulation of data and histograms. This fellowship has jump-started my learning, and my career. One last quote from Oliver: “There’s so much out there; at a certain point you go from learning it all, as in elementary and high school, to having to narrow your scope – to choose what it is you want to learn.” I’ve narrowed my scope to physics, and this fellowship has given me a great experience of what it means to be a particle physicist; it will undoubtedly help me when it comes time to narrow my scope for the next step. Until then my horizon holds all the possibilities that university physics brings with it, all rushing towards me – and I can hardly wait another minute.


This story first appeared on Brookhaven’s website.

They come from the midst of exploding stars beyond our solar system — and possibly, from the nuclei of far distant galaxies. Their name, “galactic cosmic rays,” sounds like something from a science fiction movie. They’re not really rays.

Galactic cosmic rays (GCR) is the term used to describe a wide variety of charged particles traveling through space at high energies and almost the speed of light, from subatomic particles like electrons and positrons to the nuclei of every element on the periodic table. Since they’re created at energies sufficient to propel them on long journeys through space, GCRs are a form of ionizing radiation, or streaming particles and light waves with enough oomph to knock electrons out of their orbits, creating newly charged, unstable atoms in most of the matter they traverse.


Particles have an inherent spin. We explored the case of fermions (“spin-1/2”) in a recent post on helicity and chirality. Now we’ll extend this to the case of vector (“spin-1”) particles which describe gauge bosons—force particles.

By now regular US LHC readers are probably familiar with the idea that there are two kinds of particles in nature: fermions (matter particles) and bosons (force particles). The matter particles are the ‘nouns’ of the Standard Model. The ‘verbs’ are the bosons which mediate forces between these particles. The Standard Model bosons are the photon, gluon, W, Z, and the Higgs. The first four (the gauge bosons of the fundamental forces) are what we call vector particles because of the way they spin.

An arrow that represents spin

You might remember the usual high school definition of a vector: an object that has a direction and a magnitude. More colloquially, it’s something that you can draw as an arrow. Great. What does this have to do with force particles?

In our recent investigation of spin-1/2 fermions, the punchline was that chiral (massless) fermions either spin clockwise or counter-clockwise relative to their direction of motion. We can convert this into an arrow by identifying the spin axis. Take your right hand and wrap your fingers around the direction of rotation. The direction of your thumb is an arrow that identifies the helicity of the fermion, it is a ‘spin vector.’ In the following cartoon, the gray arrows represent the direction of motion (right) and the big colored arrows give the spin vector.

You can see that a particle has either spin up (red: spin points in the same direction as motion) or spin down (blue: spin points in the opposite direction as motion). It should not surprise you that we can write down a two-component mathematical object that describes a particle. Such an object is called a spinor, but it’s really just a special kind of vector. It can be represented this way:

ψ = ( spin up , spin down )

As you can see, there’s one slot that contains information about the particle when it is spin up and another slot that contains information about the particle when it is spin down. It’s really just a list with two entries.

Don’t panic! We’re not going to do any actual math in this post, but it will be instructive—and relatively painless—to see what the mathematical objects look like. This is the difference between taking a look at the cockpit of a jet versus actually flying it.

All you have to appreciate at this point is that we’ve described fermions (spin-1/2 particles) in terms of an arrow that determines their spin. Further, we can describe this object as a two-component ‘spinor.’

For experts: a spinor is a vector (“fundamental representation”) of the group SL(2,C), which is the universal cover of the Lorentz group. The point here is that we’re looking at projective representations of the Lorentz group (quantum mechanics says that we’re allowed to transform up to a phase). The existence of a projective representation of a group is closely tied to its topology (whether or not it is simply connected); the Lorentz group is not simply connected, it is doubly connected. The objects with projective phase -1 (i.e. that pick up a minus sign after a 360 degree rotation) are precisely the half-integer spinor representations, i.e. the fermions.

Relativity and spin

Why did we bother writing the spinor as two components? Why not just work with one component at a time: we pick up a fermion and if it’s spin up we use one object and if it’s spin down we use another.

This, however, doesn’t work. To see why, we can imagine what happens if we take the same particle but change the observer. You can imagine driving next to a spin-up particle on the freeway, and then accelerating past it. Relative to you, the particle reverses its direction of motion so that it becomes a spin-down particle.

What does this mean? In order to account for relativity (different observers see different things) we must describe the particle simultaneously in terms of being spin-up and spin-down. To describe this effect mathematically, we would perform a transformation on the spinor which changes the spin up component into the spin down component.
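
To make this concrete, here is the standard textbook example of such a transformation (writing ψ↑ and ψ↓ for the spin-up and spin-down slots): a rotation by an angle θ about the x-axis acts on the spinor as

$$\psi \;\to\; e^{-i\theta\sigma_x/2}\,\psi = \begin{pmatrix} \cos(\theta/2) & -i\sin(\theta/2) \\ -i\sin(\theta/2) & \cos(\theta/2) \end{pmatrix} \begin{pmatrix} \psi_\uparrow \\ \psi_\downarrow \end{pmatrix}.$$

For θ = 180° the spin-up and spin-down components swap (up to a phase), and for θ = 360° the whole spinor picks up the overall minus sign mentioned in the expert remark above.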

Remark: I’m cheating a little because I’m implicitly referring to a massive fermion while referring to the two-component spinor of a massless fermion. Experts can imagine that I’m referring to a Majorana fermion, non-experts can ignore this because the punchline is the same and there’s not much to be gained by being more rigorous at this stage.

In fact, to a mathematician, this is the whole point of constructing vectors: they’re things which know how to transform properly when you rotate them. In this way they are intimately linked to the symmetries of spacetime: we should know how particles behave when we grab them and rotate them.

Spin-1 (vector) particles

Now that we’ve reviewed spin-1/2 (fermions), let’s move on to spin-1: these are the vector particles and include the gauge bosons of the Standard Model. Unlike the spin-1/2 particles, whose spin arrows must be parallel to the direction of motion, vector particles can have their spin point in any direction. (This is subject to some constraints that we’ll get to below.) We know how to write arrows in three dimensions: you just write down the coordinates of the arrow tip:

3D arrow = (x-component, y-component, z-component)

When we take into account special relativity, however, we must work instead in four-dimensional spacetime, i.e. we need a vector with four components (sometimes called a four-vector, see Brian’s recent post). The reason is that, in addition to rotating our vector, we can also boost the observer—this is precisely what we did in the example above where we drove past a particle on the freeway—so we need to be able to include the length contraction and time dilation effects that occur in special relativity. Heuristically, these are rotations into the time direction.

So now we’ve defined vector particles to be those whose spin can be described by an arrow pointing in four dimensions. A photon, for example, can thus be represented as:

Aμ = (A0, A1, A2, A3)

Here we’ve used the standard convention of labeling the x, y, and z directions by 1, 2, and 3. The A0 corresponds to the component of the spin in the time direction. What does this all mean? The (spin) vector associated with a spin-1 particle has a more common name: the polarization of the particle.

You’ve probably heard of polarized light: the electric (and hence also the magnetic) field is fixed to oscillate along only one axis; this is the basis for polarized sunglasses. Here’s a heuristic drawing of electromagnetic radiation from a dipole (from Wikipedia, CC-BY-SA license):

http://en.wikipedia.org/wiki/File:Onde_electromagnetique.svg

The polarization of a photon refers to the same idea. As mentioned in Brian’s post, the electric and magnetic fields are given by derivatives of the vector potential A. This vector potential is exactly the same thing that we have specified above; in a sense, a photon is a quantum of the vector potential.
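
For reference, the textbook relations behind that last sentence, written in natural units with A^0 the time component of the vector potential, are

$$\vec{E} = -\nabla A^0 - \frac{\partial \vec{A}}{\partial t}\,, \qquad \vec{B} = \nabla \times \vec{A}\,,$$

so specifying the four-vector A is enough to reconstruct both fields.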

Four vectors are too big

Now we get to a very important point: we’ve argued based on spacetime symmetry that we should be using these four-component vectors to describe particles like photons. Unfortunately, it turns out that four components are too many! In other words, there are some photon polarizations that we could write down which are not physical!

Here we’ll describe one reason why this is true; we will again appeal to special relativity. One of the tenets of special relativity is that you cannot travel faster than the speed of light. Further, we know that photons are massless and thus travel at exactly the speed of light. Now consider a photon which is rotating in the plane of the page as it moves (i.e. its spin vector is perpendicular to the page):

In this case the bottom part of the photon (blue) is moving opposite the direction of motion and so travels slightly slower than the speed of light. On the other hand, the top part of the photon is moving with the photon and thus would be moving faster than the speed of light!

This is a big no-no, and so we cannot have any photons polarized in this way. Our four-component vector contains more information than the physical photon. Or more accurately: being able to write down our theory in a way that manifestly respects spacetime symmetry comes at the cost of introducing extra, non-physical degrees of freedom in how we describe some of our particles.

(If we removed this degree of freedom and worked with three-component vectors, then our mathematical formalism doesn’t have enough room to describe how the particle behaves under rotations and boosts.)

Fortunately, when we put four-component photons through the machinery of quantum field theory, we automatically get rid of these unphysical polarizations. (Quantum field theory is really just quantum mechanics that knows about special relativity.)

Gauge invariance: four vectors are still too big

Now I’d like to introduce one of the key ideas of particle physics. It turns out that even after removing the unphysical ‘faster than light’ polarization of the photon, we still have too many degrees of freedom. A massless particle only has two polarizations: spin-up or spin-down. Thus our photon still has one extra degree of freedom!

The resolution to this problem is incredibly subtle: some of the polarizations that we could write down using a four-vector are physically identical. I don’t just mean that they give the same numbers when you do the math, I mean that they literally describe the same physical state. In other words, there is a redundancy in this four-vector description of particles! Just as the case of the unphysical polarization above, this redundancy is the cost of writing things in a way which manifestly respects spacetime symmetry. This redundancy is called gauge invariance.
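
Here is how this redundancy shows up for a plane-wave photon of momentum k; this is a standard textbook statement rather than anything specific to this post. The gauge transformation Aμ → Aμ + ∂μλ shifts the polarization four-vector by a multiple of the momentum,

$$\epsilon_\mu \;\to\; \epsilon_\mu + \alpha\, k_\mu\,,$$

and any two polarizations that differ by such a shift describe literally the same physical state. Removing this redundancy eliminates the leftover unphysical degree of freedom, leaving just the two physical polarizations.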

Gauge invariance is a big topic that deserves its own post—I’m still thinking of a good way to present it—but the “gauge” here refers to the same thing as in the term “gauge boson.” This gauge invariance (redundancy in our description of physics) is intimately linked to the fundamental forces of our theory.

Remark, massive particles: Unlike the massless photon, which has two polarizations, the W and Z bosons have three polarizations. Heuristically, the third polarization corresponds to the particle spinning in the direction of motion, which wasn’t allowed for massless particles that travel at the speed of light. It is still true, however, that there is a gauge redundancy in the four-component description of the three-polarization massive gauge bosons.
For experts: at this point, I should probably mention that the mathematical object which really describes gauge bosons aren’t vectors, but rather co-vectors, or (one-)forms. One way to see this is that these are objects that get integrated over in the action. The distinction is mostly pedantic, but a lot of the power of differential geometry and topology is manifested when one treats gauge theory in its ‘natural’ language of fiber bundles. For more prosaic goals, we can write down Maxwell’s equations in an even more compact form: d*F = j. (Even more compact than Brian’s notation! 🙂 )

Wigner’s classification

Let me take a step back to address the ‘big picture.’ In this post I’ve tried to give a hint of a classification of “irreducible [unitary] representations of the Poincaré group” by the Hungarian mathematical physicist Eugene Wigner in the late 1930s.

At the heart of this program is a definition of what we really mean by ‘particle.’ A particle is something that transforms in a definite way under the symmetries of spacetime, which we call the Poincaré group. Wigner developed a systematic way to write down all of the ‘representations’ of the Poincaré group that describe quantum particles; these representations are what we mean by spin-1, spin-1/2, etc.

In addition to these two examples, there are fields which do nothing under spacetime symmetries: these are the spin-0 scalar fields, such as the Higgs boson. If we treated gravity quantum mechanically, then the graviton would be a spin-2 [symmetric] tensor field. If nature is supersymmetric, then the graviton would also have a spin-3/2 gravitino partner. Each of these different spin fields is represented by a mathematical object with a different number of components that mix into one another when you do a spacetime transformation (e.g. rotations, boosts).

In principle one can construct higher spin fields, e.g. spin-3, but there are good reasons to believe that such particles would not be manifested in nature. These reasons basically say that those particles wouldn’t be able to interact with any of the lower-spin particles (there’s no “conserved current” to which they may couple).

Next time: there are a few other physics (and some non-physics) topics that I’d like to blog about in the near future, but I will eventually get back to this burning question about the meaning of gauge symmetry. From there we can then talk about electroweak symmetry breaking, which is the main reason why we need the Higgs boson (or something like it) in nature. (For those who have been wondering why I haven’t been writing about the Higgs—this is why! We need to go over more background to do things properly.)


New results, same uncertainty

Tuesday, August 23rd, 2011

At the time of the European Physics Conference in July, an intriguing small excess of events was reported in the search for the elusive Higgs boson. Yesterday, as the Lepton-Photon conference in Mumbai, India, opened, these signs appear to be less compelling. What happened?

All phenomena we study follow statistical laws and are therefore subject to statistical fluctuations. The signal can grow bigger, get smaller or disappear. There is nothing we can do about it but analyze more data to get a definitive answer. In time, the signal either emerges unambiguously, if it is real, or vanishes, if it was only due to a statistical fluctuation.

Fortunately, with statistics, when you double the data sample size, the error bar, or margin for statistical fluctuations, goes down by the square root of two. This is why we are always trying to collect more data: to reduce the size of possible statistical fluctuations.
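
For a simple counting experiment this is just Poisson statistics, spelled out here for concreteness:

$$\frac{\sigma_N}{N} = \frac{\sqrt{N}}{N} = \frac{1}{\sqrt{N}}\,, \qquad N \to 2N \;\Rightarrow\; \frac{1}{\sqrt{2N}} \approx 0.71\,\frac{1}{\sqrt{N}}\,,$$

so doubling the sample shrinks the relative statistical uncertainty by about 30 percent.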

So where do we stand now? With twice as much data as we had in July, each experiment sees a small decrease in the potential signal. So it is less compelling than it was in July, but far from having vanished.

Most importantly, the two large LHC experiments, CMS and ATLAS, exclude a wide range of possible Higgs masses. This is just as crucial since we may soon simply prove that there is no Higgs! What is guaranteed is, given the rate at which the data are coming in, within a year we will have the final answer. If it is there, we’ll see it. If it does not exist, we will prove that, which is just as important as finding the Higgs.

As for what is happening at 145 GeV, where the first excess was initially spotted, it is still impossible to say if what we have been seeing could be due to a Higgs boson. As it is, we have been expecting this new particle to show up a bit like travellers waiting for their train. Peering down the tracks, we noticed something in the far distance looking much like that long-awaited train (the Higgs boson in our case). Add a bit of fog and it becomes impossible to say if what we see is really our train.

The “fog” comes from all the other known processes that can mimic the Higgs boson, what we call the background. Even today, with the size of the data sample at hand, it is too early for the train to be clearly visible in the distance. We still cannot really distinguish between a Higgs boson and the background. All we see is an indistinct shape in the far distance, too small to be able to say if it is our long-awaited train or just a shape emerging from thick fog.

Once the two experiments combine their results, we will gain another factor of two in statistics. The combination has the advantage of taking into account all possible problems that are common to both experiments, ironing out fluctuations. However, this combination is tedious and requires that all individual measurements be well understood, which should be done in the coming weeks.

We also expect to double our data sample again before the end of the year. With four times as much data as in July and the combined results from the two experiments, it will be like looking at the train from a quarter of the distance. We will have a much better chance to say if this is our train or just an illusion.

Pauline Gagnon

All Higgs masses excluded by the CMS collaboration after only eight months of data taking are shown in orange. The ATLAS group obtains similar results. This is to be compared to the blue zones excluded by the Tevatron experiments after 20 years of hard work. And that’s only the beginning…

To be alerted of new postings, follow me on Twitter: @GagnonPauline



The summer conference season may be winding down, but that doesn’t mean we are quite done yet. Today was the first day of the Lepton Photon 2011 (LP2011) Conference, which is taking place in Mumbai, India all this week. The proceedings of LP2011 are available via webcast from CERN (although Mumbai is ~10 hours ahead if you are in the Eastern Standard Time zone). But if you’re a bit of a night owl and wish to participate in the excitement, then this is the link for the webcast.

The complete schedule for the conference can be found here.

But what was shown today?  Today was a day of Higgs & QCD Physics.  I’ll try to point out some of the highlights of the day in this post.  So let’s get to it.

The Hunt for the Higgs

Today’s update on the CMS Collaboration’s search for the ever elusive Higgs boson made use of ~110-170 trillion proton-proton collisions (1.1-1.7 fb⁻¹), covering eight separate decay channels and a Higgs mass range of 110-600 GeV. The specific channels studied and the corresponding amount of data used for each are shown in the table at left. Here ℓ represents a charged lepton and ν a neutrino.

The CMS Collaboration has not reported a significant excess of events in the 110-600 GeV range at LP2011. However, the exclusion limits for the Higgs boson mass range were updated from our previously reported values at EPS2011. By combining the results of the eight analyses mentioned above, the CMS Collaboration produced the following plot summarizing the current state of Higgs exclusion (taken from the official CMS press release, Ref. 1, and CMS PAS HIG-11-022, Ref. 2; please see the PAS for full analysis details):

Standard Model Higgs boson combined confidence levels showing current exclusion regions. Image courtesy of the CMS Collaboration (Refs. 1 & 2).

But how do you interpret this plot?  Rather than re-inventing the wheel, I suggest you take a quick look at Aidan‘s nice set of instructions in this post here.

Now then, from the above plot we can see that the Standard Model Higgs boson has been excluded at 95% confidence level (C.L.) in the ranges of 145-216, 226-288 and 310-400 GeV [1,2]. At a lower C.L. of 90%, the Collaboration has excluded the SM Higgs boson for a mass window of 144-440 GeV [1,2].

These limits shown at LP2011 improve on the previous limits shown at EPS2011 (using 1.1 fb⁻¹). The previous exclusion limits were 149-206 and 300-440 GeV at 95% C.L., or 145-480 GeV at 90% C.L.

While the LP2011 results did not show a Higgs discovery, the CMS Collaboration is removing places for this elusive boson to hide.

QCD Physics

Today’s other talks focused on quantum chromodynamics (QCD), with the CMS Collaboration’s results shown for a variety of QCD-related measurements.

One of the highlights of these results is the measurement of the inclusive jet production cross section. The measurement was made for jet transverse momenta over a range of ~20-1100 GeV. The corresponding cross-section spans roughly ten orders of magnitude!

Measurement of the inclusive jet cross-section made with the CMS detector; the data are the black points and the theoretical prediction is given by the red line. Image courtesy of the CMS Collaboration (Ref. 3).

In the plot above, each of the data series is “binned” by what is known as a jet’s rapidity (denoted by the letter y), or in this case the absolute value of the jet’s rapidity. Rapidity is a measure of where a jet is located in space.

The CMS detector is a giant cylinder, with the collisions taking place in the center of the cylinder.  If I bisect the detector at the center with a plane (perpendicular to the cylinder’s axis), objects with lower rapidities make a small angle with this plane.  Whereas objects with higher rapidities make a large angle with this plane.
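
For reference, the standard collider definitions, which the talk summary does not spell out, are:

$$y = \frac{1}{2}\ln\frac{E + p_z}{E - p_z}\,, \qquad \eta = -\ln\tan\frac{\theta}{2}\,,$$

where p_z is the momentum component along the beam axis and θ is the angle measured from it. A jet perpendicular to the beam has y ≈ 0, jets closer to the beam line have larger |y|, and for a massless particle the rapidity y coincides with the pseudorapidity η.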

As we can see from the above plot, the theoretical prediction of QCD matches the experimental data rather well.

Another highlight of the CMS Collaboration’s results shown at LP2011 is the measurement of the di-jet production cross-section.

Measurement of the dijet production cross-section made with the CMS detector. Again, the data are the black points and the theoretical prediction is given by the red line. Image courtesy of the CMS Collaboration (Ref. 3).

Here the CMS results shown cover invariant dijet masses of up to ~4 TeV; that’s over half the centre-of-mass (CoM) collision energy! Again, the theory is in good agreement with the experimental data!

And the last highlight I’d like to show is the production cross section of isolated photons as recorded by the CMS Detector (this is a conference about leptons and photons after all!).

Measurement of the isolated photon production cross-section made with the CMS detector. Again, the data are the black points and the theoretical prediction is given by the red line. Image courtesy of the CMS Collaboration (Ref. 3).

In isolated photon production, a quark in one proton interacts with a gluon in the other proton. This interaction is mediated by a quark propagator (a virtual quark). The outgoing particles are a quark and a photon. Essentially this process is a joining of QCD and QED; an example of the Feynman diagram for isolated photon production is shown below (with time running vertically):

From the above plot, the theoretical predictions for isolated photon production are, again, in good agreement with the experimental data!

These and other experimental tests of QCD shown at LP2011 (and other conferences) illustrate that the theory is in good agreement with the data, even at the LHC’s unprecedented energies. Some tweaks are still needed, but the theorists really deserve a round of applause.

But I encourage anyone with the time or interest to tune into the live webcast all this week!  Perhaps I’ll be able to provide an update on the other talks/poster sessions in the coming days (If not check out the above links!).

Until Next Time,

-Brian

References

[1] CMS Collaboration, “New CMS Higgs Search Results for the Lepton Photon 2011 Conference,” http://cms.web.cern.ch/cms/News/2011/LP11/, August 22nd 2011.

[2] CMS Collaboration, “Combination of Higgs Searches,” CMS Physics Analysis Summary, CMS-PAS-HIG-11-022, http://cdsweb.cern.ch/record/1376643/, August 22nd 2011.

[3] James Pilcher, “QCD Results from Hadron Colliders,” Proceedings of the Lepton Photon 2011 Conference, http://www.ino.tifr.res.in/MaKaC/contributionDisplay.py?contribId=122&sessionId=7&confId=79, August 22nd 2011.


What If It’s Not The Higgs?

Sunday, August 21st, 2011

Updated: Monday, 2011 August 29, to clarify shape of angular distribution plots.

It’s the $10 billion question: If experimentalists do discover a bump at the Large Hadron Collider, does it have to be the infamous higgs boson? Not. One. Bit. Plainly and simply, if the ATLAS & CMS collaborations find something at the end of this year, it will take a little more data to know we are definitely dealing with a higgs boson. Okay, I suppose I should back up a little and add some context. 🙂

The Standard Model of Particle Physics (or SM for short) is the name for the very well established theory that explains how almost everything in the Universe works, from a physics perspective at least. The fundamental particles that make up the SM, and hence our Universe, are shown in figure 1 and you can learn all about them by clicking on the hyperlink a sentence back. Additionally, this short Guardian article does a great job explaining fermions & bosons.

Fig 1. The Standard Model is composed of elementary particles, which are the fundamental building blocks of the Universe, and rules dictating how the particles interact. The fundamental building blocks are known as fermions and the particles which mediate interactions between fermions are called bosons. (Image: AAAS)

As great as the Standard Model is, it is not perfect. In fact, the best way to describe the theory is to say that it is incomplete. Three phenomena that are not fully explained, among many, are: (1) how do fermions (blue & green boxes in figure 1) obtain their mass; (2) why is there so little antimatter (or so much matter) in the Universe; and (3) how does gravity work at the nanoscopic scale? These are pretty big questions and over the years theorists have come up with some pretty good ideas.

The leading explanation for how fermions (blue & green boxes in figure 1) have mass is called the Higgs Mechanism, and it predicts that there should be a new particle called the higgs boson (red box at bottom of figure 1). Physicists believe that the Higgs Mechanism may explain the fermion masses because this same mechanism very accurately predicts the masses of the other bosons (red boxes in figure 1). It is worth noting that when the Higgs Mechanism is used to explain the masses of the bosons, no new particle is predicted.

Unfortunately, the leading explanations for the huge disparity between matter & antimatter, as well as for a theory of gravity at the quantum level, have not been as successful. Interestingly, all three types of theories (the Higgs Mechanism, matter/antimatter, and quantum gravity) generally predict the existence of a new boson, namely the higgs boson, the Z’ boson (pronounced: zee prime), and the graviton, respectively. A key property that distinguishes each type of boson from the others is the intrinsic angular momentum each carries. The higgs boson does not carry any, so we call it a “spin 0” boson; the Z’ boson carries a specific amount, so it is called a “spin 1” boson; and the graviton carries precisely twice as much angular momentum as the Z’ boson, so the graviton is called a “spin 2” boson. This will be really important in a few paragraphs, but let’s quickly jump back to the higgs story.

Fig 2. Feynman diagrams representing a higgs boson (left), Z' boson (center), and graviton (right) decaying into a b quark (b) & anti-b quark (b̄).

In July, at the European Physics Society conference, the CDF & DZero Experiments, associated with the Tevatron Collider in Illinois, USA, and the CMS & ATLAS Experiments, associated with the Large Hadron Collider in Geneva, Switzerland, reported their latest results in the search for the higgs boson. The surprising news was that it might have been found, but we will not know for sure until the end of 2011/beginning of 2012.

This brings us all the way back to our $10/€7 billion question: If the experiments have found something, how do we know that it is the higgs boson and not a Z' boson or a graviton? Now I want to be clear: It is insanely unlikely that the new discovery is a Z' or a graviton, if there is a new discovery at all. If something has been discovered, chances are it is the higgs boson, but how do we know?

Now, here is where awesome things happen.

The Solution.

In all three cases, the predicted boson can decay into a b quark (b) & anti-b quark (b̄) pair, which you can see in the Feynman diagrams in figure 2. Thanks to the Law of Conservation of Momentum, we can calculate the angle between each quark and the boson. Thanks to the well-constructed detectors at the Large Hadron Collider and the Tevatron, we can also measure that angle. The point is that the angular distribution (the number of quarks observed per angle) is different for spin-0 (higgs), spin-1 (Z'), and spin-2 (graviton) bosons!

To show this, I decided to use a computer program to simulate how we expect the angular distributions for higgs → bb̄, Z' → bb̄, and graviton → bb̄ to look. Below are three pairs of plots: the ones on the left show the percentage of b (or b̄) quarks we expect at a particular angle with respect to the decaying boson; the ones on the right show the percentage of quarks we expect at the cosine (yes, the trigonometric cosine) of that angle.
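If you want to play with these shapes yourself, here is a minimal Python sketch (NumPy & Matplotlib). To be clear, this is not the PYTHIA simulation that produced figures 3-5; it simply rejection-samples the leading-order, parton-level shapes commonly quoted for each spin hypothesis (an assumption on my part), which reproduce the qualitative hump structure discussed below.

```python
import numpy as np
import matplotlib.pyplot as plt

# A minimal sketch, NOT the PYTHIA simulation behind figures 3-5.
# Assumed leading-order, parton-level decay-angle shapes in cos(theta):
#   spin 0 (higgs):                flat
#   spin 1 (Z', q qbar initiated): 1 + cos^2(theta)
#   spin 2 (graviton, q qbar):     1 - 3*cos^2(theta) + 4*cos^4(theta)
SHAPES = {
    "spin 0 (higgs)": lambda c: np.ones_like(c),
    "spin 1 (Z')": lambda c: 1.0 + c**2,
    "spin 2 (graviton)": lambda c: 1.0 - 3.0 * c**2 + 4.0 * c**4,
}

def sample_cos_theta(pdf, n=50_000, seed=42):
    """Draw n values of cos(theta) on [-1, 1] by rejection sampling."""
    rng = np.random.default_rng(seed)
    fmax = pdf(np.linspace(-1.0, 1.0, 2001)).max()  # envelope height
    samples = np.empty(0)
    while samples.size < n:
        c = rng.uniform(-1.0, 1.0, n)
        u = rng.uniform(0.0, fmax, n)
        samples = np.concatenate([samples, c[u < pdf(c)]])
    return samples[:n]

fig, axes = plt.subplots(3, 2, figsize=(9, 9))
for (label, pdf), (ax_angle, ax_cos) in zip(SHAPES.items(), axes):
    cos_t = sample_cos_theta(pdf)
    ax_angle.hist(np.degrees(np.arccos(cos_t)), bins=90)  # the angle itself
    ax_cos.hist(cos_t, bins=80)                           # cosine of the angle
    ax_angle.set_title(f"{label}: angle (deg)")
    ax_cos.set_title(f"{label}: cos(angle)")
plt.tight_layout()
plt.show()
```

With these assumed shapes, the spin-0 row comes out flat in the cosine (one broad hump at 90° in the angle), the spin-1 row shows two humps, and the spin-2 row shows three, qualitatively matching the PYTHIA-based figures below.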

 

Figure 3. The angular distribution (left) and cosine of the angular distribution (right) for the higgs (spin-0) boson, mH = 140 GeV/c². 50K* events generated using PYTHIA MSUB(3).

Figure 4. The angular distribution (left) and cosine of the angular distribution (right) for a Z' (spin-1) boson, mZ' = 140 GeV/c². 50K* events generated using PYTHIA MSUB(141).

Figure 5. The angular distribution (left) and cosine of the angular distribution (right) for a graviton (spin-2) boson, mG = 140 GeV/c². 40K* events generated using PYTHIA MSUB(391), i.e., an RS graviton.

Thanks to the Law of Conservation of Angular Momentum, the intrinsic angular momentum carried by the spin-0 (higgs), spin-1 (Z'), and spin-2 (graviton) bosons forces the decay quarks to emerge preferentially at some angles and almost never at others. Consequently, the angular distribution for the higgs boson (spin 0) gives one giant hump around 90°; the Z' boson (spin 1) gives two humps, near 60° and 120°; and the graviton (spin 2) gives three humps, near 30°, 90°, and 150°. Similarly for the cosine distribution: the spin-0 higgs boson has no defining peak; the spin-1 Z' boson has two peaks; and the spin-2 graviton has three peaks!
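If you want to see on paper where, say, the spin-1 humps come from, take the 1 + cos²θ shape from the sketch above (again, an assumed leading-order form) and remember that the distribution in θ itself picks up a factor of sinθ from the measure:

```latex
\frac{dN}{d\theta} \propto \left(1+\cos^{2}\theta\right)\sin\theta,
\qquad
\frac{d}{d\theta}\!\left[\left(1+\cos^{2}\theta\right)\sin\theta\right]
= \cos\theta\left(3\cos^{2}\theta-1\right) = 0 .
```

The maxima sit at cosθ = ±1/√3, i.e., θ ≈ 54.7° and 125.3°, with the dip at θ = 90° in between, in the same ballpark as the humps in figure 4; the exact positions depend on the details of the simulated production process.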

In other words, if it smells like a higgs, looks like a higgs, spins like a higgs, then my money is on the higgs.

A Few Words About The Plots

I have been asked by a reader if I could comment a bit on the shape and apparent symmetry in the angular distribution plots, both of which are extremely well understood. When writing the post, I admittedly glossed over these really important features because I was pressed to finish the post before traveling down to Chicago for a short summer school/conference, so I am really excited that I was asked about this.

At the Large Hadron Collider, we collide protons head-on. Since the protons are nicely aligned (thanks to the amazing people who actually operate the collider), we can consistently and uniformly label the direction in which the protons travel. In our case, let's have the proton that comes from the left be proton A and the proton that comes from the right be proton B. With this convention, proton A is traveling along what I call the "z-axis"; if proton A were to shoot vertically up toward the top of this page, it would be traveling along the "x-axis"; and if it were to travel out of the computer screen toward you, the reader, it would be traveling in the "y direction" (or along the "y-axis"). The angle measured from the z-axis toward the x-axis (or toward the y-axis) is called θ (pronounced: theta). You can take a look at figure 6 for a nice little cartoon of the coordinate system I just described.

Figure 6: A coordinate system in which proton A (pA) travels along the z-axis and proton B (pB) along the negative z direction. The angle θ is measured from the z-axis toward the x-axis, or equally, toward the y-axis.
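If code speaks to you more than cartoons, the angle θ in this convention is just the angle a particle's momentum vector makes with the z-axis. Here is a tiny, purely illustrative Python helper (the function name is mine, not from any analysis framework):

```python
import math

def polar_angle(px: float, py: float, pz: float) -> float:
    """Polar angle theta, in degrees, between a momentum vector and the z-axis."""
    p = math.sqrt(px**2 + py**2 + pz**2)
    return math.degrees(math.acos(pz / p))

# A particle traveling straight down the beamline (along +z) has theta = 0,
# while one traveling perpendicular to the beams has theta = 90:
print(polar_angle(0.0, 0.0, 100.0))  # 0.0
print(polar_angle(50.0, 0.0, 0.0))   # 90.0
```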

When the quarks (spin 1/2) inside a proton collide to become a higgs (spin 0), Z' (spin 1), or graviton (spin 2), angular momentum must always be conserved. The simplest way for a quark in proton A and a quark in proton B to make a higgs boson is for the quarks to spin in opposite directions, while still traveling along the z-axis, so that their spins cancel out, i.e., spin 1/2 – spin 1/2 = spin 0. This means that the higgs boson (spin 0) does not have any angular momentum constraints when decaying into two b-quarks, and thus the cosine of the angle between the two b-quarks should be roughly flat and uniform. This is a little hard to see in figure 3 (right) because, as my colleague pointed out, the resolution in my plots is too small. (Thanks, Zhen!)

Turning to the Z’ boson (spin 1) case, protons A & B can generate a spin 1 particle most easily when their quarks, again while traveling along the z-axis, are spinning in the same direction, i.e., spin 1/2 + spin 1/2 = spin 1. Consequentially, the spin 1 Z’ boson and its decay products, unlike the higgs boson (spin 0), are required to conserve 1 unit of angular momentum. This happens most prominently when the two b-quarks (1) push against each other in opposite directions or (2) travel in the same direction. Therefore, the cosine of the angle made by the b-quarks is dominantly -1 or +1. If we allow for quantum mechanical fluctuations, caused by Heisenberg’s Uncertainty Principle, then we should also expect b-quarks to sometimes decay with a cosine greater than -1 and less than +1. See figure 4 (right).

The spin-2 graviton can be explained similarly, but with a key difference. The graviton is special because, like the Z' boson (spin 1), it can have 1 unit of angular momentum, but unlike the Z' boson it can also have 2 units of angular momentum. To produce a graviton with 2 units of angular momentum, rarer processes that involve the W & Z bosons (red W & Z in figure 1) must occur. This allows the final-state b-quarks to emerge with a cosine of 0, which explains the slight enhancement in figure 5 (right).
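As a quick sanity check on this bookkeeping, here is a toy enumeration, my own illustration rather than a real calculation, of the spin projections along the z-axis that two colliding spin-1/2 quarks can supply:

```python
from itertools import product

# Each colliding quark contributes spin projection +1/2 or -1/2 along z.
quark_jz = (+0.5, -0.5)

total_jz = sorted({a + b for a, b in product(quark_jz, repeat=2)})
print(total_jz)  # [-1.0, 0.0, 1.0]

# Jz = 0 is all a spin-0 higgs needs; Jz = +/-1 can feed a spin-1 Z';
# but |Jz| = 2 for a graviton cannot come from the quark spins alone,
# which is why the rarer processes mentioned above are needed.
```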

It is worth noting that the reason I have been discussing the cosine of the angle between the quarks, and not the angle itself, is that the cosine is what we physicists actually calculate and measure. The cosine of an angle (or equally the sine) amplifies subtle differences between particle interactions and can at times be easier to calculate & measure.

The final thing I want to say about the angular distributions is probably the coolest thing ever, better than figuring out the spin of a particle. Back in the 1920s, when Quantum Mechanics was first proposed, people were unsure about a keystone of the theory, namely the simultaneous particle and wave nature of matter. We know bosons definitely behave like particles because they can collide and decay. That wavy/oscillatory behavior you see in the plots is exactly that: wavy/oscillatory behavior. No classical object will decay into particles with such a wavy, oscillatory distribution; no classical object has ever been found to do so, nor do we expect to find one, at least according to our laws of classical physics. This wave/particle/warticle behavior is a purely quantum physics effect and would be an indicator that Quantum Mechanics is correct at the energy scale being probed by the Large Hadron Collider. 🙂

 

Happy Colliding.

– richard (@bravelittlemuon)

PS I apologize if some things are a little unclear or confusing. I am traveling this weekend and have not had time to fully edit this post. If you have a question or want me to clarify something, please feel free to write a comment.

PPS If you are going to be at the PreSUSY Summer School in Chicago next week, feel free to say hi!

*A note on the plots: I simulated several tens of thousands of events for clarity. According to my calculations, it would take four centuries to generate 40,000 gravitons, assuming the parameters I chose. In reality, physicists can make the same determination with fewer than four years' worth of data.
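For the curious, the back-of-the-envelope behind such time estimates is just N = σ × ∫L dt (expected events = cross section × integrated luminosity). Here is a hedged sketch with placeholder numbers; the cross section and luminosity below are made up for illustration and are not the parameters I actually used:

```python
# Expected event count: N = sigma * integrated luminosity.
# Placeholder inputs, purely for illustration:
sigma_pb = 1.0          # hypothetical production cross section, in picobarns
lumi_per_year_fb = 5.0  # hypothetical integrated luminosity per year, in fb^-1

events_per_year = sigma_pb * (lumi_per_year_fb * 1000.0)  # 1 fb^-1 = 1000 pb^-1
years_needed = 40_000 / events_per_year
print(f"{events_per_year:.0f} events/year -> {years_needed:.1f} years for 40k events")
```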
