
Archive for July, 2011

The combined Tevatron results exclude the existence of a Higgs particle with a mass between 100-108 and 156-177 GeV/c². For the range 110-155 GeV/c², the experiments are now extremely close to the sensitivity needed (dotted line below 1) either to see a substantial excess of Higgs-like events or to rule out the existence of the particle. The small excess of Higgs-like events observed by the Tevatron experiments in the range from 120 to 155 GeV/c² (see solid curve) is not yet statistically significant.

Scientists of the CDF and DZero collaborations at Fermilab continue to increase the sensitivity of their Tevatron experiments to the Higgs particle and narrow the range in which the particle seems to be hiding. At the European Physical Society conference in Grenoble, Fermilab physicist Eric James reported today that together the CDF and DZero experiments now can exclude the existence of a Higgs particle in the 100-108 and the 156-177 GeV/c² mass ranges, expanding exclusion ranges that the two experiments had reported in March 2011.

Last Friday, the ATLAS and CMS experiments at the European center for particle physics, CERN, reported their first exclusion regions. The two experiments exclude a Higgs particle with a mass of about 150 to 450 GeV/c², confirming the Tevatron exclusion range and extending it to higher masses that are beyond the reach of the Tevatron. Even larger Higgs masses are excluded on theoretical grounds.

This leaves a narrow window for the Higgs particle, and the Tevatron experiments are on track to collect enough data by the end of September 2011 to close this window if the Higgs particle does not exist.

James reported that the Tevatron experiments are steadily becoming more sensitive to Higgs processes that the LHC experiments will not be able to measure for some time. In particular, the Tevatron experiments can look for the decay of a Higgs particle into a pair of bottom and anti-bottom quarks, which is the dominant, hard-to-detect decay mode of the Higgs particle. In contrast, the ATLAS and CMS experiments currently focus on the search for the decay of a Higgs particle into a pair of W bosons, which then decay into lighter particles.

This graph shows the improvement in the combined sensitivity of the CDF and DZero experiments to a Higgs signal over the last couple of years. When the sensitivity for a particular value of the Higgs mass, mH, drops below one, scientists expect the Tevatron experiments to be able to rule out a Higgs particle with that particular mass. By early 2012, the Tevatron experiments should be able to corroborate or rule out a Higgs particle with a mass between 100 and about 190 GeV/c².

The LHC experiments reported at the EPS conference an excess of Higgs-like events in the 120-150 GeV/c² mass region at about the 2-sigma level. The Tevatron experiments have seen a small, 1-sigma excess of Higgs-like events in this region for a couple of years. A 3-sigma level is considered evidence for a new result, but particle physicists prefer a 5-sigma level to claim a discovery. More data and better analyses are necessary to determine whether these excesses are due to a Higgs particle, some new phenomenon or random data fluctuations.

In early July, before the announcement of the latest Tevatron and LHC results, a global analysis of particle physics data by the GFitter group indicated that, in the simplest Higgs model, the Higgs particle should have a mass between approximately 115 and 137 GeV/c².

“To have confidence in having found the Higgs particle that theory predicts, you need to analyze the various ways it interacts with other particles,” said Giovanni Punzi, co-spokesperson of the CDF experiment. “If there really is a Higgs boson hiding in this region, you should be able to find its decay into a bottom-anti-bottom pair. Otherwise, the result could be a statistical fluctuation, or some different particle lurking in your data.”

The CDF and DZero experiments will continue to take data until the Tevatron shuts down at the end of September.

“The search for the Higgs particle in its bottom and anti-bottom quark decay mode really has been the strength of the Tevatron,” said Dmitri Denisov, DZero co-spokesperson.

“With the additional data and further improvements in our analysis tools, we expect to be sensitive to the Higgs particle for the entire mass range that has not yet been excluded. We should be able to exclude the Higgs particle or see first hints of its existence in early 2012.”

The details of the CDF and DZero analysis are described in this note, which will be posted later today, as well as submitted to the arXiv.

—Kurt Riesselmann


Stop right there, particle!

Tuesday, July 26th, 2011

Looking back over my previous posts, I noticed that I forgot to describe the calorimeter and muon systems before jumping straight to the trigger. The subject of today’s post will thus be the calorimeters and my next post will probably be about the muon system.

So what is a calorimeter? I vaguely remember that in high school chemistry, we performed a calorimetry experiment to measure the energy change in a chemical reaction by measuring the heat released (for those who enjoy their etymology, calorimeter derives from the Latin word calor, which means heat).

It is slightly different in particle physics, where the main function of the calorimeter detector subsystems is to measure the energy of produced particles. The materials and techniques vary; however, the basic principle of all calorimeter systems is the same: stop particles in the detector and measure how much energy they deposit through interactions with the detector material.

In my very first post, I mentioned that LHCb contains two calorimeters: the electromagnetic calorimeter is responsible for measuring the energy of electrons and photons, while the hadron calorimeter samples the energy of protons, neutrons and other particles containing quarks. The calorimeters provide the main way of identifying particles that carry no electric charge, such as photons and neutrons.

Both calorimeters have a sandwich-like structure, with alternating layers of metal and plastic plates. The metal plates stop particles, while the plastic plates measure the energy released. More technically, when particles hit the metal plates, they produce showers of secondary particles. These, in turn, excite molecules within the plastic plates, which emit ultraviolet light that is then guided to photomultiplier detectors. The amount of light produced is proportional to the energy of the particles entering the calorimeter.
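To make that proportionality concrete, here is a minimal toy sketch of how light turns back into an energy measurement. The light yield and calibration constant below are invented for illustration; this is not LHCb software, and real calibrations are detector-specific:

```python
# Toy sketch of a sampling-calorimeter readout (illustrative numbers only):
# scintillation light is proportional to the deposited energy, so a single
# calibration constant turns the observed light back into an energy estimate.
import random

LIGHT_YIELD = 250.0  # hypothetical photoelectrons collected per GeV deposited

def observed_light(true_energy_gev):
    """Simulate the photomultiplier signal for a particle of a given energy."""
    mean = LIGHT_YIELD * true_energy_gev
    # Photon counting fluctuates roughly like sqrt(N) (Poisson statistics).
    return random.gauss(mean, mean ** 0.5)

def reconstructed_energy(photoelectrons):
    """Invert the calibration: observed light -> energy estimate in GeV."""
    return photoelectrons / LIGHT_YIELD

for e_true in (10, 50, 100):
    e_reco = reconstructed_energy(observed_light(e_true))
    print(f"true {e_true:3d} GeV -> reconstructed {e_reco:6.1f} GeV")
```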

Above is a photo of the two calorimeters; the one labeled LHCb ECAL is unsurprisingly the electromagnetic one, while the hadronic one is behind it. It is a little hard to get a sense of scale from the photo, but the electromagnetic calorimeter wall is approximately 6.3 metres by 7.8 metres and 0.5 metres thick, while the hadronic calorimeter wall is around 8.4 metres by 6.8 metres and 1.7 metres thick.

I think that’s all I have to say about the LHCb calorimeters, except to leave you all with this random fact. The specific design of the electromagnetic calorimeter, with its alternating layers of scintillator and lead read out by plastic fibres running parallel to the plates, is called shashlik, which is also a type of shish kebab… mmm…


Last Friday, nearly 750 physicists attending the European Physical Society conference in Grenoble, France were treated to a pleasant surprise. The audience was eagerly awaiting the first major results on the search for the Higgs boson from ATLAS and CMS, the two main experiments at the Large Hadron Collider (LHC), but everyone was caught off guard. Of course, members of the ATLAS collaboration had already seen, and commented at length on, the ATLAS results. We knew we had a small excess of events in an analysis dedicated to the search for a Higgs boson decaying into two W bosons, but nothing particularly convincing. That is what we usually call a slight fluctuation.

Same scenario for the CMS collaboration. They too, independently, observe the same thing in the same type of decays. When these results were presented side by side in public for the first time, a certain thrill ran through the audience. Could we be seeing the very first manifestations of the Higgs boson, the particle we have been trying to corner for decades in order to finally solve the mystery of the origin of mass? Taken individually, these results are not very conclusive, but as soon as they show up in two independent detectors, things become seriously intriguing.

Both groups also observe other small excesses in another possible decay mode, namely when the Higgs boson decays into two Z bosons, which in turn give birth to a pair of muons or electrons. All these small anomalies correspond to potential Higgs mass values not excluded by the earlier searches carried out by other teams at the Tevatron, near Chicago.

All this may seem like very little, but it would be the capstone of all our research over the past decades. As my friend Christiane put it, it would make us not searchers but finders, a chance very few of us ever get in a career. The subject has dominated every discussion at this conference ever since.

But we all know it is far too early to get carried away by the excitement. We need more data before we can state with certainty anything we will not regret in the months to come! Since the LHC is working marvelously, we will soon have 40 to 50% more analyzed data. We will then see whether the signal holds up or disappears, in case it was simply a mundane statistical fluctuation.

The combined results from the CMS collaboration for all decay channels of a Higgs boson with a hypothetical mass between 110 and 600 GeV.

Combined results from the ATLAS collaboration, but for a reduced Higgs mass region between 110 and 200 GeV. Inside the red ellipses on the two figures above, the solid black lines should lie within the error band shown in yellow. That is what we should measure with 95% certainty in the absence of a Higgs. The fact that this line slips slightly out of the band in both cases could be the first signs of life of the Higgs boson. But only with more data in a few months, and a rigorous combination, will we be able to state it without any ambiguity.

Before long, we will have the combination of the two results, which will take all the correlations into account. Even though we try as much as possible to measure the backgrounds directly, in some cases we must also rely on simulations. Since these simulations rest on the same theoretical predictions for both experiments, a small common error could bias the results of both groups. All of this must be taken into account, and we will see what the combination of the two results reveals. That work is under way, but it is long and complex and may not be ready before the end of this conference. In that case, we will have to wait for the next conference, scheduled for the end of August in Mumbai, India, to learn more.

At this point, everyone agrees: we need more data and the full combination of all the results. But in the meantime, it is already intriguing enough to keep us looking in this direction. We may be watching the caravan appear on the horizon. Within just a few months, we will either see it clearly or know that we all had sand in our eyes…

But why so much interest in a single particle? Very simple. Despite everything physics knows about the particles that make up all the matter we see around us, such as the electrons and the quarks inside protons, we have no idea where these particles get their mass. The current theoretical model cannot explain the origin of mass. All the elementary particles appear in it massless, like the photon, even though we know that is not the case. The Higgs boson, if it exists, would provide a mechanism to explain where this mass comes from.

Pauline Gagnon

To be notified when new blog posts appear, follow me on Twitter: @GagnonPauline


Hello World

Monday, July 25th, 2011

I confess to dreading the day when they have wireless on airplanes. I’ve spent a lot of time traveling in the last year, and have come to value the time in the air as an opportunity for focused thinking. Taking away email and the perennial distraction of the internet makes it possible to write, to think, to really read papers, to develop coherent presentations. Yes, even in coach class.

So here is my first US LHC blog post, which I am writing from the plane from Dallas to Boston. I spent 10 days in Madison, WI for the CTEQ summer school and workshop. It was a great experience, and I’ll write more about it in a later post. From Madison I traveled directly to Dallas to meet up with my brother and sister-in-law at my parents’ house. I had a few work obligations to follow through on while I was there, but it was nice to mostly unplug for a while and spend time with family before I shift my home base to Europe for the indefinite future.

At the end of August, after four years of traveling back and forth (by choice), I’ll be moving from Boston to Geneva. I’ll still be employed by Harvard as a postdoc, but for the first time I’ll be spending most of my time at CERN. CERN is in Switzerland, right on the Swiss-French border. Like many CERN visitors, I’ll be living in France, staying at our group’s apartment just a little ways up the hill in Thoiry. When I get back to Boston, I’ll need to immediately start the process of acquiring visas for both countries. Of the logistical details I need to settle, that seems likely to be the most challenging.

It should be a great year to be at CERN. Data are pouring in, and many of the measurements forming the broad LHC physics program have at least been started. If the intriguing results shown at EPS turn out to be the first glimpses of new particles, it could be a very exciting year as those signals, and perhaps others, become unambiguous.


These last three weeks have been very intense and eventful on the home front. In the meanwhile, back at work, and besides the usual research work, I’ve been taking care of what is called “run monitoring.”

In a perfect world, once you’re done building a complex detector like IceCube your problems would be over, and you could take data without worrying about how the apparatus is doing 1.5 km below the South Pole surface. Unfortunately, that’s not the world we live in.

On a daily basis, we have to make sure that each of the 5,160 light sensors (aka DOMs, for Digital Optical Modules) deployed in the deep ice is working and doing fine, and for that we have a very nice monitoring system that can spot most problems automatically.

I’m showing below a screenshot of the monitoring system that shows the frequency with which each DOM saw light. Each little square is a DOM, blue indicating a lower rate of detection and yellow-ish a higher one. Black DOMs were not taking data at that moment. Depth increases towards the bottom of the image. An interesting feature of this image is the blue horizontal band of DOMs in the middle, which shows that DOMs at that depth record muon events systematically less often than the rest. This is not a detector issue, but a geological feature about 65,000 years old: it corresponds to a stadial (or cold period) during the last glacial period in the late Pleistocene, when a lot of dust seems to have been deposited on what was then the surface of the Antarctic ice. Weaker layers are seen in the upper part of the detector as well. Due to the high dust concentration, light can’t propagate far in the ice without being absorbed, causing the observed decrease in the DOM detection rate.

The detection rate for each DOM in a section of IceCube. Depth increases towards the bottom of the image, with DOMs beginning at 1.5 km deep at the top and ending at 2.5 km at the bottom.
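As a toy illustration of what such a rate map encodes (this is not the IceCube monitoring code; the grid size, rates, and dust-layer depths below are all invented), one can render DOM rates as a depth-versus-string grid with a suppressed band at the dusty depths:

```python
# Toy version of the DOM rate map: strings across, depth down, with a
# "dust layer" band where light is absorbed and the detection rate drops.
import numpy as np

rng = np.random.default_rng(0)
n_strings, n_doms = 20, 60                                  # invented grid size
rates = rng.normal(300.0, 20.0, size=(n_doms, n_strings))   # Hz, invented

dust = slice(30, 38)       # DOMs at hypothetical dust-layer depths
rates[dust, :] *= 0.5      # absorbed light -> systematically lower rate

# Coarse text "heatmap": '#' = high rate, '.' = low rate (the dust band).
for depth_row in rates[::6]:
    print("".join("#" if r > 250 else "." for r in depth_row[::2]))
```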


A data-taking period (or “run”) is usually 8 hours long, during which IceCube records about 2000 muons going through the detector per second.  It would be great if all these muons were associated with neutrinos of astrophysical origin, but most of them come from well-known cosmic rays hitting the Earth’s atmosphere.
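Those two numbers already tell you how much the detector has to digest in a single run; a quick back-of-the-envelope check using the figures just quoted:

```python
# Back-of-the-envelope: muons recorded in one 8-hour run at ~2000 muons/s.
muon_rate = 2000           # muons per second, mostly from cosmic-ray air showers
run_length = 8 * 3600      # run duration in seconds

print(f"{muon_rate * run_length:,} muons per run")  # 57,600,000
```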


Every second Mother Nature gives IceCube about 2000 more muons to chew on.

Most of the runs that I had to monitor were perfectly fine. With an uptime of 98%, IceCube (now running in its final configuration) is doing great, and I’m sure that we’ll have more interesting results as more data come in.


The following guest post is from Kostas Nikolopoulos, a postdoctoral researcher at Brookhaven National Laboratory. Nikolopoulos, who is analyzing data from the Large Hadron Collider at CERN, received his Ph.D. in experimental high-energy physics from the University of Athens in 2010.

Last Wednesday, I travelled three hours by train from Geneva, Switzerland to Grenoble, France to spend a week at the International Europhysics Conference on High Energy Physics. Here, I’m presenting some of the latest findings in the search for the Higgs boson at the Large Hadron Collider’s ATLAS detector, and joining the overarching conversation about the elusive particle.

I thought I might touch on the topic of a recent post by one of my fellow bloggers here at Quantum Diaries. As was mentioned, researchers at Fermilab have recently discovered the Ξ0b (read Xi-0-b), a “heavy relative of the neutron.” The significance of this discovery was found to be 6.8σ [1]! The CDF Collaboration has prepared a brief article regarding this discovery, which is being submitted to Physical Review Letters (a peer-reviewed journal). A pre-print has been made available on arXiv.

But rather than talk about what’s already been written, let’s discuss something new.  Namely, how on earth did physicists know to look for such a particle?

The answer to this question takes us back to the year 1961.  An American physicist (and now Nobel Laureate, 1969)  by the name of Murray Gell-Mann proposed a way to arrange baryons and mesons based on their properties.  In a sense, Gell-Mann did for particle physics what Dmitri Mendeleev did for chemistry.

Gell-Mann decided to use geometric shapes for arranging particles, placing each baryon or meson at a location on these shapes according to that particle’s properties. All his diagrams, however, were incomplete: there were spots on the shapes where a particle should have gone, but the location was empty. This was because Gell-Mann had a much smaller number of particles to work with; today more have been discovered, but we still have holes in the diagrams.

But to illustrate how Gell-Mann originally made these diagrams, I’ve shown an example using a triangle, which is part of a larger diagram that appeared in the previous post on this  subject.  I’ve also added three sets of colored lines to this diagram.

Let’s talk about the black set of lines first.  If you go along the direction indicated by each of these lines you’ll notice something interesting.  On the far right line (labeled Qe=+1, Up =2), there is only one particle along this direction, the Σ+b.  This baryon is composed of two up quarks, a beauty quark, and has an electric charge of +1.

Let’s go to the second black line (labeled Qe = 0, Up =1).  Here there are four particles (the Σ0b has yet to be discovered).  But all of these four particles have one up quark, and zero electric charge.

See the pattern?

But just to drive the point home, look at the orange lines.  Each line represents the number of strange quarks found in the particles along the line’s direction (0, 1 or 2 strange quarks!).  The blue lines do the same thing, only for the number of down quarks present in each particle. Also, for all the particles shown on this red triangle, each particle has one beauty quark present!

In fact, if you go back to the original post on the Ξ0b discovery, you’ll notice the diagram has three “levels.”  All the particles on the top level have two beauty quarks present.  Then the red triangle appears (that I’ve shown in detail above).  Then finally in the bottom level, all the particles have zero beauty quarks.

Also, if you spend some time, you can see the black, orange and blue lines I’ve drawn at right actually form planes in this 3D diagram.  And all the particles on one of these planes will have the properties of the plane (electric charge, quark content)!
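The bookkeeping in those paragraphs is easy enough to script. Here is a small sketch using the standard quark charges; the quark assignments for the Σ+b and Ξ0b come from the post, and the udb content for the still-missing Σ0b is implied by its row (one up quark, zero charge, one beauty quark):

```python
# Electric charge of a baryon = sum of its quarks' charges, in units of e.
QUARK_CHARGE = {"u": +2/3, "d": -1/3, "s": -1/3, "b": -1/3}

def baryon_charge(quarks):
    return sum(QUARK_CHARGE[q] for q in quarks)

# Quark content as given (or implied) in the post:
for name, content in [("Sigma+b", "uub"), ("Sigma0b", "udb"), ("Xi0b", "usb")]:
    print(f"{name} ({content}): charge = {baryon_charge(content):+.0f}")
# Sigma+b (uub): charge = +1
# Sigma0b (udb): charge = +0
# Xi0b (usb):    charge = +0
```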

So what’s the big deal about this anyway?

Well, when Gell-Mann first created the Eight-Fold-Way in the early 1960s, none of the shapes were “filled.”  But just like Dmitri Mendeleev, Gell-Mann  took this to mean that there were undiscovered particles that would go into the empty spots!!!!!

So this seemingly abstract ordering of particles onto geometric shapes (called the Eight-Fold-Way) gave Gell-Mann a way to theoretically predict the existence of new particles.  And just like Mendeleev’s periodic table, the Eight-Fold-Way went one step further, by immediately giving us knowledge on the properties these undiscovered particles would have!

If you’re not convinced, let’s come back to the experimental discovery of the Ξ0b, which is conveniently encompassed by the yellow star in the diagram above. This particle was experimentally discovered just a few weeks ago. But Murray Gell-Mann himself could have made the prediction that the Ξ0b existed decades earlier. Gell-Mann would have even been able to tell us that it would have zero electric charge and be made of u, s and b quarks!!!

In fact, Gell-Mann’s Eight-Fold-Way tells high energy physicists that there is still one particle left to be discovered before this red triangle may be completed.  So, to all my colleagues in HEP, happy Σ0b hunting!


But in summary, it was the Eight-Fold-Way that gave physicists the clue that the Ξ0b was lurking out there in the void, just waiting to be discovered.

Until Next Time,

-Brian


References

[1] T. Aaltonen (CDF Collaboration), “Observation of the Xi_b^0 Baryon,” arXiv:1107.4015v1 [hep-ex], http://arxiv.org/abs/1107.4015


The Birds and the Bs

Friday, July 22nd, 2011

Yesterday marked the beginning of the HEP summer conference season with EPS-HEP 2011, which is particularly exciting since the LHC now has enough luminosity (accumulated data) to start seeing hints of new physics. As Ken pointed out, the Tevatron’s new lower bound on the Bs → μμ decay rate seemed to be a harbinger of things to come. (Experts can check out the official paper, the CDF public page, and the excellent summaries by Tommaso Dorigo and Jester.)

Somewhat unfortunately, the first LHCb results on this process do not confirm the CDF excess, though they are not yet mutually exclusive. Instead of delving too much into this particular result, I’d like to give some background to motivate why it’s interesting to those of us looking for new physics. This requires a lesson in “the birds and the Bs”—of course, by this I mean B mesons and the so-called ‘penguin’ diagrams.

The Bs meson: why it’s special

It's a terrible pun, I know.

A Bs meson is a bound state of a bottom anti-quark and a strange quark; it’s sort of like a “molecule” of quarks. There are all sorts of mesons that one could imagine by sticking together different quarks and anti-quarks, but the Bs meson and its lighter cousin, the Bd meson, are particularly interesting characters in the spectrum of all possible mesons.

The reason is that both the Bs and the Bd are neutral particles, and it turns out that they mix quantum mechanically with their antiparticles, which we call the B̄s and B̄d. This mixing is the exact same kind of flavor phenomenon that we described when we mentioned “Neapolitan” neutrinos and is analogous to the mixing of chiralities in a massive fermion. Recall that properties like “bottom-ness” or “strangeness” are referred to as flavor. Going from a Bs to a B̄s changes the “number of bottom quarks” from -1 to +1 and the “number of strange quarks” from +1 to -1, so such effects are called flavor-changing.

To help clarify things, here’s an example diagram that encodes this quantum mixing:

The ui refers to any up-type quark.

Any neutral meson can mix—or “oscillate”—into its antiparticle, but the B mesons are special because of their lifetime. Recall that mesons are unstable and decay, so unlike neutrinos, we can’t just wait for a while to see if they oscillate into something interesting. Some mesons live for too long and their oscillation phenomena get ‘washed out’ before we get to observe them. Other mesons don’t live long enough and decay before they have a chance to oscillate at all. But B mesons—oh, wonderful Goldilocks B mesons—have a lifetime and an oscillation time of roughly the same magnitude. This means that by measuring their decays and relative decay rates we can learn about how these mesons mix, i.e. we can learn about the underlying flavor structure of the Standard Model.

Historical remark: The Bd meson is special for another reason: by a coincidence, we can produce them rather copiously. The reason is that the Bd meson mass just happens to be just under half of the mass of the Upsilon 4S particle, ϒ(4S), which just happens to decay into a Bd B̄d pair. Thus, by the power of resonances, we can collide electrons and positrons to produce lots of upsilons, which then decay into lots of B mesons. For the past decade flavor physics focused on these ‘B factories,’ mainly the BaBar detector at SLAC and Belle in Japan. BaBar has since been retired, while Belle is being upgraded to “Super Belle.” Meanwhile, the current torch-bearer for B-physics is LHCb.

The CDF and LHCb results: Bs → mu mu

It turns out that there are interesting flavor-changing effects even without considering meson mixing, but rather in the decay of the B meson itself. For example, we can modify the previous diagram to consider the decay of a Bs meson into a muon/anti-muon pair:

This is still a flavor-changing decay since the net strangeness (+1) and bottom-ness (-1) are not preserved; but note that lepton flavor is conserved since the muon/anti-muon pair has no net muon number. (As an exercise: try drawing the other diagrams that contribute; the trick is that you need W bosons to change flavor.) You could also replace the muons with electrons or taus, but those decays are much harder to detect experimentally. As a rule of thumb, muons are really nice final-state particles since they make it all the way through the detector and one has a decent shot at getting good momentum measurements.

It turns out that this decay is extremely rare. For the Bs meson, the Standard Model predicts a dimuon branching ratio of around 3 × 10⁻⁹, which means that a Bs will only decay into two muons 0.0000003% of the time… clearly, in order to accurately measure the actual rate one needs to produce a lot of B mesons.
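To get a feel for what “a lot” means, here is a quick back-of-the-envelope estimate using the branching ratio just quoted (it ignores detector acceptance and efficiency, which only push the required number higher):

```python
# How rare is Bs -> mu mu at the Standard Model rate, and how many Bs
# decays would we need before expecting to see even a handful of events?
br_sm = 3e-9  # Standard Model branching ratio quoted above

print(f"fraction of Bs decays: {br_sm:.7%}")  # 0.0000003%
for n_expected in (1, 10):
    print(f"need ~{n_expected / br_sm:.1e} Bs decays "
          f"for {n_expected} expected event(s)")
```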

In fact, until recently, we simply did not have enough observed B meson decays to even estimate the true dimuon decay rate. The ‘B factories’ of the past decade were only able to put upper limits on this rate. Indeed, this decay is one of the main motivations for LHCb, which was designed to be the first experiment sensitive enough to probe the Standard Model decay rate. (This means that if the decay rate is at least at the Standard Model rate, then LHCb will see it.)

The exciting news from CDF last week was that—for the first time—they appeared to have been able to set a lower bound on the dimuon decay rate of the Bs meson. (The Bd meson has a smaller decay rate and CDF was unable to set a lower bound.) The lower bound is still statistically consistent with the Standard Model rate, but the suggested (‘central value’) rate was 1.8 × 10⁻⁸. If this is true, then it would be a fairly strong signal for new physics beyond the Standard Model. The 90% confidence level range from CDF is:

4.6 × 10⁻⁹ < BR(Bs → μ⁺μ⁻) < 3.9 × 10⁻⁸.

Unfortunately, today’s new result from LHCb didn’t detect an excess with which it could set a lower bound and could only set a 90% confidence upper bound,

BR(Bs → μ⁺μ⁻) < 1.3 × 10⁻⁸.

This goes down to 1.2 × 10⁻⁸ when including 2010 data. The bounds are not yet at odds with one another, but many people were hoping that LHCb would have been able to confirm the CDF excess in dimuon events. The analyses of the two experiments seem to be fairly similar, so there isn’t too much wiggle room to think that the different results just come from having different experiments.
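A quick numerical check of that statement, using the bounds as quoted above, shows the window where the two results still agree:

```python
# Are the CDF and LHCb 90% CL results still compatible? Check whether
# the CDF interval and the region below the LHCb upper bound overlap.
cdf_low, cdf_high = 4.6e-9, 3.9e-8  # CDF 90% CL interval, as quoted above
lhcb_upper = 1.3e-8                 # LHCb 90% CL upper bound

high = min(cdf_high, lhcb_upper)
if cdf_low < high:
    print(f"still compatible: both allow BR in [{cdf_low:.1e}, {high:.1e}]")
else:
    print("the two results are mutually exclusive")
```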

More data will clarify the situation; LHCb should accumulate enough data to probe branching ratios down to the Standard Model prediction of 3 × 10⁻⁹. Unfortunately CDF will not be able to reach that sensitivity.

New physics in loops

Now that we’re up to date with the experimental status of Bs → μμ, let’s figure out why it’s so interesting from a theoretical point of view. One thing you might have noticed from the “box” Feynman diagrams above is that they involve a closed loop. An interesting thing about closed loops in Feynman diagrams is that they can probe physics at much higher energies than one would naively expect.

The reason for this is that the particles running in the loop do not have their momenta fixed in terms of the momenta of the external particles. You can see this for yourself by assigning momenta (call them p1, p2, …, etc.) to each particle line and (following the usual Feynman rules) imposing momentum conservation at each vertex. You’ll find that there is an unconstrained momentum that goes around the loop. Because this momentum is unspecified, the laws of quantum physics say that one must add together the contributions from all possible momenta. Thus it turns out that even though the Bs meson mass is around 5 GeV, the dimuon decay is sensitive to particles that are a hundred times heavier.

Note that unlike other processes where we study new physics by directly producing it and watching it decay, in low-energy loop diagrams one only intuits the presence of new particles through their virtual effects (quantum interference). I’ll leave the details for another time, but here are a few facts that you can assume for now:

  1. Loop diagrams can be sensitive to new heavy particles through quantum interference.
  2. Processes which only occur through loop diagrams are often suppressed. (This is partly why the Standard Model branching ratio for Bs → μμ is so small.)
  3. In the Standard Model, all flavor-changing neutral currents (FCNC)—i.e. all flavor-changing processes whose intermediate states carry no net electric charge—only occur at loop level. (Recall that the electrically-charged W bosons can change flavor, but the electrically neutral Z bosons cannot. Similarly, note that there is no way to draw a Bs → μμ diagram in the Standard Model without including a loop.)
  4. Thus, processes with a flavor-changing neutral current (such as Bs → μμ) are fruitful places to look for new physics effects that only show up at loop level. If there were a non-loop level (“tree level”) contribution from the Standard Model, then the loop-induced new physics effects would tend to be drowned out because they are only small corrections to the tree-level result. However, since there are no FCNCs in the Standard Model, the new physics contributions have a ‘fighting chance’ at having a big effect relative to the Standard Model result.
  5. Semi-technical remark, for experts: indeed, for Bs → μμ the Standard Model diagrams are additionally suppressed by a GIM suppression (as is the case for FCNCs) as well as helicity suppression (the B meson is a pseudoscalar, so the final states require a muon mass insertion).

So the punchline is that Bs → μμ is a really fertile place to hope to see some deviation from the Standard Model branching ratio due to new physics.

Introducing the Penguin

I would be remiss if I didn’t mention the “penguin diagram” and its role in physics. You can learn about the penguin’s silly etymology in its Wikipedia article; suffice it for me to ‘wow’ you with a picture of an autographed paper from one of the penguin’s progenitors:

A copy of the original "penguin" paper, autographed by John Ellis.

The main idea is that penguin diagrams are flavor-changing loops that involve two fermions and a neutral gauge boson. For example, the b→s penguin takes the form (no, it doesn’t look much like a penguin)

You should have guessed that in the Standard Model, the wiggly line on top has to be a W boson in order for the fermion line to change flavors. The photon could also be a Z boson, a gluon, or even a Higgs boson. If we allow the boson to decay into a pair of muons, we obtain a diagram that contributes to Bs → μμ.

Some intuition for why the penguin takes this particular form: as mentioned above, any flavor-changing neutral transition in the Standard Model requires a loop. So we start by drawing a diagram with a W loop. This is fine, but because the b quark is so much heavier than the s quark, the diagram does not conserve energy. We need to have a third particle which carries away the difference in energy between the b and the s, so we allow the loop to emit a gauge boson. And thus we have the diagram above.

Thus, in addition to the box diagrams above, there are penguin diagrams which contribute to Bs → μμ. As a nice ‘homework’ exercise, you can try drawing all of the penguins that contribute to this process in the Standard Model. (Most of the work is relabeling diagrams for different internal states.)

[Remark, 6/23: my colleague Monika points out that it’s ironic that I drew the b, s, photon penguin since this penguin doesn’t actually contribute to the dimuon decay! (For experts: the reason is the Ward identity.) ]

Supersymmetry and the Bs → mu mu penguin

Finally, I’d like to give an example of a new physics scenario where we would expect that penguins containing new particles give a large contribution to the Bs → μμ branching ratio. It turns out that this happens quite often in models of supersymmetry or, more generally, ‘two Higgs doublet models.’

If neither of those words mean anything to you, then all you have to know is that these models have not just one, but two independent Higgs particles which obtain separate vacuum expectation values (vevs). The punchline is that there is a free parameter in such theories called tan β which measures the ratio of the two vevs, and that for large values of tan β, the Bs → μμ branching ratio goes like (tan β)⁶ … which can be quite large and can dwarf the Standard Model contribution.


Added 6/23, because I couldn't help it: a supersymmetric penguin. Corny image from one of my talks.


[What follows is mostly for ‘experts,’ my apologies.]

On a slightly more technical note, it’s not often well explained why this branching ratio goes like the sixth power of tan β, so I did want to point this out for anyone who was curious. There are three sources of tan β in the amplitude; these all appear in the neutral Higgs diagram:

Each blue dot is a factor of tan β. The Yukawa coupling at each Higgs vertex goes like the fermion mass divided by the Higgs vev. For the down-type quarks and leptons, this gives a factor of m/v ~ 1/cos β ~ tan β for large tan β. An additional factor of tan β comes from the mixing between the s and b quarks, which also goes like the Yukawa coupling. (This is the blue dot on the s quark leg.) Hence one has three powers of tan β in the amplitude, and thus six powers of tan β in the branching ratio.
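Schematically, with v the Higgs vev and M_A the mass of the exchanged heavy Higgs (my notation, keeping only the parametric dependence):

A(Bs → μ⁺μ⁻) ∝ (m_b tan β / v) × tan β × (m_μ tan β / v) / M_A² ⇒ BR ∝ |A|² ∝ tan⁶β.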

Outlook

While the LHCb result was somewhat sobering, we can still cross our fingers and hope that there is still an excess to be discovered in the near future. The LHC shuts down for repairs at the end of next year; this should provide ample data for LHCb to probe all the way down to the Standard Model expectation value for this process. Meanwhile, it seems that while I’ve been writing this post there have been intriguing hints of a Higgs (also via our editor)… [edit, 6/23: Aidan put up an excellent intro to these results]

[Many thanks to the experimentalists with whom I’ve had useful discussions about this.]


Life at the limit

Friday, July 22nd, 2011

At the moment anyone who has even a passing interest in particle physics is thinking about the results being presented at the European Physical Society High Energy Physics 2011 conference (referred to as simply “EPS”). It is at EPS that we hear news about the search for the Higgs boson, and the news is tantalizing! The talks are all publicly available, and to understand them fully you need to know a little bit about how limits work.

What is a limit?

A limit is an upper or lower bound for a physical quantity, and we place limits when we don’t have enough information to estimate the value accurately or precisely. When we say something like “The lower limit for the mass of the Higgs boson is 114GeV” what we mean is that given the data we have had access to we can be confident that the mass of the Higgs boson is at least 114GeV.

One of the most interesting parts of the search for the Higgs is that we have several limits at the moment, so one experiment might say “The mass is at least 114GeV” and another experiment will say “It’s not between 155 and 190GeV”, and experts in electroweak physics will say “Don’t bother to look above 300GeV”. Each mass region requires a slightly different search, and that’s why some limits appear before others.

Confidence problems

Like anyone else, physicists have issues with “confidence”. To a physicist, “confidence” means the extent to which they trust a measurement, so it’s an important concept to get right! Our data are statistically limited, so we can never be 100% certain in any of our measurements. What we usually do is say something like “We’re 95% certain that the Higgs mass is not in the region 157-174GeV”. To understand what that really means you need to think backwards: the probability that we would get this data, given a Higgs mass in the region 157-174GeV, is 5%, or 1 in 20.

You can probably see why this gives us confidence problems… if we have 20 data points that each show a measurement with 95% confidence then we expect, on average, 1 of them to be incorrect. As we look across one of our plots we can see lots of data points (generally, every time there’s a kink in the plot there’s another data point). How do we know when we’ve got it right? The answer is that we don’t know, and the fluctuations can take the distributions up and down. At first this seems like a minor irritation, but it has serious implications.
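The bookkeeping behind that worry fits in a couple of lines, using the 1-in-20 figure from above:

```python
# With 20 independent mass points, each quoted at 95% confidence,
# how often does chance alone fool us?
n_points, p_wrong = 20, 0.05

print(f"expected wrong statements: {n_points * p_wrong:.0f}")         # 1
print(f"P(at least one wrong): {1 - (1 - p_wrong) ** n_points:.0%}")  # 64%
```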

Your typical limit plot

A lot of the talks at EPS contain plots like this:

ATLAS limit on Higgs mass (K. Cranmer, on behalf of ATLAS. EPS HEP 2011)

They look pretty, but they don’t look simple. The green and yellow bands show us the expected confidence bounds for some number, and that’s what we should look at first to get a feeling for what the plot is telling us. The line at the center of these bands shows us the expected limit. The “Observed” line shows us what we actually see in the experiment. If the “Observed” line stays within the bands then our expectations are about right.

The y-axis shows the production cross section of the Higgs boson, multiplied by the branching fraction to the final state, and some other factors. These numbers all vary with the mass of the Higgs boson, which is one of the reasons why the graphs look so wiggly. The exciting part is the horizontal line at 1. This is the line where we would expect to see the Higgs boson being produced. If the “Observed” line dips below the line at 1 then we can conclude that the Higgs boson probably does not exist at that mass, because our limit is already at 1 times the Standard Model. As the upper yellow band passes under the line at 1 we can be almost certain that the Higgs doesn’t exist there. (Remember the definition of the confidence: “At this mass point, we’re 95% sure that the Higgs production cross section is less than what the Standard Model predicts.”)
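In code, the reading rule at each mass point boils down to a comparison with that line at 1 (a sketch; the mass points and limit values below are hypothetical, and in practice they come off the plot itself):

```python
# Reading a limit plot: at each mass point, the observed 95% CL upper
# limit on the cross section is quoted in units of the SM prediction.
def interpret(mass_gev, observed_limit_over_sm):
    if observed_limit_over_sm < 1.0:
        return f"{mass_gev} GeV: excluded (limit already below 1x SM)"
    return f"{mass_gev} GeV: not excluded (a SM Higgs is still allowed)"

for mass, limit in [(160, 0.4), (130, 2.5)]:  # hypothetical points
    print(interpret(mass, limit))
```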

What the plots tell us

Exciting things start to happen when the limits change! As we gather more data the limits improve and we exclude more mass points. On the plot, we would see the green and yellow bands move down. If the Higgs doesn’t exist in a particular mass region then the “Observed” line would move down as well. But, if we see the bands move down and the “Observed” line get left behind then that’s a hint that the Higgs boson mass is in that region!

This is cause for major excitement for some physicists and skepticism for others. Remember the confidence problem of fluctuations and you can see that this kind of fluctuation would happen very often. When does a “fluctuation” turn into “evidence”? It’s a topic that’s not very well defined, but we’ve chosen to say three standard deviations (imagine a third colored band on the plot) is a good indication of evidence, and five standard deviations (a veritable rainbow of confidence!) is proof of new physics. When we see a fluctuation the answer is to add more data and see if it remains. If it stays there while the bands move down around it then there’s probably a particle there.
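For reference, here is how those thresholds translate into probabilities, using the one-sided Gaussian tail convention that is standard in particle physics (this needs scipy installed):

```python
# Translate "n sigma" into the chance that background alone fluctuates
# up at least that far (one-sided Gaussian tail).
from scipy.stats import norm

for n_sigma in (1, 2, 3, 5):
    print(f"{n_sigma} sigma -> p-value = {norm.sf(n_sigma):.2e}")
# 3 sigma -> 1.35e-03 ("evidence"); 5 sigma -> 2.87e-07 ("discovery")
```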

LEP? What is LEP?

In these talks you’ll often see “LEP” on the plots. LEP was an electron-positron collider that operated at CERN before the era of the LHC; the LHC now occupies the tunnel where LEP was. Its four high-precision experiments (ALEPH, DELPHI, L3 and OPAL) searched for the Higgs boson directly via a process known as Higgsstrahlung. They excluded the Higgs boson mass up to 114GeV, and their contribution to the hunt, although it looks small on these plots, is certainly substantial. (In fact, you can see how the limits at the LHC experiments get poorer and poorer at lower masses. The LEP experiments were better suited to these regions by design, and by design the LHC experiments are better at higher mass regions.)

What about electroweak?

The Higgs boson is just one of many particles, and since all the other particles interact with each other we expect the Higgs boson to interact as well. If it does it will affect all kinds of processes, most notably the electroweak processes, which are finely constrained. This allows us to identify the most likely place where the Higgs boson would be found, so if you see anything about electroweak fits and exclusions it’s usually the result of fitting a huge number of electroweak parameters in an attempt to limit the mass of the Higgs boson.

Find your own limit!

Now that you have all you need to read these plots you can go check out the limits yourself! Here are the talks from the main experiments:

ATLAS
CDF
CMS
D0

Electroweak fit

On an unrelated note, Happy Pi Approximation Day!

Errata

This post initially incorrectly credited the plot to A. Baroncelli. This plot was presented at EPS by Kyle Cranmer. Apologies to Kyle!


Science and Simplicity

Friday, July 22nd, 2011

– By Byron Jennings, Theorist and Project Coordinator

One of the leading naturalists of the 19th century was Philip Henry Gosse (6 April 1810 – 23 August 1888). He spent time in Newfoundland and Ontario, where he cataloged insect species, among other things. On his return to England he published Canadian Naturalist (1840) and became one of the leading popularizers of natural science of his day. He invented, or at least made practical, the marine aquarium with his 1854 book, The Aquarium, which initiated a craze for aquariums. He was elected a Fellow of the Royal Society in 1856. He even communicated with Darwin over the study of orchids. Altogether, a first-rate naturalist.

He was also an extremely devout Christian; a member of the Plymouth Brethren. It bothered him that the geological record implied an age of the earth much older than the age given in the Bible. Gosse’s solution was given in a book called Omphalos: an Attempt to Untie the Geological Knot (published in 1857, two years before Darwin’s famous book). The title, Omphalos, is the Greek word for belly button; as in, did Adam have a belly button? The book itself is rather tedious with page after page of examples, but the first part is well worth a read.

OK, anyone who has read my previous blogs knows what’s coming next. That’s right, the Duhem-Quine Thesis: the idea that one can always avoid falsification by adjusting an auxiliary hypothesis. In this case, it’s a doozie. Gosse suggested God created the universe only six thousand years ago but in such a manner that it is indistinguishable from an old one: light created in transit from stars, fossils created in rocks, etc. Gosse made two distinct points:

  1. Any act of special creation implies a false history. A created chicken implies an egg that did not exist, or vice versa. A created tree would have rings not due to growth. Dinosaur fossils, however, do seem a bit extreme.
  2. The universe could be any age, and it is only an outside reference (e.g. the Bible) that allows one to determine the actual age.

Needless to say the book was not a hit, denounced by Christians (God is a deceiver? Bah humbug) and ignored by scientists (Am I studying an illusion? Bah humbug). Even a name change did not save it, and in 1869 the remaining copies were sold for scrap. However, it has significant epistemological implications that have led to some modern parodies like Last Thursdayism: the idea that the earth was created last Thursday but only appears older.

OK, one may be able to argue about Omphalosism, but Last Thursdayism is clearly absurd. But why? What postulate of the scientific method does it violate? It makes all the same predictions, by construction, as the standard models. Thus, it cannot be eliminated by appeal to observation, the touchstone of the scientific method. I would suggest the only criterion is simplicity. It is simplicity that eliminates Omphalosism. Extra complexity has been added to the model with no gain in the ability to make predictions.

Simplicity is like air; it is so ubiquitous that one tends to forget it is there. But it is there; from William of Ockham (c. 1287 – 1347) (Occam’s razor: Entities should not be multiplied unnecessarily), to Isaac Newton (We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances), to Steven Weinberg (You may use any degrees of freedom you like to describe a physical system, but if you use the wrong ones, you’ll be sorry![1]).

In my last blog, I introduced the idea of conventionalism—that what is frequently regarded as truth is but a convention. Weinberg’s degrees of freedom are an example. But how is the convention chosen? I would suggest simplicity plays a domineering role. We use an earth-based frame or a sun-based frame due to simplicity or ease of use. We use a nuclear potential that is weak at short distances, like Vlow k, rather than the traditional potentials with strong short-range repulsion, due to simplicity and ease of use. Only this, and nothing more (to quote The Raven).

Omphalos and Last Thursdayism (some heretics believe in Last Tuesdayism, but we will excommunicate them) make the emphatic point that simplicity is necessary, absolutely necessary. Otherwise one can multiply hypotheses without limit and get bogged down in futile arguments (Last Thursdayism vs Last Tuesdayism, Vlow k vs the traditional potentials). We draw smooth curves through data points rather than wiggly ones, again due to simplicity. Simplicity all the time and everywhere. Simplicity rules!

While simplification is crucial, in the end it can lead us astray. Newtonian dynamics was replaced by more complex models (relativity and quantum mechanics), fixed continents were replaced by the more complex idea of continental drift, animals reproducing after their kind was replaced by the complexities of evolution, and on it goes, with simple paradigms being replaced by more complex ones. Mr. Kuhn[2], meet Mr. Murphy[3]; Mr. Murphy, meet Mr. Kuhn.



[1] From: “Asymptotic Realms of Physics” (ed. by Guth, Huang, Jaffe, MIT Press, 1983)

[2] Thomas Kuhn introduced the idea of paradigm change.

[3] Everyone has run afoul of Murphy’s Law.
