Archive for March, 2012

Finding the Higgs boson will have no epistemic value whatsoever.  A provocative statement. However, if you believe that science is defined by falsification, it is a true one.  Can it really be true, or is the flaw in the idea of falsification?  Should we thumb our noses at Karl Popper (1902 – 1994), the philosopher who introduced the idea of falsification?

The Higgs boson, the last remaining piece of the standard model, is the object of an enormous search involving scientists from around the world.  The ATLAS collaboration alone has 3000 participants from 174 institutions in 38 different countries. Can only the failure of this search be significant? Should we send out condolence letters if the Higgs boson is found? Were the Nobel prizes for the W and Z bosons a mistake?

Imre Lakatos (1922 – 1974), a neo-falsificationist and follower of Popper, states it very cleanly and emphatically:

But, as many skeptics pointed out, rival theories are always indefinitely many and therefore the proving power of experiment vanishes. One cannot learn from experience about the truth of any scientific theory, only at best about its falsehood: confirming instances have no epistemic value whatsoever (emphasis in the original).

Yipes! What is going on? Can this actually be true? No! To see the flaw in Lakatos’s argument, let’s consider an avian metaphor—this time Cygnus, not Corvus. Consider the statement: All swans are white. (Here we go again.) Before 1492, Europeans would have considered this a valid statement. All the swans they had seen were white. Then Europeans started exploring North America. Again, the swans were white. Then they went on to South America and found swans with black necks (Cygnus melancoryphus), and finally to Australia, where the swans are black (Cygnus atratus). By the standards of the falsificationist, nothing was learned when white swans were found, only when the black or partially black swans were found. With all due respect, or lack of same, that is nonsense. It is the same old problem: ask a stupid question, get a stupid answer. Did we learn anything when white swans were found in North America? Yes. We learned that there were swans in North America and that they were white. Based on having white swans in Europe, we could not deduce the colour of swans in North America, or even that they existed. In Australia, we learned that swans existed there and were black. Thus, we learned a similar amount of information in both cases—nothing more, nothing less. The useful question is not, ‘Are all swans white?’ Rather, ‘On which continents do swans exist, and what colour are they on each continent?’

Moving on from birds to model cars (after all, the standard model of particle physics is a model). What can we learn about a model car? Certainly not whether it is correct. Models are never an exact reproduction of reality. But we can ask, ‘Which part of the car is correctly described by the model? Is it the colour? Is it the shape of the headlights or bumper?’ The same type of question applies to models in science. The question is not, ‘Is the standard model of particle physics correct?’ We knew from its inception that it is not the answer to the ultimate question about life, the universe and everything. The answer to that is 42 (Deep Thought, from The Hitchhiker’s Guide to the Galaxy by Douglas Adams). We also know that the standard model is incomplete because it does not include gravity. Thus, the question never was, ‘Is this model correct?’ Rather, ‘What range of phenomena does it usefully describe?’ It has a long history of successful predictions and collates a lot of data. So, like the model car, it captures some aspect of reality, but not all.

Finding the Higgs boson helps define what part of reality the standard model describes. It tells us that the standard model still describes reality at the energy scale corresponding to the mass of the Higgs boson. But it also tells us more: it tells us that the mechanism for electroweak symmetry breaking, a fundamental part of the model, is adequately described by the mechanism that Peter Higgs (and others) proposed, and not by some more complex and exotic mechanism.

The quote from Lakatos, given above, misses a very important aspect of science: parsimony. The ambiguity noted there is eliminated by the appeal to simplicity. The standard model of particle physics describes a wide range of experimental observations. Philosophers call this phenomenological adequacy. But a lot of other models are phenomenologically adequate. The literature is filled with extensions to the standard model that agree with the standard model where the standard model has been experimentally tested. They disagree elsewhere, usually at higher energy. Why do we prefer the standard model to these pretenders? Simplicity, and only simplicity. And the standard model will reign supreme until one of the more complicated pretenders is demonstrated to be more phenomenologically adequate. In the meantime, I will be a heretic and proclaim that finding the Higgs boson would indeed confirm the standard model. Popper, Lakatos, and the falsificationists be damned.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.


So as many are finding out today, the future of High Energy Physics (HEP) in the US has been further blurred by the announcement from the Office of Science Director that, regarding the Long Baseline Neutrino Experiment (LBNE),

“we cannot support the LBNE project as it is currently configured…(this decision) is a recognition that the peak cost of the project cannot be accommodated in the current budget climate or that projected for the next decade”

This is pulled from a letter from Office of Science Director Bill Brinkman (found here).

While I can’t say this is a particularly surprising result given tight budgets, a tough political climate, and the projected $1.5 billion price tag of LBNE…it is very disheartening when I read “for the next decade.”

Science Insider (article linked here) does a nice job of explaining the

“latest twist in the long saga to build an underground lab in Homestake”

I think it is fair to say that this decision, the recent shutdown of the Tevatron, and the rough budget forecast make the stakes very high for Fermilab and the future science at this lab.

Next week is a Director’s review of the LBNE project (http://lbne.fnal.gov/reviews/CD1-review-top.shtml) here at Fermilab, where I am sure much discussion and planning will get thrashed out in the coming days.


Turtles all the way down?

Tuesday, March 20th, 2012

I recently got an interesting e-mail about the Big Bang. The writer said she didn’t see how you could make something out of nothing. She collects creation myths and thought that, no matter how you sliced it, it’s always “turtles all the way down.” This is a reference to creation myths where the world is poised on top of a turtle, which is itself poised on top of something else, but raises the issue: Is there any firm ground?

This is worth addressing because it illustrates the gulf between the understandings in people’s minds about the Big Bang on one hand, and how physicists deal with it on the other. To be clear, we have a wealth of observations that support the Big Bang, but you have to be careful. We can only look back into the universe to a moment 300,000 years after the ‘start,’ as best we can discern it. At this early moment, the universe went from being opaque to transparent. Before this moment, ionized gas kept light from traveling any distance, but once protons and electrons cooled enough to form neutral hydrogen, light (photons) could travel long distances. The remnant photons from this time are seen as the so-called cosmic microwave background radiation. These photons were first observed by Arno Penzias and Robert Wilson in the 1960s and continue to be a rich source of information about the early universe.

What do we see? We see galaxies moving away from each other. The further away we look, the faster they appear to recede. Einstein’s gravity has a number of solutions for possible universe structures. One of these solutions describes the expanding universe very well, and, if taken at face value, would extrapolate back in time to an initial state when all matter in the universe existed as a single point of infinite density. But, does a point of infinite density make sense? The author of the e-mail question thinks not, that it’s like pulling a rabbit out of the hat. You can’t make something from nothing, and this apparent absurdity invalidates the Big Bang model.
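To see how far this extrapolation reasoning goes with just arithmetic, here is a minimal numerical sketch of Hubble’s law (assuming a round expansion rate of 70 km/s/Mpc, a number not quoted in the post):

```python
# Naive Hubble-law extrapolation (sketch; H0 = 70 km/s/Mpc is assumed).
H0 = 70.0                  # expansion rate, km/s per megaparsec
KM_PER_MPC = 3.086e19      # kilometers in one megaparsec
SEC_PER_GYR = 3.156e16     # seconds in a billion years

def recession_velocity(d_mpc: float) -> float:
    """Hubble's law: recession velocity (km/s) at distance d_mpc (Mpc)."""
    return H0 * d_mpc

print(f"A galaxy at 100 Mpc recedes at ~{recession_velocity(100):,.0f} km/s")

# Run the expansion backwards: every galaxy reaches 'distance zero' after
# t = d / v = 1/H0, independent of d -- the naive 'initial moment'.
t_hubble_gyr = KM_PER_MPC / H0 / SEC_PER_GYR
print(f"1/H0 ~ {t_hubble_gyr:.0f} billion years")
```

The fact that the naive answer, t = 1/H0, comes out the same for every galaxy is exactly what makes the single initial-state extrapolation so tempting.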

The main issue is that, although our observations are very consistent with this model of a Big Bang universe, we cannot actually see the initial moment. It’s hidden from view. We strongly suspect that the laws of physics might change dramatically when distance scales and energy densities approach the conditions very close to the initial moment. We know that when the classical laws of physics are combined with quantum mechanics, new phenomena emerge. This was the case with our theory of electromagnetism: when quantum mechanics was incorporated, the phenomenon of anti-matter became apparent. We have yet to find a satisfactory theory of gravity that incorporates quantum mechanics. The manifestations of quantum mechanics in gravity will only emerge at extremely high energy densities, such as those in the very early universe, near the time of this infinite density, and will likely modify our current models. For all we know, space-time might resemble some Escher print, eluding the concept of an infinite-density starting point through a twisted configuration that folds in on itself.

Rather than dealing with a concept that seems almost theological in nature, physicists try to reconcile models against data. We fully realize that our models will extrapolate to conditions that raise difficult issues, like infinite densities. More often than not, these difficult conditions are something we avoid talking about because, largely, we cannot really test or measure them. If it is inaccessible, it is inaccessible. Our work is perhaps better likened to that of explorers. Our job is to map new territories, and, if anything, we can only report on territories we’ve explored. What lies beyond the horizon is a matter of speculation.
Responses? Questions? Contact me on Twitter @JohnHuth1


Dissecting the Penguin

Monday, March 19th, 2012

No animals were harmed in the writing of this blog post.

One of the amusing tales in particle physics is the story of how the “penguin diagram” got its name. We won’t go into that here; instead, we’ll make use of some of the tools we’ve developed with Feynman diagrams to understand the physics behind these ‘penguin’ diagrams. In doing so, we’ll have a nice playground to really make use of what we’ve learned so far about Feynman rules. (Feel free to review the series if you need a refresher!)

Caveat: we’ll try to squeeze as much physics as we can out of our diagrams, and occasionally I’ll lapse into some more technical details. Feel free to skip these if you just want the ‘big picture’; the main idea is independent of the details.

Drawing the penguin diagram

In a nutshell, a penguin (in the Feynman diagram sense) is a process where one flavor of matter particle changes into another flavor while emitting a photon or gluon: for example, a bottom (b) quark converting into a strange (s) quark and a photon. These processes are rare in the Standard Model and are interesting because they can get large enhancements from new physics.

So let’s jump right in. If I told you that I wanted to study a process where a bottom quark turns into a photon and a strange quark, the simplest diagram to draw would be something like this:

However, this is wrong. The problem is that this is not an allowed Feynman rule in the Standard Model—the photon does not connect particles of different flavors. It can talk to a bottom and anti-bottom, or a strange and an anti-strange, but not to a bottom and an anti-strange. (Recall that an anti-strange quark going into the vertex is the same as a strange quark coming out.)

In fact, we know that the only source of flavor-changing in the Standard Model comes from the W boson. Thus we conclude that whatever diagram mediates this process, which we succinctly write as b → sγ (“bottom quark to strange quark and photon”), must include a W boson somewhere. Here’s a simple diagram one could draw:

Great! This actually works. A couple of things to notice right off the bat:

  1. This is a “loop” diagram, to be distinguished from the “tree” diagram we tried to draw above. This demonstrates a principle in the Standard Model: there are no tree-level “flavor-changing neutral currents” (FCNC). That is to say, the neutral gauge bosons (photon, gluon, Z) do not have direct interactions to fermions which change flavor. The b → sγ transition evades this principle by being a loop-level (and hence “more quantum”) process.
  2. The top quark (t) in this diagram could have been replaced by any of the up-type quarks: up, charm, top. In fact, since these internal quarks are virtual—they’re not directly observed—quantum mechanically all three quarks simultaneously mediate this process. (Mathematically this means that there are three complex numbers that add together.)
  3. In fact, there are several other diagrams one could draw. Since we can learn a lot just by looking at this diagram, I’ll leave it to you to draw these diagrams as homework. (Think of different places to put the photon, and if you’re really slick, think of other bosons that could replace the W.)
Technical comments: For those with some more mathematical background, the loop diagrams are higher order in a Taylor expansion. In fact, the whole point of drawing Feynman diagrams is to have a succinct way of writing out a Taylor expansion for a process. The expansion parameters are the coupling constants, g, associated with each vertex: note that a loop diagram contains two extra vertices compared to a tree-level diagram for the same process. It turns out that calculating the loop also involves a 4D integral from including all possible momentum configurations for the virtual particles, and this ends up giving a factor of 1/16π², so that the expansion parameter is really g²/16π². For the electromagnetic coupling we know that e²/4π is 1/137, so this is indeed a small parameter to expand about.
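To put a number on that expansion parameter, here is a quick check using only the figures quoted above (a sketch, not specific to any one penguin diagram):

```python
import math

# Fine-structure constant: alpha = e^2 / (4*pi) ~ 1/137 (quoted above).
alpha = 1 / 137.0
e_squared = 4 * math.pi * alpha              # electromagnetic coupling squared
loop_factor = e_squared / (16 * math.pi**2)  # equals alpha / (4*pi)

print(f"e^2          = {e_squared:.4f}")    # ~0.0917
print(f"e^2/(16pi^2) = {loop_factor:.2e}")  # ~5.8e-04
# Each loop costs roughly a factor of a thousand in the amplitude, which is
# one reason loop-only processes like penguins are so rare.
```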

Drawing the diagram using chiral notation

Edit, 19 March: an earlier version of this post had incorrectly suggested that angular momentum restricts the penguin to be a dipole operator. Thanks to Jack Collins for pointing this out.

Instead of pushing this diagram for all it’s worth, let’s try to make use of the “chiral nature” of the Standard Model that tells us that left-handed particles are rather different from right-handed particles.

We propose using different notation which makes the “chirality” (handedness) of the matter particles manifest; in other words, we explicitly say whether a particle is right- or left-handed, e.g. bR for the right-handed bottom. This notation will lead to more complicated diagrams, but it won’t require us to do any math to understand physics that is hidden in the simpler notation.

Angular momentum plays an important role here. What we really mean by a penguin process is one that connects a right-handed fermion with a left-handed fermion, for example bR → sL γ. This may sound a little abstract, so here’s a cartoon picture, where the red arrows are carefully drawn to indicate handedness. The green numbers represent the angular momentum in the bottom quark’s direction of motion: right-handed fermions carry half a unit of angular momentum, and photons carry one unit of angular momentum. (The left-handed strange quark has negative angular momentum since it’s spinning in the opposite direction.)

Okay: so now we’ve learned that penguins are transitions in which a right-handed fermion decays into a photon and left-handed fermion of a different flavor. Of course you can also have left-handed fermions decaying into a photon and a right-handed fermion. Similarly, you can replace the photon with a gluon (“gluonic penguin”). Photonic penguins in which the flavors don’t change have their own fancy name, “electric dipole moments.”

Technical comment, for physics students: The identification of the penguin with a “dipole” should make sense from the chirality of the operator. These right-to-left transitions are mediated by a σ^μν. Alternately (and slightly more technically), you can appeal to gauge invariance and observe that the only gauge-invariant operator you can write with one photon that is not a kinetic term must contain a field strength F_μν. The antisymmetry of the Lorentz indices requires that it contract with a σ^μν, which necessarily requires a chirality flip.
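In formulas, the operator being described looks schematically like this (a sketch: the overall coefficient and CKM factors are omitted, and the explicit factor of m_b anticipates the mass insertions discussed below):

```latex
\mathcal{O}_{\text{dipole}} \;\sim\; m_b \,\bar{s}_L \,\sigma^{\mu\nu}\, b_R \, F_{\mu\nu},
\qquad
\sigma^{\mu\nu} \equiv \frac{i}{2}\left[\gamma^\mu, \gamma^\nu\right].
```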

You might wonder why we don’t consider same-chirality transitions of the form bL → sL γ. The loop diagrams that contribute to this are field-strength renormalizations: they correct the tree-level kinetic term and lead to a redefinition of flavors with respect to the tree-level basis.

Instead of dictating the chiral Feynman rules to you, let’s discover them together. We’ll be sketchy since for our purposes the details won’t buy us much. Let’s naively try to draw a chiral Feynman diagram by just copying the non-chiral diagram above.

We know that the chiral transition must be of the form bR → sL γ. First, let’s make the observation that we’ve drawn the bR leg with an arrow going in the opposite direction. This is one of the new conventions of the chiral Feynman diagrams:

Arrows no longer correspond to particle or anti-particle, instead, they correspond to the chirality of the particle.

So right- and left-handed particles have opposite arrows.

Now there’s a clear problem in the diagram above, which we’ve marked with a question mark (?). We know that the W boson is biased: it only talks to left-handed particles! Thus we are not allowed to have the coupling of a bR to a tL. We need to convert the right-handed bottom quark to a left-handed bottom quark (and these are totally different particles!).

Fortunately, we can do that! A right-chiral particle can convert into its left-chiral sibling by bouncing off the Higgs vacuum expectation value. (Once again, electroweak symmetry breaking plays a key role!) We draw the “bounce off of the Higgs vacuum expectation value” as a cross on the fermion line that changes the direction of the arrow. We call these crosses “mass insertions” because they are proportional to the mass of the particle.

Technical details: for our new “chiral” Feynman diagrams, the arrows no longer represent particle/anti-particle flow. So how do you know if a line corresponds to a particle or anti-particle? Usually it’s clear from angular-momentum conservation. For example, we should have been more precise and said that in the above diagram, the right-chiral bR is converted into the anti-bL, since that’s the guy with the same angular momentum as the original bR.

Why does the cross on the bR (the “bounce off the Higgs vacuum expectation value”) physically represent a mass? This makes sense: a massive particle is one which can come in both chiralities, since you can always boost into a frame where it’s spinning in the opposite direction.

For students who want a reference for chiral diagrams, I refer to the encyclopedic “two-component spinor bible.” The results are completely equivalent to what one would obtain using four-component Dirac spinors, but the main benefit is that you hardly have to do any Dirac algebra to see the chiral structure of the amplitude. Further, things like Majorana fermions can be very difficult in Dirac notation but are straightforward using the two-component Weyl spinors.

There! Now we have a diagram that appears to work. Except it doesn’t. The reason is a bit technical, but it has to do with the fact that not only are there tops (tL) running in the loop, but there are also charms (cL) and ups (uL). When you sum over all of these contributions, it turns out that the final result is zero! This result is known as the GIM mechanism.

Technical detail: What is the origin of this GIM mechanism cancellation? The W boson coupling is actually a 3×3 unitary matrix (the CKM matrix) encoding which down-type flavor is converted into which up-type flavor. Since it encodes a change of flavor basis, the diagram above is proportional to the sum Σi Vib V*is over the internal quarks i = u, c, t, and this sum vanishes. This is basically the relation U†U = 1 for a unitary matrix U.
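Schematically, if we write the loop’s dependence on the internal quark mass as a function f(m_i²/M_W²) and Taylor expand, the cancellation and its escape route both become visible (a sketch, with all numerical factors suppressed):

```latex
\sum_{i=u,c,t} V_{ib} V_{is}^{*}\, f\!\left(\frac{m_i^2}{M_W^2}\right)
= f(0)\,\underbrace{\sum_{i} V_{ib} V_{is}^{*}}_{=\;0\ \text{(unitarity)}}
\;+\; f'(0)\sum_{i} V_{ib} V_{is}^{*}\,\frac{m_i^2}{M_W^2} \;+\;\cdots
```

The mass-independent term cancels exactly; only the pieces that know about the differences between the quark masses survive.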

The lesson from this is that the previous Feynman diagram is too simple—it needs more internal structure to avoid the GIM cancellation. The particular structure that it needs is something that differentiates the top/charm/up type diagrams. Fortunately, there’s a way to do this: we just add more mass insertions: since the top, charm, and up each have different masses, the sum of the following diagrams (with t replaced by c and u in other diagrams) will not vanish:

Of course, the question mark is our way of pointing out that this still doesn’t work. The mass insertion in the loop converts the left-handed top quark into a right-handed top quark. However, the W only couples to left-handed particles, so the W tR sL vertex is not allowed. This means that we need another internal mass insertion to convert the right-handed top back into a left-handed top:

I drew the mass insertion after the photon, but there are other places it could have gone. As an exercise you can draw the other diagrams that contribute to bR → sL γ. These extra mass insertions come at a price: they tend to reduce the size of the diagram.

What have we learned?

  1. One reason that these penguin decays are rare is that there is no tree-level diagram in the Standard Model. It’s a loop-level process which makes it “more quantum.”
  2. Another reason why the penguin is rare is the “GIM mechanism”: avoiding the GIM cancellation requires additional internal mass insertions, which come in pairs and typically suppress the process. (This also tells us that in the limit where all of the up-type quarks have the same mass, the probability of this process must vanish.)
  3. The chiral structure of the Standard Model tells us a lot about what’s actually happening in a penguin! We learned that penguins are left-right “dipole” transitions and that (in the Standard Model) they require that the fermions bounce off the Higgs vacuum expectation value a few times, due to (a) angular momentum conservation and (b) the coupling of the W to only left-handed fermions.

One of the nice things about the chiral Feynman diagrams is that they’re easier to read when trying to estimate the size of the diagram without doing the nitty gritty details of the calculation. Each mass insertion gives a factor of the fermion mass (or the mass splitting in the case of GIM cancellation) and then we can fudge additional factors by dimensional analysis. This is beyond the scope of this post, but it’s worth explaining why these slightly-more-complicated diagrams are worth their complications. For the above diagram, one can see that the bR → sL γ penguin is proportional to the mass of the bottom quark and the difference in the squared masses of the internal up-type quarks.
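As a crude illustration of that scaling, here are the relevant mass ratios (a sketch: approximate masses in GeV, overall normalization dropped, so only relative sizes are meaningful):

```python
# Rough scaling of the b -> s gamma penguin (sketch; normalization dropped).
m_b, m_c, m_t, M_W = 4.2, 1.3, 173.0, 80.4  # approximate masses in GeV

chirality_flip = m_b / M_W               # external mass insertion
gim_factor = (m_t**2 - m_c**2) / M_W**2  # internal mass-squared splitting

print(f"chirality flip m_b/M_W:           ~{chirality_flip:.3f}")  # ~0.05
print(f"GIM factor (m_t^2 - m_c^2)/M_W^2: ~{gim_factor:.1f}")      # ~4.6
# The top quark is so heavy that the 'GIM suppression' is no suppression
# at all for b -> s gamma; with only light internal quarks it would be tiny.
```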

Bonus: Leptonic penguins

As a final example, let’s quickly go over the story of the leptonic penguin. The prime example is the decay of a muon into a photon and an electron, μR → eL γ. Usually the relation between up/down quarks is analogous to that of electrons and neutrinos. This leads us to guess the following diagram:

Here we’ve taken the liberty of introducing a right-handed neutrino into the theory to account for the experimental observation that neutrinos have a very tiny, but non-zero, mass. Unfortunately, the above diagram does not work, since neutrinos are not electrically charged and so do not interact with the photon. We need to look for other diagrams. In particular, the photon cannot come off the (electrically neutral) internal neutrino line, but it can come off the W leg.

One thus ends up with diagrams of the form:

What is that dashed line? That’s a charged Higgs—one of the Goldstone bosons that was eaten by the W. I’ve drawn it here just to show off a little: we can draw diagrams in which we make the interactions with the different components of the W manifest. Here we know that the charged Higgs is really the “longitudinal polarization” of the W, but we’re drawing it as an independent particle. (We could have “picked a gauge” in which this diagram is absent, but let’s allow ourselves to show off for pedagogical purposes.) Let us use this opportunity to highlight another aspect of the chiral notation:

  1. Interactions with vector particles (spin 1) preserve fermion chirality. We saw this with the W boson above: a left-handed particle stayed left-handed after interacting via a W boson. This was also true with the photon coupling, and turns out to be true for all of the gauge bosons. It has nothing to do with the chiral structure of the Standard Model, rather it has to do with conservation of angular momentum. (The chiral structure of the Standard Model shows up when we say the W only talks to left-handed particles; compare this to the photon which will talk to pairs of left-handed particles or pairs of right-handed particles, but never a left-handed and a right-handed particle in the same vertex.)
  2. On the other hand, interactions with scalars (spin 0), such as the Higgs vacuum expectation value or any of its components, do change chirality. This is just a feature of scalar interactions versus vector interactions.

Now some remarks about the leptonic penguins:

  1. Note that because there is no photon–neutrino coupling, the set of diagrams for the leptonic penguins is different from that for the quark penguins! (As an exercise, try drawing all the diagrams for the quark–quark–gluon penguins; there are even more, since gluons can couple to other gluons.)
  2. In the diagram above, the charged Higgs coupling to a right-handed muon and a left-handed electron is proportional to the mass of the muon—thus one still picks up a factor of the initial fermion mass.
  3. We could also draw the reversed diagram where we pick up the mass of the electron. (Exercise: draw this diagram and label all internal states with their chiralities.) However, since the electron is so much lighter than the muon, we can ignore this contribution.
  4. [technical detail] For those who know a little field theory, it should be straightforward to do a dimensional analysis on this to determine the dependence of the branching ratio on the internal neutrino masses and the external muon mass. (Let the W mass be the dominant mass scale in the diagram, though kinematic factors have to be made up with the mass scale of the process, which is the muon mass.)
  5. The muon mass insertions are really small. We know that these mass insertions are really mass splittings (i.e. proportional to the differences in mass). Since the neutrino mass splittings are experimentally known to be very small, the μR → eL γ penguin is very rare! (A standard estimate is sketched just after this list.)
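For reference, the standard one-loop estimate has the form sketched below (U is the lepton mixing matrix and Δm²_i1 are the neutrino mass-squared splittings; everything beyond the overall 3α/32π coefficient is dropped):

```latex
\text{BR}(\mu \to e\gamma)
\;=\; \frac{3\alpha}{32\pi}
\left| \sum_{i} U_{\mu i}^{*}\, U_{e i}\,
\frac{\Delta m_{i1}^{2}}{M_W^{2}} \right|^{2}
\;\sim\; 10^{-54}.
```

With such a minuscule Standard Model rate, any observed μ → eγ signal would be an unambiguous sign of new physics.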

Punchline: looking for new physics

While this post has been somewhat technical, for the most part we’ve managed to avoid doing any mathematics while still being able to make some fairly quantitative statements about the penguin process. We could, for example, talk about how the bR → sL γ penguin vanishes if the internal quarks have the same mass, or even guess the dependence of the quantum mechanical amplitude on the masses of the internal and external particles. If we were to do the calculation using the “standard” (rather than chiral) Feynman diagram, these properties would require a little bit of mathematical work to see explicitly.

Now that we’ve really beaten ourselves over the head with these penguins, let me just close by explaining that these penguins are interesting primarily because they are loop level processes where any allowed particle may run in the loop, including new particles that aren’t in the Standard Model. This is because such internal particles are virtual and don’t need to be on-shell, that is to say that they don’t need to have enough energy to actually exist for long periods of time. In popular books this is often explained with Heisenberg’s uncertainty principle: the internal particles can violate energy conservation for a very short period of time, as long as they decay into states which do respect energy conservation relative to the initial particle. Thus the inside of the penguin can include contributions from exotic new particles. Since the Standard Model contribution is suppressed, there’s a chance that the effect of the exotic new particles might be seen in an enhanced decay rate.


One of the more interesting little conundrums in understanding science is the raven paradox, proposed by Carl Hempel (1905–1997) in the 1940s. Consider the statement: All ravens are black. In strict logical terms, this statement is equivalent to: Everything that is not black is not a raven. To verify the first, we look for ravens that are black. To verify the latter, we look for non-black objects that are not ravens. Thus finding a red (not black) apple (not a raven) confirms that: Everything that is not black is not a raven, and hence that: All ravens are black. Seems strange: to learn about the colour of birds, we study a basket of fruit.

While the two statements may be equivalent for ravens, they are not equivalent for snarks.  The statement: Everything that is not black is not a snark, is trivially true since snarks do not exist, except in Lewis Carroll’s imagination. However, the statement: All snarks are black, is rather meaningless since snarks of any colour do not exist (boojums are another matter). Hence, the equivalence of the two statements in the first paragraph relies on the hypothesis that ravens do exist.

One resolution of the paradox is referred to as the Bayesian solution. The ratio of ravens to non-black objects is as near to zero as makes no difference. Thus finding 20 black ravens is more significant than finding 20 non-black non-ravens: you have sampled a much larger fraction of the objects of interest. While it is not possible to check a significant fraction of the non-black objects in the universe, it may be possible to check a significant fraction of ravens, at least those which are currently alive.
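To put toy numbers on this Bayesian argument (a sketch: every population figure below is invented purely for illustration):

```python
# Toy Bayesian raven paradox. Compare the evidence one observation gives
# for H ("all ravens are black") against H' ("5% of ravens are not black").
n_ravens = 10_000      # invented raven population
n_nonblack = 10**9     # invented number of non-black objects
f = 0.05               # non-black raven fraction under H'

# Likelihood ratios P(observation | H) / P(observation | H'):
lr_raven = 1.0 / (1.0 - f)                          # a sampled raven is black
lr_apple = 1.0 / (1.0 - f * n_ravens / n_nonblack)  # a non-black object is not a raven

print(f"black raven: LR = {lr_raven:.4f}")   # ~1.0526
print(f"red apple:   LR = {lr_apple:.9f}")   # ~1.000000500
# Both observations favor H, but the raven carries ~100,000 times more
# evidence (in log-likelihood): it samples the population that matters.
```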

But the real solution to the problem seems to me to lie in a different direction. Finding a red apple confirms not only that all ravens are black but also that all ravens are green, or chartreuse, or even my daughter’s favourite colour, pink. The problem is that a given observation can confirm or support many different, and possibly contradictory, models. What we do in science is compare models and see which is better. We grade on a relative, not absolute, scale. To quote Sir Karl Popper:

And we have learnt not to be disappointed any longer if our scientific theories are overthrown; for we can, in most cases, determine with great confidence which of any two theories is the better one. We can therefore know that we are making progress; and it is this knowledge that to most of us atones for the loss of the illusion of finality and certainty.

We do not want to know if: All ravens are black, is true, but rather if the statement: All ravens are black, is more accurate than the statement: All ravens are green. A red apple confirms both statements, while a green apple confirms one and is neutral about the other. Thus the relative validity of the two statements cannot be checked by studying apples, but only by studying ravens to see what colour they are. Thus, the idea of comparing models leads to the intuitive result, whereas thinking in terms of absolute validity leads to nonsense: Here, check this stone to see if ravens are black. Crack, tinkle (the sound of broken glass as the stone misses the raven and goes through the neighbor’s window).

We can go farther. Consider the two statements: All ravens are black, and Some ravens are not black. The relative validity of these two statements cannot be checked by studying apples or even black ravens. Rather what is needed is a non-black raven. This is just the idea of falsification. Hence, falsification is just a special case of comparing models: A is correct, A is not correct.

In practice, not all ravens are black. There are purported instances of white ravens. Google says so, and Google is never wrong. Right? Thus, we have the statement: Most ravens are black. This statement does not imply anything about non-black objects; they may or may not be ravens. Curious… this whole raven paradox was based on a false statement, and as with: All ravens are black, most absolute statements are false, or at least not known for certain.

Even non-absolute statements can lead to trouble. Consider: Most ravens are black, and: Most ravens are green. So we merrily check ravens to see which is correct. But is it not possible that the green ravens blend in so well with the green foliage that we are not aware that they are there? Rather like the elephants in the kids’ joke that paint their toenails red so they can hide in cherry trees. Works like a charm. Who has seen an elephant in a cherry tree? We are back to the Duhem-Quine thesis that no idea can be checked in isolation. Ugh. So, why do we dismiss the idea of perfectly camouflaged green ravens and red-nailed elephants? Like any good conspiracy theory, they can only be eliminated by an appeal to simplicity. We eliminate the perfectly camouflaged green raven by parsimony, and as for the red apple, I ate it for lunch.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.


Under review

Friday, March 16th, 2012

It has been a very busy couple of weeks for particle physics, as has been chronicled here in Quantum Diaries — new results in the Higgs search (as Alain Blondel, the summary speaker at the Moriond conference said, “Too soon to claim evidence, but who would bet against Higgs boson at 125 GeV?”), the first definitive non-zero measurement of the neutrino mixing parameter theta-13, and today’s news that the ICARUS experiment, in the same underground lab as OPERA, has measured the speed of neutrinos and found it to be consistent with the speed of light (as many would say, “Too soon to claim an error, but who would bet against Einstein at 3 x 10^8 m/s?”). Meanwhile, the first beams of the year are now circulating in the LHC, and we are anticipating a very exciting year.

However, I have come here today to discuss something much more boring, which is money. (Sorry about that, but my job here is to write about life in particle physics; this is a piece of it.) All of the great science that the LHC is bringing to you doesn’t come for free, of course — in fact, it is funded by you, the taxpayer. In the United States, research in particle physics is supported predominantly by the Department of Energy and the National Science Foundation, who are also the sponsors of the US LHC blog that you are reading right now. Much of the funding goes into grants to research groups at individual universities, which in turn goes to support the hardworking graduate students and postdoctoral researchers who are running the experiment and analyzing the data, and who will be the future leaders of science and technology in our country. But a lot of it goes into behind the scenes stuff — helping to pay our share for the operations of the experiments, funds for research and development and purchasing equipment for detector upgrades, and the deployment and operation of the computing resources needed to analyze the data that comes out. This is referred to as the “operations program”, and for US CMS, this comes to about $38M/year — not much at all in the grand scheme of the entire multi-trillion dollar federal budget, but a noticeable bit of the budget for particle physics in the US. I’m the deputy leader of US CMS software and computing, so it is part of my job to make sure that the program is executed well.

It is only proper that there is some oversight and review of the operations program. The program managers interact regularly with our contacts at the funding agencies, and with all of the US CMS physicists who depend on and benefit from the program. But we also have an annual formal external review. This year’s review was held last week at sunny SLAC National Accelerator Laboratory. While the review is coordinated by program officers at the funding agencies, it is conducted by our peers — experienced particle physicists (and a few physicists from other fields) who have had to run similar programs themselves. They know the hard questions to ask that will probe whether we are really providing value to researchers and whether the science we are doing is truly worth the investment. Getting their outside perspective is very useful for us, as it helps us evaluate our own work from a different angle.

If I may say so, these reviews are pretty intense. We start getting ready for them a couple of months in advance, as we pull together documentation that demonstrates our achievements of the past year, and how we have implemented recommendations from previous reviews. We are often given specific questions about how we would allocate resources for the future. We also rehearse the presentations that we are going to give for our collaborators, who help us make sure that what we say is going to make sense to outsiders. The review itself starts with a series of presentations from us about what we are doing. Then the review panel breaks into subcommittees that focus on different aspects of the program, and we address some issues in more detail. At the end of the working day, the panel gets back together and poses a set of questions for us to respond to about topics that they thought needed more consideration. After a nice dinner where we try not to think of the task ahead of us, the US CMS team reconvenes to come up with written answers to the questions. This year I stayed up until 1 AM to finish my part, while other colleagues were up later. Then we all got back together at 6 AM to check things over in advance of our presentation to the panel at 8 AM. Whew!

Then the panel takes a few more hours to synthesize what they learned from us, and to present a closeout report. I’m happy to say that US CMS came out quite well this year. We were praised for our contributions to the fabulous results that came out of the LHC in the past year, and for how we are supporting our colleagues in pursuing the science. It’s always a relief to get through this, but also to know that we are doing right by our collaborators and by you, the people who are generously making our work possible.


This is a follow-up from our last post where Paul Schaffer, Head of the Nuclear Medicine Division at TRIUMF, was talking about his experience of being in the media spotlight. In this post, Paul talks more in-depth about the science of medical isotopes.

It all started 19 months ago, with a grant that would forever change my perspective of science geared specifically toward innovating a solution for a critical unmet need—in this situation, the global isotope crisis. In 2010, not too long out of the private sector, I was already working on an effort funded by NSERC and CIHR through the BC Cancer Agency to establish the feasibility of producing Tc-99m—the world’s most common medical isotope—on a common medical cyclotron. The idea: produce this isotope where it’s needed, on demand, every day, if and when needed. Sounds good, right? The problem is that the world had come to accept what would have seemed impossible just 50 years ago.

The current Tc-99m production cycle, which uses nuclear reactors. Image courtesy of Nordion.

We are currently using a centralized production model for this isotope, which has just a six-hour half-life. This model involves just a handful of dedicated, government-funded research reactors producing molybdenum-99 from highly enriched uranium (which is another issue for another time). Moly, as we’ve come to affectionately call it, decays via beta emission to technetium; packaged into alumina columns, it is sterilized, encased in a hundred pounds of lead, and shipped by the thousands to hospitals around the world. The result: the world has come to accept Tc-99m, which is used in 85% of the 20 to 40 million patient scans performed every year, as an isotope available from a small, 100-pound cylinder replaced every week or so, without question, without worry. Moly and her daughter were always there…but in 2007 and again in 2009, suddenly they weren’t. The world had come to realize that something must be done.
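To see why a six-hour half-life forbids stockpiling, here is a quick decay calculation (a sketch; the 66-hour Mo-99 half-life is the standard literature value, not a number from the post):

```python
def fraction_remaining(half_life_h: float, elapsed_h: float) -> float:
    """Fraction of a radioactive sample left after elapsed_h hours."""
    return 0.5 ** (elapsed_h / half_life_h)

# Half-lives: Mo-99 ~ 66 h (literature value), Tc-99m ~ 6 h (as in the text).
for label, t_half in [("Mo-99 generator", 66.0), ("Tc-99m dose", 6.0)]:
    for elapsed_h in (6, 24, 72):
        left = fraction_remaining(t_half, elapsed_h)
        print(f"{label}: {left:7.2%} left after {elapsed_h} h")
# Tc-99m is half gone in 6 h and ~99.98% gone in 3 days, so it cannot be
# shipped or stored; the longer-lived Mo-99 'cow' travels instead and is
# 'milked' for fresh Tc-99m at the hospital.
```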

In the middle of our NSERC/CIHR effort, we were presented with an opportunity to write a proof-of-concept grant based on the proof-of-feasibility we were actively pursuing. Luckily, the team had come far enough to believe we were on the right track. We believed that large scale curie-level production of Tc-99m using existing cyclotron technology was indeed possible. The ensuing effort was—in contrast to the current way of doing things—ridiculous.

With extensive, continuous input from several top scientists from around the country, I stitched together a document 200 pages long. It was a grant that was supposed to redefine how the most important isotope in nuclear medicine was produced. 200 pages, well 199 to be exact, describing a process—THE process—we were hopefully going to be working on for the next 18 months. We waited…success! And we began.

The effort started the same way as the document – with nothing more than a blank piece of paper. Blank in the sense that we knew what we had to do, we just had not defined exactly how we were going to achieve our goal. But what happened next was a truly remarkable thing; with that blank sheet, I witnessed first-hand a team of people imagine a solution, roll up their sleeves and turn those notions into reality.

If you would like to read the PET report, click here.




The first week of the biggest winter conference, the Rencontres de Moriond held in La Thuile, Italy, closed on March 10, leaving all attendees both impressed and puzzled by all the new results presented.

The situation is the following: theorists know that the current theoretical model, the Standard Model of particle physics, has its limits and that it is probably just the most accessible part of a more complex but unknown theory. Think of it as with mathematics: arithmetic is all most of us need for everyday tasks, even though we know geometry, algebra and calculus are needed for more complex applications.

Physicists expect to see new phenomena that are referred to as “new physics”, which would tell us which one of the many new theories currently proposed is the right one. And everybody hopes the Large Hadron Collider (LHC) experiments will soon discover something to set us in the right direction.

Hence, the focus of this conference was to assess the impact of all the latest experimental results on new models, particularly supersymmetry (SUSY) and extra dimensions. And there were plenty of new results on searches for the Higgs boson, SUSY particles and dark matter, as well as new precision measurements and neutrino physics.

The first excitement came from the LHCb, CMS and ATLAS experiments operating at the LHC, with new measurements of how often a Bs meson decays into two muons. This decay occurs so rarely in the context of the Standard Model that even small contributions from new physics could be detected. LHCb is setting the best limit to date, at less than 4.5 × 10⁻⁹, barely above the Standard Model prediction of around 3.5 × 10⁻⁹. This leaves very little room for new physics. However, David Straub, a theorist affiliated with the Scuola Normale Superiore and INFN in Pisa, showed that finding less than what is predicted by the Standard Model would also open the door to new physics, something that had previously received little attention but is now becoming possible with the increase in precision from the LHC experiments.


With stringent limits on rare decays such as Bs or Bd to two muons, many supersymmetric models have very little parameter space still allowed, as shown by the small rectangle in the bottom left corner. The rest is what was still allowed a year ago.

On the search for the Higgs boson, now four separate experiments see faint signs of what could be Higgs bosons, in four different channels. It is a bit like hearing a rumor from four trustworthy people who all got very similar information from different reputable sources. Although it does not prove anything, we can all start thinking seriously about it. All experiments see an excess compatible with a Higgs boson mass of 125 GeV, even though the strength of the signal is still too weak to be convincing. ATLAS and CMS will resume data-taking next week and should have a clear and final verdict this year.

While all four collaborations – ATLAS, CMS, CDF and D0 – insisted that it was too early to jump to conclusions about the Higgs boson, theorists have already been checking the effects of the mass of the Higgs. Nazila Mahmoudi, a theorist from CERN, showed that the currently allowed range for the Higgs boson mass is already putting constraints on SUSY models.

The values of tan β and mA, two important parameters of supersymmetric models, still allowed if a Higgs boson is found with a mass of 125 GeV. The red points are disqualified by b-physics results. Everything above the yellow curve is excluded by direct searches for SUSY particles obtained by the CMS collaboration. And if the Higgs boson is found around 125 GeV, only the green band would still be allowed under certain constraints.


The content of the Universe: 96% of it comes from absolutely unknown substances called dark matter and dark energy.

Josef Pradler from the Perimeter Institute in Canada addressed a long-standing and controversial result reported several years ago by the DAMA/LIBRA experiment. The group has been claiming for years the observation of a very strong signal for “dark matter”, a mysterious and unknown type of matter that accounts for about 23% of all the content of the Universe while regular matter (all stars and galaxies) amounts to only 4%. The remaining 73% comes from some unknown type of energy called “dark energy”.

From various gravitational measurements, astronomers have shown that dark matter is concentrated in the galactic halo, i.e. the outskirts of the galaxy. As the Earth orbits the Sun on its annual cycle, it encounters more WIMPs (Weakly Interacting Massive Particles, a nickname for a dark matter candidate) in June than in December, when it is moving away from the dark matter source. It is very much like getting a head-wind in the summer and a tail-wind in the winter.


The DAMA/LIBRA detector counts more interactions with WIMPs in the summer than in the winter, hence the annual modulation in the number of particles detected (vertical axis) as a function of time (horizontal axis).
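The modulation itself is usually parameterized with a simple cosine. Here is a minimal sketch (the 2% amplitude and early-June peak are typical textbook values, not DAMA/LIBRA’s fitted numbers):

```python
import math

def wimp_rate(day_of_year: float, r0: float = 1.0, a_mod: float = 0.02) -> float:
    """Annual-modulation parameterization (sketch):
    R(t) = R0 * (1 + A * cos(2*pi*(t - t_peak)/365)), peaking ~June 2."""
    t_peak = 152  # ~June 2, when Earth's velocity adds to the Sun's
    return r0 * (1 + a_mod * math.cos(2 * math.pi * (day_of_year - t_peak) / 365))

print(f"rate in June:     {wimp_rate(152):.3f}")  # maximum (head-wind)
print(f"rate in December: {wimp_rate(336):.3f}")  # minimum (tail-wind)
# The experimental question is whether anything mundane (seasons, cosmic
# muons, ...) shares this few-percent amplitude and early-June phase.
```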

The problem is that other experiments cannot quite confirm this result, so some people have suggested that this could simply be due to cosmic muons. Josef Pradler and his colleagues just showed that the DAMA/LIBRA data are inconsistent with the cosmic muon hypothesis at 99% CL. The mystery remains.

Possible signs of a Higgs boson being produced and decaying just as the Standard Model predicts, and no signs of new physics despite extremely precise tests: this is what left all participants rather puzzled.

Theorists know that the Standard Model does not describe everything we observe. So what is the real theory that would explain everything? Lisa Randall from Harvard University reminded the audience that whatever the new theory is, it will have to address both symmetry breaking (why are some particles associated with the electroweak force massive and others massless?) and the hierarchy problem (why is the top quark roughly 350,000 times heavier than the electron?). Much food for thought there.

Pauline Gagnon

To be alerted of new postings, follow me on Twitter: @GagnonPauline or sign up on this mailing list to receive an e-mail notification.



Hi All.

In case you have been away from the Wonderful World of Physics for the past few weeks: there is now evidence for the Standard Model Brout-Englert-Higgs boson, with a mass of approximately 125 GeV/c², from the ATLAS, CMS, CDF, and DZero experiments, as well as the combined CDF+DZero result [Moriond 2012 Conference, FNAL press release]. This is really exciting, and measurements of Higgs-related processes will definitely have a profound impact on the viability of Beyond the Standard Model theories like supersymmetry and technicolor.

Enough about the Higgs, though. Of the many, MANY reasons for constructing the Large Hadron Collider and its detector experiments, one of my personal favorites is

to search for evidence of quantum gravity in TeV-scale proton collisions.

We know pretty well that gravity exists. (If you take issue with this, buy two apples and, while eating one, let go of the other.) We also know things like electrons, muons, & photons exist. (Flip on a light switch or buy a Geiger counter.) What we are less sure about is how, at the elementary level, electrons, muons, & photons are affected by gravity.

Figure 1: An example of a black hole (center) demonstrating Hawking radiation, in which a black hole radiates, or emits, particles (e & γ) through interactions with virtual particles.

Over the past few decades, there has been a ton of research investigating this very question, resulting in very fruitful and fascinating discoveries. For example: black holes can radiate photons and other gauge bosons by interacting with particles that have been spontaneously produced through quantum mechanical fluctuations. This is the famous Hawking radiation (See Fig. 1) [3]. Two other examples that come to mind both attempt to explain why gravity appears to be so much weaker than either the strong nuclear force (QCD) or the electroweak force (EWK). Their common argument is that all Standard Model particles are restricted to three spatial dimensions, whereas new physics, including quantum gravity, exists in more than three spatial dimensions. The difference between the two theories is that the Large Extra Dimensions (or ADD) model supposes that all additional spatial dimensions are very small (< 10⁻²⁰ cm) but that each extra dimension is otherwise not too different from the ones we experience every day (See Fig. 2) [4,5]. The Randall-Sundrum model, on the other hand, proposes that there exists only a single extra dimension, but that this spatial dimension is “warped” and unlike anything we have ever experienced [6,7]. I have not even mentioned string theory, but I am sure you can imagine that the list goes on for a while.

 

Figure 2: In the ADD (Large Extra Dimensions) model, an electron (e-) and positron (e+) may annihilate and produce a graviton (G) and photon (γ). A defining feature is that the Standard Model particles (e±, γ) are restricted to move in 3 spatial dimensions, whereas the graviton may also propagate in the additional dimensions.
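To see how extra dimensions can make gravity look weak, here is a back-of-the-envelope sketch. In the ADD picture the enormous 4D Planck mass is an illusion generated by n extra dimensions of size R: roughly M_Pl² ~ M_D^(n+2) R^n, up to factors of order one and convention choices, where M_D is the true gravity scale. Assuming M_D is around 1 TeV and solving for R:

```python
# Back-of-the-envelope ADD estimate: M_Pl^2 ~ M_D^(n+2) * R^n, up to O(1) factors.
# All masses in GeV; 1 GeV^-1 = 1.9733e-16 m converts lengths to meters.
M_PLANCK     = 1.22e19     # GeV: the 4D Planck mass
M_D          = 1.0e3       # GeV: assumed "true" gravity scale of ~1 TeV
GEV_INV_TO_M = 1.9733e-16  # meters per GeV^-1

for n in (1, 2, 6):
    R_gev = (M_PLANCK**2 / M_D**(n + 2)) ** (1.0 / n)  # size of each dimension, GeV^-1
    print(f"n = {n}: R ~ {R_gev * GEV_INV_TO_M:.1e} m")
# n = 1 gives a solar-system-sized dimension (long excluded),
# n = 2 gives roughly a millimeter, and n = 6 gives tens of femtometers.
```

The point of the exercise: with a TeV gravity scale, quantum-gravity effects could plausibly show up in TeV-scale proton collisions.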

Microscopic Black Holes

Despite the number of models trying to describe gravity at the most elementary level, there is actually a prediction that is common to almost all of them: the existence of microscopic black holes, or at least something very close to them. Now here is where I can easily dig myself a hole, so I want to be clear. The black hole-like objects these models predict are vastly different from the star-devouring black holes we have grown to know and love. Those exist at the centers of galaxies and other places like that. The most obvious difference is that astronomical black holes are, well, astronomically huge. The black holes I am talking about, if they exist, are significantly smaller than a proton; the term “microscopic” makes these things sound much bigger than they are. Secondly, the masses of micro-black holes are comparable to the collision energy of the LHC; consequently, they will evaporate (via Hawking radiation) and disintegrate (decay) within moments of being produced. On the off chance that a stable micro-black hole is generated, then after about 10⁻²⁵ seconds the thing will decay and burst into a blaze of glory: quarks & gluons (See Figs. 1 (above) & 3 (below)). Research has also concluded that these things are harmless, and CERN has gone out of its way to inform the public of this.
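To get a feel for the numbers, here is a quick sketch using the standard four-dimensional, semiclassical formulas: T = ħc³/(8πGMk_B) for the Hawking temperature and the crude lifetime estimate t ~ 5120πG²M³/(ħc⁴). These formulas are modified in extra-dimension scenarios (the ~10⁻²⁵ s figure above comes from such models, and my choice of a 5 TeV/c² mass is purely illustrative), but the qualitative message survives: the lighter the black hole, the hotter it is and the faster it evaporates.

```python
import math

HBAR = 1.054571817e-34    # J*s
C    = 2.99792458e8       # m/s
G    = 6.67430e-11        # m^3 kg^-1 s^-2
KB   = 1.380649e-23       # J/K
EV_TO_KG = 1.78266192e-36 # kg per eV/c^2

def hawking_temperature(m_kg):
    """4D Hawking temperature: T = hbar c^3 / (8 pi G M k_B)."""
    return HBAR * C**3 / (8.0 * math.pi * G * m_kg * KB)

def evaporation_time(m_kg):
    """Crude 4D lifetime estimate: t ~ 5120 pi G^2 M^3 / (hbar c^4)."""
    return 5120.0 * math.pi * G**2 * m_kg**3 / (HBAR * C**4)

m_sun   = 1.989e30           # kg: an astronomical black hole
m_micro = 5.0e12 * EV_TO_KG  # kg: a hypothetical 5 TeV/c^2 micro black hole

print(f"solar mass: T ~ {hawking_temperature(m_sun):.1e} K (colder than the CMB)")
print(f"5 TeV/c^2:  T ~ {hawking_temperature(m_micro):.1e} K, "
      f"t ~ {evaporation_time(m_micro):.1e} s")
```

The T ∝ 1/M and t ∝ M³ scalings are the whole story: a solar-mass black hole outlives the Universe, while a TeV-mass one is gone, by any formula, essentially the instant it is made.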

Figure 3: "-->--" is the path the microscopic black hole travels (exaggerated) while evaporating, before decaying. Click to enlarge.

Admittedly, the fun part of writing this post was trying to figure out how a microscopic black hole event, if it existed, would look in an LHC collider detector. Hawking radiation is straightforward enough to draw (Fig. 1), but things are a bit more involved when you want to show some of those photons and Z bosons decaying into, say, electrons and positrons. So I got a little carried away and drew things by hand. Figure 3 shows a “typical” micro-black hole, if they exist, briefly zipping around the detector radiating photons (γ), Z’s, W±’s, and gluons (g), before bursting into a bunch more bosons all at once. These bosons will then do whatever particles normally do in a particle detector and make a mess (shower and hadronize). A very distinguishing feature I want to highlight is the number of particles produced in a single micro-black hole event; this is called the particle multiplicity. If micro-black holes exist, then the average event will result in a very high multiplicity (number) of final-state particles.

This is really important because, in a typical proton-proton collision, things are not as busy. To clarify: plenty happens in proton collisions; micro-black hole events are just a bit busier. When protons collide, only two or three primary particles are usually produced, and these then decay in predictable ways. In addition, the incident protons fragment and hit the side walls (“end caps”) of the detectors.

Figure 4: Typical proton-proton collision at the Large Hadron Collider as seen from a Detector Experiment. Click to enlarge.

This is it, though. This is how experimentalists test whether these gravity-motivated theories correctly describe nature. What differentiates microscopic black hole events from any other proton-proton event is the number of final-state particles seen by the detector. In other words: particle multiplicity! There are not too many Standard Model processes that result in, say, 10–15 final-state particles. If an experimental group suddenly sees a bunch of 15-particle events, then more refined searches can be performed to determine the root cause of this potential signal of new physics, as in the sketch below.
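In code, the selection idea really is almost embarrassingly simple. A minimal sketch, where events are just lists of object momenta and the thresholds are made up for illustration:

```python
# Each event is a list of reconstructed objects (jets, leptons, photons),
# represented here only by their transverse momenta in GeV.
# (Thresholds are illustrative, not from either experiment's paper.)
def is_high_multiplicity(event, n_min=10, pt_min=50.0):
    """Keep events with at least n_min objects above the pt threshold."""
    return sum(1 for pt in event if pt >= pt_min) >= n_min

typical_sm_event = [120.0, 85.0, 40.0]              # two or three hard objects
busy_event = [210.0, 160.0, 140.0, 95.0, 90.0,
              80.0, 75.0, 70.0, 65.0, 60.0, 55.0]   # black-hole-like burst

for evt in (typical_sm_event, busy_event):
    print(len(evt), "objects ->", "keep" if is_high_multiplicity(evt) else "reject")
```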

Recent Results from ATLAS and CMS

The most recent results from the ATLAS and CMS Experiments on their searches for microscopic black holes are both from March 2012. In these papers [1,2], ATLAS reports using 1.3 fb⁻¹ of data, the equivalent of 91 trillion proton-proton collisions; CMS reports using a whopping 4.7 fb⁻¹, or the equivalent of 329 trillion collisions. Both groups have opted to look for events with a large number of final-state particles, specifically in the central/barrel region of the detector, in order to sidestep the fact that fragmenting protons increase the multiplicity in the detectors’ side walls (end caps). ATLAS, in particular, requires that two of the final-state particles be muons with the same electric charge. This subtle requirement has a significant impact on the search: it minimizes the number of Standard Model processes that can mimic the signal, at the cost of reducing the number of expected micro-black hole events. To optimize its search, CMS sums the magnitudes of all final-state particles’ momenta. This is a bit clever because, with so many additional particles, this sum is expected to be significantly larger for a micro-black hole event than for a typical Standard Model process.
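Roughly, and with a hypothetical event structure purely for illustration, the two strategies look like this in code. (As an aside, the collision counts quoted above are just luminosity times the ~70 mb inelastic proton-proton cross section: 1.3 fb⁻¹ × 70 mb ≈ 9 × 10¹³, i.e. 91 trillion.)

```python
# Sketch of the two selection ideas; the event structure here is hypothetical.
def s_t(event):
    """CMS-style variable: scalar sum of all objects' transverse momenta (GeV)."""
    return sum(obj["pt"] for obj in event)

def has_same_sign_dimuon(event):
    """ATLAS-style requirement: at least two muons with the same charge."""
    charges = [obj["charge"] for obj in event if obj["kind"] == "mu"]
    return charges.count(+1) >= 2 or charges.count(-1) >= 2

event = [
    {"kind": "jet", "pt": 310.0, "charge": 0},
    {"kind": "jet", "pt": 250.0, "charge": 0},
    {"kind": "mu",  "pt": 90.0,  "charge": +1},
    {"kind": "mu",  "pt": 60.0,  "charge": +1},
]
print(f"S_T = {s_t(event):.0f} GeV, same-sign dimuon: {has_same_sign_dimuon(event)}")
```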

Sadly, as you have probably guessed, neither group has seen anything like a micro-black hole. 🙁 At any rate, here is a really cool micro-black hole candidate event observed with the CMS detector. It is most likely NOT an actual micro-black hole event, just a couple of Standard Model processes that passed all the analysis requirements. Pretty, isn’t it?

Figure 5: A candidate microscopic black hole event observed with the Compact Muon Solenoid Experiment. Click to enlarge.

 

 

Happy Colliding

– richard (@bravelittlemuon)

 

 

Partial Bibliography

  1. ATLAS Collaboration, Search for strong gravity signatures in same-sign dimuon final states using the ATLAS detector at the LHC, Phys. Lett. B 709 (2012) 322–340, arXiv:1111.0080v2
  2. CMS Collaboration, Search for microscopic black holes in pp collisions at √s = 7 TeV, submitted to J. High Energy Phys., arXiv:1202.6396v1
  3. S. Hawking, Particle Creation by Black Holes, Commun. Math. Phys. 43 (1975) 199–220, euclid.cmp/1103899181
  4. N. Arkani-Hamed, S. Dimopoulos, and G. Dvali, The hierarchy problem and new dimensions at a millimeter, Phys. Lett. B 429 (1998) 263–267, arXiv:hep-ph/9803315v1
  5. N. Arkani-Hamed, S. Dimopoulos, and G. Dvali, Phenomenology, astrophysics and cosmology of theories with submillimeter dimensions and TeV scale quantum gravity, Phys. Rev. D 59 (1999) 086004, arXiv:hep-ph/9807344v1
  6. L. Randall and R. Sundrum, Large Mass Hierarchy from a Small Extra Dimension, Phys. Rev. Lett. 83 (1999) 3370–3373, arXiv:hep-ph/9905221v1
  7. L. Randall and R. Sundrum, An Alternative to Compactification, Phys. Rev. Lett. 83 (1999) 4690–4693, arXiv:hep-th/9906064v1
  8. S. Dimopoulos and R. Emparan, String balls at the LHC and beyond, Phys. Lett. B 526 (2002) 393–398, arXiv:hep-ph/0108060v1
  9. R. Casadio, S. Fabi, B. Harms, and O. Micu, Theoretical survey of tidal-charged black holes at the LHC, arXiv:0911.1884v1