
Archive for February, 2012


The hidden face of CERN

Wednesday, February 15th, 2012

Most people associate CERN with the Large Hadron Collider (LHC). But many lesser-known yet extremely diversified research activities are also going on at CERN.

About a thousand physicists are working on experiments ranging from antimatter studies to cancer therapy, cloud formation and radioisotope production.

Already in 2011, the ALPHA experiment made the headlines when they managed to trap antihydrogen atoms for more than fifteen minutes. Antiparticles and particles are produced in equal amounts in high-energy accelerators. But since we live in a world made of matter, it is no small feat to prevent antiparticles from annihilating with particles of matter and vanishing. Usually, a magnetic “bottle” is used as the trap. This is a space confined by strong magnetic fields and operated in a high vacuum to keep antimatter from encountering any matter. First hurdle: one has to combine an antiproton with an antielectron (called a “positron”) at very low temperature to form antihydrogen atoms that are sluggish enough to be trapped (less than 0.5 K, or -272.5 °C).

Nevertheless, having improved their antihydrogen production techniques in 2011, the ALPHA, ASACUSA and ATRAP experiments now aim to see whether these antiatoms have the same properties as their matter counterparts, the same spectroscopy for example. A new experiment, AEgIS, will come online this year with the long-term goal of measuring the gravitational acceleration g of antihydrogen, to see whether it is the same g that matter experiences.

Meanwhile, the CLOUD experiment is attempting to solve a long-standing enigma: how do aerosol particles form in the atmosphere? All cloud droplets form on aerosols — tiny solid or liquid particles suspended in the air – but how these aerosols form or “nucleate” remains a mystery. To find out, a chamber with a carefully controlled temperature is used to introduce traces of various chemical vapours into an initially “pure” atmosphere. Surprise: ammonia and sulphuric acid, the two airborne chemicals thought to be responsible for all aerosol formation, can account for only one tenth to one thousandth of the rate observed in nature. The goal for 2012 is clear: identify the missing elements and pursue studies on the influence of cosmic rays (simulated using a pion beam) on the aerosol formation rate.

Lots of developments are happening in hadron therapy, a cutting-edge cancer therapy technique in which protons and other light ions are used instead of the X-ray photons of conventional radiotherapy. The challenge is to destroy cancer cells without affecting the neighbouring healthy tissue. Contrary to X-rays, protons and other ions deposit nearly all their energy at a specific point near the end of their path instead of all along their path. This means one can bring large amounts of energy exactly where needed without causing damage along the way.


Energy deposited by different particles as they penetrate matter such as human tissue. Protons and carbon ions deposit most of their energy at a specific depth, whereas photons used in conventional X-rays tend to leave energy all along their path, damaging healthy tissue.
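For readers who want to see why the curve looks that way, here is a schematic, non-relativistic form of the standard stopping-power formula; it is a textbook approximation, not something specific to the experiments described here.

% Schematic (non-relativistic) stopping power for a heavy charged particle:
\[
  -\frac{dE}{dx} \;\propto\; \frac{z^{2}}{\beta^{2}}\,
  \ln\!\left(\frac{2 m_{e} c^{2}\beta^{2}}{I}\right)
\]
% where z is the charge of the projectile, \beta = v/c its velocity, m_e the
% electron mass and I the mean excitation energy of the material. As a proton
% or carbon ion slows down near the end of its range, \beta drops and the
% 1/\beta^{2} factor makes the energy loss spike: this is the Bragg peak
% exploited in hadron therapy. Photons, by contrast, are attenuated roughly
% exponentially with depth, so they deposit energy all along the way in.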

CERN acted as a catalyst in the formation of the European Network for Research in Light-Ion Hadron Therapy (ENLIGHT) in 2002, which was established to coordinate European efforts in radiation therapy using light-ion beams. During the 1990s a group at CERN developed designs for a hadron therapy accelerator in the Proton Ion Medical Machine Study (PIMMS). This basic work has been incorporated into several of the subsequent designs. CERN is currently supporting the MedAustron therapy project in Austria and is also planning to exploit its accelerator technology and expertise in developing a second-generation design for hadron therapy.

The ACE experiment has also tested the idea of using beams of antiprotons for hadron therapy, with the added advantage of blasting more malignant cells because of the amount of energy released when the antiquarks of the antiproton annihilate with the quarks of protons or neutrons from one of the cancer cells. This work is nearly completed and will be finished this year.

Much is also ongoing at the ISOLDE facility, which uses protons from a small CERN accelerator (the Proton Synchrotron Booster) to produce “exotic” nuclei from most chemical elements by adding protons to stable nuclei. The radioisotopes are then used by more than 50 experiments to study nuclear structure, nuclear astrophysics, fundamental symmetries, atomic and condensed-matter physics, and for applications in life sciences. Some scientists pursue research using neutron beams from the n_TOF facility in the hope of transforming long-lived radioactive waste from nuclear power plants into shorter-lived or stable, non-radioactive elements.

Others at the CAST and OSQAR experiments are hot on the trail of “axions”, “paraphotons” and “chameleons”, some of the many hypothetical and rather exotic particles proposed by theorists to explain the nature of dark matter. For the past decade, these experimentalists have been adding new tricks to their experiments every few years to test new hypotheses and axions of heavier masses. More ideas keep these experiments’ “dance cards” full all the time.

As millions of individuals have heard, CERN also supplies a neutrino beam to several experiments at the Gran Sasso Laboratory in Italy, including OPERA where puzzling results on muon neutrinos apparently travelling faster than the speed of light were reported last year. Two separate experiments at Gran Sasso are now setting up to cross-check this result in the coming months.

Much more is happening but it is impossible to do every one justice in a short overview. These are just a few of the many activities ongoing at CERN besides the LHC programme. All together, they make CERN a place well worth keeping an eye on in 2012, so follow us on Twitter @CERN.

Pauline Gagnon

To be alerted of new postings, follow me on Twitter: @GagnonPauline or sign up on this mailing list to receive an e-mail notification.

 


Hi everyone—it’s time that I wrap up some old posts about the Higgs boson. Last December’s tantalizing results may end up being the first signals of the real deal, and the physics community is eagerly awaiting the combined results to be announced at the Rencontres de Moriond conference next month. So now would be a great time to remind ourselves of why we’re making such a big deal out of the Higgs.

Review of the story so far

Since it’s been a while since I’ve posted (sorry about that!), let’s review the main points that we’ve developed so far. See the linked posts for a reminder of the ideas behind the words and pictures.

There’s not only one, but four particles associated with the Higgs. Three of these particles are “eaten” by the W and Z bosons to become massive; they form the “longitudinal polarization” of those massive particles. The fourth particle—the one we really mean when we refer to The Higgs boson—is responsible for electroweak symmetry breaking. A cartoon picture would look something like this:

The solid line is a one-dimensional version of the Higgs potential. The x-axis represents the Higgs “vacuum expectation value,” or vev. For any value other than zero, this means that the Higgs field is “on” at every point in spacetime, allowing fermions to bounce off of it and hence become massive. The y-axis is the potential energy cost of the Higgs taking a particular vacuum value—we see that to minimize this energy, the Higgs wants to roll down to a non-zero vev.

Actually, because the Higgs vev can be any complex number, a more realistic picture is to plot the Higgs potential over the complex plane:

 

Now the minimum of the potential is a circle and the Higgs can pick any value. Higgs particles are quantum excitations—or ripples—of the Higgs field. Quantum excitations which push along this circle are called Goldstone bosons, and these represent the parts of the Higgs which are eaten by the gauge bosons. Here’s an example:

Of course, in the Standard Model we know there are three Goldstone bosons (one each for the W+, W-, and Z), so there must be three “flat directions” in the Higgs potential. Unfortunately, I cannot fit this many dimensions into a 2D picture. 🙂 The remaining Higgs particle is the excitation in the not-flat direction:
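For those who want the equations behind these cartoons, here is the standard Mexican-hat potential, written for a single complex field to match the pictures (the real Standard Model Higgs is a doublet, so treat the factors as schematic):

% Mexican-hat potential for a complex field \phi (schematic single-field version):
\[
  V(\phi) \;=\; -\mu^{2}\,|\phi|^{2} + \lambda\,|\phi|^{4},
  \qquad\text{minimized at}\qquad
  |\phi| = \frac{v}{\sqrt{2}},\quad v = \sqrt{\frac{\mu^{2}}{\lambda}}
\]
% Expanding the field about this minimum,
\[
  \phi(x) \;=\; \frac{1}{\sqrt{2}}\,\bigl(v + h(x)\bigr)\,e^{\,i\,\pi(x)/v},
\]
% the angular mode \pi(x) moves along the flat circle of minima and costs no
% potential energy: it is the Goldstone boson that gets eaten. The radial mode
% h(x) climbs the non-flat direction and picks up a mass,
\[
  m_{h}^{2} \;=\; 2\lambda v^{2},
\]
% and this is the physical Higgs boson discussed in the text.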

Usually all of this is said rather glibly:

The Higgs boson is the particle which is responsible for giving mass.

A better reason for why we need the Higgs

The above story is nice, but you would be perfectly justified if you thought it sounded like a bit of overkill. Why do we need all of this fancy machinery with Goldstone bosons and these funny “Mexican hat” potentials? Couldn’t we have just had a theory that started out with massive gauge bosons without needing any of this fancy “electroweak symmetry breaking” footwork?

It turns out that this is the main reason why we need the Higgs, or something like it. If we tried to build the Standard Model without it, then something very nefarious happens. To see what happens, we’ll appeal to some Feynman diagrams, which you may want to review if you’re rusty.

Suppose you wanted to study the scattering of two W bosons off of one another. In the Standard Model you would draw the following diagrams:

There are other diagrams, but these two will be sufficient for our purposes. You can draw the rest of the diagrams for homework; there should be three more that have at most one virtual particle. In the first diagram, the two W bosons annihilate into a virtual Z boson or a photon (γ) which subsequently decays back into two W bosons. In the second diagram it’s the same story, only now the W bosons annihilate into a virtual Higgs particle.

Recall that these diagrams are shorthand for mathematical expressions for the probability that the W bosons scatter off of one another. If you always include the sum of the virtual Z/photon diagrams with the virtual Higgs diagram, then everything is well behaved. On the other hand, if you ignored the Higgs and only included the Z/photon diagram, then the mathematical expressions misbehave.

By this I mean that the probability keeps growing and growing with energy like the monsters that fight the Power Rangers. If you smash the two W bosons together at higher and higher energies, the number associated with this diagram gets bigger and bigger. If these numbers get too big, then it would seem that probability isn’t conserved—we’d get probabilities larger than 100%, a mathematical inconsistency. That’s a problem that not even the Power Rangers could handle.
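Schematically, and with all order-one factors suppressed, the standard unitarity argument behind the last few paragraphs looks like this (a rough sketch, not a precise formula):

% Without the Higgs, the amplitude for longitudinally polarized W scattering
% grows with the collision energy E:
\[
  \mathcal{A}\bigl(W_{L} W_{L} \to W_{L} W_{L}\bigr)_{\text{no Higgs}}
  \;\sim\; \frac{E^{2}}{v^{2}}, \qquad v \simeq 246~\text{GeV},
\]
% so the "probability" keeps climbing and perturbation theory gives out once E
% reaches roughly the TeV scale. Adding the Higgs-exchange diagram contributes
% a piece that cancels the E^{2} growth, leaving something of order
\[
  \mathcal{A}_{\text{with Higgs}} \;\sim\; \frac{m_{h}^{2}}{v^{2}},
\]
% which stays bounded as long as the Higgs is not too heavy (roughly below a TeV).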

Mathematics doesn’t actually break down in this scenario—what really happens in our “no Higgs” theory is something more subtle but also disturbing: the theory becomes non-perturbative (or “strongly coupled”). In other words, the theory enters a regime where Feynman diagrams fail. The simple diagram above no longer accurately represents the W scattering process because of large corrections from additional diagrams which are more “quantum,” i.e. they have more unobserved internal virtual particles. For example:

In addition to this diagram we would also have even more involved diagrams with even more virtual particles which also give big corrections:

And so forth until you have more diagrams than you can calculate in a lifetime (even with a computer!). Usually these “very quantum” diagrams are negligible compared to the simpler diagrams, but in the non-perturbative regime each successive diagram is almost as important as the previous. Our usual tools fail us. Our “no Higgs theory” avoids mathematical inconsistency, but at the steep cost of losing predictivity.

Now let me be totally clear: there’s nothing “wrong” with this scenario… nature may very well have chosen this path. In fact, we know at least one example where it has: the theory of quarks and gluons (QCD) at low energies is non-perturbative. But this is just telling us that the “particles” that we see at those energies aren’t quarks and gluons, since they’re too tightly bound together: the relevant particles at those energies are mesons and baryons (e.g. pions and protons). Even though QCD—a theory of quarks and gluons—breaks down as a calculational tool, nature allowed us to describe physics in terms of perfectly well behaved (perturbative) “bound state” objects like mesons in an effective theory of QCD. The old adage is true: when nature closes a door, it opens a window.

So if we took our “no Higgs” theory seriously, we’d be in an uncomfortable situation. The theory at high energies would become “strongly coupled” and non-perturbative just like QCD at low energies. It turns out that for W boson scattering, this happens at around the TeV scale, which means that we should be seeing hints of the substructure of the Standard Model electroweak gauge bosons—which we do not. (Incidentally, the signatures of such a scenario would likely involve something that behaves somewhat like the Standard Model Higgs.)

On the other hand, if we had the Higgs and we proposed the “electroweak symmetry breaking” story above, then this is never a problem. The probability for W boson scattering doesn’t grow uncontrollably and the theory remains well behaved and perturbative.

Goldstone Liberation at High Energies

The way that the Higgs mechanism saves us is somewhat technical and falls under the name of the Goldstone Boson Equivalence Theorem. The main point is that our massive gauge bosons—the ones which misbehave if there were no Higgs—are actually a pair of particles: a massless gauge boson and a massless Higgs/Goldstone particle which was “eaten” so that the combined particle is massive. One cute way of showing this is to show the W boson eating Gold[stone]fish:

Indeed, at low energies the combined “massless W plus Goldstone” particle behaves just like a massive W. A good question right now is “low compared to what?” The answer is the Higgs vacuum expectation value (vev), i.e. the energy scale at which electroweak symmetry is broken.

However, at very high energies compared to the Higgs vev, we should expect these two particles to behave independently again. This is a very intuitive statement: it would be very disruptive if your cell phone rang at a “low energy” classical music concert; people would notice and shake their heads at you disapprovingly. However, at a “high energy” heavy metal concert, nobody would even hear your cell phone ring.

Thus at high energies, the “massless W plus Goldstone” system really behaves like two different particles. In a sense, the Goldstone is being liberated from the massive gauge boson:

Now it turns out that the massless W is perfectly well behaved at high energies. Further, the set of all four Higgses together (the three Goldstones that were eaten and the Higgs) is also perfectly well behaved. However, if you separate the four Higgses, then each individual piece behaves poorly. This is fine, since the four Higgses come as a package deal when we write our theory.

What electroweak symmetry breaking really does is that it mixes up these Higgses with the massless gauge bosons. Since this is just a reshuffling of the same particles into different combinations, the entire combined theory is still well behaved. This good behavior, though, hinges on the fact that even though we’ve separated the four Higgses, all four of them are still in the theory.

This is why the Higgs (the one we’re looking for) is so important: the good behavior of the Standard Model depends on it. In fact, it turns out that any well behaved theory with massive gauge bosons must have come from some kind of Higgs-like mechanism. In jargon, we say that the Higgs unitarizes longitudinal gauge boson scattering.

For advanced readers: What’s happening here is that the theory of a complex scalar Higgs doublet is perfectly well behaved. However, when we write the theory nonlinearly (e.g. chiral perturbation theory, nonlinear sigma model) to incorporate electroweak symmetry breaking, we say something like: H(x) = (v+h(x)) exp (i π(x)/v). The π’s are the Goldstone bosons. If we ignore the Higgs, h, we’re doing gross violence to the well behaved complex scalar doublet. Further, we’re left with a non-renormalizable theory with dimensionful couplings that have powers of 1/v all over the place. Just by dimensional analysis, you can see that scattering cross sections for these Goldstones (i.e. the longitudinal modes of the gauge bosons) must scale like a positive power of the energy. In this sense, the problem of “unitarizing W boson scattering” is really the same as UV completing a non-renormalizable effective theory. [I thank Javi S. for filling in this gap in my education.]
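As a sketch of that dimensional-analysis argument (schematic, following the usual chiral-Lagrangian expansion rather than any particular paper):

% Dropping h(x) and expanding the exponentials in the non-abelian (doublet)
% case produces interaction terms suppressed by powers of 1/v, of the form
\[
  \mathcal{L} \;\supset\; \frac{1}{v^{2}}\,\pi^{2}\,(\partial\pi)^{2} + \dots
\]
% Since 1/v^{2} has mass dimension -2, dimensional analysis forces the
% Goldstone (i.e. longitudinal W) scattering amplitude to grow like
\[
  \mathcal{A}(\pi\pi \to \pi\pi) \;\sim\; \frac{E^{2}}{v^{2}},
\]
% which is exactly the bad high-energy behaviour that the full, linear theory
% (with h included) cures.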

Caveat: Higgs versus Higgs-like

I want to make one important caveat: all that I’ve argued here is that we need something to play the role of the Higgs in order to “restore” the “four well behaved Higgses.” While the Standard Model gives a simple candidate for this, there are other theories beyond the Standard Model that give alternate candidates. For example, the Higgs itself might be a “meson” formed out of some strongly coupled new physics. There are even “Higgsless” theories in which this “unitarization” occurs due to the exchange of new gauge bosons. But the point is that there needs to be something that plays the role of the Higgs in the above story.


By Nathalie Aubin and Sylvie Massiot, artists of the Nukku Matti company

Zeolites, the peak in the spectrum, dissecting the gonads, nematodes, anaerobic, enzymatic, the shaking incubator, interaction, “I have beam time”, the “pouième”, double beta decay to excited states, quark soup, the magicity of the nucleus, TeV, keV… Imaginary words? No, the very specific vocabulary of scientists: their “jargon”, as it is called. Because these words amuse us, because the phenomena they describe fascinate us, and because they quite simply inspire us, we have just plunged into the universe of the infinitely small to create a show about the structure of matter and elementary particles. We are just finishing the second phase: data taking…

The performers sing a song in front of a physics instrument at the CENBG. Photo: Audiovisual service of Bordeaux 1

To do this, we immersed ourselves for five days in the world of fundamental research and particle physics. Our experiment took place, to be precise, at the Centre d’Etudes Nucléaires de Bordeaux Gradignan (CENBG). We spent an exceptional week there and discovered an extraordinary universe… Christine Marquet, a researcher at the CENBG, opened the doors of a world that until then had been invisible to our eyes. Here, researchers try to unravel mysteries through reflection, collaboration, the sharing of knowledge, and the invention and construction of instruments that look quite strange to the newcomer. All of these professionals made themselves available to us, sparing neither their time nor their energy to share their knowledge and their open questions.

Researchers, engineers and technicians told us about exotic nuclei, mechanics, electronics, hot chemistry, astrophysics, biology, computing and particles, but also about the place of research in our society, the importance of international collaboration, and how the demand for profitability is incompatible with the very principle of fundamental research. We collected a lot of data that we will now have to analyse and sort, but as Stéphane, a physicist at the CENBG, puts it: “the result is not always where you expect it.”

In any case, this week of immersion has confirmed our desire to pass on to as many people as possible the enthusiasm in which we were immersed. Our dearest wish is to convey in this show the same passion, the same curiosity and the same desire to share that the researchers showed us.


Video of the “data taking” (production: Audiovisual service of the Université Bordeaux 1)

Provisionally entitled “Parce que 12”, this new show will tour this autumn. The project is supported by: IDDAC, the CENBG, CNRS/IN2P3, the Université Bordeaux 1, the Communauté de Communes du Vallon de l’Artolie and the town of Villenave de Rions. To follow the project’s progress, visit the “Création 2012” section of our website!


Mastering complexity

Sunday, February 12th, 2012

I have just returned from the annual meeting of the World Economic Forum in Davos. During those few days, I worked to get across the message that science should occupy a far more important place on the political and economic agenda than it currently does. This is only the second time I have taken part in the Forum, but I have the impression that the message is starting to be heard. This year, I stressed how important it is to establish closer links between the scientific questions raised during the meeting and the political discussions, and I will keep promoting this idea in the run-up to the Forum’s next meeting.

Science is a complex subject. That is just the way it is. But it is essential that everyone engage with it constructively. This is particularly true for the politicians and business leaders present at Davos, whose decisions on matters involving science can influence many things, from the well-being of our children to the future of the planet. It is fundamental that these decisions be taken in an informed and rational manner.

The challenge for science is that we live in a world where one is expected to know Shakespeare, Molière or Goethe, but where one can admit without shame to knowing nothing about Faraday, Pasteur or Einstein. It has not always been this way, and things could be different. Today the trend is towards indifference, even hostility, towards science. That is a dangerous trend for everyone, and it is the duty of the scientific community to address it.

Not so long ago, science was an integral part of society. It made the front pages of the newspapers and was talked about as much as football matches. At the beginning of the twentieth century, Einstein’s discoveries were illustrated by newspaper cartoons, and in the 1960s science filled the popular imagination, largely thanks to NASA’s Apollo programme. But even then the gap between science and society was widening, and that trend has only grown since, leaving society ill-prepared to take decisions grounded in science.

Climate change and energy are the two great challenges facing society today. Both are extremely complex scientific and political questions. The climate is changing. There is no doubt about that, just as there is no doubt that human activity has something to do with it. And yet, in the public sphere, the question remains a matter of debate. In the same way, one can only note that renewable energies are not currently sufficient to satisfy the ever-growing needs of the planet. That does not mean they have no place. Quite the contrary, and that place will grow over the years. But it will take time before they can meet the demand. Is society equipped to take the difficult decisions that are required on questions of planetary importance such as these? I do not think so.

At the individual level, a great many subjects leave citizens perplexed, which leads them to take decisions while poorly informed; decisions that are literally of vital importance: mad cow disease, the MMR vaccine scare, the safety of mobile phones, to name just a few examples.

We have of course experienced this phenomenon at CERN as well. When the LHC started up in 2008, the world was afraid of the black hole. A handful of individuals claimed that our flagship accelerator would create a black hole that would swallow the planet. The idea spread across social networks and was also widely picked up by the traditional media, many of which took the easy route, setting aside the journalistic code of ethics and preferring to exploit the absurdity of the scenario. Unfortunately, science has neglected society for too long, and many people were unable to see how laughable it all was. It was even reported that some schools closed on the day the machine was switched on so that children could be with their parents, just in case. And all this on the testimony of one man who, interviewed on television, explained that since the LHC might destroy the Universe, or might not, the probability of witnessing a disaster was one in two. One could laugh about it, were it not so serious.

What can scientists do? In my view, a great deal. At the institutional level, changes are under way. At the brand-new Blavatnik School of Government at the University of Oxford, for example, science is an integral part of the public policy curriculum. We must use exciting scientific projects like the LHC to get people interested in science, not only through scientific articles but also via new channels, such as the artist-in-residence programme that has just been launched at CERN. And scientists who have influence must use that influence to shape the political debate in the world’s capitals and in places like Davos.

For several years now, CERN has favoured openness, taking advantage of the spotlight on the LHC to engage more with a wider audience (decision-makers, the local population, the general public). As a result, our activities are covered responsibly and are once again making media headlines and being followed by the general public. Sometimes the facts are not reported exactly as we would like, but science is being talked about, and that is the essential thing.

When the LHC started up, the world went on existing, and at least one newspaper did not hesitate to say that the LHC would be the new Apollo and would lead a whole generation to take an interest in science. Of course, that is not to be taken literally, but that kind of comment has a positive effect. More recently, another newspaper wrote that physics has that little something extra, that elusive quality that puts it in tune with the times.

Science as a whole must build on this and make sure that interest in the LHC is not just a media flash in the pan, and that the dialogue with the general public continues. As scientists, we owe it to the planet. We must help people master the complexity of their daily lives, which depend on scientific questions. In twelve months’ time, that is the message I will be taking to Davos.

Rolf Heuer


Ah yes, peer review; one of the more misunderstood parts of the scientific method. Peer review is frequently treated as an incantation to separate the wheat from the chaff. What has been peer reviewed is good; what hasn’t is bad. But life is never so simple. In the late 1960s, Joseph Weber (1919 – 2000) published two Physical Review Letters in which he claimed to have detected gravitational waves. Although there are a few holdouts who believe he did, the general consensus is that he did not, since his results have not been reproduced. Rather, it is generally believed that his results were an experimental artifact. His results were peer reviewed and accepted at a “prestigious” journal, but that does not guarantee that they are correct. Even the Nobel committee occasionally makes mistakes, most notably giving the award to the inventor of the lobotomy.

Conversely, consider the case of Alfred Wegener (1880 – 1930). In 1912 he proposed the idea of continental drift. To say the least, it was not enthusiastically received. It did not help that Wegener was a meteorologist, not a geologist. The theory was largely rejected by his peers in geology. For example, the University of Chicago geologist Rollin T. Chamberlin said, “If we are to believe in Wegener’s hypothesis we must forget everything which has been learned in the past 70 years and start all over again.” In 1926, the American Association of Petroleum Geologists (AAPG) held a special symposium on the hypothesis of continental drift and rejected it. After that, the hypothesis remained strictly on the fringe until the late 1950s and early ’60s, when it finally became mainstream.

Thus, we see that peer review cannot definitively be relied on to give the final answer. So what use is peer review? The problem is that, as pointed out in previous posts, in science there is no one person who can serve as the ultimate authority; rather, observation is. As a school student, the teacher knows more than the student and can be considered the final authority. In university, the professor plays that role, sometimes with gusto. But when it comes to research, frequently it is the researcher him/herself who is the world expert. So how can research be judged and how do we make decisions about that research? And decisions do have to be made. We cannot publish everything—the useful results would get lost in the noise. We must maintain the collective wisdom that has been laboriously developed. Similarly, decisions have to be made on who gets research grants. Do we use a random number generator? Ok, no snide remarks, I admit that it does occasionally look like we do.  As there is no single human to serve as the final authority, we turn to the people who know the most about the topic, namely the peers of the person. If we want a decision related to sheep farming, we consult sheep farmers; if about nuclear physics, we consult nuclear physicists. Peer review is simply the idea that when we have to make a decision, we consult those people most likely to be able to make an informed decision. Is it perfect? No. Is there a better process? Perhaps, but no one seems to know what it is.

Peer review is also used as a bulwark against bull…, oops, material, that is of questionable validity. The expression “that has not been peer reviewed” is used as a euphemism for “that is complete and utter crap and I am not going to waste my time dealing with it.” In this case it tends to come across as closed-minded: Not peer reviewed? It’s nonsense! Needless to say, cranks take great exception and tend to regard peer review as a new priesthood that stifles innovation. And indeed, as noted above, sometimes peer review does get it wrong. There is always this tension between accepting nonsense and rejecting the next big thing. As the case of continental drift illustrates, it is sometimes only in retrospect, when we have more data, that we can tell what the correct answer is. However, it is better to reject or delay the acceptance of something that has a good chance of being wrong than to have the literature overrun with wrong results (think lobotomies). However, contrary to popular conception, Copernicus and Wegener are the exception, not the rule. That is why Copernicus is still used as the example of the suppression of ideas half a millennium later—there are just not that many good examples. And I might add that both Copernicus and Wegener were initially rejected for good reasons and were accepted once sufficient supporting data came to light. Most people whom the peer review process deems to be cranks are indeed cranks. Never heard of Immanuel Velikovsky (1895 – 1979)? Well, there is a reason. The few who were right are remembered, but the multitudes who were wrong are, like Velikovsky, forgotten.

Peer review is one of the cornerstones of science and is an essential part of its error-control process. At every level in science we use peers to check for errors. Within well-run collaborations, results are reviewed by peers within the collaboration before being submitted for publication. I will get my peers to read my papers before submission. Even the editing of these posts before they are put online can be considered peer review. Then there is the formal peer review a paper receives when it is submitted to a journal. In many ways this is the least important peer review, because it is after a paper is published that it receives its most vigorous peer review. I can be quite sure there is no fundamental flaw in special relativity, not because Einstein was a genius, not because it was published in a prestigious journal, but because after it was published many very clever people tried very hard to find flaws in it and failed. Any widely read scientific paper will be subject to this thorough scrutiny by the author’s peers. That is the reason we can have confidence in the results of science and why secrecy is the enemy of scientific progress. Given enough eyeballs, all bugs are shallow[1].

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.

 


Physicists Eat!

Friday, February 10th, 2012

CERN is a pretty interesting place to work, probably more so than other physics laboratories around the world, due to its highly international nature. Here is a nice graphic of the nationalities of all CERN users:


In no place is the international nature of the laboratory more evident than in the main cafeteria on site. While most of the conversations are in English, you can usually hear bits of conversation in other languages. I personally like to play the ‘guess what language that table is speaking’ game, though it’s a little frustrating as I can’t just go over and ask to check if I have it right or not.

Whatever the language the conversation is in, you can be sure that the most discussed topic is physics. In fact, a lot of important discussions occur over a drink or a bite to eat. It’s just easier to discuss issues in an informal setting with fewer people than in a more formal video conference.

Probably due to this fact, I think there is a slight fascination with the cafeteria from the media. Every couple of weeks there is usually a film crew in there, filming people eating and talking for whatever feature they are producing.

USLHC has decided to join in on the cafeteria action, having intern Amy Dusto set up LHC Lunch, a series of articles and videos sourced from lunch time interviews with members of the LHC experiments working for US institutes.

Why do I bring all of this up? Well, I was one of the physicists whom she interviewed, and my article and video have just been published. Check it out here. Enjoy!


In the thirteenth century, Western Europe rediscovered the teachings of ancient Greece. Two friars played a lead role in this: the Dominican Saint Thomas Aquinas (1225 – 1274) and the Franciscan Roger Bacon (1214/1220 –1292).  Aquinas combined the teaching of Aristotle with Christianity. His teachings became the orthodoxy in both Christianity and natural philosophy until the scientific revolution in the seventeenth century. Aquinas took Aristotle as an authority and, in turn, was taken as an authority by those who followed him. To some extent this has continued down to the present day, at least in the Catholic Church. The scientific revolution was, to a large extent, the overturning of Aristotelian philosophy as repackaged by Aquinas.

Bacon took a different track and extracted something different from the study of Aristotle. This something different was an early version of the scientific method. He applied mathematics to describe observations and advocated using observation to test models. Bacon described a repeating cycle of observation, hypothesis, experimentation, and the need for independent verification.  Bacon was largely ignored and, unlike Aquinas, was not declared a saint. Galileo Galilei (1564 – 1642), if not directly influenced by Bacon, was in many ways following his tradition, both in his use of mathematics and in stressing the importance of observations. The difference between Aquinas and Bacon is the contrast between the appeal to authority and the finding out for oneself. In this contest, the appeal to authority lost rather decisively, but it was a long tough fight. People generally prefer a given answer, even if it is wrong, to the tough process of extracting the correct answer.

In spite of all that, appeal to authority is frequently necessary. The legal system in most democracies, for example, is based on the idea of appeal to authority. The parliament may make the laws, but it is the courts that decide what they mean. Frequently, the courts even have the authority to override laws based on the constitution. This is true in many countries, but most famously in the United States of America. In these countries, what the Supreme Court says is the law. What a law actually means is commonly a matter of interpretation, as evidenced by split decisions where one judge holds one opinion and another judge the opposite. Perhaps the interpretations are even arbitrary, as they sometimes change over time despite the authority given to precedent. But a decision is required and there are no objective criteria, so the majority rules.

Now, it is worth commenting that the laws of nature and the laws of man are completely different beasts, and it is unfortunate that they are given the same name. The so-called laws of nature are descriptive. They describe regularities that have been observed in nature. They have no prescriptive value. In contrast, the laws of man are prescriptive, not descriptive. Certainly, the laws against smoking marijuana are not descriptive in British Columbia, and neither were the laws against drinking during US prohibition. The laws describe what the government thinks should happen, with prescribed punishments for those who disobey. However, there is no penalty for breaking the law of gravity because, as far as we know, it can’t be done. If someone actually did it, it would cease to be a law and there would be a Nobel Prize, not a penalty. Like the laws of man, the laws of God—for example, the Ten Commandments—are prescriptive, not descriptive, with penalties given for breaking them. You can break the laws of man and the laws of God, but not the laws of physics.

In science, things are different than in the courts of law. In the latter, we are concerned with the meaning of a law that some group of people have written. This, by its very nature, has a subjective component. In science, we are trying to discover regularities in how the universe operates. In this, we have two objective criteria: parsimony and the ability to make correct predictions for observations. As pointed out in the previous post, idolizing a person is a mistake, even if that person is Isaac Newton. Appeal to observation trumps appeal to a human authority, but in the short term, even in science, appeal to a human authority is often necessary. Life is too short and the amount to know too large to discover it all for oneself. Thus, one relies on authorities. I consult the literature rather than trying to do experiments myself. We consult other people for expertise that we do not have ourselves. We rely on the collective wisdom of the community as reflected in the published literature. When we require decisions, we must rely on the proximate authority of peers in a process called peer review. This process is relied on to maintain the collective wisdom and will be discussed in more detail in the next post. In the meantime, we conclude this post by paraphrasing William Lyon Mackenzie King[1] (1874 – 1950): appeal to authority if necessary, but not necessarily appeal to authority.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.


[1] The longest serving Canadian Prime Minister.


As a young student, I was taught that mathematics is the language of physics. While largely true, one also cannot communicate in CMS at the CERN LHC without learning a plethora of acronyms. When we wrote the CMS Trigger and Data Acquisition System (TriDAS) Technical Design Report (TDR) in 2000, we included an appendix that contained a dictionary of 203 acronyms from ADC to ZCD, quite necessary to digest the document. In the years since, the list of acronyms has grown exponentially. We even have nested acronyms: LPC, for example, stands for LHC Physics Center. In a talk many years ago, one of my distinguished collaborators flashed a clever new creation and quipped, “I believe this is the first use of a triply-nested acronym in CMS.” I do not know whether we have since reached quads or quints. Somehow it would not surprise me.

One of the latest creations is YETS: Year End Technical Stop, referring to the period between the end of the heavy ion run on 7 December 2011 and the restart of LHC operations due to begin next week with hardware commissioning, leading ultimately to pp collisions in April. So what do physicists do during the YETS? A lot, as it turns out!

One of the major activities is working out how to cope with the projected instantaneous luminosity of 7 x 10^33 cm^-2 s^-1. This luminosity will likely come with a 50 nanosecond beam structure (the time between collisions), as was used in 2011. This means that the average number of pp interactions per triggered readout will be about 35: the one you tried to select with the trigger, plus many more piled on top of it. This affects trigger rates and thresholds, background conditions, and the algorithms used in the physics analysis. In addition, we shall likely run at 8 TeV total energy (compared to 7 TeV last year). These new expected conditions are being simulated, a process requiring a huge amount of physicist manpower and computing resources. The results are carefully scrutinized in collaboration-wide meetings. That is the “glory” activity.
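As a sanity check of that number, here is a back-of-the-envelope estimate in Python; the bunch count, revolution frequency and inelastic cross section below are typical values assumed for 50 ns running, not official machine parameters.

# Back-of-the-envelope pileup estimate for 50 ns running (assumed parameters).
lumi = 7e33            # cm^-2 s^-1, projected instantaneous luminosity
sigma_inel = 70e-27    # cm^2, roughly 70 mb inelastic pp cross section (assumed)
n_bunches = 1380       # colliding bunch pairs typical of 50 ns running (assumed)
f_rev = 11245.0        # Hz, LHC revolution frequency

crossing_rate = n_bunches * f_rev        # bunch crossings per second
interaction_rate = lumi * sigma_inel     # pp interactions per second
pileup = interaction_rate / crossing_rate
print(f"average interactions per crossing: {pileup:.0f}")   # roughly 30-35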

Besides the glory work, there is also a huge amount of technical service work, both hardware and software. At CMS in Point 5 (P5) we have observed beam-induced pressure spikes (rise and fall) in the vacuum. The pumping required for recovery is using up the supply of non-evaporable getter (NEG) needed to achieve ultrahigh vacuum (UHV). The UHV in turn is needed to ensure that the beams do not abort, which nearly happened last year. A huge effort was launched to radiograph the region in question to see if the same problem of drooping radio frequency (RF) fingers is present as has been observed in other sectors. An electrical discharge from the RF fingers can possibly cause the UHV spikes. Also at P5, work will be done on the zero degree calorimeter (ZDC), the Centauro And Strange Object Research (CASTOR) detector (not to be confused with the CERN Advanced Storage Manager), the cathode strip chamber (CSC), the resistive plate chamber (RPC) and the drift tube (DT) muon detectors, which are accessible without opening the yoke of CMS. In addition, there is maintenance of the water cooling and rewiring of the magnet circuit breaker.

Each of the CMS subsystems has work to do, as evidenced by a recent trip into the P5 pit. The detailed activities of the pixel (PX), silicon tracker (TK), electromagnetic calorimeter (ECAL), and muon (MU) subdetectors are beyond the scope of this blog. I can give you some idea of what is going on with the hadron calorimeter (HCAL), where a bit of the details are fresh in my mind.

The HCAL activities are quite intense. Detector channel-by-channel gains, the numbers that are needed to convert electrical signals into absolute energy units, can vary with time for a variety of reasons (e.g. radiation damage) and need periodic updating. This information has to go into the look-up tables (LUTs) that are used by the electronics to provide TPGs (trigger primitive generation), which are in turn used by the level-1 hardware trigger to select events. If these numbers in the LUTs are slightly off, then the energy threshold that we think we are selecting is off target, which is very bad because trigger rates vary roughly exponentially with energy.
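As a toy illustration of why those constants matter so much (the numbers below are invented for illustration, not real HCAL calibration values): if the rate above a threshold falls roughly exponentially with energy, a few-percent gain error shifts the effective threshold and moves the rate by a much larger fraction.

import math

E0 = 10.0          # GeV, invented slope of the falling energy spectrum
threshold = 60.0   # GeV, intended trigger threshold (invented)

def rate(E_thr, r0=1.0e5):
    """Toy rate (Hz) of deposits above an energy threshold, falling as exp(-E/E0)."""
    return r0 * math.exp(-E_thr / E0)

# A 5% gain error makes the trigger effectively cut at 60/1.05 or 60/0.95 GeV.
for gain_error in (0.95, 1.00, 1.05):
    eff_thr = threshold / gain_error
    print(f"gain x{gain_error:.2f}: effective threshold {eff_thr:5.1f} GeV, "
          f"rate {rate(eff_thr):7.1f} Hz")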

The HCAL uses 32 optical S-LINKs (where the S stands for simple, although I don’t remember anything simple about getting it to work) to send the data to DAQ computers. My group at Boston designed and built the front end driver (FED) electronics that collects and transmits the data on these links. The data transmission involves a complex buffering and feedback system so that the data flow can be throttled without crashing in case something goes wrong. The data flow reached its design value of 2 kBytes per link per event at the end of 2011, so we are going to reduce the payload by eliminating some redundant data bits which were previously useful for commissioning the detector but are no longer needed. This will allow us to comfortably handle the expected increase in event size due to increased pileup. Also, 4 of our boards developed dynamic random access memory (DRAM) problems after a sudden power failure, which took up two days of my time at CERN to inventory spares, isolate the affected DRAMs, and arrange for repairs.
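For a sense of scale, a quick estimate of the HCAL readout bandwidth; the level-1 accept rate used here is the nominal CMS design value of 100 kHz, taken as an assumption rather than from this particular run.

# Rough HCAL readout bandwidth at the design payload (illustrative numbers).
n_links = 32            # optical S-LINKs from the HCAL FEDs
payload_per_link = 2e3  # bytes per link per event (design value quoted above)
l1_rate = 100e3         # Hz, nominal CMS level-1 accept rate (assumed)

bytes_per_event = n_links * payload_per_link
bandwidth = bytes_per_event * l1_rate
print(f"{bytes_per_event / 1e3:.0f} kB per event, "
      f"{bandwidth / 1e9:.1f} GB/s from HCAL alone")
# -> 64 kB per event, 6.4 GB/s, which is why trimming redundant bits matters
#    once pileup starts inflating every event.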

The HCAL computers at P5 are running 32-bit Scientific Linux CERN (SLC4, another nested acronym). While we have enjoyed the stability of this release over a number of years, it will no longer be supported by CERN after February 2012. These computers are being upgraded (as I write this!) to 64-bit SLC5.

The HF calorimeters will have their photomultiplier tubes (PMTs) replaced in the LS1. We would like to do measurements with a few new PMTs in order to study performance stability and aging in the colliding beam environment. This activity requires building and testing new high-voltage (HV) distribution printed circuit boards (PCBs). The HV PCBs require testing and installation in the current HF read out boxes (ROBOXs) while there is still access to the detector.

Our group at Boston is also involved with designing electronics needed for the HCAL upgrade, the first part of which will take place in the first long shutdown (LS1). The new electronics is based on micro telecommunications computing architecture (uTCA). In Boston we have built a uTCA advanced mezzanine card for the unique slot number 13 (AMC13). This card will distribute the LHC clock signals needed for trigger timing and control (TTC) as well as serve as the FED. We plan to test these cards during the 2012 run. To prepare for these tests we have installed an AMC13 card in the central DAQ (cDAQ) lab, which can transmit data on optical fibers to a multi optical link (MOL) card, which exists in the form of a personal computer interface (PCI) card that can be readily attached to a computer. In addition, to be able to perform the readout tests with the new electronics without interrupting the physics data flow, we have installed optical splitters on the HCAL front end digital signals for a portion of the detector, parts of the HCAL barrel (HB), HCAL end cap (HE), and HCAL forward (HF), so that one path can be used for physics data and the other path for uTCA tests.

I can assure you that the activities in parts of CMS are (almost) as intense as during physics runs. There has been a lot to do!

I once met a secretary in California, the land of innovative thinkers, who had been exposed to physics through typing exams and could not understand why students thought physics was so hard. She thought each letter always stood for the same thing, and once you learned them you were pretty much set. I am not sure she believed me when I told her there weren’t enough letters to go around. Same thing with acronyms. A quick search for CMS will include: Center for Medicare & Medicaid Services (a nested acronym), Content Management System, Chicago Manual of Style, Chronic Mountain Sickness, Central Middle School, City Montessori School, Charlotte Motor Speedway, Comparative Media Studies, Central Management Services, Convention on Migratory Species, Correctional Medical Services, College Music Society, Colorado Medical Society, Cytoplasmic Male Sterility, Certified Master Safecracker, Cryptographic Message Syntax, Code Morphing Software, Council for the Mathematical Sciences, Court of Master Sommeliers, and my own favorite, a neighborhood landscaper called Chris Mark & Sons, and I am the proud owner of one of their shirts.

And for those against acronym abuse, you can buy an AAAAA T-shirt (maybe I will too):

Thanks to Kathryn Grim for suggesting a blog about what goes on at an LHC experiment during shutdown.

 


Can the LHC Run Too Well?

Friday, February 3rd, 2012

For CMS data analysis, winter is a time of multitasking. On the one hand, we are rushing to finish our analyses for the winter conferences in February and March, or to finalize the papers on analyses we presented in December. On the other, we are working to prepare to take data in 2012. Although the final decisions about the LHC running conditions for 2012 haven’t been made yet, we have to be prepared both for an increase in beam energy and an increase in luminosity. For example, the energy might go to 8 TeV center-of-mass, up from last year’s 7. That will make all our events a little more exciting. But it’s the luminosity that determines how many events we get, and thus how much physics we can do in a year. For example, if the Higgs boson exists, the number of Higgs-like events we’ll see will go up, and so will the statistical power with which we can claim to have observed it. If the hints we saw at 125 GeV in December are right, our ability to be sure of its existence this year depends on collecting several times more events in 2012 than we got in 2011.

We’d get many more events in 2012 even if the LHC simply kept running the way it already was at the end of the year. That’s because for most of last year, the luminosity was increasing again and again as the LHC folks added more proton bunches and focused them better. But we expect that the LHC will do better, starting close to last year’s peak and then pushing to ever-higher luminosities. The worst case we are preparing for is perhaps twice as much luminosity as we had at the end of last year.

But wait, why did I say “worst-case”?

Well, actually, it will give us the most interesting events we can get and the best shot at officially finding the Higgs this year. But increased luminosity also gives more events in every bunch crossing, most of which are boring, and most of which get in the way. This makes it a real challenge to prepare for 2012 if you’re working on the trigger, because you have to sift quickly through events with more and more extra stuff (called “pileup”). As it happens, that’s exactly what I’m working on.

Let me explain a bit more of the challenge. One of the triggers I’m becoming responsible for is trying to find collisions containing a Higgs decaying to a bottom quark and anti-bottom quark and a W boson decaying to an electron and neutrino. If we just look for an electron — the easiest thing to trigger on — then we get too many events. The easy choice is to ask only for higher-energy electrons, but beyond a certain point we start missing the events we’re looking for! So instead, we ask for the other things in the event: the two jets from the Higgs, and the missing energy from the invisible neutrino. But now, with more and more extra collisions, we have random jets added in, and random fluctuations that contribute to the missing energy. We are more and more likely to get the extra jets and missing energy we ask for even though there isn’t much missing energy or a “Higgs-like” pair of jets in the core event! As a result, the event rate for the trigger we want can become too high.
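To make the pileup problem concrete, here is a deliberately oversimplified toy in Python (invented thresholds and event model, not the real CMS trigger code): the same electron + jets + missing-energy selection passes far more often once extra collisions sprinkle in additional jets and smear the missing energy.

import random

def toy_event(n_pileup):
    """Fake event: one hard scatter plus n_pileup soft collisions (toy model)."""
    electron_pt = random.expovariate(1 / 20.0)                      # GeV, steeply falling
    jets = [random.expovariate(1 / 15.0) for _ in range(2)]         # jets from the hard scatter
    jets += [random.expovariate(1 / 8.0) for _ in range(n_pileup)]  # one soft jet per pileup collision
    met = abs(random.gauss(0, 5.0 + 2.0 * n_pileup))                # resolution worsens with pileup
    return electron_pt, jets, met

def passes_trigger(electron_pt, jets, met):
    """Toy 'electron + 2 jets + missing energy' selection with invented thresholds."""
    return (electron_pt > 25.0
            and sum(pt > 30.0 for pt in jets) >= 2
            and met > 25.0)

for pileup in (5, 20, 35):
    n_pass = sum(passes_trigger(*toy_event(pileup)) for _ in range(100_000))
    print(f"pileup {pileup:2d}: fraction of events passing = {n_pass / 100_000:.4f}")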

How do we deal with this? Well, there are a few choices:

1. Increase the amount of momentum required for the electron (again!)
2. Increase the amount of missing energy required
3. Increase the minimum energy of the jets being required
4. Get smarter about how you count jets, by trying to be sure that they come from the main collision rather than one of the extras
5. Check specifically if the jets come from bottom quarks
6. Find some way to allocate more bandwidth to the trigger

There’s a cost for every option. Increasing energies means we lose some events we might have wanted to collect — which means that even though the LHC has produced more Higgs bosons, it’s counterbalanced by us seeing fewer of the ones that were there. Being “smarter” about the jets means more time spent by our trigger processing software on this trigger, when it has lots of other things to look at. Asking for bottom quarks not only takes more processing, it also means the trigger can’t be shared with as many other analyses. And allocating more bandwidth means we’d have to delay processing or cut elsewhere.

And for all the options, there’s simply more work. But we have to deal with the potential for extra collisions as well as we can. In the end, the LHC collecting much more data is really the best-case scenario.
