Archive for August, 2012

What Goes on My Research Page?

Thursday, August 30th, 2012

It is time, it seems, for me to put up my first real departmental research page. This is a place to put up a picture, describe my research interests, and maybe link to some papers. It shouldn’t really be too difficult to write something up, as I seem to have acquired a disturbing amount of practice in rambling about my research and putting up web pages about myself. But looking at others’ research pages has left me with a nagging question: what, really, are my research interests?

“CMS and ATLAS are two of a kind: they’re both looking for whatever new particles they can find.” — Kate McAlpine, Large Hadron Rap

In most fields, I would talk about a very specific set of problems I was interested in, and say what sort of experiments I was doing to figure things out. But the big detectors at the LHC try to look for everything, and I work on them because I’m interested in finding anything new that’s there. Am I especially interested in electroweak symmetry breaking because I work on the Higgs boson? Am I a precision tracking enthusiast because I’ve worked on pixel detectors? Well, yes, to some degree both those things are true — but the fundamental motivation for my research is to contribute to the overall program of understanding what the universe is made of, by whatever means my skills and the available opportunities allow.

Still, I suppose I had better be a bit more specific. Anyone have any suggestions?


Testing theory…

Sunday, August 26th, 2012

Since I discussed a BaBar result previously, it only seemed fair that I spend a post discussing a Belle one. For those of you who only associate the names BaBar and Belle with cartoon characters, they are also the names of two competing \(B\) physics experiments, both of which have finished data taking but are still producing results.

So which Belle result have I decided to discuss today? I’m going to talk about the updated measurement of the \(B^- \rightarrow \tau^- \overline{\nu}_\tau\) branching ratio, which was first presented by Youngmin Yook in a parallel session at ICHEP and can now be found on the arXiv.

Why would I choose this measurement, you ask? Let’s have a look at the Feynman diagram of the process, on the right here. In the Standard Model, the decay can only proceed via the exchange of a \(W^-\) boson, so the branching ratio can be translated into a measurement of \(V_{ub}\), one of the elements of the CKM quark mixing matrix. However, new physics could significantly modify the branching ratio via the exchange of a new charged particle, like a charged Higgs boson.

An updated result for the branching ratio is even more interesting than that though, because the average of the previous, mutually consistent measurements from Belle and BaBar, \((1.67 \pm 0.30)\times10^{-4}\), is higher than both the prediction from the CKM fit, \((0.733^{+0.121}_{-0.073})\times10^{-4}\), and the Standard Model prediction, \((1.2 \pm 0.25)\times10^{-4}\). This is what is shown on the left here, where the blue point is the average of the previous results and the green area is the CKM fit prediction. Could this be due to new physics?
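
To put a number on that tension, here is a quick back-of-the-envelope estimate (a sketch that treats the quoted uncertainties as Gaussian, symmetrizes the asymmetric CKM-fit errors, and ignores correlations): the pull between the old experimental average and the CKM-fit prediction comes out around 3 sigma.

```python
import math

# Values quoted above, in units of 1e-4
exp_avg, exp_err = 1.67, 0.30        # old Belle + BaBar average
ckm_fit, ckm_err = 0.733, 0.097      # errors (0.121 + 0.073) / 2, symmetrized

pull = (exp_avg - ckm_fit) / math.sqrt(exp_err**2 + ckm_err**2)
print(f"tension: {pull:.1f} sigma")  # ~3.0 sigma
```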

Experimentally, it is quite difficult to measure \(B^- \rightarrow \tau^- \overline{\nu}_\tau\) decays, due to the multiple undetectable neutrinos in the final state (as well as the one from the \(B^-\) decay, there is at least one more from the \(\tau^-\) decay). In fact, I’m pretty sure that we can’t perform this measurement at LHCb at all.

Belle and BaBar are able to because their \(B\) meson pairs are produced through the well defined process \(e^+e^- \rightarrow \Upsilon(4S) \rightarrow B\overline{B}\) and their detectors cover a larger solid angle, which allows them to make a fairly accurate estimate of the energy carried away by neutrinos. To the right here is a plot of the extra detected energy in selected events, where the points are the data, the red dotted line shows the signal, the dashed blue line shows the background and the red solid line shows the total fit. The signal is expected to peak at zero, since neutrinos can’t be detected.

For the full details of the analysis I encourage you all to look at the paper; here I’m only going to quote the result, \([0.72^{+0.27}_{-0.25}(\mathrm{stat}) \pm 0.11(\mathrm{syst})] \times 10^{-4}\), and then discuss the implications…

Firstly, does this new result bring the experimental average closer to or further from the predictions? As presented by Mikihiko Nakao in a plenary session at ICHEP, the plot below shows that the new Belle average (bottom blue point) and the new experimental average (red point) are both consistent with the CKM fit and Standard Model predictions (pink and yellow bands respectively). So no hint of new physics here…

Secondly, since this result doesn’t seem to point to new physics, what does it say about \(V_{ub}\), the Standard Model parameter describing the mixing between the \(u\) and \(b\) quarks? As presented by Phillip Urquijo, also in a plenary session at ICHEP, below is a comparison of the various measurements of \(V_{ub}\), historically an area of \(B\) physics requiring further investigation. There are two different methods to measure \(V_{ub}\), called inclusive and exclusive depending on which type of \(B\) decays is used, and there is currently a discrepancy between the two that people have been trying to understand. Interestingly… the \(V_{ub}\) measurement from \(B^- \rightarrow \tau^- \overline{\nu}_\tau\) is in agreement with both methods…


The LSM(1) is a laboratory made unusual by its location: it sits beneath 1,700 metres of rock, the better to observe the universe. And that is not its only peculiarity…

Between Savoie and Italy, in the stifling, deafening atmosphere of the Fréjus tunnel, nothing gives away the presence of the laboratory at kilometre 6.5. Then, stepping inside and seeing this great cavern packed with scientific instruments, where researchers with Russian, Greek or Chinese accents busy themselves, you are seized by the thrilling sensation of standing in the middle of a James Bond film. Outside, in the valley, as if echoing that impression, rumours run wild and even speak of secret experiments! Yet nothing of the sort is going on: the reason for settling under the mountain is purely scientific. The goal is not to hide from prying eyes, but to shelter from the flux of cosmic rays that constantly bombards the Earth’s surface. The aim is to carry out research on dark matter and the neutrino, and to perform ultra-low-radioactivity measurements, thanks to a very low level of background noise. A quest at least as thrilling as a James Bond plot!

And so, for 30 years, the laboratory has been piquing the curiosity of the inhabitants of Modane and of holidaymakers… A venue for meeting the researchers therefore proved necessary, and one was created in 2009 in the Carré Sciences building in Modane. Nearly 3,000 people come each year to discover “the little secrets of the universe”, and around 300 lucky ones visit the laboratory itself.

Geissler-Plücker tubes, the discovery of ionization – photo: LSM

At the entrance to the exhibition stands a cosmophone, which reveals the passage of cosmic rays in real time and turns them into a melody of the universe. Designed by the Centre de Physique des Particules de Marseille (CPPM), this playful instrument helps visitors understand why the laboratory seeks shelter from cosmic rays.

Then come videos, displays of remarkable objects, panels and games, and even the little train of natural radioactivity. A cloud chamber, a fascinating instrument, adds an artistic touch and lets visitors actually see the track left by a particle of radioactivity coming from the air, from the Earth, from the cosmos… or even from our own bodies!

Plenty to sharpen the grey matter while waiting to unlock the secrets of dark matter…

With the rise of scientific tourism, the quality of this permanent exhibition and the interest of the laboratory are now recognized and promoted by tourism professionals. The LSM exhibition and the laboratory itself are mentioned in the Guide du Routard(2), the Guide Vert Michelin(3) and the Petit Futé(4). A helping hand in sharing science with the public. Not bad, eh?


(1) LSM: Laboratoire souterrain de Modane – UMR6417 of CNRS/IN2P3 and CEA/IRFU
(2) Guide du Routard Savoie Mont-Blanc, page 121
(3) Guide Vert Michelin Alpes du Nord – Savoie Dauphiné, page 422
(4) Petit Futé France souterraine, page 14 – Petit Futé Savoie, page 322 – Petit Futé Alpes

– Article submitted by the Laboratoire souterrain de Modane –



So, in the last two (first and second) posts about how a calorimeter works, I explained how a particle enters such detectors, loses its energy producing a shower of other particles, and finally how this shower provokes the generation of an electrical signal thanks to the “sampling material”. One detail that is important not to forget is that we have a large number of electrodes (hence, calorimeter “cells” – around 187 thousand of them) collecting information on energy deposition in the calorimeter. A good electron shower can be composed of as many as a few hundred cells. It is clearly very important to measure the signal in every cell for every collision event that happens in ATLAS, and that is not exactly easy to do. Let’s understand how this is done.

First, I propose you watch the 12-second video below on the left. It is an extract of the previous videos on how a particle makes a shower inside the ATLAS Liquid Argon Calorimeter, but now focusing on the two parts needed to understand the shape of the output electrical signal. First, you see a particle crossing the lead absorber and producing 3 particles. We follow one of these while it crosses the 2 mm space between the absorber (dark gray bar) and the copper electrode (copper-colored?!), the latter held at a very positive voltage (~2000 V). This space we call “the gap”. Despite the slowness implied by the movie, this particle travels very close to the speed of light. This means that the time to cross the gap is less than 0.01 nanoseconds (that’s “0.” followed by 10 zeros before the “1” appears, when written in seconds – compare with the 25 ns between collisions). Even if the particle were at 10% of the speed of light, that’s still around 0.1 ns, immediate in terms of the LHC collision interval. This phase is called ionization or Charge Deposition. The electron has created all the negative electrons and positive argon ions and disappeared, moving on to the next cell.
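
As a quick sanity check on those numbers (a small sketch using only the figures quoted above):

```python
gap = 2e-3                  # gap between absorber and electrode: 2 mm, in metres
c = 3e8                     # speed of light, m/s

t_fast = gap / c            # particle at essentially the speed of light
t_slow = gap / (0.1 * c)    # particle at 10% of the speed of light

print(f"{t_fast * 1e9:.4f} ns")  # ~0.0067 ns, indeed below 0.01 ns
print(f"{t_slow * 1e9:.2f} ns")  # ~0.07 ns, still tiny next to the 25 ns spacing
```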

The second part of the signal is the drift of the electrons freed from the argon atoms towards the electrodes. In the last scene of the movie, you see three long white trails with the electrons drifting from the absorber to the electrode. If you were at the top of a fairly tall building letting some water leak to the floor and, all of a sudden, you cut the flow, people looking at the column of water would still see the top of the column falling for a few seconds. This is exactly the same, except that instead of a flow of water we have a flow of electrons, and in place of gravity we have the electric pull of the positive electrode. For some time (~400 ns) the electrons drift towards the electrode, and as time goes by fewer and fewer are left (again, this movie is part of ATLAS Episode II – see the complete part 1 and part 2 of this movie in English!).

Now, let’s see the signal shape, in the second movie. First, you get basically no signal (which never exists in electronics – I should say: you get only noise!). Then the fast electron crosses the gap almost immediately and you get the highest possible signal. The higher the initial electron energy, the more electrons are freed from the argon atoms and the higher this initial current. So, all we care about for measurement purposes is this initial current peak. For the rest of the time the current gets dimmer and dimmer until we get only noise again. When the time scale in the movie changes, you are just seeing the drift phase. Now, in reality you never see this triangle. All you see is a single measured value, and you have to decide when to “catch” the pulse value. Trying to catch many tens of samples puts an extra load on the electronics, usually hitting a power and heating limit or a data-volume limit, so you have to sample as little as possible.

The whole thing happens very quickly, so you have to use some electronic device to find a better way to work this out. Let’s consider the 3 pictures below. The measured value is the one marked with a star. In the first picture it is obvious that the shot was taken too soon; our artist was not even in the studio. This means we lost the signal (energy measured = noise!). The second picture is the perfect sampling of the signal, at the curve’s peak. If we could always do that, this would be perfect. However, most of the time you would be getting the signal after the peak was reached (third picture) and the energy of the cell would be underestimated. That is very bad.

Sampling of the calorimeter signal performed too early

Sampling of a calorimeter pulse taken at the best timing (pulse peak)

Sampling of a calorimeter signal taken too late

So, instead of trying to sample the raw signal and certainly making a mistake, we use an electronic circuit that re-shapes the signal. This circuit stretches the fast rising part so that, in the end, the peak-value information is spread over a much longer time span (something like 125 ns). The shaped pulse is shown in the figure below together with the original pulse. Multiple samples (5) of this structure, at regular time intervals, are then acquired by an analogue-to-digital converter circuit, which produces digital numbers related to the pulse value at each sampling moment (marked by dots on the shaped pulse). Using these numbers, it is possible to make a best guess (or, as one likes to call it technically, a “fit”) of what the shaped pulse really is, including its height, even if the signal is shifted by 1 or 2 ns. And from that, we can calculate the energy in the cell.
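
To show the idea, here is a minimal sketch of fitting the amplitude of a known pulse shape from a handful of noisy samples. The template values and noise level are invented for illustration, and this is a plain least-squares estimate rather than the actual reconstruction used in the detector:

```python
import numpy as np

# Hypothetical shaped-pulse template, normalized to peak = 1,
# evaluated at the 5 sampling times (25 ns apart)
g = np.array([0.20, 0.75, 1.00, 0.65, 0.15])

rng = np.random.default_rng(1)
true_amplitude = 480.0                                   # arbitrary ADC units
samples = true_amplitude * g + rng.normal(0.0, 5.0, 5)   # digitized samples + noise

# Least squares with a known template: minimize sum (s_i - A*g_i)^2,
# which gives A = sum(g_i * s_i) / sum(g_i^2)
amplitude = np.dot(g, samples) / np.dot(g, g)
print(f"fitted amplitude: {amplitude:.1f}")              # close to 480
```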

LAr pulse, its shaped version, and the samples

Due to the very long pulse (400 ns) and the very short interval between collisions (25 ns), it is not impossible (rather, highly probable) that a given cell will receive the signal from one collision while the signal from a previous collision is still in the drifting phase. This effect is called pileup, and we will discuss it in a much later post.
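
A toy illustration of why this matters; the triangular pulse shape is an idealization, and only the 400 ns drift time and the 25 ns spacing come from the text above:

```python
import numpy as np

def triangle(t, t0, amp, drift=400.0):
    """Idealized ionization pulse: instant rise at t0, linear fall over the drift time."""
    x = t - t0
    return np.where((x >= 0) & (x < drift), amp * (1.0 - x / drift), 0.0)

t = np.arange(0.0, 500.0, 25.0)     # one sample per bunch crossing
first = triangle(t, 0.0, 100.0)     # signal from a collision at t = 0
second = triangle(t, 25.0, 60.0)    # signal from the very next collision
print(first + second)               # what the cell actually sees: the two overlap
```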

The discussion today involved complex topics in engineering and physics applied to the detector signal. Designing a good, stable and cheap shaping filter, sampling the signal at a cost- and power-effective rate, dealing with pileup and performing the energy calculation are quite general topics, and many different detectors use similar techniques. Many of these topics are whole areas of study, especially in engineering. The signals produced by a detector are usually very fast or very slow, and the shaping helps to extract their meaningful properties. For instance, for the Tile Calorimeter discussed in the previous post, the whole pulse is very short (a few ns) and you have to stretch it considerably, while preserving the area of the original signal (proportional to the light captured).

Now we will pause the series on how a calorimeter works and start another one on how the trigger works to select good collisions for Higgs (??) candidates!


Finally, let me take the opportunity to re-announce the ATLAS/Brasil channel, now with an improved page and 9 more videos:
http://atlas-live-public.web.cern.ch/atlas-live-public/brazil/index.html


Cosmic events

Thursday, August 16th, 2012

6 August 2012. This evening, under a rather grey sky, here I am in the basket of a hot-air balloon above the small town of Bad Saarow-Pieskow, about 50 km south-east of Berlin. It is a first flight for me and for my companions, among them Bill Breisky, an American writer and former editor of the Cape Cod Times. He is also the grandson of Victor Hess, whose gas-balloon flight exactly 100 years ago shed new light on the matter in the Universe. On 7 August 1912, Hess landed near Pieskow (no one knows exactly where). The resemblance to our little expedition ends there, however. Hess flew for six hours, carried by a hydrogen balloon to more than 5,000 m. During the flight he made measurements which showed that the natural level of radiation increases with altitude, leading him to conclude that “a radiation of very high penetrating power enters our atmosphere from above”. One hundred years later, we celebrate that date as the discovery of cosmic rays.

Even though that is not the name Hess gave to his discovery, it is entirely appropriate. We now know that cosmic rays are energetic particles arriving from outer space. When they enter the Earth’s atmosphere they generate showers of other particles, which reach the ground and even penetrate underground. Roughly every second, as you read these lines, a muon from a cosmic ray (a heavier cousin of the electron) passes through you, most often from above.

Studies of cosmic rays opened up a world of particles beyond the limits of the atom: first the positron (the antielectron), then the muon, then the pion, the kaon and others still. Until the advent of high-energy particle accelerators in the early 1950s, this natural radiation was the only means of studying the rapidly expanding particle “zoo”. When CERN was founded in 1954, cosmic rays were on the list of scientific interests written into the Organization’s Convention. But even though accelerators became the best hunting ground for new particles, cosmic rays have kept their secrets. The record energies of the LHC are still tiny compared with the most energetic cosmic rays, in which a single proton entering the atmosphere can pack the energy of a tennis ball served by one of the best players in the world.

Since Hess’s discovery, physicists have worked out what cosmic rays are (energetic particles), but they have not yet answered the question of how and where they are created. How does nature accelerate them to such energies? Where are the natural accelerators? It remains a mystery, one that continues to inspire adventurous research in places as varied as the depths of the South Pole ice and the high central plateau of Namibia.

Which brings us back to how Bill and I found ourselves together in a hot-air balloon. Michael Walter, with other colleagues from the German laboratory DESY (which is deeply involved in the IceCube experiment at the South Pole and the HESS installation in Namibia), had organized a conference in Bad Saarow to celebrate the centenary of the discovery of cosmic rays. The event brought together historians as well as leading figures in present-day cosmic-ray research. Bill was one of the invited speakers. On 7 August, he and his brother unveiled a commemorative plaque on a glacial erratic at Pieskow (a rock carried a great distance by a glacier during the last ice age and left behind when the ice melted). It was a fitting tribute to Victor Hess, who had arrived near this spot after a long journey to study a mysterious natural phenomenon, opening a path that would lead, among other things, to CERN and the LHC.

Christine Sutton.


It’s been over a month since CERN hosted a seminar on the updated searches for the Higgs boson. Since then ATLAS and CMS have submitted papers showing what they found, and recently I got news that the ATLAS paper was accepted by Physics Letters B, a prestigious journal. For those keeping score, that means it took over five weeks to go from announcement to publication, and believe it or not, that’s actually quite fast.

Crowds watch the historic seminar from Melbourne, Australia (CERN)

However, all this was last month’s news. Within a week of finding this new particle physicists started on the precision spin measurement, to see if it really is the Higgs boson or not. Let’s take a more detailed look at the papers. You can see both papers as they were submitted on the arXiv here: ATLAS / CMS.

The Higgs backstory

In order to fully appreciate the impact of these papers we need to know a little history, and a little bit about the Higgs boson itself. We also need to know some of the fundamentals of scientific thinking and methodology. The “Higgs” mechanism was postulated almost 50 years ago by several different theorists: Brout, Englert, Guralnik, Hagen, Higgs, and Kibble. For some reason Peter Higgs seems to have his name attached to this boson, maybe because his name sounds “friendliest” when you put it next to the word “boson”. The “Brout boson” sounds harsh, and saying “Guralnik boson” a dozen times in a presentation is just awkward. Personally I prefer the “Kibble boson”, because as anyone who owns a dog will know, kibble gets everywhere when you spill it. You can tidy it up all you like and you’ll still be finding bits of kibble months later. You may not find bits often, but they’re everywhere, much like the Higgs field itself. Anyway, this is all an aside, let’s get back to physics.

It helps to know some of the history behind quantum mechanics. The field of quantum mechanics started around the beginning of the 20th century, but it wasn’t until 1927 that the various ideas began to be resolved into a consistent picture of the universe. Some of the greatest physicists from around the world met at the 1927 Solvay Conference to discuss the different ideas, and it turned out that the two main approaches to quantum mechanics, although they looked different, were actually the same. It was just a matter of making everything fit into a consistent mathematical framework. At that time the understanding of nature was that fields had to be invariant with respect to gauge transformations and Lorentz transformations.

The Solvay Conference 1927, where some of the greatest physicists of the 20th century met and formulated the foundations of modern quantum mechanics. (Wikipedia)

A gauge transformation is a consequence of the kind of mathematics we need to represent particle fields, and these fields must not introduce new physics when they get transformed. To take an analogy, imagine you have the blueprints for a building and you want to make some measurements of various distances and angles. If someone makes a copy of the blueprints but changes the direction of North (so that the building faces another direction), this must not change any of the distances or angles. In that sense the distances and angles in the blueprint are rotation-invariant. They are rotation-invariant because we use Euclidean space to represent the building, and a consequence of using Euclidean space is that any distances and angles described in the space must be invariant with respect to rotation. In quantum mechanics we use complex numbers to represent the field, and a gauge transformation is just a rotation of a complex number.
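
In symbols, for the simplest (U(1)) case: a gauge transformation multiplies the field by a complex phase, and anything built from the field’s modulus is unchanged,

\[ \psi(x) \rightarrow e^{i\alpha}\,\psi(x), \qquad \left|e^{i\alpha}\psi\right|^2 = |\psi|^2, \]

so observables that depend only on \(|\psi|^2\) cannot tell the two descriptions apart, just as the measurements on the blueprint cannot tell which way North points.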

The Lorentz transformation is a bit simpler to understand, because it’s just special relativity, which says that observers moving at different speeds and in different directions will agree on the causality of a series of events. The rest of special relativity is just a matter of details, and those details are a lot of fun to look at.

By the time all of quantum mechanics was coming together there were excellent theories that took these symmetries into account. Things seemed to be falling into place, and running the arguments backwards led to some very powerful predictions. Instead of observing a force and then requiring it to be gauge and Lorentz invariant, physicists found they could start with a gauge and Lorentz invariant model and use that to predict what forces can exist. Taking plain old Euclidean space and making it Lorentz invariant gives us Minkowski space, which is perfect for making sure that our theories work well with special relativity. (To get general relativity we start with a space which is not Euclidean.) Then we can write the most general description of a field we can think of in this space, and as long as it is gauge invariant it is a valid physical field. The only problem was that there were some interactions that seemed to involve a massive photon-like boson. Looking at the interactions gave us a good idea of the mass of this particle, the \(W\) boson. Over the next few decades new particles were discovered and the Standard Model was proposed to describe all these phenomena. There are three forces in the Standard Model, the electromagnetic force, the weak force, and the strong force, and each one has its own field.

Inserting the Higgs field

The Higgs field is important because it unifies two of the three fundamental fields in particle physics: electromagnetism and the weak field. It does this by mixing all the fields up (and in doing so, it mixes the bosons up). Flip Tanedo has tried to explain the process from a theorist’s point of view to me privately on more than one occasion, but I must admit I just ended up a little confused by some of the finer points. The system starts with three fields which are pretty much all the same as each other: the \(W_1\), \(W_2\), and \(W_3\). These fields don’t produce any particles themselves because they don’t obey the relevant physical laws (it’s a bit more subtle in reality, but that’s a blog post in itself). If they did produce particles, those would be massless particles known as Goldstone bosons, and we haven’t seen any, so we know there is something else going on. Instead of making massless bosons, the fields mix amongst themselves to create new fields, giving us massive bosons, and the Goldstone bosons get converted into extra degrees of freedom. Along comes the Higgs field and suddenly these fields separate and mix, giving us four new fields.

The Higgs field, about to break the symmetry and give mass (Flip Tanedo)

The \(W_1\) and \(W_2\) mix to give us the \(W^+\) and \(W^-\) bosons, and then the \(W_3\) field mixes with the \(B\) field to give us the \(Z\) boson and the photon. What makes this interesting is that the photon behaves well on its own. It has no mass, and this means that its field is automatically gauge invariant. Nature could have decided to create just the electromagnetic field and everything would work out fine. Instead we have the photon and three massive bosons, and the fields of these massive bosons cannot be gauge invariant by themselves; they need something else to make it all balance out. By now you’ve probably guessed what this mystery object is: it’s the Higgs field and, with it, the Higgs boson! This field fixes everything up so that the fields mix, we get massive bosons, and all the relevant laws (gauge invariance and Lorentz invariance) are obeyed.
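
For the record, that last mixing is usually written in terms of the weak mixing angle \(\theta_W\) (standard textbook notation, not taken from the papers discussed below):

\[ \begin{pmatrix} \gamma \\ Z \end{pmatrix} = \begin{pmatrix} \cos\theta_W & \sin\theta_W \\ -\sin\theta_W & \cos\theta_W \end{pmatrix} \begin{pmatrix} B \\ W_3 \end{pmatrix} \]

The photon comes out massless and the \(Z\) massive because of how the Higgs field couples to the original \(B\) and \(W_3\) fields.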

Before we go any further it’s worth pointing a few things out. The mass of the \(W\) boson is so large in comparison to other particles that it slows down the interactions of a lot of particles, and this is one of the reasons that the sun burns so “slowly”. If the \(W\) boson were massless then it could be produced in huge numbers and the rate of fusion in the sun would be much faster. The reason we have had a sun for billions of years, allowing the evolution of life on Earth (and maybe elsewhere), is that the Higgs field gives such a large mass to the \(W\) boson. Just let that thought sink in for a few seconds and you’ll see the cosmic significance of the Higgs field. Before we get ahead of ourselves we should note that the Higgs field leads to unification of the electromagnetic and weak forces, but it says nothing about the strong force. Somehow the Higgs field has missed out one of the three fundamental forces of the Standard Model. We may one day unite the three fields, but don’t expect it to happen any time soon.

“Observation” vs “discovery”, “Higgs” vs “Higgs-like”

There’s one more thing that needs to be discussed before looking at the papers, and that’s a rigorous discussion of what we mean by “discovery” and whether we can claim discovery of the Standard Model Higgs boson yet. “Discovery” has come to mean a five sigma observation of a new resonance, or in other words that the probability that the Standard Model background in the absence of a new particle would bunch up like this is less than one part in several million. If we see five sigma we can claim a discovery, but we still need to be a little careful. Suppose we had a million mass points; what is the probability that there is at least one five sigma fluctuation in there? It’s about \(20\%\), so looking at just the local probability is not enough; we need to look at the probability that takes all the data points into account. Otherwise we can increase the chance of seeing a fluctuation just by changing the way we look at the data. Both ATLAS and CMS are conscious of this effect, known as the “Look Elsewhere Effect”, so every time they provide results they also provide the global significance, and that is what we should be looking at when we talk about the discovery.
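
You can check the ballpark of that number yourself. This sketch treats the million mass points as statistically independent, which real, correlated mass points are not, so it slightly overestimates the chance:

```python
p_local = 2.87e-7        # one-sided probability of a 5 sigma upward fluctuation
n_points = 1_000_000     # hypothetical number of independent mass points

# Probability of at least one 5 sigma fluctuation somewhere in the scan
p_global = 1.0 - (1.0 - p_local) ** n_points
print(f"{p_global:.0%}")  # ~25%: the same ballpark as the ~20% quoted above
```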

Regular readers might remember Flip’s comic about me getting worked up over the use of the word “discovery” a few weeks back. I got worked up because the word “discovery” had been misused. Whether an observation is \(4.9\) or \(5.1\) sigma doesn’t matter that much really, and I think everyone agrees about that. What bothered me was that some people decided to change what was meant by a discovery after seeing the data, and once you do that you stop being a scientist. We can set whatever standards we like, but we must stick to them. Burton, on the other hand, was annoyed by a choice of font. Luckily our results are font-invariant, and someone said “If you see five sigma you can present in whatever durn font you like.”

Getting angry over the change of goalposts. Someone has to say these things.

In addition to knowing what we mean by “discovery” we also need to take hypothesis testing into account. Anyone who claims that we have discovered the Higgs boson is at best misinformed, and at worst willingly untruthful. We have discovered a new particle, there’s no doubt about that, but now we need to eliminate the things that are not the Higgs until we’re confident that the only thing left is the Higgs boson. We have seen this new particle decay to two photons, and this tells us that it can only have spin 0 or spin 2. That eliminates spin 1, spin 3, spin 4… and so on for us, all with a single measurement. What we are doing now is trying to exclude both the spin 0 and spin 2 possibilities. Only one of these will be excluded, and then we will know for sure what the spin is. And then we know it’s the Standard Model Higgs boson, right? Not quite! Even if we know it’s a spin 0 particle we would still need to measure its branching fractions to confirm that it is what we have been looking for all along. Bear this in mind when thinking about the papers: all we have seen so far is a new particle. Just because we’re searching for the Higgs and we’ve found something new, it does not mean that it’s the Higgs boson.

The papers

Finally we get to the papers. From the titles we can see that both ATLAS and CMS have been suitably agnostic about the particle’s nature. Neither claims it’s the Higgs boson, and neither claims anything more than an “observation”. The abstracts tell us a few useful bits of information (note that the masses quoted agree to within one sigma, which is reassuring) but we have to tease out the most interesting parts by looking at the details. Before the main text begins, each experiment dedicates its paper to the memory of those who passed away before the papers were published. This is no short list of people, which is not surprising given that people have been working on these experiments for more than 20 years. Not only is this a moving start to the papers, it also underlines the impact of the work.

Both papers were dedicated to the memories of colleagues who did not see the observation. (CMS)

Both papers waste no time getting into the heart of the matter, which is the nature of the Standard Model and how it has been tested for several decades. The only undiscovered particle predicted by the Standard Model is the Higgs boson; we’ve seen everything else we expected to see. Apart from a handful of gauge couplings, just about every prediction of the Standard Model has been vindicated. In spite of that, the search for the Higgs boson has taken an unusually long time. Searches took place at LEP and the Tevatron long before the LHC collided beams, and the good news is that the LEP limit excluded the region that is very difficult for the LHC to rule out (less than \(114GeV\)). CDF and D0 both saw an excess in the favored region, but the significance was quite low, and personally I’m skeptical since we’ve already seen that CDF’s dijet mass scale might have some problems associated with it. Even so, we shouldn’t spend too long trying to interpret (or misinterpret) results; we should take them at face value, at least at first. Next the experiments tell us which final states they look for, and this is where things will get interesting later on. Before describing the detectors, each experiment pauses to remind us that the conditions of 2012 are more difficult than those of 2011. The average number of interactions per beam crossing increased by a factor of two, making all analyses more difficult to work with (but ultimately all our searches a little more sensitive.)

At this point both papers summarize their detectors, but CMS go out of their way to show off how the design of their detector was optimized for general Higgs searches. Having a detector which can reconstruct high momentum leptons, low momentum photons and taus, and also tag b-jets is no easy task. Both experiments do well to be able to search for the Higgs boson in the channels they look at. Even if we limit ourselves to where ATLAS looked, the detectors would still have to trigger on leptons and photons, and be able to reconstruct not only those particles but also the missing transverse energy. That’s no easy task at a hadron collider with many interactions per beam crossing.

The two experiments have different overall strategies for the Higgs searches. ATLAS focused their attention on just two final states in 2012: \(\gamma\gamma\) and \(ZZ^*\), whereas CMS consider five final states: \(\gamma\gamma\), \(ZZ^*\), \(WW^*\), \(\tau\tau\), and \(b\bar{b}\). ATLAS focus mostly on the most sensitive modes: the so-called “golden channel”, \(ZZ^*\), and the fine-mass-resolution channel, \(\gamma\gamma\). With a concerted effort, a paper that shows only these modes can be competitive with a paper that shows many more, and labor is limited on both experiments. CMS spread their effort across several channels, covering all the final states with expected sensitivities comparable to the Standard Model prediction.

\(H\to ZZ^*\)

The golden channel analysis has been presented many times before because it is sensitive across a very wide mass range. In fact it spans the range \(110-600GeV\), which is the entire width of the Higgs search program at ATLAS and CMS. (Constraints from other areas of physics tell us to look as high as \(1000GeV\), but at high masses the Higgs boson would have a very large width, making it extremely hard to observe. Indirect results favor the low mass region, below around \(150GeV\).) Given the experience physicists have had with this channel it’s no surprise that the backgrounds are very well understood at this point. The dominant “irreducible” background comes from Standard Model production of \(Z/\gamma^*\) bosons, where there is one real \(Z\) boson and one “off-shell”, or virtual, boson. This is called irreducible because this background has the same final state as the signal, so we can’t remove further background without also removing some signal. The off-shell boson can be an off-shell \(Z\) boson or an off-shell photon; it doesn’t really matter which, since they look the same for the background. In the lower mass range there are also backgrounds from \(t\bar{t}\), but fortunately these are well understood, with good control regions in the data. Using all this knowledge, the selection criteria for \(8TeV\) were revisited to increase sensitivity as much as possible.

The invariant mass spectrum for ATLAS's H→ZZ* search (ATLAS)

Since this mode has a real \(Z\) boson, we can look for two high momentum leptons in the final state, which makes things especially easy. The backgrounds are small and the events are easy to identify, so the trigger is especially simple. Events are stored to disk if there is at least one very high momentum lepton, or two medium momentum leptons, which means that we don’t have to throw any events away. Some triggers fire so rapidly that we can only store a fraction of the events from them, and we call this prescaling. When we keep \(1\) in \(n\) events we have a prescale of \(n\). For a Higgs search we want as high an efficiency as possible, so we usually require a prescale of \(1\). Things are not quite so nice for the \(\gamma\gamma\) mode, as we’ll see later.
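
In case the jargon helps, a prescale is literally just a counter. A toy sketch (not any experiment’s actual trigger code):

```python
def prescaled(events, n):
    """Keep 1 in n events: a prescale of n."""
    for i, event in enumerate(events):
        if i % n == 0:
            yield event

kept = list(prescaled(range(100), 20))   # prescale of 20
print(len(kept))                         # 5 of the 100 events survive
```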

The invariant mass spectrum for CMS's H→ZZ* search (CMS)

After applying a plethora of selections on the leptons and reconstructing the \(Z\) and Higgs boson candidates, the efficiency for the final states varies from \(15\%-37\%\), which is actually quite high. No detector can cover the whole of the solid angle, and efficiencies vary with the detector geometry. The efficiency needs to be very high because the fraction of Higgs bosons that would decay to these final states is so small. At a mass of \(125GeV\) the branching fraction to the \(ZZ^*\) state is about \(2\%\), and the branching fraction of each \(Z\) to two leptons is about \(6\%\). Putting that all together means that only about \(1\) in \(10,000\) Higgs bosons would decay to this final state. At a mass of \(125GeV\) the LHC would produce about \(15,000\) Higgs bosons per \(fb^{-1}\). So for \(10fb^{-1}\) we could expect about \(11\) Higgs bosons decaying to this final state, and we could expect to see about \(3\) of those events reconstructed. This is a clean mode, but it’s an extremely challenging one.
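
Those numbers hang together; here is the arithmetic spelled out, using only the figures quoted in the paragraph above:

```python
n_higgs = 15_000 * 10      # ~15,000 Higgs bosons per fb^-1, times 10 fb^-1
bf_zz = 0.02               # H -> ZZ* branching fraction at 125 GeV
bf_ll = 0.06               # Z -> two leptons, per Z boson
efficiency = (0.15, 0.37)  # reconstruction efficiency range quoted above

n_decays = n_higgs * bf_zz * bf_ll * bf_ll
print(f"decays to four leptons: {n_decays:.0f}")   # ~11
print(f"reconstructed: {n_decays * efficiency[0]:.1f} to {n_decays * efficiency[1]:.1f}")
# roughly 1.6 to 4.0, i.e. about 3 events
```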

The selection criteria are applied, the background is estimated, and the results are shown. As you can see there is a small but clear excess over background in the region around \(125GeV\) and this is evidence supporting the Higgs boson hypothesis!

CMS see slightly fewer events than expected, but still see a clear excess (CMS)

\(H\to\gamma\gamma\)

Of the \(H\to ZZ^*\) and \(H\to\gamma\gamma\) modes, the \(\gamma\gamma\) final state is the more difficult one to reconstruct. The triggers are inherently “noisy” because they must fire on anything that looks like a high energy photon, and there are many sources of background for this. As well as real Standard Model photons (where the rate of photon production is not small) there are jets faking photons, and electrons faking photons. This makes the mode dominated by backgrounds. In principle the mode should be easy: just reconstruct Higgs candidates from pairs of photons and wait. The peak will reveal itself in time. However, ATLAS and CMS are in the middle of a neck-and-neck race to find the Higgs boson, so both collaborations exploit any advantage they can, and suddenly these analyses become some of the most difficult to understand.

A typical H→γγ candidate event with a striking signature (CMS)

To get a handle on the background, ATLAS and CMS each choose to split the mode into several categories, depending on the properties of the photons or the final state, each with its own sensitivity. This allows the backgrounds to be controlled with different strategies in each category, leading to increased overall sensitivity. Each category has its own mass resolution and signal-to-background ratio, each is mutually independent of the others, and each has its own dedicated studies. For ATLAS the categories are defined by the presence of two jets, whether or not a photon converts (produces an \(e^-e^+\) pair) in the detector, the pseudorapidity of the photons, and a kinematic quantity called \(p_{T_T}\); CMS use similar categories.

When modelling the background, both experiments wisely chose to use the data. The background for the \(\gamma\gamma\) final state is notoriously hard to predict accurately, because there are so many contributions from different backgrounds, from real and fake photon candidates, and many kinematic or detector effects to take into account. The choice of background model even varies on a category-by-category basis, with choices ranging from simple polynomial fits to the data to exponential and skewed Gaussian backgrounds. What makes these background models particularly troublesome is that the background has to be estimated using the signal region, so small deviations caused by signal events could be interpreted by the fitting algorithm as a weird background shape. The fitting machinery must be robust enough to fit the background shapes without being fooled into thinking that a real excess of events is just a slightly different shape.
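
To make that concrete, here is a toy version of such a fit, with invented numbers and scipy’s generic fitter standing in for the experiments’ actual machinery: an exponential background plus a Gaussian peak, fitted directly to a spectrum that contains the signal region.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(m, n_bkg, slope, n_sig, mass, width):
    """Exponential background plus a Gaussian signal peak."""
    bkg = n_bkg * np.exp(-slope * (m - 100.0))
    sig = n_sig * np.exp(-0.5 * ((m - mass) / width) ** 2)
    return bkg + sig

rng = np.random.default_rng(7)
m = np.arange(100.0, 160.0, 2.0)   # diphoton mass bins, GeV

# Toy "data": a falling background with a small bump at 125 GeV
data = model(m, 1000.0, 0.03, 60.0, 125.0, 2.0) + rng.normal(0.0, 15.0, m.size)

popt, _ = curve_fit(model, m, data, p0=[900.0, 0.02, 30.0, 124.0, 3.0])
print(f"fitted peak: {popt[3]:.1f} GeV, signal height: {popt[2]:.0f}")
```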

ATLAS's H→γγ search, where events are shown weighted (top) and unweighted (bottom) (ATLAS)

To try to squeeze even more sensitivity out of the data, CMS use a boosted decision tree to aid signal separation. A boosted decision tree is a sophisticated statistical method that uses signal and background training samples to learn what looks like signal, taking several input variables and returning just one output variable. A selection can be made on the output variable that removes much of the background while keeping a lot of the signal. Using boosted decision trees (or any multivariate analysis technique) requires many cross checks to make sure the method is not biased or “overtrained”.
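
The technique in miniature, using scikit-learn on made-up two-variable “signal” and “background” samples (a sketch of the method, not the CMS training):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Two invented input variables per event; the samples overlap
signal = rng.normal(loc=[1.0, 1.0], size=(5000, 2))
background = rng.normal(loc=[0.0, 0.0], size=(5000, 2))
X = np.vstack([signal, background])
y = np.array([1] * 5000 + [0] * 5000)    # 1 = signal, 0 = background

bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3).fit(X, y)

# Several inputs in, one output variable per event; cut on it
score = bdt.predict_proba(X)[:, 1]
keep = score > 0.7
print(f"signal kept: {keep[:5000].mean():.0%}, background kept: {keep[5000:].mean():.0%}")
```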

CMS's H→γγ search, where events are shown weighted (main plot) and unweighted (inset) (CMS)

After analyzing all the data, the spectra show a small bump. The results can seem a little disappointing at first; after all, the peak is barely discernible, and so much work has gone into the analyses. Both experiments show the spectra after weighting the events to take the uncertainties into account, and this makes the plots a little more convincing. Even so, what matters is the statistical significance of these results, and this cannot be judged by eye. The final results show a clear preference for a boson with a mass of \(125GeV\), consistent with the Higgs boson. CMS see a hint at around \(135GeV\), but this is probably just a fluctuation, given that ATLAS do not see anything similar.

ATLAS local significance for H→γγ (ATLAS)

(If you’ve been reading the blog for a while you may remember a leaked document from ATLAS that hinted at a peak around \(115GeV\) in this invariant mass spectrum. That document used biased and non peer-reviewed techniques, but the fact remains that even without these biases there appears to be a small excess in the ATLAS data around \(115GeV\). The significance of this bump has decreased as we have gathered more data, so it was probably just a fluctuation. However, you can still see a slight bump at \(115GeV\) in the significance plot. Looking further up the spectrum, both ATLAS and CMS see very faint hints of something at \(140GeV\), which appears in both the \(ZZ^*\) and \(\gamma\gamma\) final states. This region has already been excluded for a Standard Model Higgs, but there may be something else lurking out there. The evidence is feeble at the moment, but that’s what we’d expect for a particle with a low production cross section.)

\(H\to WW^*\)

One of the most interesting modes across a wide range of the mass spectrum is the \(WW^*\) final state. In fact, this was the first mode to be sensitive to Standard Model Higgs boson searches, and exclusions were seen at ATLAS, CMS, and the Tevatron experiments at around \(160GeV\) (the mass of two on-shell \(W\) bosons) before any other mass region. The problem with this mode is that it has two neutrinos in the final state. It would be nice to have an inclusive sample of \(W\) bosons, including the hadronic final states, but the problems here are the lack of a good trigger choice and the irreducible and very large background. That means that we must select events with two leptons and two neutrinos in them. As the favored region excludes more and more of the high mass region this mode gets more challenging, first because we lose the mass constraint on the second \(W\) boson (as it must decay off-shell), and second because we must be sensitive in the low missing transverse energy region, which starts to approach our resolution for this variable.

While we approach our resolution from above, the limit on the resolution increases from below, because the number of interactions per beam crossing increases, raising the overall noise in the detector. Making progress in this mode takes a lot of hard work for fairly little gain. Both papers mention explicitly how difficult the search is in a high pileup scenario, with CMS stating

“The analysis of the \(7TeV\) data is described in [referenced paper] and remains unchanged, while the \(8TeV\) analysis was modified to cope with more difficult conditions induced by the higher pileup of the 2012 data taking.”

and ATLAS saying

“The analysis of the \(8TeV\) data presented here is focused on the mass range \(110<m_H<200GeV\). It follows the procedure used for the \(7TeV\) data described in [referenced paper], except that more stringent criteria are applied to reduce the \(W\)+jets background and some selections have been modified to mitigate the impact of the high instantaneous luminosity at the LHC in 2012.”

It’s not all bad news though, because the branching fraction to this final state is much higher than that of the \(ZZ^*\) final state. The branching fraction for the Standard Model Higgs boson to \(WW^*\) is about \(10\) times higher than that for \(ZZ^*\), and the branching fraction of the \(W\) boson to leptons is also about \(3\) times higher than that of the \(Z\) boson, which gives another order of magnitude advantage. Unfortunately all these events get smeared out across a large spectrum. There is one more trick up our sleeves though, and it comes from the spin of the parent. Since the Standard Model Higgs boson has zero spin, the \(W\) bosons tend to align their spins in opposite directions to make it all balance out. This then favors one decay direction over another for the leptons. The \(W^+\) boson decays with a neutrino in the final state, and because of special relativity the neutrino must align its spin against its direction of motion. The \(W^-\) boson decays with an anti-neutrino, which takes its spin along its direction of motion. This forces the two leptons to travel in the same direction with respect to the decay axis of the Higgs boson. The high momenta of the leptons smears things out a bit, but generally we should expect to see one high momentum lepton and a second, lower momentum lepton in roughly the same region of the detector.
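
Combining the factors in that paragraph (rough numbers, exactly as quoted):

```python
bf_higgs_ratio = 10   # BF(H -> WW*) / BF(H -> ZZ*), roughly
bf_lepton_ratio = 3   # BF(W -> leptons) / BF(Z -> leptons), per boson, roughly

advantage = bf_higgs_ratio * bf_lepton_ratio ** 2   # two bosons per Higgs decay
print(advantage)   # ~90: nearly two orders of magnitude more dilepton events
```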

The transverse mass for ATLAS's H→WW* search (ATLAS)
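Since the plot shows a transverse mass rather than an invariant mass (the neutrinos escape undetected, so a full invariant mass can’t be formed), it’s worth writing down what that variable is. A common definition (this is the standard textbook form; the experiments use variants of it, so check the papers for the exact expression) is

\(m_T = \sqrt{2\,p_T^{\ell\ell}\,E_T^{miss}\,(1-\cos\Delta\phi)}\)

where \(p_T^{\ell\ell}\) is the transverse momentum of the dilepton system, \(E_T^{miss}\) is the missing transverse energy, and \(\Delta\phi\) is the azimuthal angle between them. The result is a broad edge rather than a sharp peak, which is part of why this mode gives such a poor mass measurement.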

ATLAS did not actually present results for the \(WW^*\) final state on July 4th, but they did show it in the subsequent paper. CMS showed the \(WW^*\) final state on July 4th, although it did somewhat reduce their overall significance. Both ATLAS and CMS devote parts of their papers to the background estimates for the \(WW^*\) mode, but ATLAS seem to go to greater lengths to describe the cross checks they performed in data. In fact this may help to explain why ATLAS did not quite have the result ready for July 4th, whereas CMS did. There’s a trade-off between getting the results out quickly and spending some extra time understanding the background. This might have paid off for ATLAS, since they seem to be more sensitive in this mode than CMS.

The invariant mass for CMS's H→WW* search (CMS)

After looking at the data we can see that both ATLAS and CMS are right at the limits of their sensitivity in this mode. They are not limited by statistics, they are limited by systematic uncertainties, and the mass point \(125GeV\) sits uncomfortably close to some very large uncertainties. The fact that this mode is sensitive at all is a tribute to the hard work of dozens of physicists who went the extra mile to make it work.

CMS's observed and expected limits for H→WW*, showing the dramatic degradation in sensitivity as the mass decreases (CMS)

\(H\to b\bar{b}\)

At a mass of \(125GeV\) by far the largest branching fraction of the Standard Model Higgs boson is to \(b\bar{b}\). CDF and D0 have both seen a broad excess in this channel (although personally I have some doubts about the jet energy scale at CDF, given the dijet anomaly they see that D0 does not) hinting at a Higgs boson of \(120-135GeV\). The problem with this mode is that the background is many orders of magnitude larger than the signal, so some special tricks must be used to remove it. What is done at all four experiments is to search for a Higgs boson produced in association with a \(W\) or \(Z\) boson, which greatly reduces the background. ATLAS did not present an updated search in the \(b\bar{b}\) channel, and taking a look at the CMS limits we can probably see why: the contribution is not as significant as in other modes. The way CMS proceed with the analysis is to use several boosted decision trees (one for each mass point) and to select candidates based on the output of the relevant tree. The result is less than \(1\) sigma of significance, about half of what is expected, but if this new boson is the Higgs boson then this significance will increase as we gather more data.

A powerful H→bb search requires a boosted decision tree, making the output somewhat harder to interpret (CMS)
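For readers unfamiliar with boosted decision trees: the idea is to train a classifier on simulated signal and background events and then cut on its output score instead of cutting on individual variables. This is not the CMS implementation (they train one BDT per mass hypothesis on their actual analysis variables); it’s just a minimal sketch of the idea using scikit-learn and made-up features:

```python
# Minimal sketch of a BDT-based event selection, NOT the CMS analysis.
# The two "features" are invented stand-ins for real analysis variables
# (e.g. dijet mass, b-tag scores, angular separations).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
signal = rng.normal(loc=[1.0, 1.0], scale=0.5, size=(1_000, 2))
background = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(10_000, 2))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(len(signal)), np.zeros(len(background))])

bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3)
bdt.fit(X, y)

# Select candidate events by cutting on the classifier output score
scores = bdt.predict_proba(X)[:, 1]
selected = X[scores > 0.9]
print(f"kept {len(selected)} of {len(X)} events")
```

The price, as the caption says, is interpretability: the selection becomes a cut on an opaque multivariate score rather than on physically meaningful variables.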

It’s interesting to note that the \(b\bar{b}\) final state is sensitive to both a spin 0 and a spin 2 boson (as I explained in a previous post) and it may have different signal strength parameters for different spin states. The signal strength parameter tells us how many events we see compared to how many events we expect to see, and it is denoted with the symbol \(\mu\). If there is no signal then \(\mu=0\), if the signal is exactly as large as we expect then \(\mu=1\), and any other value indicates new physics. It’s possible to have a negative value for \(\mu\), and this would indicate quantum mechanical interference of two or more states that cancel out. Such an interference term is visible in the invariant mass of two leptons, as the virtual photon and virtual \(Z\) boson wavefunctions interfere with each other.
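Schematically (my notation, not necessarily the collaborations’), the signal strength is just the observed rate divided by the Standard Model prediction:

\(\mu = \dfrac{(\sigma\times BR)_{\mathrm{observed}}}{(\sigma\times BR)_{\mathrm{SM}}}\)

so \(\mu=1\) means the rate in that channel is exactly what the Standard Model Higgs boson predicts.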

\(H\to\tau\tau\)

Finally, the \(\tau\tau\) mode is perhaps the most enlightening and the most exciting right now. CMS showed updated results, but ATLAS didn’t. CMS’s results were expected to approach Standard Model sensitivity, but their results didn’t reach that far, and that is crucially important. CMS split their final states by the decay mode of the \(\tau\), where the final states include \(e\mu 4\nu\), \(\mu\mu 4\nu\), \(\tau_h\mu 3\nu\), and \(\tau_h e3\nu\), where \(\tau_h\) is a hadronically decaying \(\tau\) candidate. This mode has at least three neutrinos in the final state, so like the \(WW^*\) mode the events get smeared across a mass spectrum. There are irreducible backgrounds from \(Z\) bosons decaying to \(\tau\tau\) and from Drell-Yan \(\tau\tau\) production, so the analysis must search for an excess of events over these backgrounds. In addition to the irreducible backgrounds there are efficiency penalties associated with the reconstruction of \(\tau\) leptons, which make this a challenging mode to work with. There are dedicated algorithms for reconstructing hadronically decaying \(\tau\) jets, and these have to balance the signal efficiency for real \(\tau\) leptons against background rejection.

CMS's H→ττ search, showing no signal (CMS)

After looking at the data CMS expected to see an excess of \(1.4\) sigma, but they actually see \(0\) sigma, suggesting that there may be no Standard Model Higgs boson after all. Before we jump to conclusions it’s important to note a few things. First of all statistical fluctuations happen, and they can go down just as easily as they can go up, so this could just be a fluke. It’s roughly a \(1.5\) sigma difference, so the probability of this being due to a fluctuation, if the Standard Model Higgs boson exists, is about \(8\%\). On its own that might seem quite low, but we have \(8\) channels to study, so the chance of this happening in any one of them is roughly \(50\%\), and it’s looking more likely that this is just a fluctuation. ATLAS also have a \(\tau\tau\) analysis, so we should expect to see some results from them in the coming weeks or months. If they also don’t see a signal then it’s time to start worrying.

CMS's limit of H→ττ actually shows a deficit at 125GeV. A warning sign for possible trouble for the Higgs search! (CMS)
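The \(50\%\) figure is simple arithmetic you can check yourself. The \(8\%\) single-channel probability comes from the text above; the rest is just counting, and it treats the channels as independent, which is an approximation:

```python
# Chance of at least one ~1.5 sigma downward fluctuation among 8 channels,
# treating the channels as independent (an approximation).
p_single = 0.08
n_channels = 8
p_any = 1 - (1 - p_single) ** n_channels
print(round(p_any, 2))  # ~0.49, i.e. roughly a coin flip
```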

Combining results

Both experiments combine their results, and this is perhaps the most complicated part of the whole process. There are searches with correlated and uncorrelated uncertainties, there are two datasets at different energies to consider, and there are different signal-to-background ratios to work with. ATLAS and CMS each combine their 2011 and 2012 searches, so they both show all five main modes (although only CMS show the \(b\bar{b}\) and \(\tau\tau\) modes in 2012).

When combining the results we can check to see if the signal strength is “on target” or not, and there is some minor disagreement between the modes. For the \(ZZ^*\) and \(WW^*\) modes the signal strengths are about right, but for the \(\gamma\gamma\) mode it’s a little high for both experiments, so there is tension between these modes. Since these are the most sensitive modes, and we have more data on the way, this tension should either resolve itself or get worse before the end of data taking. The \(b\bar{b}\) and \(\tau\tau\) modes are lower than expected for both experiments (although for ATLAS the error bars are so large it doesn’t really matter), suggesting that this new particle may be a non-Standard Model Higgs boson, or something else altogether.

Evidence of tension between the γγ and fermionic final states (CMS)

While the signal strengths seem to disagree a little, the masses all seem to agree, both within experiments and between them. The mass of \(125GeV\) is consistent with other predictions (e.g. the electroweak fit) and it sheds light on what to look for beyond the Standard Model. Many theories favor a lower mass Higgs boson as part of a multiplet of other Higgs bosons, so we may yet see some other bosons. In particular, the search for the charged Higgs boson at ATLAS has started to exclude regions of the \(\tan\beta\) vs \(m_{H^+}\) plane, and the search might cover the whole low mass region of the plane by the end of 2012 data taking. Although a mass of \(125GeV\) is consistent with the electroweak fit, it is a bit higher than the most favored region (around \(90GeV\)), so there’s certainly space for new physics, given the observed exclusions.

The masses seem to agree, although the poor resolution of the WW* mode is evident when compared to the ZZ* and γγ modes (ATLAS)

To summarize the results: ATLAS sees a \(5.9\) sigma local excess, which is a \(5.1\) sigma global excess, and technically this is a discovery. CMS sees a \(5.0\) sigma local excess, which is a \(4.6\) sigma global excess, falling a little short of a discovery. The differences between the results are probably due to good luck on the part of ATLAS and bad luck on the part of CMS, but we’ll need more data to see if this is the case. The results should “even out” if the differences are just fluctuations up for ATLAS and down for CMS.

ATLAS proudly show their discovery (ATLAS)
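If you want intuition for the local-versus-global distinction: the local significance converts directly to a p-value, and the global one corrects for the fact that we searched many mass points (the “look-elsewhere effect”). The trials factor below is invented purely for illustration; the real correction the experiments apply is more subtle:

```python
# Converting significance to a p-value and roughly undoing the
# look-elsewhere effect. n_trials is a made-up illustrative number,
# not anything from the ATLAS or CMS papers.
from scipy.stats import norm

p_local = norm.sf(5.9)                    # one-sided p-value of a 5.9 sigma excess
n_trials = 50                             # hypothetical number of independent mass points
p_global = min(1.0, n_trials * p_local)   # Bonferroni-style upper bound
z_global = norm.isf(p_global)
print(p_local, z_global)                  # z_global lands near 5.2 sigma
```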

Looking ahead

If you’ve read this far then you’ve probably picked up on the main message: we haven’t discovered the Standard Model Higgs boson yet! We still have a long road ahead of us, and already we have moved on to the next stage. We need to measure the spin of this new boson, and if we exclude the spin 0 case then we know it is not a Higgs boson. If we exclude the spin 2 case then we still need to go a little further to show it’s the Standard Model Higgs boson. The spin analysis is rather complicated, because we need to measure the angles between the decay products and look for correlations. We need to take the detector effects into account, then subtract the background spectra. What is left after that are the signal spectra, and we’re going to be statistically limited in what we see. It’s a tough analysis, there’s no doubt about it.

We need to see all five main modes to confirm that this is what we have been looking for for so long. If we get the boson modes (\(ZZ^*\), \(WW^*\), \(\gamma\gamma\)) spot on relative to each other, but see nothing in the fermion modes, then we may have a fermiophobic Higgs boson, which is an interesting scenario. (A “normal” fermiophobic Higgs boson has already been excluded, so any fermiophobic Higgs boson we may see must be very unusual.)

There are also many beyond the Standard Model scenarios that must be studied. As more regions of parameter space are excluded, theorists tweak their models, and give us updated hints on where to search. ATLAS and CMS have groups dedicated to searching for beyond the Standard Model physics, including additional Higgs bosons, supersymmetry and general exotica. It will be interesting to see how their analyses change in light of the favored mass region in the Higgs search.

A favored Higgs mass has implications for physics beyond the Standard Model. Combined with the limits on new particles (shown in plot) many scenarios can be excluded (ATLAS)

2012 has been a wonderful year for physics, and it looks like it’s only going to get better. There are still a few unanswered questions and tensions to resolve, and that’s what we must expect from the scientific process. We need to wait a little longer to get to the end of the story, but the anticipation is all part of the adventure. We’ll know what is really happening by the end of Moriond 2013, in March. Only then can we say with certainty “We have proven/disproven the existence of the Standard Model Higgs boson”!

I like to say “We do not do these things because they are easy. We do them because they are difficult”, but I think Winston Churchill said it better:

“This is not the end. It is not even the beginning of the end, but it is perhaps the end of the beginning.” – W. Churchill

References etc

Plots and photos taken from:
“Webcast of seminar with ATLAS and CMS latest results from ICHEP”, ATLAS Experiment, CERN, ATLAS-PHO-COLLAB-2012-014
Wikipedia
“Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC”, ATLAS Collaboration, arXiv:1207.7214v1 [hep-ex]
“Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC”, CMS Collaboration, arXiv:1207.7235v1 [hep-ex]
Flip Tanedo

It’s been a while since I last posted. Apologies. I hope this post makes up for it!


A snapshot of recent media coverage of the newly discovered Higgs-like particle is now online as part of the Fermilab Today archives. View television and newspaper coverage of the Tevatron results, opinion pieces on CERN’s particle discovery, and photos of groups around the world who watched the CERN seminar broadcast live. See these and other Higgs media highlights.


Cosmic connections

Friday, August 10th, 2012

6 August 2012. It’s a rather grey evening and I’m in the basket of a hot-air balloon, drifting over the small town of Bad Saarow–Pieskow, some 50 km south-east of Berlin. It’s a ‘first’ for me and my companions, who include Bill Breisky, an American writer and former editor of the Cape Cod Times. He’s also the grandson of Victor Hess, whose balloon flight 100 years ago opened a new window on matter in the universe. On 7 August 1912, Hess had landed near Pieskow – no one now knows exactly where – but there the similarities with our small adventure end. Hess had flown for six hours, carried by a hydrogen balloon to a height of more than 5000 m. During the flight, he made measurements that showed that the natural level of radiation increases with altitude, leading him to conclude that “a radiation of very high penetrating power enters our atmosphere from above”. This was the moment that 100 years later is being celebrated as marking the discovery of ‘cosmic rays’.

Although it’s not the name that Hess gave his discovery, it’s certainly apt. We now know that cosmic rays are energetic particles from outer space. When they enter the Earth’s atmosphere, they generate showers of further particles that penetrate right down to the ground, and even below ground. As you read this, about one cosmic-ray muon, a heavier sibling of the electron, passes through your head each second, mainly from above.

Studies of cosmic rays opened the door to a world of particles beyond the confines of the atom: first, the positron (the anti-electron), then the muon, followed by the pion, the kaon and several more. Until the advent of high-energy particle accelerators in the early 1950s, this natural radiation provided the only way to investigate the growing particle ‘zoo’. Indeed, when CERN was founded in 1954, its convention included cosmic rays in the list of scientific interests. But even though accelerators came to provide the best hunting ground for new particles, cosmic rays have maintained their mystery. The record energies of the LHC are still puny compared with the highest energy cosmic rays, where a single proton entering the atmosphere can pack the punch of a tennis ball served by a top player.

Since Hess’s discovery, physicists may have answered the ‘what’ of cosmic rays – they are energetic particles – but they still haven’t answered the ‘how’ or ‘where’. Just how does nature accelerate them to such high energies? Where are the natural accelerators? These remain mysteries that continue to drive adventurous research, in places as diverse as the deep ice of the South Pole and the high plateau of central Namibia.

This brings us back to how Bill and I ended up in a balloon together. Michael Walter and colleagues at the German laboratory DESY – which has a big involvement in the IceCube experiment at the South Pole and the HESS facility in Namibia – had organized a conference in Bad Saarow to celebrate the centenary. The meeting brought together historians as well as key people in the ongoing study of cosmic rays. Bill was one of the invited speakers. On 7 August, he and his brother unveiled a plaque on a geological ‘erratic’ in Pieskow – a stone deposited after being carried from afar by a glacier during the last Ice Age. It was a fitting tribute to Victor Hess, who had found himself near the same place after making a long journey to study an intriguing natural phenomenon – and setting us on a road that would, among other things, lead to CERN and the LHC.

Christine Sutton.

BOOST!

Sunday, August 5th, 2012

A couple of weeks ago, about 80 theorists and experimentalists descended on Valencia, Spain to attend the fourth annual BOOST conference (tag-line: “Giving physics a boost!”). On top of the fact that the organizers did a spectacular job of setting up the venue and program (and it didn’t hurt that there was much paella and sangria to be had), overall I’d have to say this was one of the best conferences I’ve attended.

so....much.....sangria......

Differing from larger events such as ICHEP where the physics program is so broad that speakers only have time to give a cursory overview of their topics, the BOOST conferences have more of a workshop feel and are centered specifically around the emerging sub-field of HEP called “boosted physics”. I’ll try to explain what that means and why it’s important below (and in a few subsequent posts).

Intro to top quark decay

In order to discuss boosted physics, something already nicely introduced in Flip’s post here, I’m going to use the decay of the top quark as an example.

Obligatory Particle Zoo plushie portraying the top quark in a happy state

The most massive of all known fundamental particles by far, weighing in at around 173 GeV/c², the top quark has an extremely short lifetime….much shorter than the time scale of the strong interaction. Thus the top quark doesn’t have time to “hadronize” and form a jet…instead, it will almost always decay into a W boson and a b quark (more than 99% of the time), making it a particularly interesting particle to study. The W boson then decays into either a lepton and a neutrino or two lighter quarks, and the full top decay chain is colloquially called either “leptonic” or “hadronic”, respectively.

From the experimental point of view, top quarks will look like three jets (one from the b and two from the light quarks) about 70% of the time, due to the branching fraction of the W boson to decay hadronically. Only about 20% of tops will decay in the leptonic channel with a jet, a muon or electron, and missing energy. (I’m ignoring the tau lepton for the moment, which has its own peculiar decay modes.)

In colliders, top quarks are mostly produced in top/anti-top (or “t-tbar”) pairs….in fact, the top-pair production cross section at the LHC is about 177 pb (running at sqrt(s)=7 TeV), roughly 25 times more than at the Tevatron!! Certainly plenty of tops to study here. Doing some combinatorics, and still ignoring decay modes with a tau lepton, the whole system will look like one of the following (a quick numerical check follows the list):

  1. “Fully hadronic”: two hadronically-decaying tops (about 44% of the time)
  2. “Semi-leptonic”: one leptonically-decaying and one hadronically-decaying top (about 30% of the time)
  3. “Fully leptonic”: two leptonically-decaying tops (only about 4% of the time)
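The fractions quoted above follow directly from the W branching fractions. Here’s the arithmetic, using approximate PDG values I’ve filled in myself (“leptonic” means e or mu only, as in the text):

```python
# Sanity check of the quoted t-tbar decay fractions.
# W branching fractions are approximate PDG values; taus are ignored.
br_w_had = 0.67        # W -> quarks
br_w_lep = 0.22        # W -> e nu or mu nu

fully_hadronic = br_w_had ** 2            # ~0.45
semi_leptonic  = 2 * br_w_had * br_w_lep  # ~0.29 (factor 2: either top can decay leptonically)
fully_leptonic = br_w_lep ** 2            # ~0.05
print(fully_hadronic, semi_leptonic, fully_leptonic)
```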

Branching fractions of different decay modes in t-tbar events (from Nature)

The point: if a t-tbar event is produced in the detector, it’s fairly likely that at least one (if not both) of the tops will decay into jets! Unfortunately, compared to the leptonic mode, this turns out to be a pretty tough channel to deal with experimentally, because at the LHC we’re dominated by a huge multi-jet background.

What does “boost” mean?

If a t-tbar pair was produced with just enough energy needed to create the two top masses, there wouldn’t be energy left over and the tops would be produced almost at rest. This was fairly typical at the Tevatron. With the energies at the LHC, however, the tops are given a “boost” in momentum when produced. This means that in the lab frame (ie: our point of view) we see the decay products with momentum in the same direction as the momentum of the top.

This would be especially conspicuous if, for example, we were able to produce some kind of new physics interaction with a really heavy mediator, such as a Z’ (a beyond-the-Standard-Model heavy equivalent of the Z boson), the mass of which would have to be converted into energy somewhere.

Generally we reconstruct the energy and mass of a hadronically-decaying top by combining the three jets it decays into. But what if the top was so boosted that the three jets merged to a point where you couldn’t distinguish them, and it just looked like one big jet? This makes detecting it even more difficult, and a fully-hadronic t-tbar event is almost impossible to see.

At what point does this happen?

It turns out that this happens quite often already: at ATLAS we’ve been recording events with jets having a transverse momentum (pT) of almost 2 TeV!

A typical jet used in analyses in ATLAS has a cone-radius of roughly R=0.4. (ok ok, the experts will say that technically it’s not a “cone,” let alone something defined by a “radius,” as R is a “distance parameter used by the jet reconstruction algorithm,” but it gives a general idea.) With enough boost on the top quark, we won’t be able to discern the edge of one of the three jets from the next in the detector. Looking at the decay products’ separation as a function of the top momentum, you can see that above 500 GeV or so, the W boson and the b quark are almost always within R < 0.8. At that momentum, individual R=0.4 jets are hard to tell apart already.
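There’s a handy rule of thumb here: for a two-body decay of a boosted particle, the opening angle scales roughly as ΔR ≈ 2m/pT. The numbers below are just this approximation evaluated at a few momenta (my own arithmetic, not values read off the ATLAS plot):

```python
# Rule-of-thumb opening angle for the decay products of a boosted top:
# delta_R ~ 2 * m / pT (a standard approximation, not from the plot below).
m_top = 173.0  # GeV
for pt in (350.0, 500.0, 1000.0):
    print(f"pT = {pt:6.0f} GeV  ->  delta_R ~ {2 * m_top / pt:.2f}")
# pT = 500 GeV gives delta_R ~ 0.7, consistent with the W and b
# falling within R < 0.8 of each other
```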

The opening angle between the W and b in top decays as a function of the top pT in simulated PYTHIA Z'->ttbar (m_Z'=1.6 TeV) events.

We’ll definitely want to develop tools to identify tops over the whole momentum range, not just stopping at 500 GeV. The same goes for other boosted decay channels, such as the imminently important Higgs boson decay to b-quark pairs, or boosted hadronically-decaying W and Z bosons. So how can we detect these merged jets over a giant background? That’s what the study of boosted physics is all about.

Next: Finding boosted objects using jet “mass” and looking for jet substructure

Next next: Pileup at the LHC….a jet measurement nightmare.


When arriving at Fermilab, one of the first people I spoke to was a graduate student in the g-2 collaboration.

“What’s it like working on Gee Two?” I asked.

“Gee MINUS Two, you mean,” the grad student responded to me wearily, like he’d been through this before.

I hope he could forgive my confusion of a subtraction sign for a more commonplace hyphen, but it got me thinking, all these experiment names are a bit confusing to pronounce sometimes.

There are MicroBooNE and MiniBooNE, whose last two capitalized letters almost seem like a prompt to shout “NEH!” at their termini. There’s the dark-matter experiment COUPP, where I think even those involved are unsure if those two p’s are procedurally pronounced or not.

Even the spelling of the neutrino experiment NOvA, which seems fairly straightforward, presents some challenges. That lowercase v in the middle? It’s not one: instead, it’s a Greek letter masquerading as a Latin character, the lone actor on a stage full of Romans. The letter ν, Romanized nu and shaped like a v, is the symbol for neutrinos, hence its appearance in the name. So begins the confusion: is it NOvA or NO”nu”A?

To set the record straight, if you see a nu – also to be seen in MINERvA – just assume it’s a v and carry on with your day.

But then comes Mu2e to further confuse the situation. It too contains a Greek letter – mu, which stands for muon – yet this time it’s spelled out and Romanized. Perhaps it’s because μ’s shape is agonizingly close to that of the Latin u, and most people can’t be bothered to tell the difference. So to prevent people from saying “You 2 e” and mistaking a sophisticated physics experiment for an outtake of Purple Rain, we may as well spell it out.

Also inconsistent is NuMI – there’s that dastardly nu again, this time also Romanized – but I suppose those folks have the same reasoning as with Mu2e. They probably don’t want people calling it “VEE EM EYE” or, worse, “vmee.”

Keep in mind that all these names stand for something. Mu2e stands for “muon to electron,” which is simple enough. MicroBooNE stands for “Booster Neutrino Experiment, Micro-scale.” Contrast that with MiniBooNE, which officially stands for “Booster Neutrino Experiment, Petite Size.” When did “petite” join the metric scale? NOvA is also uncanny, standing for “NuMI Off-axis Electron Neutrino Appearance.” That’s right, an acronym containing another acronym. A meta-acronym. Crazy stuff.

If you’re still sent on a dizzying spiral by the proverbial alphabet soup of acronyms, it’s better than it used to be. Experiments used to be named in a simple E-plus-three-digit-number format. Imagine carrying on a conversation – “E-345 isn’t happy with E-296’s reluctance to read the papers coming out of E-103.” At least we’ve a little creativity now.

To keep up with all of these names, here are a few rules:

– Pronounce the name in the simplest way possible, with the fewest syllables. So yes, the last E in MicroBooNE and MiniBooNE is silent, despite its capitalization.
– If it looks like a mathematical symbol, it probably is.
– You see a nu, you know what to do.
– Almost every experiment name out there is an acronym. If you want to know what they stand for, it’s a fun game to try to guess before Googling it.

Now you are armed to tackle just about any experiment name we at Fermilab can throw at you. For other labs, you’re on your own. Good luck and keep watching the “nus.”

Joseph Piergrossi
