
Archive for the ‘Uncategorized’ Category

It’s amazing that so much hard work (and such high levels of stress) can be condensed down so much… 5 pages, 3 plots and a table – and the new world leading limit on the WIMP-nucleon spin-independent elastic scattering cross section, of course.
Yes, the LUX Run 3 reanalysis results are finally out. It’s been in the works for over a year, and it has been a genuinely wonderful experience to watch this paper grow – and seeing my own plot in there has felt like sending forth a child into the world!
As much as I worked to improve our signal efficiency at low energies, the real star of the LUX reanalysis show was the “D-D” calibration – D-D standing for deuterium-deuterium. We calibrated the detector’s response to nuclear recoils (which we expect WIMP dark matter to cause) with something that sounds like it is out of science fiction, a D-D generator. This generator uses the fusion of deuterium (think heavy hydrogen – one proton, one neutron) to generate neutrons that are focussed into a beam and sent into the detector.

Quick LUX 101 – LUX is a dark matter search experiment. Dark matter is that mysterious dark, massive substance that makes up 27% of our universe. How does LUX look for dark matter? Well, it is a ‘dual phase xenon TPC’ detector, and it lives 4850 feet underground at the Sanford Underground Research Facility. It must be underground to shield it from as much cosmic radiation as possible, because it is looking for a very rare, weakly interacting dark matter particle called a WIMP.

LUX is basically a big tank of liquid xenon, with a gas layer on top. It is sensitive to particles that enter this xenon: photons and electrons cause what we call an electron recoil (think of them bouncing off an atomic electron), whilst neutrons cause a nuclear recoil (bouncing off a xenon nucleus). We expect that WIMPs will interact with the atomic nuclei too, just incredibly rarely, so understanding the detector response to these nuclear recoils is of utmost importance.

Both electron recoils and nuclear recoils inside the liquid xenon cause a flash of light, a signal we call “S1”, the scintillation signal. Any light in LUX is picked up by two arrays of photomultiplier tubes, 122 in total. Recoils can also cause ionisation; electrons are ‘knocked off’ their atoms by the collision. If you place an electric field over the xenon volume, you can actually push these electrons along instead of letting them recombine with their atoms. In LUX, the electrons are pushed all the way to the top and into the gaseous xenon layer. There they cause a second flash of light via scintillation in the gas, “S2”, the ionisation signal (as its source is the ionised electrons).

Two signals mean two things: first, discrimination between electron recoils (background) and nuclear recoils (possible dark matter signal!), due to the differing distribution of energy between S1 and S2 for each type of recoil; and second, 3D position reconstruction.
XY coordinates can be determined by looking at which photomultiplier tubes light up, whilst the time between the S1 and S2 tells us the depth of the interaction. This XYZ position is very important: we use the xenon to shield itself from radiation coming from the detector materials themselves, or from the surrounding rock. If we have the 3D position of all our events, we can restrict our search for those rare dark matter interactions to the very inner region of the detector, where it is very quiet.
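
As a concrete (if very simplified) illustration, the depth measurement is just drift speed times the S1-to-S2 drift time. A minimal Python sketch follows; the drift speed here is an assumed, typical liquid-xenon value, not the measured LUX number:

```python
# Illustrative depth reconstruction from S1/S2 timing in a dual-phase TPC.
# DRIFT_SPEED is an assumed round number for liquid xenon under drift field.
DRIFT_SPEED_MM_PER_US = 1.5  # mm per microsecond (assumed, illustrative)

def depth_mm(s2_time_us, s1_time_us):
    """Depth below the liquid surface, from the time the ionisation
    electrons took to drift from the interaction point to the gas layer."""
    return (s2_time_us - s1_time_us) * DRIFT_SPEED_MM_PER_US

print(depth_mm(200.0, 0.0))  # 200 us of drift -> 300 mm below the surface
```

The XY coordinates come from the light pattern on the top photomultiplier array, which requires a proper fit rather than a one-liner like this.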

Schematic of the LUX detector

Schematic of the LUX detector. On the left, it is demonstrated how the S1 and S2 signals can provide 3D position reconstruction. The right shows the inside of the detector, and the position of the photomultiplier tubes that collect light emitted by the scintillation of xenon.


Back to the deuterium-deuterium fusion neutron gun – it’s actually a wonderfully simple but extremely clever idea. We fire a beam of neutrons into our detector, all at the same energy (monoenergetic or monochromatic) and at a set position. We then select events in our data along that beam, and look for those neutrons that scattered a second time in the detector. Because of that XYZ position reconstruction, if we have signals from two different scatters, we can actually determine the angle of scattering. As the initial energy is known, the energy of the recoil can then be calculated via simple kinematics. Matching the recoil energy with the size of the two signals allows us to calibrate the nuclear recoil response of the detector extremely well.

The light yield (in S1), tougher to measure than the charge yield (in S2) as we are talking about individual photons, was measured down to 1.1 keV. (keV are kiloelectronvolts, or 1000x the energy gained by a single electron moved across a potential difference of 1 V. In other words, a tiny quantity: 1 keV is only 1.6×10⁻¹⁶ joules!) The charge yield was measured below 1 keV. In the previous LUX results, we had assumed a conservative hard cut-off, i.e. that we would measure no light for recoils below 3 keV. Now we know that isn’t the case, and we can extend our sensitivity to lower energies, which corresponds to lighter WIMPs.
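
The “simple kinematics” can be written down in a few lines. This sketch uses the standard two-body elastic-scattering relation; the 2.45 MeV neutron energy is the usual D-D fusion value, and the masses are round numbers in atomic mass units:

```python
# Two-body elastic scattering: recoil energy of a xenon nucleus struck by
# a monoenergetic D-D neutron, as a function of scattering angle.
import math

E_N = 2450.0   # keV, incoming D-D fusion neutron energy (~2.45 MeV)
M_N = 1.0      # neutron mass, atomic mass units
M_XE = 131.0   # xenon nucleus mass, atomic mass units (rounded average)

def recoil_energy(theta_cm_deg):
    """Xenon recoil energy (keV) for a neutron scattering at
    centre-of-mass angle theta_cm_deg."""
    theta = math.radians(theta_cm_deg)
    return E_N * (2 * M_N * M_XE / (M_N + M_XE) ** 2) * (1 - math.cos(theta))

print(recoil_energy(180.0))  # head-on scatter: maximum recoil, roughly 74 keV
print(recoil_energy(30.0))   # glancing scatter: a few keV, near threshold
```

Measuring the angle between the two scatter positions therefore fixes the recoil energy of the first scatter, which is what gets matched against the observed S1 and S2 sizes.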


Upper limits on the spin-independent elastic WIMP-nucleon cross section at 90% CL. Observed limit in black, with the 1- and 2-σ ranges of background-only trials shaded green and yellow.

This improvement in low energy calibration, as well as a more streamlined and improved analysis framework, has led to a huge improvement in LUX’s low WIMP mass limit. In the plot above, which shows the WIMP mass against the probability of interaction with a nucleus, everything above the black line is now ruled out. If you’ve been following the WIMP dark matter saga, you will know that a few experiments were claiming hints of signals in that low mass region, but this new result definitely lays those signals to rest.

Getting a paper ready for publication has turned out to be far harder work than I expected. It requires a lot of teamwork, perseverance, brain power and very importantly, the ability to take on criticism and use it to improve. I must have remade the LUX efficiency plot over 100 times, and a fair few of those times it was because someone didn’t quite like the way I’d formatted it. In a collaboration, you have to be willing to learn from others and compromise. In the last few days before we finished I did not benefit at all from my UK time zone, as I stayed up later and later to finish things off. But – it was worth it! Now, if I search for my name on arXiv, I come up 4 times (3 LUX papers and the LZ Conceptual Design Report). As pathetic as it sounds, this is actually quite exciting for me, and is what I hope to be the foundations of a long career in physics.

The new LUX results are obviously nowhere near as exciting as an actual WIMP discovery, but they are another step on the way there. LUX Run 4 is in full swing, in which we will obtain over 3 times more data, increasing our sensitivity even further, and, who knows, those WIMPs might just finally show their faces.


A phrase from William Shakespeare’s Romeo and Juliet states: “What’s in a name? That which we call a rose By any other name would smell as sweet.” This could not be further from the truth in the corporate world. The name of a corporation is its face, so establishing a brand requires a lot of work and money. But what happens when something goes wrong? The way to deal with corporate problems often involves re-branding: changing the name and the face of the corporation. It works because customers usually do not check the history of a company before buying its products or using its services. It simply works.

With Universities today run according to the corporate model, it was only a matter of time until re-branding came to the academic world. And leading Universities, like Harvard, seem to be embracing the model. Since a 2013 article in the Harvard Crimson, big Universities have become a focus of investigation by many leading newspapers and politicians. Harvard, in particular, has been the focus of a brewing controversy. The University with the largest endowment of any university in the world has got its name associated with a person who was not, in fact, its founder. As reported in a very recent internal investigation by the Harvard Crimson, John Harvard cannot be the founder of the school, because the Massachusetts Colony’s vote had come two years prior to Harvard’s bequest (compare this to Ezra Cornell’s founding of Cornell University). This has led several prominent Massachusetts politicians to suggest that the University be returned to the ownership of the Commonwealth, with its name changed to the University of Massachusetts, Cambridge. “We have a fantastic University system here in Massachusetts, with the flagship campus in Amherst,” said one prominent politician who preferred not to be named. “Any University in the World would be proud to be a part of it.”

Returning a prominent private University to State ownership is highly unusual nowadays and is probably highly specific to New England. With tightening budgets, many states seek to privatize their Universities to remove them from the budget. For instance, there is talk that a large public Midwestern school, Wayne State University, will soon change its owners and its name. Two prominent figures, W. Rooney and W. Gretzky, are rumored to be working on acquiring the University and re-branding it as simply Wayne’s University. And the changes are rumored to go even further. An external company, Haleburton, has already completed an assessment of the University’s strengths. The company noted WSU’s worldwide reputation in chemistry, physics and medicine and its Carnegie I research status, and recommended that the school concentrate its efforts on graduating hockey, football, basketball and baseball players. “We are preparing our graduates to have highly successful careers. What job in the United States brings more money than being an NFL or NHL player?” a member of WSU’s Academic Senate was quoted as saying. “We are all excited about the change and are looking forward to what else the future will bring us.”



Nature sometimes demands a lot of effort in order to reveal its secrets. Particle physics, of course, obeys a very similar pattern. Many, many events have to be analyzed in order to find a few that can be really interesting. Let’s take the case of the much-sought Higgs particle. The probability of generating a single Higgs is quite small. Assuming that the total period from the start of physics operation in 2010 until just before the July 4th, 2012 announcement comprises, in fact, around 450 real LHC operation days, we had, on average, 480 Higgs per day (all numbers in this post are approximate). Quite unfortunately, however, most (~60%) of these Higgs decay in a mode (H->bb) which is very easy to confuse with other production modes, or (~20%, H->WW) in a mode whose properties are not so easy to measure precisely. One of the cleanest modes in which to study the Higgs is its decay into two photons. The photons were detected with the ATLAS calorimeters (see our previous posts). But the quantity of events produced in this mode is much smaller (basically, around 1 per day!). At times, the LHC produces more than 30 million collisions per second. Now, imagine that the LHC could only produce a tenth of this number of collisions (3 million): we would have to wait 10 days for a detectable Higgs.

Here comes (at last!) our central topic. Given the rarity of Higgs events, the LHC has to produce a ridiculously high number of events per second to produce a few interesting ones at a practical rate. Given also the fact that Nature loves to produce other events which are very common and very well known, our detectors are filled up with events that are basically junk (background), at least for the Higgs search. If we recorded all of the events produced in ATLAS, more than 40 GB of data storage space would be necessary per second. That would be simply unmanageable!
So, the only way to have a reasonable data flow and still be able to make physics at a reasonable rate is to select events before recording. That’s what we call the “trigger”.
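
The back-of-envelope arithmetic behind those numbers is worth making explicit. The diphoton branching ratio of roughly 0.2% below is an assumed round number, consistent with the “around 1 per day” quoted above:

```python
# Back-of-envelope check of the rates quoted in this post.
# All numbers are the post's approximations, not precise figures.
higgs_per_day = 480     # average Higgs produced per real LHC operation day
br_bb = 0.60            # H -> bb: very easy to confuse with background
br_ww = 0.20            # H -> WW: properties hard to measure precisely
br_diphoton = 0.002     # H -> two photons: clean but very rare (~0.2%, assumed)

clean_per_day = higgs_per_day * br_diphoton
print(clean_per_day)    # about 1 detectable diphoton Higgs per day
```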

Immediately, two obsessions related to the trigger system appear: first, reduce as much as possible the huge rates of events faking good signatures (or the data acquisition system of the detector will not handle the stress); second, the highest possible efficiency: never lose a very good candidate for a given signature, or you lose exactly the physics for which one builds such gigantic machines!! In the case of a lost Higgs, another working day will be necessary! As you will see in the next posts, the algorithms used in the trigger always operate at the limiting rate at which they can still guarantee a very high efficiency (usually not too far from 100%).
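
Those two competing goals can be played with in a toy model. Everything below is invented for illustration (the shapes, rates and threshold are not real ATLAS numbers or algorithms): rare “hard” signal events sit on top of abundant soft background, and the “trigger” is just an energy cut.

```python
# Toy trigger: reject as much soft background as possible while keeping
# (nearly) every hard signal-like event. All numbers are made up.
import random

random.seed(42)

def fake_event():
    """Return (is_signal, energy_gev): a rare hard deposit over soft junk."""
    if random.random() < 0.001:
        return True, random.gauss(60.0, 5.0)    # rare signal-like deposit
    return False, random.expovariate(1 / 5.0)   # abundant soft background

THRESHOLD = 25.0  # GeV: the whole 'trigger' decision is this one cut
events = [fake_event() for _ in range(100_000)]
kept = [(s, e) for s, e in events if e > THRESHOLD]

n_sig = sum(1 for s, _ in events if s)
n_sig_kept = sum(1 for s, _ in kept if s)
print(f"rate reduced: {len(events)} -> {len(kept)} events")
print(f"signal efficiency: {n_sig_kept}/{n_sig}")
```

With these invented shapes the cut throws away the vast majority of the background while essentially never losing a signal event, which is exactly the balance the real trigger algorithms are tuned for.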

Finding the Higgs

The complex work of finding the Higgs. Picture downloaded from : http://www.englishblog.com/2012/07/higgs-boson-cartoons.html#.UER8fkRhq6B

ATLAS Higgs to gamma gamma plot

ATLAS Higgs to gamma gamma mass plot. For more information, check the page : https://twiki.cern.ch/twiki/bin/view/AtlasPublic/HiggsPublicResults

The picture above (stolen from many places on the web – see the caption) is not very far from the truth. To find a Higgs, you have to search a lot. See, for instance, the official ATLAS plot for the Higgs detection. In this plot, the number of Higgs candidates shown as an excess is quite small (around 230-250 events in the 4 bins between 122 and 130 GeV). See in the top plot that this excess shows up on top of more than 8000 events (around 2000 in each of the four histogram bins). And this is after trigger and offline analysis selection, for a very narrow mass range, and only for events in which two photons were detected! In principle, we should expect more Higgs (around 400-450), but some are lost because their photons appear too close to the particle beam and ATLAS does not see them (or at least misses one of them, so the pair cannot be formed). Others are lost because the requirements to accept a photon as such are very restrictive, and the chance of losing at least one of the two photons is relatively high. So, losing a fraction of these events is unfortunate, but unavoidable given the experimental conditions. Another remarkable fact is that we cannot necessarily know which 250 events of those 8250 are really related to the Higgs. We just know that the Higgs contributed by increasing the rate of possible events in that mass range. Researchers are always trying to find clever techniques to avoid the 8000 unnecessary events, but this is no trivial task. If a new technique is developed, it will certainly end up included as a trigger algorithm.
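
A very naive way to quantify how small that excess is uses the post’s round numbers (~250 signal events over ~8000 background events). A real analysis uses a full fit with systematic uncertainties; S/√B is only a rule of thumb:

```python
# Naive significance estimate for an excess of ~250 events over ~8000
# background events (round numbers from the text; illustration only).
import math

signal = 250
background = 8000
significance = signal / math.sqrt(background)
print(f"naive significance: {significance:.1f} sigma")
```

This is why a single channel in a narrow mass window only starts to hint at a signal, and why combining channels and data-taking periods matters so much.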

As for the “videos” section of this post, I’d like to make a little propaganda for some sources of information. So, I recorded quick working sessions with two tools that are used in the ATLAS Control Room to visualize a small fraction of the acquired events while the experiment is running. The first is directly from the atlas.ch web site, the so-called ATLAS live events (check the link!). It takes around 15 to 20 seconds to change the event. I made a short extract in the video below. Another very nice tool is Camelia, which can make 3D images from events coming directly from the ATLAS detector, and you can play with them. The important point here is that most of the events displayed (a random sample) have very little interesting signal: lots of low-momentum tracks, but almost no calorimeter activity. If you wait long enough, you will eventually see some interesting events. This demonstrates why we need to apply a strong selection to avoid wasting recording time and space on trivial events. You may want to see these in full screen.

I also recommend the third video, where I tried to make a quick analysis using pre-recorded events. First, one finds two jets in a single event (they could be two photons), and later two muons. Summing the muons’ momenta, we can see that the pair mass (93 GeV/c²) is quite close to the Z boson mass (91 GeV/c²).
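
That pair-mass calculation is just the invariant mass of the two muon four-momenta. A small sketch, where the two four-vectors are made-up back-to-back muons chosen to land near the quoted value (real event values differ, and the muon mass of ~0.106 GeV is neglected):

```python
# Invariant mass of a two-particle system from its four-momenta.
import math

def invariant_mass(p1, p2):
    """p = (E, px, py, pz) in GeV; returns the pair mass in GeV/c^2."""
    E, px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Two made-up 46.5 GeV muons traveling exactly back to back:
mu1 = (46.5, 46.5, 0.0, 0.0)
mu2 = (46.5, -46.5, 0.0, 0.0)

print(f"{invariant_mass(mu1, mu2):.1f} GeV/c^2")  # 93.0, near the Z mass
```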

In the next post, we will see the three levels of the ATLAS trigger system, with increasing complexity and access to more detail of the detector at each level. When we get to the software trigger levels, we will probably have a post about computing in such an environment (yes, the trigger software must be fast, even if you have to lose a bit of precision!). If you are interested in understanding how one of these detectors works, I suggest a look at my latest 3 posts (first, second and third). You will need that information to understand the trigger.



My Science Scout Badges

Thursday, November 10th, 2011

For translations, click or roll over each badge, or see the web page of the Order of the Science Scouts of Exemplary Repute and Above Average Physique.




Congratulations go out to fellow US LHC blogger Prof. Sarah Demers for just being awarded the Department of Energy’s Early Career Award.  The announcement is naturally featured prominently on the website of her home institution, the Yale University Physics Department.  This award has recently replaced the long-standing DOE Outstanding Junior Investigator Award (OJI), which awarded grants to promising junior faculty members from 1978 to 2008, an impressive run!  The new Early Career Award has brought the previous National Science Foundation Early Career Award and the DOE OJI award into a more similar format and award level.

These awards can mean a tremendous amount to a new faculty member in particle physics. I was fortunate enough to receive an OJI from the DOE, and fellow blogger Prof. Ken Bloom was fortunate to receive a Career Award from NSF, when we were both new junior professors.  This allowed us both to support perhaps a graduate student and part of a postdoc’s salary as well as our own summer salaries while we established our research programs as new faculty members.  Now Sarah has earned a peer-reviewed grant, which is a major milestone for a new professor, and which enables her to proceed with her successful research program without relying on university start-up funds (which eventually dry up).  Here’s to Sarah’s future success!

Photo by Waldo Jaquith



Friday, May 6th, 2011

CERN is the place to be if you’re a particle physicist! It has everything you could want: the most promising experiments, all kinds of experts on hand, some of the most powerful computing systems in the world, fascinating seminars. It’s enough to draw people in from all over the world. The only downside is that it’s a bit tricky to get away from CERN for an evening in the city. Well, not anymore! This week the tram arrived at CERN, giving us an essential lifeline to Geneva, with all its services and nightlife.

CERN tram

The CERN tram!

The town of Meyrin saw the new tram as cause for a street party, with all kinds of entertainers, a jazz band, and free rides on a historic tram. So I went along to see what was on offer, and how people reacted to the new transport link. Everyone seemed to be very happy about it (except perhaps for a few motorists!). “Great!” I thought, this gives us an easy way to get around. We can socialize more often, making it easier to meet people and enjoy ourselves, and making short trips to CERN all the more fun. There are many people who come to CERN for a few weeks or months at a time over the summer, and there’s pressure to cram as much into their time here as possible. Trimming some minutes off the journeys to and from Geneva makes things just that little bit easier for everyone!

People coming to explore CERN

People coming to explore CERN

What impressed me most was how CERN used this opportunity to reach out to the public. In retrospect it was silly that I didn’t realize the tram went to CERN as well as from CERN! The new service included a tram advertising CERN, taking people right up to the Microcosm and the Globe, where they were welcomed in to see what CERN has to offer. Presumably this is only the start of a new way of approaching CERN (literally and figuratively). This is the first time people can get directly from the heart of Geneva to the center of CERN’s public spaces. The icing on the cake is the tram itself, which is modern and spacious. First impressions matter, and no longer relying on the rickety number 56 bus to go the final mile will make a big difference to people’s perceptions of CERN. It’s a place which is modern, relevant, well connected and a vital part of the greater Geneva area. It has deserved a tram stop for years, and one has finally arrived!


A couple of weeks ago we met the Higgs boson and discussed its Feynman rules.


I had forgotten to put up the obligatory Particle Zoo plush Higgs picture in my last post, but US LHC readers will know that Burton has the best photos of the [plushy] Higgs. (It seems that the Higgs has changed color over at the Particle Zoo.)

We learned that the Higgs is a different kind of particle from the usual gauge boson “force” particles or the fermion “matter” particles: it’s a scalar particle which, for those who want to be sophisticated, means that it carries no intrinsic quantum mechanical spin. Practically for these posts, it means that we ended up drawing the Higgs as a dashed line. For the most part, however, the Feynman rules that we presented in the previous post were pretty boring…

Recall the big picture for how to draw Feynman diagrams:

  1. Different particles are represented by lines. We now have three kinds: fermions (solid lines with arrows), gauge bosons (wiggly lines), and scalars (dashed lines).
  2. When these particles interact, their lines intersect. The “rules” above tell us what kinds of intersections are allowed.
  3. If we want to figure out whether a process is possible, we have to decide whether or not we can use the rules to convert the initial set of particles into the final set of particles.
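
The bookkeeping in rule 2 can be mimicked in code: an interaction is a set of lines meeting at a point, and only certain combinations are allowed. The vertex list below is a tiny made-up subset for illustration, not the full Standard Model:

```python
# Toy vertex bookkeeping: which sets of lines may meet at a single point.
ALLOWED_VERTICES = {
    ("electron", "electron", "photon"),    # fermion-fermion-gauge boson
    ("higgs", "higgs", "higgs", "higgs"),  # four-point Higgs self-interaction
}

def vertex_allowed(lines):
    """True if the given lines may intersect at a single vertex."""
    return tuple(sorted(lines)) in ALLOWED_VERTICES

print(vertex_allowed(["electron", "photon", "electron"]))  # True
print(vertex_allowed(["higgs", "higgs"]))                  # False (so far!)
```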

If you’ve been following our posts on Feynman diagrams, then you might already be bored of this process. We could see how electrons could turn into muons, or even how the Higgs boson might be produced at the LHC; but now we’ve arrived at the Higgs boson—one of the main goals of the LHC—where is the pizzazz? What makes it special, and how do we see it in our Feynman rules?

The Higgs is special

It turns out that the Higgs has a trick up its sleeve that the other particles in the Standard Model do not. In the language of Feynman diagrams, a Higgs line can terminate:

The “x” means that the line just ends; there are no other particles coming out. Very peculiar! We know that ordinary particles don’t do this: we don’t see matter particles disappearing into nothing, nor do we see force particles disappearing without being absorbed by other particles. We can think about what happens when matter and anti-matter annihilate, but there we usually release energy in the form of force particles (usually photons). The above rule tells us that a single Higgs line, happily doing its own thing, can suddenly be cut off. It shouldn’t be read as an initial-state or final-state particle. It’s just some intermediate line which happens to stop.

We’ll discuss the physical meaning of this in upcoming posts. Sometimes when people try to explain the physical meaning they can get caught up in their own analogies. Instead, let us use the Feynman diagrams as a crutch to see the effects of this weird Feynman rule. Recall that in the previous post we introduced a four-point Higgs self-interaction (“four-point” means four Higgs lines intersecting):

If we take one of the lines and terminate it, we end up with a three-point Higgs self interaction:

In fact, since the crossed out line isn’t doing anything, we might as well say that there is a new Feynman rule of the form

Now that’s somewhat interesting. We could have forgotten about the “crossed-out Higgs line” rule and just postulated a three-point vertex. In fact, this is usually the way people write out Feynman rules (this is why our method has been “idiosyncratic”); however, for our particular purposes it’s important to emphasize that what people really mean is that there is implicitly a “crossed-out Higgs line.” The significance is closely tied up with what makes the Higgs so special.

We could play this game again and cross out one of these three lines. This would lead us to a two-point Higgs interaction.

Once again, we could just as well chop off the two terminated lines and say that there is a ‘new’ two-point Higgs Feynman rule. But this is really just a line, and we already knew that we could draw lines as part of our Feynman rules. In fact, we know that lines just mean that a particle moves from one place to another. So it seems like this interaction with two crossed-out lines doesn’t give us anything new.

… except there’s more to it, and this is where we start to get a hint of the magic associated with the Higgs. Let me make the following statement without motivation:

Claim: the above Feynman rule is a contribution to the Higgs mass.

At this point, you should say something incredulous like, “Whaaaaaat?” Until now, we’ve said that particles have some particular mass. The number never really mattered that much; some particles are lighter than others, and some particles have zero mass. Mass is just another property that each particle seems to have. Now, however, we’ve made a rather deep statement that puts us at the tip of a rather large iceberg: we’re now relating a particular Feynman rule to the mass of the particle, which we had previously assumed was just some number that we had to specify with our theory.

We’ll have to wait until my next post to really get into why such a relation should exist and really what we even mean by mass, but this should at least start to lend credence to the idea that the Higgs boson can give masses to particles. At this point this should still feel very mysterious and somewhat unsatisfying—that’s okay! We’ll get there. For now, I just want you to feel comfortable with the following string of ideas:

  1. The Higgs boson has a special Feynman rule where a line can terminate.
  2. This means we can take any interaction and effectively remove the Higgs line by terminating it immediately after the vertex.
  3. In particular, this means that we generate a vertex with just two lines.
  4. This vertex with two lines should—for reasons which are presently mysterious—be identified with mass.
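For readers who want a preview of why point 4 holds, here is a hedged sketch (we will justify it properly later): if each two-point vertex contributes a factor of $m^2$, then summing a chain of any number of such insertions along a massless propagator $1/p^2$ is a geometric series,

```latex
\frac{1}{p^2} \sum_{n=0}^{\infty} \left( \frac{m^2}{p^2} \right)^n
  \;=\; \frac{1}{p^2}\,\frac{1}{1 - m^2/p^2}
  \;=\; \frac{1}{p^2 - m^2},
```

which is exactly the propagator of a particle of mass $m$. (Signs and factors of $i$ depend on conventions; the only point here is that a chain of two-point insertions behaves like a mass.)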

Giving mass to the other particles

Now that we see how this game works, we should immediately go back to the first two Feynman rules we wrote down:

These are the interactions of the Higgs with fermions and gauge bosons. Here’s what you should be thinking:

Hm… I know that the Higgs boson line can terminate; I can just cross out the end points of a dashed line. And I just saw that when I do this to the Higgs self-interaction vertex enough times, I end up with a two-point interaction which Flip tells me is a mass for some weird reason. Now I see these two vertices representing the Higgs interaction with two matter particles or two force particles. Does terminating the Higgs line also give mass to these particles?

The answer is yes! We end up with vertices like this:

For aesthetic reasons (and really only for aesthetic reasons) we can shrink this diagram to:

We can even drop the “x” if we want to be purists… but for clarity we’ll leave it here to distinguish this from a normal line. These diagrams indeed represent a mass contribution to fermions and gauge bosons. Again, I’m just telling you this as a mysterious fact—we’ll explain why this interpretation is accurate later on. We’ll need to first understand what “mass” really is… and that will require some care.

Bumping up against the Higgs

In fact, instead of saying that particles “start out” with any masses, one can formulate our entire Feynman diagram program in terms of completely massless particles. In such a picture, particles like the top quark or Z boson undergo lots of the aforementioned two-point “mass” interactions and so are observed to have larger masses. Heuristically, heavy particles barrel along and have lots of these two-point interactions:

For comparison, a light particle like the electron would have fewer of these interactions. Its motion (again, heuristically) looks more like this:

We should remember that each of these crosses is really a terminated Higgs line. To use some fancy parlance which will come up in a later post, we say that the Higgs has a “vacuum expectation value” and that these particles are bumping up against it. The above pictures are just ‘cartoons’ of Feynman diagrams, but you can see how this seems to convey a sense of “inertia.” More massive particles (like the top quark) are harder to push around because they keep bumping up against the Higgs. Light particles, like the electron, don’t interact with the Higgs so much and so can be pushed more easily.

In this sense, we can think of all particles as being massless, but their interactions with the Higgs generates a two-point interaction which is effectively a mass. Particles which interact more strongly with the Higgs have more mass, while particles which interact weakly with the Higgs have less mass. In fact, once we assume this, we might as well drop all of the silly crosses on these lines—and then we’re left with the usual Feynman rules (with no terminating Higgs lines) that are usually presented.

(A small technical note: the Higgs isn’t actually responsible for all mass. For example, bound states get masses from their binding energy. Just look up the mass of the proton and compare it to the mass of its constituent quarks. The proton has a mass of about 1 GeV, while the up/down quarks are only one thousandth of this. Most of the proton mass comes from the binding energy of QCD.)

Some closing remarks

Before letting you ponder these things a bit more, let me make a few final remarks to whet your appetite for our next discussion.

  • The photon, as we know, is massless. We thus expect that the Higgs does not interact with the photon, or else we could have ‘terminated’ the Higgs lines in the interaction vertex and generated a photon mass.
  • On the other hand, the Higgs gives the W and Z bosons mass. This means that it costs energy to produce these guys, and so the weak force is only really effective over a short distance. Compare this to photons, which are massless and so can produce a long-range force. (Gluons are also massless, but they produce a short-range force due to confinement.) Thus the Higgs is responsible for the “weakness” of the weak force.
  • … on that note, it’s worth noting that the “weak” force isn’t really so weak—it only appears weak at long distances due to the mass of the W and Z. If you look at shorter distances—say on distances shorter than the distance between two Higgs crosses in the cartoon picture above—then you’d find that the weak force is actually quite potent compared to electromagnetism. Thus a more accurate statement is that the Higgs is responsible for the short-ranged-ness of the weak force.
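To put a rough number on that short-ranged-ness, here is a standard back-of-the-envelope estimate (not derived in this post): a force carried by a boson of mass $m$ falls off like a Yukawa potential, with range set by the boson’s Compton wavelength,

```latex
V(r) \;\propto\; \frac{e^{-r/\lambda}}{r},
\qquad
\lambda \;=\; \frac{\hbar}{m c}
\;\approx\; \frac{197\ \text{MeV}\cdot\text{fm}}{80{,}000\ \text{MeV}}
\;\approx\; 2.5\times 10^{-3}\ \text{fm}
```

for the W boson ($m \approx 80$ GeV): a few thousandths of the proton’s radius. For the massless photon, $\lambda \to \infty$ and the potential reverts to the familiar long-range Coulomb form.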

There are also a few open questions that are worth pointing out at this point. We’ll try to wrap these up in the upcoming posts on this subject.

  • The big elephant in the room is the question of why the two-point interaction from terminating a Higgs line should be interpreted as a mass. We got a hint in the picture above of how “bumping off the Higgs” can at least heuristically appear to have something to do with inertia. We’d like to better understand what we really mean by mass.
  • We also very glibly talked about treating everything as massless and only generating ‘effective’ masses through such Higgs interactions. Special relativity tells us that there is a very big difference between a particle with exactly no mass and one with some mass; this has to do with whether or not it is possible, in principle, to catch up to the particle. How does this mesh with our picture above that masses can come from ‘bumping off the Higgs’?
  • What does it mean physically that the Higgs line can terminate? What do we mean by the “vacuum expectation value?” This will turn out to be related to the idea that all of our particles are manifested as quantum fields. What does this mean?
  • This whole business is related to something called electroweak symmetry breaking, and that is the phenomenon associated with the Higgs which is really, really magical.

How many different particles can you make from quarks? A lot. Every two years or so, the Particle Data Group puts out a catalog of the ones we know about. I always love getting mine in the mail. It’s as big as a phone book, with thin paper like a Bible. The compilation of all the particles and their properties represents a truly massive intellectual effort. Most of the hadrons are just labeled with Greek letters, but they’re festooned with all kinds of superscripts and asterisks, and their properties have names as colorful and idiosyncratic as their discoverers. For example, the neutral Ξ or “cascade” hyperon is a doubly-strange baryon with half-integer isospin. To my ear, most science fiction falls flat compared to real conversations between particle physicists.

By adding energy to a hadron, you can change its nature and put it into excited states called resonances. The idea is loosely analogous to exciting atoms in a laser or fluorescent lamp, except more relativistic: the particle’s mass can change. The humble proton, for example, can be excited into something called a Δ resonance, which is around 30% more massive, because some of the absorbed energy converts to mass. These resonances don’t hang around very long, but as you look at higher and higher masses, you see more and more of them. By the 1960s, the number of newly discovered particles and resonances had grown rapidly in step with the energy of the accelerators that produced them. This proliferation raised questions about how to explain such variety, and about what limitations, if any, there are on the number of states. When the quantum-mechanical rules governing properties like spin, charge, and angular momentum were taken into account, the number of hadronic states was found to rise exponentially with mass. This plot is a fairly recent example:


Up to a certain mass, the number of hadrons rises exponentially. The red curve includes particles that weren't plotted in earlier references, represented by the green curve.

When you see a straight line on a semi-log plot, it’s a dead giveaway for an exponential form. Why is that pattern followed? What’s even more interesting is that the number of particles rises with mass at the same rate as it falls with increasing (transverse) momentum, at least below a few GeV. Several creative ideas emerged as attempts to explain the hadron spectra, but a physicist named Rolf Hagedorn gets the credit for developing a theory using statistical mechanics. This is before the era of quarks, remember: he referred to hadrons as “fireballs”, and considered that the heavy resonances were compositions of lighter ones, which were in turn composed of still lighter ones. In one of his lively papers, he said:

His mathematical line of reasoning implied that if you were to collect a bunch of hadrons together and treat them as a gas of particles, their energy would become infinite as the temperature approached a limiting value. He seems to have been quite a character. In the same paper, he concluded:

It follows that T is the highest possible temperature—a kind of ‘boiling point of hadronic matter’ in whose vicinity particle creation becomes so vehement that the temperature cannot increase anymore, no matter how much energy is fed in.

And now we come to the point. Hagedorn’s argument implies a change in the number of fundamental degrees of freedom of the system: it has to break down into more fundamental building blocks. Instead of remaining a gas of hadrons, a superheated system would melt into a phase with simpler constituents at a temperature near what is now known as the Hagedorn temperature. Using the best data available, he extrapolated from the known spectra to obtain a value of the critical temperature near 160 MeV, or in more familiar units, around two trillion degrees.
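A toy numerical sketch can make the “boiling point” argument concrete. The model below is my own illustration, not Hagedorn’s actual calculation: take an exponential density of states ρ(m) ∝ exp(m/T_H) with T_H = 160 MeV, weight it by a Boltzmann factor exp(−m/T), and compute the average mass of a particle in the gas. The combined weight behaves like exp(m(1/T_H − 1/T)), so the average is finite only for T < T_H and blows up as T approaches T_H:

```python
import numpy as np

T_H = 0.160  # assumed Hagedorn temperature in GeV (~160 MeV, as in the post)

def mean_mass(T, m_max=50.0, n=200_000):
    """Average hadron mass in a toy gas with an exponential mass spectrum.

    rho(m) ~ exp(m / T_H), Boltzmann-weighted by exp(-m / T).  The combined
    weight exp(m * (1/T_H - 1/T)) is integrable only for T < T_H.
    """
    m = np.linspace(0.1, m_max, n)           # masses in GeV
    w = np.exp(m * (1.0 / T_H - 1.0 / T))    # spectrum times Boltzmann factor
    return float((m * w).sum() / w.sum())    # weighted average mass

for T in (0.10, 0.14, 0.155, 0.159):
    print(f"T = {T * 1000:3.0f} MeV  ->  <m> = {mean_mass(T):6.2f} GeV")
```

As T creeps toward 160 MeV, the average mass (and with it the energy stored in the gas) grows without bound: pumping in more energy makes heavier and heavier resonances instead of raising the temperature.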

With a more sophisticated understanding thanks to Quantum Chromodynamics (QCD), more tools have become available to check this number. It’s a tough job, because this physics lies in the so-called “non-perturbative” regime, where pencil-and-paper solutions to the QCD equations don’t work well. But that’s what supercomputers are for. Theorists devised a way to crunch out the answers by dividing space-time itself into a grid of points called the lattice, “playing” the equations forward numerically in steps. It takes a lot of CPU cycles, but the answer seems to corroborate Hagedorn’s estimate.

So nuclear matter melts if you get it hot enough. It was suggested over 40 years ago, and theoretical innovations only seem to confirm it. So what happens then? And is this temperature achievable in the lab? I’ll post again soon to follow up on these questions.



Why Frank loves SUSY

Tuesday, May 3rd, 2011

This week I’ve been in Arlington, Texas, attending the excellent southwestern ATLAS analysis jamboree. As a special treat, the jamboree dinner was held in conjunction with an event at Southern Methodist University, just to the north of Dallas.

The keynote speaker at this event was Frank Wilczek, a co-winner of the 2004 Nobel Prize in Physics. Frank won the prize for work he began during his Ph.D. studies (take note, all you students) concerning the nature of the strong force. Tonight, though, he did not talk about this; instead he focused on the LHC and on its ability to discover Supersymmetry (SUSY).


Me and Frank Wilczek



I’ve name-dropped SUSY before, and once again explaining SUSY is way beyond the scope of what I intend to say today. In brief, SUSY solves a number of problems present in the Standard Model by introducing a new symmetry to the theory which allows the transformation of force particles (bosons) into matter particles (fermions), essentially presenting these as two facets of the same thing.

SUSY has a lot of interesting and beautiful implications. It brings a greater level of symmetry to the Standard Model and by doing so explains all of the known particles and forces in a concise and elegant way.

Frank’s favourite property of SUSY is its ability to explain the strong, weak and electromagnetic forces each as manifestations of a single “grand-unified” force. These forces then only appear to be different to us as we’re forced to study them at the exceptionally low energies available in everyday life. However, if we were to look at these forces more closely, that is to say at much much higher energy, then SUSY predicts that we’d see that they are all one and the same thing.

The motivation for this grand-unification claim comes from, among other things, studying how the strengths of these forces change with increasing energy. The idea is that if they are all manifestations of the same force, then at some energy their strengths should all become equal.

If the Standard Model is the final word, then this doesn’t happen. But if we throw SUSY into the equation then, miraculously, it does. Moreover, it happens at an energy that fits nicely(-ish) into our understanding of the universe.
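The “miracle” can be checked with a few lines of arithmetic. The sketch below uses the standard one-loop running of the inverse couplings, α_i⁻¹(μ) = α_i⁻¹(M_Z) − (b_i/2π) ln(μ/M_Z), with textbook one-loop b-coefficients and approximate measured inputs at M_Z. It is deliberately crude (for instance, it applies the MSSM coefficients all the way down to M_Z, ignoring the SUSY particle threshold), but it shows the effect:

```python
import math

M_Z = 91.19                        # Z boson mass in GeV
ALPHA_INV_MZ = (59.0, 29.6, 8.45)  # approx. alpha1^-1, alpha2^-1, alpha3^-1 at M_Z
                                   # (GUT-normalized hypercharge for alpha1)
B_SM = (41 / 10, -19 / 6, -7)      # one-loop beta coefficients, Standard Model
B_MSSM = (33 / 5, 1, -3)           # one-loop beta coefficients, MSSM

def run_up(alpha_inv, b, mu):
    """One-loop running: alpha_i^-1(mu) = alpha_i^-1(M_Z) - b_i/(2 pi) ln(mu/M_Z)."""
    t = math.log(mu / M_Z)
    return [a - bi * t / (2 * math.pi) for a, bi in zip(alpha_inv, b)]

mu = 2e16  # GeV: roughly where the MSSM couplings meet
sm = run_up(ALPHA_INV_MZ, B_SM, mu)
mssm = run_up(ALPHA_INV_MZ, B_MSSM, mu)

print("SM at 2e16 GeV:  ", [f"{a:5.1f}" for a in sm])    # inverse couplings miss each other
print("MSSM at 2e16 GeV:", [f"{a:5.1f}" for a in mssm])  # inverse couplings nearly meet
```

With the Standard Model coefficients, the three inverse couplings land far apart at 2×10¹⁶ GeV; with the MSSM coefficients they arrive within a fraction of a percent of one another, which is the content of the two figures above.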


The evolution of the strengths of the forces with energy in the Standard Model (1).



The evolution of the strengths of the forces with energy in the Minimal Supersymmetric Standard Model (1). Gravity is also shown in red.


Unfortunately, even with the LHC, studying the unification energy directly is way, way out of reach. But if SUSY is what provides grand unification, then we should certainly be able to see it at the LHC.

Whether you buy this as suitable motivation for SUSY or not is a matter of taste. Not everyone is convinced, one reason being that to get to the unification scale you have to extrapolate the strengths of the various forces over thirteen orders of magnitude in energy, yet to date we’ve only measured them over the first three.

Frank, however, doesn’t seem to feel this is an issue, and as he’s the one with the Nobel Prize, maybe you should listen to him.


[1] Anticipating a New Golden Age, Frank Wilczek, arXiv:0708.4236v3.


Hi everyone! Readers of this blog might enjoy some of the following recent multimedia by some well-known particle physicists.

  • First, a podcast from Jim Gates of the University of Maryland about his path in physics, Go Tell It on the Mountain (link to iTunes, link to mp3), from The Moth. The talk is from the 2008 World Science Festival, which will be held again this year in New York City in a month.
  • Next, a very nice animated discussion with Daniel Whiteson and Jonathan Feng from UC Irvine on PhD Comics. They discuss dark matter, particle physics, and the Large Hadron Collider.
  • Along the lines of dark matter and particle physics, here’s a mission briefing from NASA on AMS-2, the “particle detector in space,” featuring principal investigator (and Nobel laureate for the discovery of the J/ψ particle) Sam Ting. Matt mentioned AMS-2 in his inaugural post. A lot of particle physicists are excited about AMS due to recent anomalies in the spectra of cosmic positrons and anti-protons that may be a result of dark matter interactions.
  • Finally, some time ago I had a general-public-level post about Nima Arkani-Hamed’s (and collaborators’) work on scattering amplitudes. For those with a technical background who are interested in learning more, his informal lectures to the Cornell particle theory group are now posted online: part 1, part 2, part 3, part 4, part 5. For those who can’t get enough, there’s also an ongoing program at the KITP with lots of recorded talks. These links are at the level of theoretical physicists working in the field; for a general-public version, see Nima’s messenger lectures.