
Posts Tagged ‘Theory’

What is “Model Building”?

Thursday, August 18th, 2016

Hi everyone! It’s been a while since I’ve posted on Quantum Diaries. This post is cross-posted from ParticleBites.

One thing that makes physics, and especially particle physics, unique among the sciences is the split between theory and experiment. The role of experimentalists is clear: they build and conduct experiments, take data, and analyze it using mathematical, statistical, and numerical techniques to separate signal from background. In short, they seem to do all of the real science!

So what is it that theorists do, besides sipping espresso and scribbling on chalk boards? In this post we describe one type of theoretical work called model building. This usually falls under the umbrella of phenomenology, which in physics refers to making connections between mathematically defined theories (or models) of nature and actual experimental observations of nature.

One common scenario is that one experiment observes something unusual: an anomaly. Two things immediately happen:

  1. Other experiments find ways to cross-check to see if they can confirm the anomaly.
  2. Theorists start to figure out the broader implications of the anomaly, should it turn out to be real.

#1 is the key step in the scientific method, but in this post we’ll illuminate what #2 actually entails. The scenario looks a little like this:

An unusual experimental result (anomaly) is observed. One thing we would like to know is whether it is consistent with other experimental observations, but these other observations may not be simply related to the anomaly.

Theorists, who have spent plenty of time mulling over the open questions in physics, are ready to apply their favorite models of new physics to see if they fit. These are the models that they know lead to elegant mathematical results, like grand unification or a solution to the Hierarchy problem. Sometimes theorists are more utilitarian, and start with “do it all” Swiss army knife theories called effective theories (or simplified models) and see if they can explain the anomaly in the context of existing constraints.

Here’s what usually happens:

Usually the nicest models of new physics don’t fit! In the explicit example, the minimal supersymmetric Standard Model doesn’t include a good candidate to explain the 750 GeV diphoton bump.

Indeed, usually one needs to get creative and modify the nice-and-elegant theory to make sure it can explain the anomaly while avoiding other experimental constraints. This makes the theory a little less elegant, but sometimes nature isn’t elegant.

Candidate theory extended with a module (in this case, an additional particle). This additional module is “bolted on” to the theory to make it fit the experimental observations.

Now we’re feeling pretty good about ourselves. It can take quite a bit of work to hack the well-motivated original theory in a way that both explains the anomaly and remains consistent with all other known experimental observations. A good theory can do a couple of other things:

  1. It points the way to future experiments that can test it.
  2. It can use the additional structure to explain other anomalies.

The picture for #2 is as follows:

A good hack to a theory can explain multiple anomalies. Sometimes that makes the hack a little more cumbersome. Physicists often develop their own sense of ‘taste’ for when a module is elegant enough.

Even at this stage, there can be a lot of really neat physics to be learned. Model-builders can develop a reputation for particularly clever, minimal, or inspired modules. If a module is really successful, then people will start to think about it as part of a pre-packaged deal:

A really successful hack may eventually be thought of as its own variant of the original theory.

Model-smithing is a craft that blends together a lot of the fun of understanding how physics works—which bits of common wisdom can be bent or broken to accommodate an unexpected experimental result? Is it possible to find a simpler theory that can explain more observations? Are the observations pointing to an even deeper guiding principle?

Of course—we should also say that sometimes, while theorists are having fun developing their favorite models, other experimentalists have gone on to refute the original anomaly.

Sometimes anomalies go away and the models built to explain them don’t hold together.

But here’s the mark of a really, really good model: even if the anomaly goes away and the particular model falls out of favor, a good model will have taught other physicists something really neat about what can be done within a given theoretical framework. Physicists get a feel for the kinds of modules that are out on the market (like an app store) and they develop a library of tricks to attack future anomalies. And if one is really fortunate, these insights can point the way to even bigger connections between physical principles.

I cannot end this post without one of my favorite physics jokes, courtesy of T. Tait:

 A theorist and an experimentalist are having coffee. The theorist is really excited; she tells the experimentalist, “I’ve got it—it’s a model that’s elegant, explains everything, and it’s completely predictive.” The experimentalist listens to her colleague’s idea and realizes how to test those predictions. She writes several grant applications, hires a team of postdocs and graduate students, trains them, and builds the new experiment. After years of design, labor, and testing, the machine is ready to take data. They run for several months, and the experimentalist pores over the results.

The experimentalist knocks on the theorist’s door the next day and says, “I’m sorry—the experiment doesn’t find what you were predicting. The theory is dead.”

The theorist frowns a bit: “What a shame. Did you know I spent three whole weeks of my life writing that paper?”


Frenzy among theorists

Thursday, February 4th, 2016

Since December 15, I have counted 200 new theoretical papers, each one suggesting one or several possible explanations for a new particle not yet discovered. This flurry of activity started when the CMS and ATLAS Collaborations both reported having found a few events that could possibly reveal the presence of a new particle decaying to two photons. Its mass would be around 750 GeV, that is, six times the mass of the Higgs boson.

No one knows yet if all this excitement is warranted, but it clearly illustrates how much physicists are hoping for a huge discovery in the coming years. Will it be like with the Higgs boson, which was officially discovered in July 2012 but had already given some faint signs of its presence a year earlier? Right now, there is not enough data. And just as I wrote in July 2011, it is as if we were trying to guess whether a train is coming by peering into the far distance on a grey winter day. Only time will tell if the indistinct shape barely visible above the horizon is the long-awaited train or just an illusion. But until more data become available, everybody will keep their eyes on that spot.

The noon train, Jean-Paul Lemieux, National Gallery of Canada

Due to the difficulties inherent in the restart of the LHC at higher energy, the amount of data collected at 13 TeV in 2015 by ATLAS and CMS was very limited. Given that small data samples are always prone to large statistical fluctuations, the experimentalists exercised great caution when they presented these results, clearly stating that any claim was premature.

But theorists, who have been craving signs of something new for decades, jumped on it. Within a single month, including the end-of-year holiday period, 170 theoretical papers were published, suggesting just as many possible interpretations for this yet-undiscovered new particle.

No new data will come for a few more months due to annual maintenance. The Large Hadron Collider is due to restart on March 21 and should deliver the first collisions to the experiments around April 18. The hope is to collect a data sample of 30 fb⁻¹ in 2016, to be compared with about 4 fb⁻¹ in 2015. Later this summer, when more data are available, we will know whether this new particle exists or not.

This possibility is, however, extremely exciting, since the Standard Model of particle physics is now complete. All expected particles have been found. But since this model leaves many open questions, theorists are convinced that there ought to be a more encompassing theory. Hence, discovering a new particle or measuring anything with a value different from its predicted value would reveal at long last what the new physics beyond the Standard Model could be.

No one knows yet what form this new physics will take. This is why so many different theoretical explanations have been proposed for this possible new particle. I have compiled some of them in the table below. Many of these papers describe the properties a new boson would need in order to fit the current data. The solutions proposed are incredibly diverse, the most recurrent ones being various versions of dark matter or supersymmetric models, new gauge symmetries, Hidden Valleys, Grand Unified Theories, extra or composite Higgs bosons, and extra dimensions. There is enough to suit every taste: axizillas, dilatons, dark-pion cousins of a G-parity-odd WIMP, one-family walking technipions, or trinification.

It is therefore crystal clear: it could be anything or nothing at all… But every time accelerators have gone up in energy, new discoveries have been made. So we could be in for a hot summer.

Pauline Gagnon

To learn more about particle physics, don’t miss my book, which will come out in English in July.

To be alerted of new postings, follow me on Twitter: @GagnonPauline, or sign up on this mailing list to receive an e-mail notification.

A partial summary of the number of papers published so far, with the types of solutions they propose to explain the nature of the new particle, if there is indeed a new particle. Just about all known theoretical models can be adapted to produce a new particle with characteristics compatible with the few events observed. This is only indicative and by no means strictly exact, since many proposals were rather hard to categorize. Will one of these ideas be the right one?


Regular readers of Quantum Diaries will know that in the world of particle physics, there is a clear divide between the theorists and the experimentalists. While we are all interested in the same big questions — what is the fundamental nature of our world, what is everything made of and how does it interact, how did the universe come to be and how might it end — we have very different approaches and tools. The theorists develop new models of elementary particle interactions, and apply formidable mathematical machinery to develop predictions that experimenters can test. The experimenters develop novel instruments, deploy them on grand scales, and organize large teams of researchers to collect data from particle accelerators and the skies, and then turn those data into measurements that test the theorists’ models. Our work is intertwined, but ultimately lives in different spheres. I admire what theorists do, but I also know that I am much happier being an experimentalist!

But sometimes scientists from the two sides of particle physics come together, and the results can be intriguing. For instance, I recently came across a new paper by two up-and-coming physicists at Caltech. One, S. Cooper, has been a noted prodigy in theoretical pursuits such as string theory. The other, L. Hofstadter, is an experimental particle physicist who has been developing a detector that uses superfluid liquid helium as an active element. Superfluids have many remarkable properties, such as friction-free flow, that can make them very challenging to work with in particle detectors.

Hofstadter’s experience working with a superfluid in the lab gave him new ideas about how it could be used as a physical model for space-time. There have already been a number of papers that posit a theory of the vacuum as having properties similar to those of a superfluid. But the new paper by Cooper and Hofstadter takes this theory in a different direction, positing that the universe actually lives on the surface of such a superfluid, and that the negative energy density that we observe in the universe could be explained by the surface tension. The authors have difficulty generating any other testable hypotheses from this new theory, but it is inspiring to see how scientists from the two sides of physics can come together to generate promising new ideas.

If you want to learn more about this paper, watch “The Big Bang Theory” tonight, February 5, 2015, on CBS. And Leonard and Sheldon, if you are reading this post — don’t look at the comments. It will only be trouble.

In case you missed the episode, you can watch it here.

Like what you see here? Read more Quantum Diaries on our homepage, subscribe to our RSS feed, follow us on Twitter, or befriend us on Facebook!


How to build a universe

Thursday, January 8th, 2015

How do you make a world? This is the purview of theologians, science fiction authors and cosmologists. Broadly speaking, explaining how the universe evolves is no different from any other problem in science: we need to come up with an underlying theory, and calculate the predictions of this theory to see if they match with the real world. The tricky part is that we have no observations of the universe earlier than about 300,000 years after the big bang. Particle colliders give us a glimpse of conditions far earlier than that, but to a cosmologist even the tiniest fraction of a second after the big bang is vitally important. Any theorist who tries his or her hand at this is left with a trail of discarded models before reaching a plausible vision of the universe. Of course, how and why one does this is deeply personal, but I would like to share my own small experience with trying to make a universe.

For me, the end was very clear; I wanted to design a universe that explained the existence of dark and visible matter in a particular way. Asymmetric dark matter is a class of theories that try to link the origins of dark and visible matter, and my goal was to explore a new way of creating matter in the universe. So what do you start with? As a particle physicist, the most obvious (but not the only) building blocks at our disposal are particles themselves. Starting with the Standard Model, the easiest way to build a new theory is to just start adding particles. While adding a new particle every time you want to explain a new phenomenon seems indulgent (and some people take this to excess), historically this is a very successful tactic. The neutrino, the W and Z bosons, the charm, bottom and top quarks, and the Higgs boson were all introduced to explain various theoretical or experimental problems before they were discovered. While back in 1930 Pauli apologised for the bad grace of introducing a particle no one had ever seen, theoretical physicists have well and truly overcome this reticence.

So what ingredients does a dark matter model need? Clearly there must be a viable candidate for dark matter, so at least one new particle must be introduced. While the simplest case is for dark matter to be made of one particle, there is no reason for a substance that makes up 85% of matter in the universe to be any simpler than the matter that we are made of. But, for the sake of simplicity, let us say just one particle for now. For my work to explain the creation of visible matter as well as dark matter, there must also be some interaction between the two. To do this there must be a “mediator”, something that communicates between the visible and the dark. So at least two particles are necessary. Now, two particles doesn’t sound so bad, not when we already know of 17.

The model I was originally going to study (one that already existed) was like this, with dark matter interacting with neutrons. Unfortunately, this is also when the realities of model building sank in; it is rare for any model to be this simple and still work as advertised. Under closer scrutiny it turned out that there was no satisfactory way to make the dark matter stick around for the lifetime of the universe – it quickly decayed away unless you made some theoretical sacrifices I wasn’t comfortable making. Thus began my first foray into model building.

The first hurdle to overcome, for a first-time model builder, is simply the vast size of the literature itself. I was constantly worried that I had missed some paper that had beaten me to it, or had already considered some aspect of my work. Even though this was not the case, even the simplest of possible universes has a lot of complicated physics going on in a variety of areas – and any single aspect of the model failing could mean scrapping the whole thing. Most of these failing points are already known to those experienced in these matters, but a first timer has to find them out the hard way.

In the weeks I spent trying to come up with a model worth studying in detail, I had almost a dozen “Eureka” moments, which were almost always followed by me finding a flaw in the next few days. When you have no strict time limits, this is simply disheartening, but occasionally you can find flaws, or potential flaws, when you are already significantly invested and close to a deadline (such as thesis submission). Unfortunately the only real way to avoid this is to develop a level of care bordering on paranoia, to try and think of all the possible ways a theory might implode before getting bogged down in calculations. Of course, some things are inherently unforeseeable (otherwise why is it research) but many can be divined beforehand with enough experience and thought. This was driven home to me after spending a month working out the predictions of a theory, only to discover that I had drastically underestimated the consequences of a minor change to the model. Fortunately in research little is wasted; even though no part of that work appeared in the final version of my thesis, the methods I learnt certainly did.

My pride and joy, a model of ADM via a lepton portal. Leptons (like electrons and neutrinos) interact with scalar dark matter (the phi) to create the matter we see today.

Trying to come up with a theory yourself also forces you to confront your theoretical biases – naturalness, simplicity, renormalisability, testability and fine tuning are all held by theorists to be important, but it is almost impossible to satisfy all of them at once. Even worse, there are often many competing interpretations of each of these. So, almost inevitably, sacrifices must be made. Perhaps your theory has to give up on technical naturalness, or has a hell of a hierarchy problem (which mine definitely did). That being said, this is not always an issue; many models are made to explore a particular avenue, or to provide a working example. The fact that some of these traits cannot be satisfied is important information. You have to pick and choose what you care about, because if the history of physics has shown us anything, it is that theoretical biases, even very well grounded ones, can simply be wrong. The discovery of CP (and consequently time reversal) violation and the non-deterministic (or apparently non-deterministic, depending on whether you prefer a many-worlds interpretation) nature of quantum mechanics are just a couple of examples where “essential” elements of a proper theory turned out to simply not apply.

While this seems like a frustrating experience, I actually greatly enjoyed model building. Too much of university coursework is rushed – you have to learn all of a subject in 12 weeks, and are tested in an exam that only lasts four hours, sometimes in quite shallow ways. This kind of research emphasises patience and care, and allows (or requires) you to deeply understand the physics involved. Calculations are irrelevant for a large part of the process. You simply don’t have time to try and brute force your way through dozens of theories, so you must devise more elegant ways to discriminate and choose those worth the time. I very much doubt that the model I worked on is the underlying truth of our world, but it was very fun to try.

Geometry and interactions

Tuesday, November 25th, 2014

Or, how do we mathematically describe the interaction of particles?

In my previous post, I addressed some questions concerning the nature of the wavefunction, the most truthful mathematical representation of a particle. Now let us make this simple idea more complete, getting closer to the deep mathematical structure of particle physics. This post is a bit more “mathematical” than the last, and will likely make the most sense to those who have taken a calculus course. But if you bear with me, you may also come to discover that this makes particle interactions even more attractive!

The field theory approach considers wavefunctions as fields. In the same way as the temperature field \(T(x,t)\) gives the value of the temperature in a room at position \(x\) and time \(t\), the wavefunction \(\phi (x,t)\) quantifies the probability of finding a particle at position \(x\) and time \(t\).
Cool! But if this sounds too abstract to you, then you should remember what Max Planck said about the rise of quantum physics: “The increasing distance between the image of the physical world and our common-sense perception of it simply indicates that we are gradually getting closer to reality”.

Almost all current studies in particle physics focus on interactions and decays of particles. How does the concept of interaction fit into the mathematical scheme?

The mother of all the properties of a particle theory is the Lagrangian function. From this single object, many properties of the theory can be computed. Here let’s consider the Lagrangian function for a massless complex scalar field (one of the simplest available), representing particles with electric charge and no spin:

\(L(x) = \partial_\mu \phi(x)^* \partial^\mu \phi(x) \).

Mmm… Is it just a bunch of derivatives of fields? Not really. What do we mean when we read \(\phi(x)\)? Mathematically, we are considering \(\phi\) as a vector living in a vector space “attached” to the space-time point \(x\). For the nerds of geometry, we are dealing with fiber bundles, structures that can be represented pictorially in this way:

[Figure: fiber bundles, a copy of the vector space attached to each space-time point]

The important consequence is that, if \(x\) and \(y\) are two different space-time points, the field \(\phi(x)\) lives in a different vector space (fiber) from \(\phi(y)\)! For this reason, we are not allowed to perform operations that combine them, like taking their sum or difference (it’s like comparing a pear with an apple… either sum two apples or two pears, please). This feature is highly non-trivial, because it changes the way we need to think about derivatives.

In the \(L\) function we have terms containing derivatives of the field \(\phi(x)\). In taking a derivative, we are actually taking the difference between the values of the field at two different space-time points. But… we just said that we are not allowed to do that! How can we solve this issue?

If we want to compare fields belonging to the same vector space, we need to modify the notion of derivative slightly, introducing the covariant derivative \(D\):

\( D_\mu = \partial_\mu + ig A_\mu(x) \).

Here, on top of the derivative \(\partial\), there is the action of the “connection” \(A(x)\), a structure which takes care of “moving” all the fields into the same vector space, and finally allows us to compare apples with apples and pears with pears.
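A quick check shows why the connection does its job. If we shift the field by a space-time dependent phase, \(\phi(x) \to e^{-ig\alpha(x)} \phi(x)\), and at the same time shift the connection, \(A_\mu(x) \to A_\mu(x) + \partial_\mu \alpha(x)\), then

\( D_\mu \phi(x) \to e^{-ig\alpha(x)} D_\mu \phi(x) \),

so the covariant derivative transforms exactly like the field itself, and a combination like \(D_\mu \phi(x)^* D^\mu \phi(x)\) does not care which phase we picked at each point.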
So, a better way to write down the Lagrangian function is:

\(L(x) = D_\mu \phi(x)^* D^\mu \phi(x) \).

If we expand \(D\) in terms of the derivative and the connection, \(L\) reads:

\(L(x) = \partial_\mu \phi(x)^* \partial^\mu \phi(x) + ig A_\mu (\partial^\mu \phi^* \phi - \phi^* \partial^\mu \phi) + g^2 A^2 \phi^* \phi \).

Do you recognize the role of these three terms? The first one represents the propagation of the field \(\phi\). The last two are responsible for the interactions between the fields \(\phi, \phi^*\) and the \(A\) field, referred to as the “photon” in this context.

[Figure: the interaction vertices between \(\phi\), \(\phi^*\) and the photon field \(A\)]
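If you want to see this expansion appear on its own, here is a minimal sketch in Python using SymPy. It suppresses the Lorentz indices and uses names of my own choosing, such as dphi for the ordinary derivative of \(\phi\), so it is only a schematic check of the algebra, not a full field-theory computation:

```python
import sympy as sp

# Schematic expansion of L = (D phi)^* (D phi), suppressing Lorentz indices.
# "dphi"/"dphic" stand for the ordinary derivatives of phi and its conjugate
# (the names are illustrative shorthand, not standard notation).
g, A = sp.symbols('g A', real=True)
phi, phic, dphi, dphic = sp.symbols('phi phic dphi dphic')

# Covariant derivatives, following D = d + i*g*A from the post;
# the conjugate field picks up the opposite sign.
D_phi = dphi + sp.I * g * A * phi
D_phic = dphic - sp.I * g * A * phic

L = sp.expand(D_phic * D_phi)
print(sp.collect(L, g))
# The expanded Lagrangian contains three pieces:
#   dphic*dphi                       : free propagation of phi
#   I*g*A*(phi*dphic - phic*dphi)    : phi-phi-photon interaction vertex
#   g**2 * A**2 * phic*phi           : phi-phi-photon-photon ("seagull") vertex
```

The three groups of terms printed at the end are exactly the propagation and interaction pieces described above.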

This slightly hand-waving argument involving fields and space-time is a simple way to understand how the interactions among particles emerge as a geometric feature of the theory.

If we consider more sophisticated fields carrying spin and color charges, the argument doesn’t change. We just need a more refined “connection” \(A\), and we would see the physical interactions among quarks and gluons (namely QCD, Quantum Chromodynamics) emerging from the mathematics alone.

Probably the professor of geometry in my undergrad course would call this explanation “Spaghetti Mathematics”, but I think it gives you a flavor of the mathematical subtleties involved in the theory of particle physics.


Naturalness

Monday, April 22nd, 2013

This article originally appeared in symmetry on April 16, 2013.

When a scientific result fails the test of “naturalness,” it can point to new physics.

Suppose a team of auditors is tasked with understanding a particular billionaire’s bank account. Each month, millions of dollars flow into and out of the account. If the auditors look at the account on random days, they see varying amounts of money. However, on the last day of every month, the balance is briefly set to exactly zero dollars.

It’s hard to imagine that this zero balance is an accident; it seems as if something is causing the account to follow this pattern. In physics, theorists consider improbable cancellations like this one to be signs of undiscovered principles governing the interactions of particles and forces. This concept is called “naturalness”—the idea that theories should make seeming coincidences feel reasonable.

In the case of the billionaire, the surprising thing is that, on a set schedule, the cash flow reaches perfect equilibrium. But one would expect it to be more erratic. The ups and downs of the stock market should cause monthly variations in the tycoon’s dividends. A successful corporate raid could lead to a windfall. And an occasional splurge on a Lamborghini could cause a bigger withdrawal than usual.

This unnatural fiscal balance simply screams for an explanation. One explanation that would make this ebb and flow of funds make sense would be if this account worked as a charity fund. Each month, on the first day of the month, a specific amount would be deposited. Over the course of the month, a series of checks would be cut for various charities, with the outflow carefully planned to match identically the initial deposit. Under this situation, it would be easy to explain the recurring monthly zero balance. In essence, the “charity account principle” makes what at first seemed to be unnatural now appear to be natural indeed.

In physics, we see a similar phenomenon when we predict the mass of the Higgs boson. While Higgs bosons get their mass in the same way as all other fundamental particles (by interacting with the Higgs field), that mass is also affected by another process—one in which the Higgs boson temporarily fluctuates into a pair of virtual particles, either two bosons or two fermions, and then returns to its normal state. These fluctuations affect the mass of the Higgs boson, and the size of this effect can be calculated using the Standard Model—a theory that predicts, among other things, the behavior of Higgs bosons.

To calculate how much these quantum fluctuations affect the mass, scientists multiply two terms. The first involves the maximum energy for which the Standard Model applies—a huge number. The second is the sum of the effects of fluctuations into different virtual bosons minus the sum of the effects of fluctuations into different virtual fermions. If the Higgs mass is small, as recent measurements at the LHC suggest, the product of these two numbers must also be small. This means the sum effect of the bosons must be almost identical to the sum effect of the fermions, an unlikely scenario that turns out to be true. For this near cancellation to happen “just by accident” is so utterly improbable that it beggars the imagination. A coincidence like this is simply unnatural.
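To get a feel for the size of the coincidence, here is a rough back-of-the-envelope estimate in Python. The cutoff scale and the 1/(16π²) loop factor are illustrative choices of mine (the article only says the first term is “a huge number”), so treat this as a sketch of the orders of magnitude rather than a Standard Model calculation:

```python
import math

# Illustrative numbers only: the cutoff and the 1/(16 pi^2) loop factor are
# rough conventional estimates, not a precise Standard Model computation.
higgs_mass = 125.0   # GeV, roughly the measured Higgs boson mass
cutoff = 1e16        # GeV, a hypothetical scale up to which the Standard Model applies
loop_factor = 1.0 / (16.0 * math.pi**2)

# The quantum correction to the Higgs mass-squared is very roughly
#   delta_m2 ~ loop_factor * cutoff**2 * (boson_sum - fermion_sum),
# so for the correction to stay comparable to the observed mass-squared,
# the bracket of order-one numbers must nearly cancel:
required_difference = higgs_mass**2 / (loop_factor * cutoff**2)

print(f"boson_sum - fermion_sum must be ~ {required_difference:.1e}")
print(f"that is, two order-one sums cancelling to about "
      f"{-math.log10(required_difference):.0f} decimal places")
```

With these (assumed) numbers the two sums would have to agree to roughly 26 decimal places, which is the kind of accidental cancellation the article calls unnatural.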

Without some underlying (and currently unknown) physical principle that makes it obvious why this occurs, it is quite strange for the mass of the Higgs to be so low. That is why discovering the Higgs boson is not the end of the story. Theorists have come up with several different explanations for its low mass, and now it is up to the experimentalists to test them.

Don Lincoln


Higgs for the Holidays

Friday, December 23rd, 2011

 —  By Theorist David Morrissey & Particle Physicist Anadi Canepa

 Last week we hosted two particle physics workshops at TRIUMF – an ATLAS Canada collaboration meeting and a joint meeting for theorists and experimentalists to study new LHC results.  Everything went smoothly, no participants were lost to the wilds of Vancouver, and we had some really great discussions and seminars.  During one of these presentations, it occurred to me that these kinds of scientific meetings are not so different from a typical holiday gathering.  In both situations, you frequently run into people you know but that you haven’t seen in a long time.  You catch up, you gossip, and you eat too much food at the coffee breaks.  There’s usually a large group dinner where you often meet new people and strike up conversations about future work.  And every so often one of the participants has too much holiday cheer.

Despite these similarities, most scientific meetings don’t involve gifts.  But this time around we were really lucky, and our workshops had a gift exchange of sorts as well.  In this case, the gifts were the presentations by the ATLAS and CMS collaborations of exciting new results from their searches for the Higgs boson particle.  On top of the live streaming presentations from CERN in the early hours of the morning, we were treated to a longer seminar in the afternoon at TRIUMF by Rob McPherson.  His talk was standing-room only, and we had a great time bombarding him with questions about the ATLAS analysis.

The reason for all this excitement over a single particle is that the Higgs boson, first proposed nearly fifty years ago, is central to our current understanding of all known elementary particles, called the Standard Model.  (See here, here, and here for more details.)   In this theory, the Higgs is responsible for creating the masses of nearly all elementary particles and for making the weak force much weaker than electromagnetism.  Even though we have not yet seen the Higgs directly, we have indirect evidence for it from precision measurements of the weak and electromagnetic forces.  Discovering the Higgs boson would confirm the Standard Model, while not finding it would force us to drastically rethink our description of elementary particles and fundamental forces, which would perhaps be an even greater discovery.

 

Excitement about finding the Higgs has been building since the summer, when it became clear that the LHC would be able to collect enough data by the end of the year to possibly find it.  In the past few weeks the level has gone through the roof as rumours started to appear that the LHC experiments would soon release a significant result.  What we learned this week is that these latest searches did not discover the Higgs boson, but that they do suggest that it might be there with a mass close to 133 times that of a proton (125 GeV).  Finding a Higgs is hard work, and its delicate characteristic signal must be extracted from a huge amount of background noise.  What we have at the moment is an interesting bump, as you can see in the figure above taken from the ATLAS search, where we see more signal events than would typically be expected from the background alone for a candidate Higgs mass of about 125 GeV.  We just don’t have enough data right now to confirm that this bump is from a Higgs boson, and not just an especially unlucky spike in the background noise.  Fortunately, the ATLAS and CMS collaborations will be taking much more data in the new year.

So, for this year all we get is a gift-wrapped box that we’re allowed to shake and prod.  But if we’re good, we’ll get to open the box and find what’s inside at some point in 2012.  Dear Santa…

