The discovery of the Higgs boson was a triumph for particle physics: it completes the tremendously successful Standard Model of particle physics. Of course, we know there are other phenomena — like dark matter, the dominance of matter over anti-matter, and the mass of neutrinos — that aren’t explained by the Standard Model. However, the Higgs itself is the source of one of the deepest mysteries of particle physics: the fine-tuning problem.
The fine-tuning problem is related to the slippery concept of naturalness, and has driven the bulk of theoretical work for the last several decades. Unfortunately, it is notoriously difficult to explain. I took on this topic recently for a public lecture and came up with an analogy that I would like to share.
Why we take our theory seriously
Before discussing the fine tuning, we need a few prerequisites. The first thing to know is that the Standard Model (and most other theories we are testing) is based on a conceptual framework called Relativistic Quantum Field Theory (QFT). As you might guess from the name, it’s based on the pillars of relativity, quantum mechanics, and field theory. The key point here is that relativistic quantum field theory goes beyond the initial formulation of quantum mechanics. To illustrate the difference, let’s consider a property of the electron and muon called the “g-factor”, which relates a particle’s magnetic moment to its spin [more]. In standard quantum mechanics, the prediction is that g = 2; however, in relativistic quantum field theory we expect corrections. Those corrections are shown pictorially in the Feynman diagrams below.

It turns out that this correction is small — about one part in a thousand. But we can calculate it to an exquisite accuracy (about ten digits). Moreover, we can measure it to a comparable accuracy. The current result for the muon is
g = 2.0023318416 ± 0.000000001
This is a real tour de force for relativistic quantum field theory and represents one of the most stringent tests of any theory in the history of science [more]. To put it into perspective, it’s slightly better than hitting a hole in one from New York to China (that distance is about 10,000 km = 1 billion cm).
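If you want to check that comparison yourself, here is a quick back-of-the-envelope sketch in Python using only the numbers quoted above (the distance is the rough 10,000 km figure from the text):

```python
# Back-of-the-envelope check of the "hole in one" comparison above.
g = 2.0023318416        # measured muon g-factor (quoted above)
sigma_g = 0.000000001   # quoted uncertainty

relative_precision = sigma_g / g          # about 1 part in 2 billion
distance_cm = 10_000 * 1000 * 100         # New York to China: ~10,000 km = 1 billion cm
equivalent_miss_cm = relative_precision * distance_cm

print(f"relative precision: {relative_precision:.1e}")                      # ~5e-10
print(f"equivalent miss over that distance: {equivalent_miss_cm:.1f} cm")   # ~0.5 cm
```

So the measurement pins down g to the equivalent of landing within about half a centimeter over that entire distance.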
It is because of tests like these that we take the predictions of this conceptual framework very seriously.

The Higgs, fine tuning, and an analogy
It turns out that all quantities that we can predict receive similar quantum corrections, even the mass of the Higgs boson. In the Standard Model, there is a free parameter that can be thought of as an initial estimate for the Higgs mass, let’s call it M₀. There will also be corrections, let’s call them ΔM (where Δ is pronounced “delta” and it indicates “change to”). The physical mass that we observe is this initial estimate plus the corrections. [For the aficionados: usually physicists talk about the mass squared instead of the mass, but that does not change the essential message].
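Written out schematically, the statement above is just

```latex
M_{\text{observed}} \;=\; M_0 \;+\; \Delta M
\qquad \bigl(\text{or, in the mass-squared form physicists usually use, } m_H^2 = m_0^2 + \Delta m^2 \bigr)
```
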
The funny thing about the mass of the Higgs is that the corrections are not small. In fact, the naive size of the corrections is enormously larger than the 126 GeV mass that we observe!
Confused? Now is a good time to bring in the analogy. Let’s think about the budget of a large country like the U.S. We will think of positive contributions to the Higgs mass as income (taxes) and negative contributions to the Higgs mass as spending. The physical Higgs mass that we measure corresponds to the budget surplus.
Now imagine that there is no coordination between the raising of taxes and government spending (maybe that’s not so hard to imagine). Wouldn’t you be surprised if a large economy of trillions of dollars had a budget balanced to better than a penny? Wouldn’t it be unnatural to expect such fine tuning between income and spending if they are just independent quantities?
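To put a rough number on it, here is a tiny sketch of that arithmetic (the $3.5 trillion revenue figure is just a round number I am assuming for illustration; the point is only the ratio):

```python
# Toy version of the budget analogy. The revenue figure is a ballpark chosen
# purely for illustration, not a real budget number.
income  = 3.5e12     # dollars of taxes collected (assumed ballpark)
surplus = 0.01       # the books happen to balance to within a penny
print(f"balanced to about 1 part in {income / surplus:.0e}")   # roughly 1 part in 10^14
```
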
This is exactly the case we find ourselves in with the Standard Model… and we don’t like it. With the discovery of the Higgs, the Standard Model is now complete. It is also the first theory we have had that can be extrapolated to very high energies (we say that it is renormalizable). But it has a severe fine tuning problem and does not seem natural.


The analogy can be fleshed out a bit more. It turns out that the size of the corrections to the Higgs mass is related to something we call the cutoff, which is the energy scale where the theory is no longer a valid approximation because some other phenomena become important. For example, in a grand unified theory the strong force and the electroweak force would unify at approximately 10¹⁶ GeV (10 quadrillion GeV), and we would expect the corrections to be of a similar size. Another common energy scale for the cutoff is the Planck Scale — 10¹⁹ GeV — where the quantum effects of gravity become important. In the analogy, the cutoff energy corresponds to the fiscal year. As time goes on, the budget grows and the chance of balancing the budget so precisely seems more and more unnatural.
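Here is the same estimate in numbers. I am deliberately ignoring the order-one loop factors and just comparing the observed mass to a correction as big as the cutoff, so this only shows the orders of magnitude involved:

```python
# Crude estimate of how precisely the "budget" must balance if the corrections
# to the Higgs mass-squared are as large as the cutoff scale.
m_higgs = 126.0                                        # GeV, observed Higgs mass
cutoffs = {"GUT scale": 1e16, "Planck scale": 1e19}    # GeV

for name, cutoff in cutoffs.items():
    tuning = (m_higgs / cutoff) ** 2   # observed mass-squared / naive correction
    print(f"{name}: pieces must cancel to about 1 part in {1/tuning:.0e}")
```

With a GUT-scale cutoff the cancellation is at the level of one part in roughly 10²⁸; with a Planck-scale cutoff it is even more extreme.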
Going even further, I can’t resist pointing out that the analogy even offers a nice way to think about one of the most enigmatic concepts in quantum field theory called renormalization. We often use this term to describe how fundamental constants aren’t really constant. For example, the charge of an electron depends on the energy you use to probe the electron. In the analogy, renormalization is like adjusting for inflation. We know that a dollar today isn’t comparable to a dollar fifty years ago.
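For the curious, here is roughly what that running looks like for the electron’s charge. This is the textbook leading-log, one-loop QED formula with only the electron in the loop, so it deliberately ignores every other charged particle (the full Standard Model running is somewhat stronger):

```python
import math

alpha_0 = 1 / 137.036      # fine-structure constant measured at low energy
m_e = 0.000511             # electron mass in GeV

def alpha_eff(Q):
    """Effective coupling at energy scale Q (GeV), leading-log one-loop QED,
    valid for Q above the electron mass."""
    return alpha_0 / (1 - (alpha_0 / (3 * math.pi)) * math.log(Q**2 / m_e**2))

for Q in [0.001, 1.0, 91.19]:   # ~1 MeV, 1 GeV, and the Z mass
    print(f"Q = {Q:7.3f} GeV  ->  1/alpha = {1/alpha_eff(Q):.1f}")
```

The effective charge gets stronger at higher energies, just as a dollar figure has to be re-expressed as the years go by.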
Breaking down the budget
The first thing one wants to do before attempting to balance the budget is to find out where the money is going. In the U.S. the big budget items are the military and social programs like Social Security and Medicare. In the case of the Higgs, the biggest corrections come from the top quark (the top diagrams on the right). Of course the big budget items get most of the attention, and so it is with physics as well. Most of the thinking that goes into solving the fine-tuning problem is related to the top quark.
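For those who want to see it, the often-quoted leading estimate of that top-quark “budget item” is

```latex
\Delta m_H^2 \;\approx\; -\,\frac{3\, y_t^2}{8\pi^2}\,\Lambda^2 ,
```

where y_t ≈ 1 is the top quark’s coupling to the Higgs and Λ is the cutoff from the previous section. With a GUT-scale cutoff, this single line item is roughly 10²⁶ times the observed value of m_H² ≈ (126 GeV)².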

Searching for a principle to balance the budget
Maybe it’s not a miraculous coincidence that the budget is balanced so well. Maybe there is some underlying principle. Maybe someone came to Washington, D.C. and passed a balanced-budget law that says that for every dollar of spending there must be a dollar of revenue. This is an excellent analogy for supersymmetry. In supersymmetry, there is an underlying principle — a symmetry — that relates two types of particles (fermions and bosons). These two types of particles give corrections to the Higgs mass with opposite signs. If this symmetry were perfect, the budget would be perfectly balanced, and it would not be unnatural for the Higgs mass to be 126 GeV.
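As a cartoon of that cancellation (the overall coefficient here is schematic; the point is only that the two pieces are equal in size and opposite in sign):

```python
import math

cutoff = 1e16                     # GeV, e.g. a GUT-scale cutoff
c = 3 / (8 * math.pi**2)          # schematic loop factor, with a top coupling of ~1

spending = -c * cutoff**2         # top-quark (fermion) loop: a huge negative correction
income   = +c * cutoff**2         # top-squark (boson) loops: same size, opposite sign

print(f"fermion piece             : {spending:+.2e} GeV^2")
print(f"boson piece               : {income:+.2e} GeV^2")
print(f"sum (exact supersymmetry) : {spending + income:+.2e} GeV^2")
# If supersymmetry is broken, the cancellation is only partial and a remainder
# tied to the superpartner mass splitting survives.
```
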
That is one of the reasons that supersymmetry is so highly motivated, and there is an enormous effort to search for signs of supersymmetry in the LHC data. Unfortunately, we haven’t seen any evidence for supersymmetry thus far. In the analogy, that is a bit like saying that if there is some sort of law to balance the budget, it allows for some wiggle room between spending and taxes. If the law allows too much wiggle room, it may still be a law, but it isn’t explaining why the budget is balanced as well as it is. The current state of the LHC experiments indicates that the budget is balanced about 10-100 times better than the wiggle room allows — which is better than we would expect, but not so much better that it seems unnatural. However, if we don’t see supersymmetry in the next run of the LHC, the situation will get worse. And if we were to build a 100 TeV collider and not see evidence of supersymmetry, then the level of fine tuning would be high enough that most physicists would probably consider the situation unnatural and abandon supersymmetry as the solution to the fine-tuning problem.

Since the fine-tuning problem was first recognized, there have been essentially two proposed solutions. One of them is supersymmetry, which I discussed above. The second is often referred to as strong dynamics or compositeness. The idea there is that maybe the Higgs is not a fundamental particle, but instead a composite of some more fundamental particles. My colleague Jamison Galloway and I tried to think through the analogy in that situation. In that case, one must start to think of different kinds of currencies… say the dollar for the Higgs boson and some other currency, like bitcoin, for the more fundamental particles. You would imagine that as time goes on (energy increases) there is a transition from one currency to another. At early times the budget is described entirely in terms of dollars, but at later times it is described in terms of bitcoin. That transition can be very complicated, but if it happened at a time when the total budget in dollars wasn’t too large, then a well-balanced budget wouldn’t seem too unnatural. Trying to explain the rest of the compositeness story took us from a simple analogy to the basis for a series of sci-fi fantasy books, and I will spare you from that.
There are a number of examples where this aesthetic notion of naturalness has been a good guide, which is partially why physicists hold it so dear. However, another line of thinking is that maybe the theory really is unnatural, and maybe it is random chance that the budget is balanced so well. That thinking is bolstered by the idea that there may be a huge number of universes that are part of a larger complex we call the multiverse. In most of these universes the budget wouldn’t be balanced, and the Higgs mass would be very different. In fact, most universes would not form atoms, would not form stars, and would not support life. Of course, we are here to observe our universe, and the conditions necessary to support life select very special universes out of the larger multiverse. Maybe it is this requirement that explains why our universe seems so finely tuned. This reasoning is called the anthropic principle, and it is one of the most controversial topics in theoretical physics. Many consider it to be giving up on a more fundamental theory that would explain why nature is as it is. The very fact that we are resorting to this type of reasoning is evidence that the fine-tuning problem is a big deal. I discuss this at the end of the public lecture (starting around the 30-minute mark) with another analogy for the multiverse, but maybe I will leave that for another post.
Nota bene: After developing this analogy I learned about a similar analogy from Tommaso Dorigo. They both use the idea of money, but the budget analogy goes a bit further.