
Byron Jennings | TRIUMF | Canada


Effective Field Theory for Turtles

The story is told (original source unknown) of an elderly woman who attended a talk on Copernicanism. She objected to the speaker, claiming that he was wrong and that the world was supported on the back of a giant elephant. “And what was the elephant supported by?” asked the speaker. “It stood on the back of a giant turtle,” replied the lady. Before the speaker could reply she added: “Don’t ask what the turtle stands on—I have you there, smarty pants; it is turtles all the way down.” Now, what has this to do with science? Quite a bit, actually.

Einstein once said that the most incomprehensible thing about the universe is that it is comprehensible. But there are two reasons it is understandable. First, the human mind is very well adapted to finding (or creating) patterns and, second, the universe separates itself into different turtles, that is, bite-sized pieces, each characterized by a different scale or size, that can be studied and modeled independently of the other turtles or pieces. Typical scales might be the size of the observable universe, a typical galaxy, the solar system, the earth, people, the atom, the nucleus or the Planck length. One does not have to understand the universe as a whole; one can study it one turtle at a time.

For example, I attended a seminar on ab initio calculations in nuclear physics where the speaker started with nucleons and the nuclear potential and derived the properties of nuclei without any additional input. Ab initio means “from the beginning,” and a member of the audience objected quite strenuously (it had high entertainment value) that this was not ab initio because the speaker did not start with quantum chromodynamics (QCD), the assumed underlying model. More pedantically, he should have objected that the speaker should have started with the ultimate theory of everything.

Well, the ultimate theory of everything is not known, so where should one start? Obviously, where it is most convenient. In low-energy nuclear physics, this has been a matter of great debate. Historically, nuclear physicists dealt with nucleons (neutrons and protons) and the interactions between them, derived without any reference to an underlying theory (which was unknown at the time). Then along came QCD. The QCD practitioners claimed the nuclear physicists were nincompoops and were wasting their time since they did not start with QCD. This was one of the reasons for the global collapse of nuclear physics in the 1980s. Now it turns out that, by separating the scales using what are called effective field theories, the nuclear physicists were right all along. All the effects of QCD needed for low-energy nuclear physics can be accounted for by introducing a few phenomenological parameters to describe the nucleon-nucleon potential (and, for purists, many-body forces). Thus there is no need to handle all the complexities of QCD. The QCD practitioners can then calculate the parameters at their leisure. Or not, as the case may be. It really does not matter to the nuclear physicist; all he will ever need is his phenomenological parameters. In the same vein, condensed matter physicists do not have to sit twiddling their thumbs while the particle physicists derive the mass and charge of the electron; they just use phenomenological values. It is the same for other quantities, like the nuclear masses, that condensed matter physicists might need. They use phenomenological values and move on.
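The idea that a few phenomenological parameters capture all the low-energy physics can be sketched numerically. The toy example below (my own illustration, not the author's; the numerical values are invented for demonstration) uses the effective range expansion, k·cot δ(k) ≈ −1/a + (r/2)k², which summarizes low-energy two-nucleon scattering with just two parameters: the scattering length a and the effective range r. We fit them directly to low-energy "data", with no reference to whatever underlying theory produced that data.

```python
# Toy effective-theory fit: low-energy scattering is summarized by
# two phenomenological parameters via the effective range expansion
#   k*cot(delta) ~= -1/a + (r/2)*k**2
# We extract a and r from "data" without knowing the underlying theory.

def fit_effective_range(ks, kcotd):
    """Linear least-squares fit of kcotd = c0 + c1 * k**2."""
    xs = [k * k for k in ks]
    n = len(xs)
    sx, sy = sum(xs), sum(kcotd)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, kcotd))
    c1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c0 = (sy - c1 * sx) / n
    a = -1.0 / c0        # scattering length
    r = 2.0 * c1         # effective range
    return a, r

# Synthetic low-energy "measurements" (fm and fm^-1), generated here
# from assumed values a = 5.4 fm, r = 1.7 fm purely for demonstration.
a_true, r_true = 5.4, 1.7
ks = [0.05, 0.10, 0.15, 0.20, 0.25]
kcotd = [-1.0 / a_true + 0.5 * r_true * k * k for k in ks]

a_fit, r_fit = fit_effective_range(ks, kcotd)
print(a_fit, r_fit)  # recovers a = 5.4, r = 1.7
```

Whatever the microscopic theory is, its only imprint on this data at low energy is the pair (a, r): that is the sense in which the effective theory "does not care" about QCD.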

This separation into bite-sized pieces happens all the way up and down the set of turtles. Each scale has a different preferred model, connected to neighbouring scales and their models by a few parameters. If there are many parameters, you have chosen the wrong place to do the separation. In studying ecology, one does not need to know all the chemical interactions; in doing chemistry, one does not need to know all about the quantum-mechanical underpinnings; in studying gases, one can determine the volume, pressure and temperature without worrying about the motion of the individual atoms making up the gas. No findings at the LHC will have any effect on biology, chemistry, nuclear physics, or QCD (except perhaps in developing new experimental or theoretical techniques); they are at vastly different scales. The LHC findings will, however, be crucial for determining the validity of and extensions to the standard model of particle physics.
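The gas example can be made concrete with a short numerical sketch (again my own illustration): sampling microscopic molecular velocities from a Maxwell-Boltzmann distribution and computing the kinetic-theory pressure P = (1/3)·n·m·⟨v²⟩ reproduces the macroscopic ideal-gas value P = nkT. The bulk description needs only a handful of parameters; the individual atoms have been swept into them.

```python
import math
import random

# Microscopic vs. macroscopic descriptions of an ideal gas.
# Kinetic theory: P = (1/3) * n * m * <v^2>, with each velocity
# component Gaussian with variance kT/m (Maxwell-Boltzmann).
# Macroscopic law: P = n * k * T. Both give the same pressure.

k_B = 1.380649e-23   # Boltzmann constant, J/K
m = 4.65e-26         # mass of an N2 molecule, kg (approximate)
T = 300.0            # temperature, K
n = 2.5e25           # number density, m^-3 (roughly 1 atm at 300 K)

random.seed(0)
sigma = math.sqrt(k_B * T / m)   # std. dev. of each velocity component

N = 200_000  # number of sampled molecules
mean_v2 = sum(
    random.gauss(0, sigma) ** 2
    + random.gauss(0, sigma) ** 2
    + random.gauss(0, sigma) ** 2
    for _ in range(N)
) / N

p_micro = n * m * mean_v2 / 3.0  # pressure from molecular motion
p_macro = n * k_B * T            # ideal-gas law, no molecules in sight

print(p_micro, p_macro)  # agree to well under one percent
```

This is emergence in miniature: the macroscopic line needs only n and T, while all the microscopic detail averages away.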

Deriving the parameters needed at one scale in terms of the smaller scales is reductionism. Sweeping the details of the smaller scales into a few parameters is emergence. There is potentially interesting science at every scale. As always, where one does the division of scales is determined by simplicity and convenience. It is effective field theories (not turtles) all the way down, and you can do the separation anywhere you like, but if you do it in the wrong place you will be sorry. Cutting turtles in half is messy[1].

 

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.


[1] No turtles were injured in preparing this post.


  • Torbjorn Larsson, OM

    Good show!

    Though, as many times before, the end left me confused: is an artificial barrier being raised between emergence in reductionism and other forms? Why would one want to do that, when emergence is plainly observed whenever reduction methods are applied to systems?

    Reductionism means, as I understand it, foremost sufficiency of “(interactions of) parts” in description as opposed to holism aka non-sufficiency of parts as description or in other words “system dualism”.* Which parts of course can be distinguished as deriving from smaller scales at times or not resolved as such in others.

    ————-
    * So we need reduction generalized to reductionism to avoid inserting a “dualism of the gaps”. Getting back to my question, I guess, why would one want to open up for unnecessary entities?

  • I am not sure what you mean by other forms. I would say reductionism and emergence are opposite sides of the same coin. Reductionism is reducing things to their parts, looking at finer and finer details. Emergence is looking at things more globally. Effective field theories just give us another way of looking at the difference between reductionism and emergence.