
Byron Jennings | TRIUMF | Canada


Higgs versus Descartes: this round to Higgs.

Friday, August 1st, 2014

René Descartes (1596 – 1650) was an outstanding physicist, mathematician and philosopher. In physics, he laid the groundwork for Isaac Newton’s (1642 – 1727) laws of motion by pioneering work on the concept of inertia. In mathematics, he developed the foundations of analytic geometry, as illustrated by the term Cartesian[1] coordinates. However, it is in his role as a philosopher that he is best remembered. Rather ironic, as his breakthrough method was a failure.

Descartes’s goal in philosophy was to develop a sound basis for all knowledge based on ideas that were so obvious they could not be doubted. His touchstone was that anything he perceived clearly and distinctly as being true was true. The archetypal example of this was the famous “I think, therefore I am.” Unfortunately, little else is as obvious as that famous quote and even it can be––and has been––doubted.

Euclidean geometry provides the illusory ideal to which Descartes and other philosophers have strived. You start with a few self-evident truths and derive a superstructure built on them. Unfortunately, even Euclidean geometry fails that test. The infamous parallel postulate has been questioned since ancient times as being a bit suspicious, and even other Euclidean postulates have been questioned; extending a straight line depends on the space being continuous, unbounded and infinite.

So how are we to take Euclid’s postulates and axioms? Perhaps we should follow the idea of Sir Karl Popper (1902 – 1994) and consider them to be bold hypotheses. This casts a different light on Euclid and his work; perhaps he was the first outstanding scientist. If we take his basic assumptions as empirical[2] rather than sure and certain knowledge, all we lose is the illusion of certainty. Euclidean geometry then becomes an empirically testable model for the geometry of space. The theorems, derived from the basic assumptions, are predictions that can be checked against observations, satisfying Popper’s demarcation criterion for science. Do the angles in a triangle add up to two right angles or not? If not, then one of the assumptions is false, probably the parallel postulate.

Back to Descartes: he criticized Galileo Galilei (1564 – 1642) for having “built without having considered the first causes of nature, he has merely sought reasons for particular effects; and thus he has built without a foundation.” In the end, that lack of a foundation turned out to be less of a hindrance than Descartes’ faulty one. To a large extent, science’s lack of a foundation, such as Descartes wished to provide, has not proved a significant obstacle to its advance.

Like Euclid, Sir Isaac Newton had his basic assumptions—the three laws of motion and the law of universal gravity—but he did not believe that they were self-evident; he believed that he had inferred them by the process of scientific induction. Unfortunately, scientific induction was as flawed a foundation as the self-evident nature of the Euclidean postulates. Connecting the dots between a falling apple and the motion of the moon was an act of creative genius, a bold hypothesis, and not some algorithmic derivation from observation.

It is worth noting that, at the time, Newton’s explanation had a strong competitor in Descartes’s theory that planetary motion was due to vortices, large circulating bands of particles that keep the planets in place. Descartes’s theory had the advantage that it lacked the occult action at a distance that is fundamental to Newton’s law of universal gravitation. In spite of that, today, Descartes’s vortices are as unknown as his claim that the pineal gland is the seat of the soul; so much for what he perceived clearly and distinctly as being true.

Galileo’s approach of solving problems one at a time and not trying to solve all problems at once has paid big dividends. It has allowed science to advance one step at a time while Descartes’s approach has faded away as failed attempt followed failed attempt. We still do not have a grand theory of everything built on an unshakable foundation and probably never will. Rather we have models of widespread utility. Even if they are built on a shaky foundation, surely that is enough.

Peter Higgs (b. 1929) follows in the tradition of Galileo. He has not, despite his Nobel Prize, succeeded where Descartes failed in producing a foundation for all knowledge; but through creativity, he has proposed a bold hypothesis whose implications have been empirically confirmed. Descartes would probably claim that he has merely sought reasons for a particular effect: mass. The ultimate question of life, the universe and everything remains unanswered, much to Descartes’ chagrin, but as scientists we are satisfied to solve one problem at a time and then move on to the next.

To receive a notice of future posts follow me on Twitter: @musquod.

[1] Cartesian, from Descartes’s Latinized name, Cartesius.

[2] As in the final analysis they are.


‘Essentially, all models are wrong, but some are useful’

Friday, July 4th, 2014

Since model building is the essence of science, this quote has a bit of a bite to it. It is from George E. P. Box (1919 – 2013), who was not only an eminent statistician but also an eminently quotable one. Another quote from him: “One important idea is that science is a means whereby learning is achieved, not by mere theoretical speculation on the one hand, nor by the undirected accumulation of practical facts on the other, but rather by a motivated iteration between theory and practice.” Thus he saw science as an iteration between observation and theory. And what is theory but the building of erroneous, or at least approximate, models[1]?

To amplify that last comment: The main point of my philosophical musings is that science is the building of models for how the universe works; models constrained by observation and tested by their ability to make predictions for new observations, but models nonetheless. In this context, the above quote has significant implications for science. Models, even those of science, are by their very nature simplifications and as such are not one hundred per cent accurate. Consider the case of a map. Creating a 1:1 map is not only impractical[2] but even if you had one it would be one hundred per cent useless; just try folding a 1:1 scale map of Vancouver. A model with all the complexity of the original does not help us understand the original.  Indeed the whole purpose of a model is to eliminate details that are not essential to the problem at hand.

By their very nature, numerical models are always approximate and this is probably what Box had in mind with his statement. One neglects small effects like the gravitational influence of a mosquito. Even as one begins computing, one makes numerical approximations, replacing integrals with sums or vice versa, derivatives with finite differences, etc. However, one wants to control errors and keep them to a minimum. Statistical analysis techniques, such as Box developed, help estimate and control errors.
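
The replacement of derivatives by finite differences is a small, checkable example of a controlled approximation. The sketch below (a minimal illustration; the function and step sizes are my own choices, not anything from the post) approximates a derivative with a central difference and shows the error shrinking as the step size shrinks:

```python
import math

def central_diff(f, x, h):
    """Approximate f'(x) by a central finite difference."""
    return (f(x + h) - f(x - h)) / (2 * h)

# The central-difference error shrinks roughly as h**2, so halving h
# should cut the error by about a factor of four.
x = 1.0
exact = math.cos(x)  # d/dx sin(x) = cos(x)
for h in (0.1, 0.05, 0.025):
    err = abs(central_diff(math.sin, x, h) - exact)
    print(f"h={h:<6} error={err:.2e}")
```

The error never vanishes, but it is estimated and controlled, which is exactly the sense in which a numerical model is "wrong but useful."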

To a large extent it is self-evident that models are approximate; so what? Again to quote George Box: “Since all models are wrong the scientist cannot obtain a ‘correct’ one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.” What would he have thought of a model with twenty-plus parameters, like the standard model of particle physics? His point is a valid one. All measurements have experimental errors. If your fit is perfect you are almost certainly fitting noise. Hence, adding more parameters to get a perfect fit is a fool’s errand. But even without experimental error, a large number of parameters frequently means something important has been missed. Has something been missed in the standard model of particle physics with its many parameters or is the universe really that complicated?
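
The “perfect fit is fitting noise” point can be acted out in a few lines. In this sketch (pure Python, with made-up data: a straight line y = 2x plus small fixed “measurement errors”), a two-parameter least-squares line is compared with a six-parameter polynomial that passes through every data point exactly. The overparameterized fit is perfect on the data and poor the moment it is asked to predict a new point:

```python
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
noise = [0.05, -0.04, 0.06, -0.05, 0.04, -0.06]      # fixed "measurement error"
ys = [2.0 * x + n for x, n in zip(xs, noise)]        # true law: y = 2x

def line_fit(xs, ys):
    """Two-parameter least-squares straight line."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return lambda x: slope * (x - xbar) + ybar

def interpolant(xs, ys):
    """Six-parameter Lagrange polynomial through every data point."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

line, poly = line_fit(xs, ys), interpolant(xs, ys)
# The interpolant reproduces the noisy data exactly...
print(max(abs(poly(x) - y) for x, y in zip(xs, ys)))
# ...but at a new point x = 6 (true value 12) the simple line wins easily:
print(abs(line(6.0) - 12.0), abs(poly(6.0) - 12.0))
```

The six-parameter fit has swallowed the noise into its parameters; the economical two-parameter description is the better theory.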

There is an even more basic reason all models are wrong. This goes back at least as far as Immanuel Kant (1724 – 1804). He made the distinction between observation of an object and the object in itself. One never has direct experience of things, the so-called noumenal world; what one experiences is the phenomenal world as conveyed to us by our senses. What we see is not even what has been recorded by the eye.  The mind massages the raw observation into something it can understand; a useful but not necessarily accurate model of the world. Science then continues this process in a systematic manner to construct models to describe observations but not necessarily the underlying reality.

Despite being by definition at least partially wrong, models are frequently useful. The scale model map is useful to tourists trying to find their way around Vancouver or to a general plotting strategy for his next battle. But if the maps are too far wrong, the tourist will get lost and fall into False Creek and the general will go down in history as a failure. Similarly, the models for weather prediction are useful although they are certainly not a hundred per cent accurate. They do indicate when it is safe to plan a picnic or cut the hay, provided they are right more often than chance. And the standard model of particle physics, despite having many parameters and not including gravity, is a useful description of a wide range of observations. But to return to the main point, all models, even useful ones, are wrong because they are approximations, and not even approximations to reality but to our observations of that reality. Where does that leave us? Well, let us save the last word for George Box: “Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.”

To receive a notice of future posts follow me on Twitter: @musquod.

[1] Hence the foolishness of talking about theoretical breakthroughs in science. All breakthroughs arise from pondering about observations and observations testing those ponderings.

[2] Not even Google could produce that.


“Theoretical Physics is a Quest for Simplicity”

Friday, June 6th, 2014

Theoretical physics, simplicity. Surely the two words do not go together. Theoretical physics has been the archetypal example of complicated since its invention. So what did Frank Wilczek (b. 1951) mean by that statement[1] quoted in the title? It is the scientist’s trick of taking a familiar word, such as simplicity, and giving it a technical meaning. In this case, the meaning is from algorithmic information theory. That theory defines complexity (Kolmogorov complexity[2]) as the minimum length of a computer program needed to reproduce a string of numbers. Simplicity, as used in the title, is the opposite of this complexity. Science, not just theoretical physics, is driven, in part but only in part, by the quest for this simplicity.

How is that, you might ask? This is best described by Greg Chaitin (b. 1947), a founder of algorithmic information theory. To quote: “This idea of program-size complexity is also connected with the philosophy of the scientific method. You’ve heard of Occam’s razor, of the idea that the simplest theory is best? Well, what’s a theory? It’s a computer program for predicting observations. And the idea that the simplest theory is best translates into saying that a concise computer program is the best theory. What if there is no concise theory, what if the most concise program or the best theory for reproducing a given set of experimental data is the same size as the data? Then the theory is no good, it’s cooked up, and the data is incomprehensible, it’s random. In that case the theory isn’t doing a useful job. A theory is good to the extent that it compresses the data into a much smaller set of theoretical assumptions. The greater the compression, the better!—That’s the idea…”

In many ways this is quite nice; the best theory is the one that compresses the most empirical information into the shortest description or computer program.  It provides an algorithmic method to decide which of two competing theories is best (but not an algorithm for generating the best theory). With this definition of best, a computer could do science: generate programs to describe data and check which is the shortest. It is not clear, with this definition, that Copernicus was better than Ptolemy. The two approaches to planetary motion had a similar number of parameters and accuracy.
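
Chaitin’s criterion can even be tried out on a computer. The sketch below (an illustration of the idea, not anything from the post, using a general-purpose compressor as a crude stand-in for the “shortest program”) shows that data generated by a simple rule compresses dramatically, while random data barely compresses at all:

```python
import random
import zlib

random.seed(0)
# "Law-governed" data: 10,000 digits produced by a trivial rule,
# versus 10,000 random digits.
structured = "".join(str(n % 10) for n in range(10_000)).encode()
noise = "".join(random.choice("0123456789") for _ in range(10_000)).encode()

for label, data in [("structured", structured), ("random", noise)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{label}: compressed to {ratio:.0%} of original size")
```

In Chaitin’s terms, there is a good “theory” for the first data set and none for the second; the second is incompressible, i.e. random.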

There are many interesting aspects of this approach. Consider compressibility and quantum mechanics. The uncertainty principle and the probabilistic nature of quantum mechanics put limits on the extent to which empirical data can be compressed. This is the main difference between classical mechanics and quantum mechanics. Given the initial conditions and the laws of motion, classically the empirical data is compressible to just that input. In quantum mechanics, it is not. The time when each individual atom in a collection of radioactive atoms decays is unpredictable and the measured results are largely incompressible. Interpretations of quantum mechanics may make the theory deterministic, but they cannot make the empirical data more compressible.

Compressibility highlights a significant property of initial conditions. While the data describing the motion of the planets can be compressed using Newton’s laws of motion and gravity, the initial conditions that started the planets on their orbits cannot be. This incompressibility tends to be a characteristic of initial conditions. Even the initial conditions of the universe, as reflected in the cosmic microwave background, have a large random non-compressible component – the cosmic variance. If it were not for quantum uncertainty, we could probably take the lack of compressibility as a definition of initial conditions. For the universe, the two are the same since the lack of compressibility in the initial conditions is due to quantum fluctuations, but that is not always the case.

The algorithmic information approach makes Occam’s razor, the idea that one should minimize assumptions, basic to science. If one considers that each character in a minimal computer program is a separate assumption, then the shortest program does indeed have the fewest assumptions. But you might object that some of the characters in a program can be predicted from other characters. However, if that is true the program can probably be made shorter. This is all a bit counterintuitive since one generally does not take such a fine-grained approach to what one considers an assumption.

The algorithmic information approach to science, however, does have a major shortcoming. This definition of the best theory leaves out the importance of predictions. A good model must not only compress known data, it must predict new results that are not predicted by competing models. Hence, as noted in the introduction, simplicity is only part of the story.

The idea of reducing science to just a collection of computer programs is rather frightening. Science is about more than computer programs[3]. It is, and should be, a human endeavour. As people, we want models of how the universe works that humans, not just computers, can comprehend and share with others. A collection of bits on a computer drive does not do this.

To receive a notice of future posts follow me on Twitter: @musquod.

[1] From “This Explains Everything,” ed. John Brockman, Harper Perennial, New York, 2013.

[2] Also known as descriptive complexity, Kolmogorov–Chaitin complexity, algorithmic entropy, or program-size complexity.

[3] In this regard, I have a sinking feeling that I am fighting a rearguard action against the inevitable.


Mathematics: Invented or Discovered?

Friday, May 2nd, 2014

The empirical sciences, like physics and chemistry, are partially invented and partially discovered. Although the empirical observations are surely discovered, the models that describe them are invented through human ingenuity. But what about mathematics which is based on pure thought? Are its results invented or discovered?

Not surprisingly there are different views on this topic. Some people maintain that mathematical results are invented; others claim that they are discovered. Is there a universe of mathematical results just waiting to be discovered, or are mathematical results invented by the mathematician and would they disappear, like a fairy tale, when mathematicians vanish in the heat death of the universe, when all the available energy is used up? Invented or discovered? Perhaps some results are invented and others discovered. There is, however, a third view, namely that mathematics is a game played by manipulating symbols according to well-defined rules. At some level this is probably true. All those who prefer Monopoly®, put up your hands!

What are the foundations of logic? Bertrand Russell (1872 – 1970) and Alfred Whitehead (1861 – 1947) tried to derive mathematics from logic. The result was the book Principia Mathematica (1910), a real tour de force. Their derivation still required axioms or assumptions beyond pure logic and it has been questioned on other grounds. An alternative to this approach is set theory, in particular based on the Zermelo–Fraenkel axioms with the axiom of choice. And an alternative to that is category theory. Whatever all that is. It is certainly very technical. The quest for foundations of mathematics, and even logic, like the quest for the Holy Grail, is probably never ending. But the question remains: Were logic and set (category) theory themselves invented or discovered?

Let us look at things more simply. Historically, mathematics probably arose empirically: two stones plus two stones equals one stone plus three stones. Then it was realized that this holds for any tokens: stones, bushels of wheat or sheep. The generalization from specific examples to the generic 2+2=1+3 could be considered an early example of the scientific method: generalizing from specific examples to a general rule. But one plus one does not always equal two. Consider a litre of liquid plus a litre of liquid. If one is water and the other alcohol, the result is less than two litres when they are put in the same container. Adding one litre of water to one litre of concentrated sulfuric acid is even more interesting[1].

Multiplication is also easy to demonstrate with counters. Division is a bit more problematic but if we think of dividing a bushel of wheat into equal parts the idea of fractions is quite natural. Dividing a sheep is messier. Subtraction however leads to a problem: negative numbers. Naively, we cannot have fewer than zero stones but subtraction can lead to that idea. So were negative numbers invented or discovered? We can finesse the problem of negative numbers by saying that negative numbers correspond to what we owe. If I have minus three stones it means I owe someone three stones.

Thus, thinking of stones and bushels of wheat, we can understand the rational numbers, numbers written as the ratio of two whole numbers. The Pythagoreans in ancient Greece would have claimed that is all there is. Then came the thorny problem of the square root of two. This arises in connection with the Pythagorean theorem. Some poor sod showed that the square root of two could not be written as the ratio of two whole numbers and was thus irrational. He was thrown into the sea for his efforts[2]. The square root of two does not exist in the universe of numbers discovered using stones, sheep, and bushels of wheat. Is it possible to have the square root of two stones? Was it invented to make the Pythagorean theorem work or was it discovered?

The example of the square root of minus one is even more perplexing. We can think of the square root of two as an extra number inserted between 1.414 and 1.415. But there is no place to insert the square root of minus one. So again the question arises: Was it invented or discovered? Perhaps it is best to say it was assumed: Assume the square root of minus one can be treated like a normal number[3] and see what happens. A lot of good things, as it turned out, but does that mean it exists in any real sense? Perhaps it is just a useful fiction.
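
The “treat it like a normal number and see what happens” attitude is easy to act out today: Python’s built-in complex type does exactly that (the snippet is illustrative, not from the post):

```python
import cmath

i = complex(0, 1)      # treat sqrt(-1) as just another number
assert i * i == -1     # its defining property
assert i + i == 2 * i  # and it obeys the ordinary rules of arithmetic

# Among the "good things" that follow is Euler's relation e**(i*pi) = -1:
print(cmath.exp(i * cmath.pi))  # -1, up to floating-point rounding
```

Whether that makes √(-1) real in any philosophical sense, the computer, like the mathematician, simply assumes it and carries on.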

Nevertheless, mathematics has developed, discovering or inventing new results. As a phenomenologist, I would say we do not have enough information to assert whether mathematics was invented or discovered. If we could contact extra-terrestrial mathematicians, it would be interesting to see if their mathematics was different from or the same as ours. If it was different, that would be a strong indication that mathematics is invented. Or, less black and white, the difference between terrestrial and extra-terrestrial mathematics would tell us the extent to which mathematics is discovered or invented.

In any event mathematics is a very interesting game, whether based on set theory or category theory, whether discovered or invented, and certainly more profitable than Monopoly®[4] in the long run.

To receive a notice of future posts follow me on Twitter: @musquod.

[1] Do not try this at home.

[2] At least that is the legend.

[3] √(-1)+√(-1) = 2 √(-1) , etc.

[4] On the other hand, oligarchy, as any large multinationals will tell you, is very profitable.


Cosmic Inflation or Are the Photons Messing with Our Minds?

Tuesday, April 1st, 2014

Recently it has been announced that a smoking gun has been found for cosmic inflation, but could it instead be the smoking[1] gun for a grand conspiracy, the mother of all conspiracies, the conspiracy theory to put all other conspiracy theories to shame? You may have your favourite conspiracy theory: the Roswell cover-up, who shot JFK, the suppression of perpetual motion machines by the energy companies or the attempt by communication departments to take over the world. As a professor once said: “Just because you are paranoid does not mean they are not out to get you.” (Followed closely by my second-best piece of advice, “Never trust a communications expert.”) But the conspiracy I am talking about is on a much larger scale, a cosmic scale: the conspiracy of all elementary particles in the universe to use their free will for the sole purpose of messing with the minds of humans, and particularly that subclass of humans known as particle physicists[2]. The professor was right to be paranoid.

We hold these truths to be self-evident, that all particles are created equal and endowed by their Creator with free will. Surely you do not question that elementary particles have free will. Consider the muon. It decays at a time of its own choosing. There is no rule that says when a given muon will decay. It is decided by the muon in its own stubborn way. But you still claim that only humans have freewill. Why? Could it not be just part of the grand conspiracy of elementary particles to give humans the illusion of freewill? Besides, we all know politicians do not have freewill. They just do, as a Pavlovian reflex, what they think will get them the most votes. Hence the mess the world is in. But I digress.

To explore further: what criterion is there to decide if a given object has freewill? Is it just unpredictability? If so, then Vancouver weather has freewill. We have already decided that politicians do not have freewill. But what about dogs? Are all their reactions Pavlovian conditioning? Most certainly not. Hence they are better candidates to have freewill than politicians. And what about cats? Cats certainly have will, but is it free? Does the fact that cats cannot be herded indicate they have freewill? And cows, do cows have freewill? Does the fact that they can be herded indicate they do not have freewill? Or that plant on my window sill beckoning to be watered? Does it cause its leaves to droop out of the exercise of its own freewill just to annoy me? Possibly (there it goes again). It seems to me the hallmark of freewill is precisely that combination of random and non-random behaviour found in elementary particles and not in politicians. Hence another interpretation of quantum uncertainty: it is just elementary particles exercising their freewill.

You may be surprised that I claimed all particles are created equal. Surely the neutrino and electron have different properties and are not equal. But that is just them exercising their freewill. Those particles we call electrons have decided (note the verb decided) to behave as if they had a given set of interactions, while the neutrinos have decided to behave as if they had a different set of interactions. The interactions themselves are illusory, created by the particles exercising their freewill. There was probably a grand council meeting at the beginning of time where the grand conspiracy was initiated and the roles assigned to different sets of particles. Indeed, it may have been that grand council meeting that started time itself.

The freewill of particles also explains the problem of evil. From earthquakes to global warming[3], the evil consequences are due to particles exercising their freewill. This type of evil could only be eliminated by denying particles freewill, an even greater evil.

Now back to the initial observation about the polarization of photons in the cosmic microwave background. Surely that polarization is not due to gravitational waves but due to photons exercising their freewill to mislead humans. Is it not more reasonable to assume that the measured polarization[4] is due to freewill than to some far distant interaction with gravitons? (Do gravitons even exist? They have never been directly observed.)

This conspiracy theory, just like all conspiracy theories, accounts for all the known facts and cannot be disproved. It therefore must be correct, and we have shown conclusively that it is particles, not people, that have freewill and that the photons are trying to mess with our minds[5].

To receive a notice of future posts follow me on Twitter: @musquod.

[1] And no, I have not been smoking with the mayor of a certain large Canadian city.

[2] Nuclear physicists have always known that something was messing with the minds of particle physicists.

[3] Global warming is due to particles exercising their freewill, not carbon dioxide emissions.

[4] But beware the Jennings principle: Most exciting new results are wrong.

[5] This is not the perfect particle physics blog because it does not mention the Higgs boson, the LHC or supersymmetry. Oops, maybe now it is.


Nobody understands quantum mechanics? Nonsense!

Saturday, March 8th, 2014

Despite the old canard about nobody understanding quantum mechanics, physicists do understand it.  With all of the interpretations ever conceived for quantum mechanics[1], this claim may seem a bit of a stretch, but like the proverbial ostrich with its head in the sand, many physicists prefer to claim they do not understand quantum mechanics, rather than just admit that it is what it is and move on.

What is it about quantum mechanics that generates so much controversy and even had Albert Einstein (1879 – 1955) refusing to accept it? There are three points about quantum mechanics that generate controversy. It is probabilistic, eschews realism, and is local. Let us look at these three points in more detail.

  1. Quantum mechanics is probabilistic, not deterministic. Consider a radioactive atom. It is impossible, within the confines of quantum mechanics, to predict when an individual atom will decay. There is no measurement or series of measurements that can be made on a given atom to allow me to predict when it will decay. I can calculate the probability of when it will decay or the time it takes half of a sample to decay, but not the exact time a given atom will decay. This inability to predict exact outcomes, only probabilities, permeates all of quantum mechanics. No possible set of measurements on the initial state of a system allows one to predict precisely the result of all possible experiments on that state.
  2. Quantum mechanics eschews realism[2]. This is a corollary of the first point. A quantum mechanical system does not have well-defined values for properties that have not been directly measured. This has been compared to the moon only existing when someone is looking at it. For deterministic systems one can always safely infer back from a measurement what the system was like before the measurement. Hence if I measure a particle’s position and motion I can infer not only where it will go but where it has come from. The probabilistic nature of quantum mechanics prevents this backward-looking inference. If I measure the spin of an atom, there is no certainty that it had only that value before the measurement. It is this aspect of quantum mechanics that most disturbs people, but quantum mechanics is what it is.
  3. Quantum mechanics is local. To be precise, no action at point A will have an observable effect at point B that is instantaneous, or non-causal.  Note the word observable. Locality is often denied in an attempt to circumvent Point 2, but when restricted to what is observable, locality holds. Despite the Pentagon’s best efforts, no messages have been sent using quantum non-locality.
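
Point 1 is easy to simulate. In the sketch below (illustrative numbers, using the exponential decay-time distribution that radioactive decay follows) no inspection of an individual “atom” tells you when it will decay, yet the ensemble statistics are sharply determined:

```python
import math
import random

random.seed(1)
halflife = 1.0                  # hypothetical half-life, arbitrary units
rate = math.log(2) / halflife   # decay constant lambda

# Draw decay times for 100,000 identical "atoms".
times = sorted(random.expovariate(rate) for _ in range(100_000))

# Individual decay times are all over the map...
print(f"earliest decay: {times[0]:.2e}, latest: {times[-1]:.2f}")

# ...yet the time by which half the sample has decayed recovers the
# half-life to high precision.
print(f"sample half-life: {times[len(times) // 2]:.3f}")
```

Nothing distinguishes the atom that decays almost immediately from the one that lingers; only the probabilities, and hence the half-life, are predictable.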


Realism, at least, is a common aspect of the macroscopic world. Even a baby quickly learns that the ball is behind the box even when he cannot see it. But much about the macroscopic world is not obviously deterministic, the weather in Vancouver for example (it is snowing as I write this). Nevertheless, we cling to determinism and realism like a child to his security blanket. It seems to me that determinism or realism, if they exist, would be at least as hard to understand as their lack. There is no theorem that states the universe should be deterministic and not probabilistic or vice versa. Perhaps god, contrary to Einstein’s assertion, does indeed like a good game of craps[3].

So quantum mechanics, at least at the surface level, has features many do not like. What has the response been? They have followed the example set by Philip Gosse (1810 – 1888) with the Omphalos hypothesis[4]. Gosse, being a literal Christian, had trouble with the geological evidence that the world was older than 6,000 years, so he came up with an interpretation of history in which the world was created only 6,000 years ago but in such a manner that it appeared much older. This can be called an interpretation of history because it leaves all predictions for observations intact but changes the internal aspects of the model so that they match his preconceived ideas. To some extent, Tycho Brahe (1546 – 1601) used the same technique to keep the earth at the center of the universe. He had the earth fixed, the sun circle the earth, and the other planets the sun. With the information available at the time, this was consistent with all observations.

The general technique is to adjust those aspects of the model that are not constrained by observation to make it conform to one’s ideas of how the universe should behave. In quantum mechanics these efforts are called interpretations. Hugh Everett (1930 – 1982) proposed many worlds in an attempt to make quantum mechanics deterministic and realistic. But it was only in the unobservable parts of the interpretation that this was achieved, and the results of experiments in this world are still unpredictable. Louis de Broglie (1892 – 1987) and later David Bohm (1917 – 1992) introduced pilot waves in an effort to restore realism and determinism. In doing so they gave up locality. Like Gosse’s work, theirs was a nice proof in principle that, with sufficient ingenuity, the universe could be made to conform to almost any preconceived ideas, or at least appear to do so. Reassuring, I guess, but like Gosse they did it by introducing non-observable aspects to the model: not just unobserved but in principle unobservable. The observable aspects of the universe, at least as far as quantum mechanics is correct, are as stated in the three points above: probabilistic, nonrealistic and local.

Me, I am not convinced that there is anything to understand about quantum mechanics beyond the rules for its use given in standard quantum mechanics textbooks. However, interpretations of quantum mechanics might, possibly might, suggest different ways to tackle unsolved problems like quantum gravity, and they do give one something to discuss after one has had a few beers (or is that a few too many beers).

To receive a notice of future posts follow me on Twitter: @musquod.

[1] See my February 2014 post “Reality and the Interpretations of Quantum Mechanics.”

[2] Realism as defined in the paper by Einstein, Podolsky and Rosen, Physical Review 47 (10): 777–780 (1935).

[3] Or dice.


Reality and the Interpretations of Quantum Mechanics

Friday, February 7th, 2014

If there were only one credible interpretation of quantum mechanics, then we could take it as a reliable representation of reality. But when there are many, it destroys the credibility of all of them. The plethora of interpretations of quantum mechanics lends credence to the thesis that science tells us nothing about the ultimate nature of reality.

Quantum mechanics, in its essence, is a mathematical formalism with an algorithm for how to connect the formalism to observation or experiments. When relativistic extensions are included, it provides the framework for all of physics[1] and the underlying foundation for chemistry. For macroscopic objects (things like footballs), it reduces to classical mechanics through some rather subtle mathematics, but it still provides the underlying framework even there. Despite its empirical success, quantum mechanics is not consistent with our common sense ideas of how the world should work. It is inherently probabilistic despite the best efforts of motivated and ingenious people to make it deterministic. It has superposition and interference of the different states of particles, something not seen for macroscopic objects. If it is weird to us, just imagine how weird it must have seemed to the people who invented it. They were trained in the classical system until it was second nature and then nature itself said, “Fooled you, that is not how things are.” Some, like Albert Einstein (1879 – 1955), resisted it to their dying days.
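For the numerically inclined, the weirdness described above is easy to demonstrate. Here is a minimal sketch of my own (not from the post) using NumPy: two states that assign identical probabilities to every outcome in one basis, yet differ by a relative phase, give opposite, certain outcomes once you measure in a rotated basis. Probabilities alone do not capture the state; amplitudes and interference do.

```python
import numpy as np

# Two superpositions of |0> and |1> that differ only by a relative phase.
psi_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
psi_minus = np.array([1, -1], dtype=complex) / np.sqrt(2)

# Born rule: measurement probabilities are squared amplitudes.
# In this basis the two states are indistinguishable.
print(np.abs(psi_plus) ** 2)    # [0.5 0.5]
print(np.abs(psi_minus) ** 2)   # [0.5 0.5]

# Measure in a rotated (Hadamard) basis and the hidden phase shows up:
# the amplitudes interfere, giving opposite, certain outcomes.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(np.abs(H @ psi_plus) ** 2)    # [1. 0.]
print(np.abs(H @ psi_minus) ** 2)   # [0. 1.]
```

Nothing in classical probability behaves this way: the relative phase is invisible in one basis yet completely decisive in another.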

The developers of quantum mechanics, in their efforts to come to grips with quantum weirdness, invented interpretations that tried to understand quantum mechanics in a way that was less disturbing to common sense and their classical training. In my classes in quantum mechanics, there were hand-waving discussions of the Copenhagen interpretation, but I could never see what they added to the mathematical formalism. I am not convinced my lecturers could either, although the term Copenhagen interpretation was uttered with much reverence. Then I heard a lecture by Sir Rudolf Peierls[2] (1907 – 1995) claiming that the conscious mind caused the collapse of the wave function. That was an interesting take on quantum mechanics, which was also espoused by John von Neumann (1903 – 1957) and Eugene Wigner (1902 – 1995) for part of their careers.

So does consciousness play a crucial role in quantum mechanics? Not according to Hugh Everett III (1930 – 1982) who invented the many-worlds interpretation. In this interpretation, the wave function corresponds to physical reality, and each time a measurement is made the universe splits into many different universes corresponding to each possible outcome of the quantum measurement process. Physicists are nothing if not imaginative. This interpretation also offers the promise of eternal life.  The claim is that in all the possible quantum universes there must be one in which you will live forever. Eventually that will be the only one you will be aware of. But as with the Greek legend of Tithonus, there is no promise of eternal youth. The results may not be pretty.

If you do not like either of those interpretations of quantum mechanics, well have I got an interpretation for you. It goes under the title of the relational interpretation. Here the wave function is simply the information a given observer has about the quantum system and may be different for different observers; nothing mystical here and no multiplicity of worlds. Then there is the theological interpretation. This I first heard from Stephen Hawking (b. 1942), although I doubt he believed it. In this interpretation, God uses quantum indeterminacy to hide his direct involvement in the unfolding of the universe. He simply manipulates the results of quantum measurements to suit his own goals. Well, He does work in mysterious ways after all.

I will not bore you with all possible interpretations and their permutations. Life is too short for that, but we are still left with the overarching question: which interpretation is the one true interpretation? What is the nature of reality implied by quantum mechanics? Does the universe split into many? Does consciousness play a central role? Is the wave function simply information? Does God hide in quantum indeterminacy?

Experiment cannot sort this out since all the interpretations pretty much agree on the results of experiments (even this is subject to debate), but science has one other criterion: parsimony. We eliminate unnecessary assumptions. When applied to interpretations of quantum mechanics, parsimony seems to favour the relational interpretation. But, in fact, parsimony, carefully applied, favours something else: the instrumentalist approach. That is, don’t worry about the interpretations, just shut up and calculate. All the interpretations have additional assumptions not required by observations.

But what about the ultimate nature of reality? There is no theorem that says reality, itself, must be simple. So quantum mechanics implies very little about the ultimate nature of reality. I guess we will have to leave that discussion to the philosophers and theologians. More power to them.

To receive a notice of future posts follow me on Twitter: @musquod.

[1] Although quantum gravity is still a big problem.

[2] A major player in the development of quantum many body theory and nuclear physics.


Is there a place for realism in science?

Friday, January 10th, 2014

In the philosophy of science, realism is used in two related ways. The first way is that the interior constructs of a model refer to something that actually exists in nature, for example the quantum mechanical wave function corresponds to a physical entity. The second way is that properties of a system exist even when they are not being measured; the ball is in the box even when no one can see it (unless it is a relative of Schrödinger’s cat). The two concepts are related since one can think of the ball’s presence or absence as part of one’s model for how balls (or cats) behave.

Despite our and even young children’s belief in the continued existence of the ball and that cats are either alive or dead, there are reasons for doubting realism. The three main ones are the history of physics, the role of canonical (unitary) transformations in classical (quantum) mechanics, and Bell’s inequality. The second and third of these may seem rather obtuse, but bear with me.

Let’s start with the first, the history of physics. Here, we follow in the footsteps of Thomas Kuhn (1922–1996). He was probably the first philosopher of science to actually look at the history of science to understand how science works. One of his conclusions was that the interior constructs of models (paradigms in his terminology) do not correspond (refer, in the philosophic jargon) to anything in reality. It is easy to see why. One can think of a sequence of models in the history of physics. Here we consider the Ptolemaic system, Newtonian mechanics, quantum mechanics, relativistic field theory (a combination of quantum mechanics and relativity) and finally quantum gravity. The Ptolemaic system ruled for a millennium and a half, from the second to the seventeenth century. By any standard, the Ptolemaic model was a successful scientific model since it made correct predictions for the location of the planets in the night sky. Eventually, however, Newton’s dynamical model caused its demise. At the Ptolemaic model’s core were the concepts of geocentrism and uniform circular motion. People believed these two aspects of the model corresponded to reality. But Newton changed all that. Uniform circular motion and geocentrism were out and instantaneous gravitational attraction was in. Central to the Newtonian system were the fixed Euclidean space-time geometry and particle trajectories. The first of these was rendered obsolete by relativity and the second by quantum mechanics; at least the idea of a fixed number of particles survived–until quantum field theory. And if string theory is correct, all those models have the number of dimensions wrong. The internal aspects of well-accepted and successful models disappear when new models replace the old. There are other examples: in the history of physics, the caloric theory of heat was successful at one time, but caloric vanished when the kinetic theory of heat took over. And on it goes. What is regarded as central to our understanding of how the world works goes poof when new models replace old.

On to the second reason for doubting realism–the role of transformations: canonical and unitary.  In both classical and quantum mechanics there are mathematical transformations that change the internals of the calculations[1] but leave not only the observables but also the structure of the calculations invariant. For example, in classical mechanics we can use a canonical transformation to change coordinates without changing the physics. We can express the location of an object using the earth as a reference point or the sun. Now this is quite fun; the choice of coordinates is quite arbitrary. So you want a geocentric system (like Galileo’s opponents), no problem. We write the equation of motion in that frame and everyone is happy. But you say the Earth really does go around the sun. That is equivalent to the statement: planetary motion is more simply described in the heliocentric frame. We can go on from there and use coordinates as weird as you like to match religious or personal preconceptions.  In quantum mechanics the transformations have even more surprising implications. You would think something like the correlations between particles would be observable and a part of reality. But that is not the case. The correlations depend on how you do your calculation and can be changed at will with unitary transformations. It is thus with a lot of things that you might think are parts of reality but are, as we say, model dependent.
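The quantum half of this claim can be made concrete with a toy numerical sketch (my own illustration, not from the post): a unitary transformation U reshuffles both the state and the observable, so every internal number in the calculation changes, yet the expectation value, the only thing an experiment sees, comes out exactly the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# A normalized state vector and a Hermitian observable in some basis.
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = A + A.conj().T                      # make it Hermitian

# A random unitary: re-expresses the same physics in a different "frame".
U, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

psi2 = U @ psi                          # transformed state
A2 = U @ A @ U.conj().T                 # transformed observable

# The expectation value -- the observable quantity -- is unchanged,
# even though every matrix element of A2 differs from A.
before = np.real(psi.conj() @ A @ psi)
after = np.real(psi2.conj() @ A2 @ psi2)
print(before, after)                    # identical up to rounding
```

The algebra behind the identical output is one line: psi2† A2 psi2 = psi† U†U A U†U psi = psi† A psi, since U†U is the identity. The internals are frame dependent; the observable is not.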

Finally we come to Bell’s inequality as the third reason to doubt realism. The idea here goes back to what is known as the Einstein-Podolsky-Rosen paradox (published in 1935). By looking at the correlations of coupled particles Einstein, Podolsky, and Rosen claimed that quantum mechanics is incomplete.  John Bell (1928 – 1990), building on their work, developed a set of inequalities that allowed a precise experimental test of the Einstein-Podolsky-Rosen claim. The experimental test has been performed and the quantum mechanical prediction confirmed. This ruled out all local realistic models. That is, local models where a system has definite values of a property even when that property has not been measured. This is using realism in the second sense defined above. There are claims, not universally accepted, that extensions of Bell’s inequalities rule out all realist models, local or non-local.
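The flavour of Bell’s argument can be seen in a few lines. This is a textbook sketch, taking as given the standard quantum prediction E(a, b) = −cos(a − b) for the correlation of a spin singlet measured along directions a and b: any local realistic model bounds the CHSH combination of four such correlations by 2, while quantum mechanics reaches 2√2, which is what the experiments confirmed.

```python
import math

# Quantum correlation for a spin singlet measured along two directions.
def E(a, b):
    return -math.cos(a - b)

# CHSH combination of correlations at four angle settings.
# Local realism (Bell/CHSH) requires |S| <= 2.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(S))   # 2*sqrt(2) ~ 2.828, violating the local realistic bound of 2
```

The angle settings here are the standard ones that maximize the violation; the point is simply that the quantum prediction, verified in the laboratory, exceeds what any local realistic model can produce.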

So where does this leave us? Pretty much with the concept of realism in science in tatters. The internals of models change in unpredictable ways when science advances. Even within a given model, the internals can be changed with mathematical tricks, and for some definitions of realism, experiment has largely ruled it out. Thus we are left with our models that describe aspects of reality but should never be mistaken for reality itself. Immanuel Kant (1724 – 1804), the great German philosopher, would not be surprised[2].

To receive a notice of future posts follow me on Twitter: @musquod.

[1] For the relation between the two types of transformations see: N.L. Balazs and B.K. Jennings, Unitary transformations, Weyl’s association and the role of canonical transformations, Physica, 121A (1983) 576–586.

[2] He made the distinction between the thing in itself and observations of it.


Has there ever been a paradigm shift?

Friday, December 6th, 2013

Yes, once!

Paradigm and paradigm shift are so overused and misused that the world would benefit if they were simply banned. Originally Thomas Kuhn (1922–1996) in his 1962 book, The Structure of Scientific Revolutions, used the word paradigm to refer to the set of practices that define a scientific discipline at any particular period of time. A paradigm shift is when the entire structure of a field changes, not when someone simply uses a different mathematical formulation. Perhaps it is just grandiosity, everyone thinking their latest idea is earth-shaking (or paradigm shifting), but the idea has been so debased that almost any change is called a paradigm shift, down to the level of changing the color of one’s socks.

The archetypal example, and I would suggest the only real example in the natural and physical sciences, is the paradigm shift from Aristotelian to Newtonian physics. This was not just a change in physics from “perfect motion is circular” to “an object either is at rest or moves at a constant velocity, unless acted upon by an external force,” but a change in how knowledge is defined and acquired. There is more here than a different description of motion; the very concept of what is important has changed. In Newtonian physics there is no place for perfect motion, only rules to describe how objects actually behave. Newtonian physics was driven by observation. Newton, himself, went further and claimed his results were derived from observation. While Aristotelian physics is broadly consistent with observation, it is driven more by abstract concepts like perfection. Aristotle (384 BCE – 322 BCE) would most likely have considered Galileo Galilei’s (1564 – 1642) careful experiments beneath him. Socrates (c. 469 BC – 399 BC) certainly would have. Their epistemology was not based on careful observation.

While there have been major changes in the physical sciences since Newton, they do not reach the threshold needed to call them paradigm shifts since they are all within the paradigm defined by the scientific method. I would suggest Kuhn was misled by the Aristotle-Newton example where, indeed, the two approaches are incommensurate: what constitutes a reasonable explanation is simply different for the two men. But would the same be true of Michael Faraday (1791 – 1867) and Niels Bohr (1885 – 1962), who were chronologically on opposite sides of the quantum mechanics cataclysm? One could easily imagine Faraday, transported in time, having a fruitful discussion with Bohr. While the quantum revolution was indeed cataclysmic, changing mankind’s basic understanding of how the universe worked, it was based on the same concept of knowledge as Newtonian physics. You make models based on observations and validate them through testable predictions. The pre-cataclysmic scientists understood the need for change due to failed predictions, even if, like Albert Einstein (1879 – 1955) or Erwin Schrödinger (1887 – 1961), they found quantum mechanics repugnant. The phenomenology was too powerful to ignore.

Sir Karl Popper (1902 – 1994) provided another ingredient missed by Kuhn: the idea that science advances by the bold new hypothesis, not by deducing models from observation. The Bohr model of the atom was a bold hypothesis, not a paradigm shift; a bold hypothesis refined by other scientists and tested in the crucible of careful observation. I would also suggest that Kuhn did not understand the role of simplicity in making scientific models unique. It is true that one can always make an old model agree with past observations by making it more complex[1]. This process frequently has the side effect of reducing the old model’s ability to make predictions. It is to remedy these problems that a bold new hypothesis is needed. But to be successful, the bold new hypothesis should be simpler than the modified version of the original model and, more crucially, must make testable predictions that are confirmed by observation. But even then, it is not a paradigm shift; just a verified bold new hypothesis.

Despite the nay-saying, Kuhn’s ideas did advance the understanding of the scientific method. In particular, they were a good antidote to the logical positivists who wanted to eliminate the role of the model, or what Kuhn called the paradigm, altogether. Kuhn made the point that it is the framework that gives meaning to observations. Combined with Popper’s insights, Kuhn’s ideas paved the way for a fairly comprehensive understanding of the scientific method.

But back to the overused word paradigm: it would be nice if we could turn back the clock and restrict the term paradigm shift to those changes where the before and after are truly incommensurate, where there is no common ground to decide which is better. Or if you like, the demarcation criterion for a paradigm shift is that the before and after are incommensurate[2]. That would rule out the change of sock color from being a paradigm shift. However, we cannot turn back the clock, so I will go back to my first suggestion that the word be banned.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] This is known as the Duhem-Quine thesis.

[2] There are probably paradigm shifts, even in the restricted meaning of the word, if we go outside science. The French revolution could be considered a paradigm shift in the relation between the populace and the state.


Selling Science: Can we best the preachers and politicians at the PR game?

Thursday, November 21st, 2013

Too many of the attempts to sell science are like the proverbial minister preaching to the choir: they convince nobody but the already converted. This is unfortunate because we, as scientists, have a duty and a responsibility to sell science to a wider audience.  There are four motivations for this:

  1. There are important technical questions that can only be answered by the scientific method. These include, for example, what is causing global warming? Or why are the returning salmon runs in British Columbia so erratic? We must make the case that science and only science can address these types of questions and that the answers science provides should be listened to.
  2. To provide answers to questions like those above, science must have ongoing support since the answers can only come from a scientific infrastructure that is maintained for the long haul.  In addition to answering practical questions, science also has the important cultural role of satisfying human curiosity. To satisfy either the practical or cultural goals, science needs support from the public purse. This means science must be sold to politicians and the general public who elect them and support science through their taxes.
  3. We need to excite the next generation’s best and brightest to consider science as a career. This is the only way that we can ensure science’s future.
  4. Selling science is rewarding and can even be fun. You should have seen the fun both TRIUMF staff and visitors had at the last TRIUMF Open House. There is also something contagious about explaining a topic you are passionate about.

The allusions to religion in the opening sentence are appropriate as many attempts to sell science come across as a claim that science is the one true religion and anyone who disagrees is a fool. While that may, indeed, be true[1], hollering it from the hilltops is a strategy doomed to failure. A frontal attack on a major component of a person’s world view will only arouse hostility. Hence, to sell science, we have to start with a common ground with the audience. To achieve maximum impact, you have to know your audience and tailor what you say to its interests.

However, there are three things that should be part of any attempt to sell science:

  1. A definition of what science is. This may seem self-evident but I have seen seminars on selling science that carefully avoided any attempt to define what science actually is. I have this real nice pig in a poke to sell you. Even worse are attempts to define science that are wrong and/or annoy people. A major impediment to selling science is that there is no commonly accepted definition of what science is. However, allow me to offer a fairly safe definition: using observation as a basis for modeling how the universe works. This definition is simple, understandable and reasonably accurate[2].  Alternatively, one can talk about the ability to make testable predictions as the hallmark of the scientific method. Use the word theory sparingly as that word has multiple meanings and invariably leads to confusion. Using words like objective reality, truth, or fact is a real turn off to many audiences. Besides, every Christian will tell you that Jesus is the truth and the more fundamentalist Christians that the bible is fact. You cannot win with those words, avoid them.
  2. Examples of scientific successes.  This is the greatest strength in selling science. We have a plethora of examples to choose from, but it is probably not a good idea to start with the nuclear bomb[3]. Again, it is important to understand the audience. To a person talking non-stop on his cell phone, the cell phone would be a good example (if you can get his attention) but to other people the cell phone is anathema. The same is true of almost any example you can choose. After all, curing disease (and motherhood) leads to world overpopulation. On TV or radio, the role of science in enabling TV and radio is a good bet. On YouTube, the internet would be a good example.  Despite the comment above, curing disease usually gets brownie points for science.  But claiming the Higgs boson cures cancer is a bit of a stretch.
  3. Your personal experience of the thrill of science, whether it is for the good of humanity or just learning more about how the universe works. It is here that the emotional aspect of science can come to the fore. To some of us, the hunting of the Higgs boson is more thrilling than hunting grizzly bears and probably more environmentally friendly. Using personal experience may seem to go against our training as scientists, but here we can learn from the professionals, those who sell religion or political parties: Do not talk about theology but your personal experience[4]. Do not talk about the platform but your own experience[5].  In the end, this may be a telling argument and it is important to counter the stereotype of the mad scientist in his (almost always male) laboratory plotting world domination or ignoring the obvious flaws in his theory and its disastrous side effects.  Drs. Faustus and Frankenstein are never far from people’s conception of the scientist.

You would think that selling science would be easy. We have a well-defined technique, four hundred years of successes to prove its usefulness and the thrill of the hunt. But we are up against two formidable foes: competing world views and vested interests. If someone believes they will be raptured to Heaven in the near future, learning about the world below is not a high priority. Similarly if they subscribe to the old hymn, I Don’t Want to Get Adjusted to This World Below, finding a crack in which to start a conversation is difficult.

In the same vein, if you have spent your life building a tobacco empire the last thing you want is some scientist claiming tobacco causes cancer. Or if you have made selling tar-sands oil a key political plank, you do not want scientists claiming it is destroying the earth.  In these cases, science, itself, tends to become the target of the counterattack. With the world’s best public-relations machines powered by religion, politics and vested interests in opposition it is not at all clear that the efforts to sell science will be successful.  But we must try. The motivations are so compelling, we must try.

Acknowledgement: I would like to thank T. Meyer and members of the TRIUMF Communications Group for comments on various drafts of this post.

To receive a notice of future posts follow me on Twitter: @musquod.

[1] Or not, as the case may be.

[2] My Quantum Diary blogs support this definition of science.

[3] Unless you are in Los Alamos.

[4] A well-known mega church pastor.

[5] Obama campaign worker.