Posts Tagged ‘Philosophy of science’

Isaac Asimov (1920 – 1992) “expressed a certain gladness at living in a century in which we finally got the basis of the universe straight”. Albert Einstein (1879 – 1955) claimed: “The most incomprehensible thing about the world is that it is comprehensible”. Indeed, there is general consensus in science that not only is the universe comprehensible but that it is mostly well described by our current models. However, Daniel Kahneman counters: “Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance”.

Well, that puts a rather different perspective on Asimov’s and Einstein’s claims.  So who is this person that is raining on our parade? Kahneman is a psychologist who won the 2002 Nobel Prize in economics for his development of prospect theory. A century ago everyone quoted Sigmund Freud (1856 – 1939) to show how modern they were. Today, Kahneman seems to have assumed that role.[1]

Kahneman’s Nobel Prize-winning prospect theory, developed with Amos Tversky (1937 – 1996), replaced expected utility theory. The latter assumed that people make economic choices based on the expected utility of the results, that is, that they behave rationally. In contrast, Kahneman and company have shown that people are irrational in well-defined and predictable ways. For example, it is well established that the phrasing of a question can (irrationally) change how people answer it, even when the meaning of the question is the same.

Kahneman’s book, Thinking, Fast and Slow, really should be required reading for everyone. It explains a lot of what goes on (gives the illusion of comprehension?) and provides practical tips for thinking rationally. For example, when I was on a visit to China, the merchants would hand me a calculator to type in what I would pay for a given item. Their response to the number I typed in was always the same: “You’re joking, right?” Kahneman would explain that they were trying to remove the anchor set by the first number entered in the calculator. Anchoring is a common aspect of how we think.

Since, as Kahneman argues, we are inherently irrational, one has to wonder about the general validity of the philosophic approach to knowledge, an approach based largely on rational argument. Science overcomes our inherent irrationality by constraining our rational arguments with frequent, independently repeated observations. Project management offers a cautionary example: we tend to be irrationally overconfident in our ability to estimate resource requirements, and estimates not constrained by real-world observations lead to projects that run over budget and past their deadlines. Even Kahneman was not immune to this trap of over-optimism.

Kahneman’s cynicism has been echoed by others. For example, H.L. Mencken (1880 – 1956) said: “The most common of all follies is to believe passionately in the palpably not true. It is the chief occupation of mankind”. Are the cynics correct? Is our belief that the universe is comprehensible, and indeed mostly understood, a mirage based on our unlimited ability to ignore our ignorance? A brief look at history tends to support that claim. Surely the Buddha, after achieving enlightenment, would have expressed relief and contentment at living in a century in which we finally got the basis of the universe straight. Saint Paul, in his letters, echoes the same claim that the universe is finally understood. René Descartes, with the method laid out in the Discourse on the Method and the Principles of Philosophy, would have made the same claim. And so it goes: almost everyone down through history has believed that he or she comprehends how the universe works. I wonder if the cow in the barn has the same illusion. Unfortunately, each has a different understanding of what it means to comprehend how the universe works, so it is not even possible to compare the relative validity of the different claims. The unconscious mind fits all it knows into a coherent framework that gives the illusion of comprehension in terms of what it considers important. In doing so, it assumes that what you see is all there is. Kahneman refers to this as WYSIATI (What You See Is All There Is).

To a large extent the understandability of the universe is a mirage based on WYSIATI, our ignorance of our ignorance. We understand as much as we are aware of and capable of understanding, blissfully ignoring the rest. We do not know how quantum gravity works, whether there is intelligent life elsewhere in the universe[2], or, for that matter, what the weather will be like next week. While our scientific models correctly describe much about the universe, they are, in the end, only models, and much lies beyond their scope, including the ultimate nature of reality.

To receive a notice of future posts follow me on Twitter: @musquod.

[1] Let’s hope time is kinder to Kahneman than it was to Freud.

[2] Given our response to global warming, one can debate if there is intelligent life on earth.


René Descartes (1596 – 1650) was an outstanding physicist, mathematician and philosopher. In physics, he laid the groundwork for Isaac Newton’s (1642 – 1727) laws of motion with his pioneering work on the concept of inertia. In mathematics, he developed the foundations of analytic geometry, as the term Cartesian[1] coordinates still testifies. However, it is in his role as a philosopher that he is best remembered. This is rather ironic, as his breakthrough method was a failure.

Descartes’s goal in philosophy was to develop a sound basis for all knowledge, grounded in ideas so obvious that they could not be doubted. His touchstone was that anything he perceived clearly and distinctly as being true was true. The archetypal example of this was the famous “I think, therefore I am”. Unfortunately, little else is as obvious as that famous claim, and even it can be, and has been, doubted.

Euclidean geometry provides the illusory ideal towards which Descartes and other philosophers have strived: start with a few self-evident truths and derive a superstructure built on them. Unfortunately, even Euclidean geometry fails that test. The infamous parallel postulate has been regarded with suspicion since ancient times, and other Euclidean postulates have been questioned as well; extending a straight line indefinitely depends on the space being continuous, unbounded and infinite.

So how are we to take Euclid’s postulates and axioms? Perhaps we should follow the idea of Sir Karl Popper (1902 – 1994) and consider them to be bold hypotheses. This casts a different light on Euclid and his work; perhaps he was the first outstanding scientist. If we take his basic assumptions as empirical[2] rather than as sure and certain knowledge, all we lose is the illusion of certainty. Euclidean geometry then becomes an empirically testable model for the geometry of space-time. The theorems derived from the basic assumptions are predictions that can be checked against observations, satisfying Popper’s demarcation criterion for science. Do the angles in a triangle add up to two right angles or not? If not, then one of the assumptions is false, probably the parallel postulate.
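
The empirical check suggested by Popper’s reading can even be run numerically: on the surface of a sphere, where the parallel postulate fails, the angles of a triangle add up to more than two right angles. A minimal sketch (the octant triangle, with one vertex on each coordinate axis, is a standard textbook example):

```python
import math

def angle_at(a, b, c):
    """Angle of the spherical triangle abc at vertex a (unit vectors)."""
    def tangent(p, q):
        # Direction of the great-circle arc from p towards q:
        # the component of q perpendicular to p, normalized.
        d = sum(pi * qi for pi, qi in zip(p, q))
        t = [qi - d * pi for pi, qi in zip(p, q)]
        n = math.sqrt(sum(x * x for x in t))
        return [x / n for x in t]
    u, v = tangent(a, b), tangent(a, c)
    return math.acos(max(-1.0, min(1.0, sum(x * y for x, y in zip(u, v)))))

# Octant triangle: one vertex on each positive axis of the unit sphere.
A, B, C = (1, 0, 0), (0, 1, 0), (0, 0, 1)
total = math.degrees(angle_at(A, B, C) + angle_at(B, C, A) + angle_at(C, A, B))
print(total)  # ~270 degrees: the two-right-angles prediction fails on the sphere
```

Each angle of this triangle is a right angle, so the sum is 270 degrees rather than 180; a measured excess of this kind is exactly the sort of observation that would falsify the Euclidean model for the space in question.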

Back to Descartes: he criticized Galileo Galilei (1564 – 1642) because, “without having considered the first causes of nature, he has merely sought reasons for particular effects; and thus he has built without a foundation”. In the end, that lack of a foundation turned out to be less of a hindrance than Descartes’s faulty one. To a large extent, science’s lack of the sort of foundation Descartes wished to provide has not proved a significant obstacle to its advance.

Like Euclid, Sir Isaac Newton had his basic assumptions (the three laws of motion and the law of universal gravitation), but he did not believe that they were self-evident; he believed that he had inferred them by the process of scientific induction. Unfortunately, scientific induction was as flawed a foundation as the self-evident nature of the Euclidean postulates. Connecting the dots between a falling apple and the motion of the moon was an act of creative genius, a bold hypothesis, not some algorithmic derivation from observation.

It is worth noting that, at the time, Newton’s explanation had a strong competitor in Descartes’s theory that planetary motion was due to vortices, large circulating bands of particles that keep the planets in place. Descartes’s theory had the advantage that it lacked the occult action at a distance that is fundamental to Newton’s law of universal gravitation. In spite of that, today Descartes’s vortices are as forgotten as his claim that the pineal gland is the seat of the soul; so much for what he perceived clearly and distinctly as being true.

Galileo’s approach of solving problems one at a time, rather than trying to solve all problems at once, has paid big dividends. It has allowed science to advance one step at a time, while Descartes’s approach has faded away as failed attempt followed failed attempt. We still do not have a grand theory of everything built on an unshakable foundation, and probably never will. Rather, we have models of widespread utility. Even if they are built on a shaky foundation, surely that is enough.

Peter Higgs (b. 1929) follows in the tradition of Galileo. He has not, despite his Nobel Prize, succeeded where Descartes failed in producing a foundation for all knowledge; but through creativity he has proposed a bold hypothesis whose implications have been empirically confirmed. Descartes would probably claim that he has merely sought reasons for a particular effect: mass. The answer to the ultimate question of life, the universe and everything remains unanswered, much to Descartes’s chagrin, but as scientists we are satisfied to solve one problem at a time and then move on to the next.

To receive a notice of future posts follow me on Twitter: @musquod.

[1] Cartesian, from Descartes’s Latinized name, Cartesius.

[2] As in the final analysis they are.


Since model building is the essence of science, George Box’s dictum that “all models are wrong” has a bit of a bite to it. Box (1919 – 2013) was not only an eminent statistician but also an eminently quotable one. Another quote from him: “One important idea is that science is a means whereby learning is achieved, not by mere theoretical speculation on the one hand, nor by the undirected accumulation of practical facts on the other, but rather by a motivated iteration between theory and practice”. Thus he saw science as an iteration between observation and theory. And what is theory but the building of erroneous, or at least approximate, models?

To amplify that last comment: The main point of my philosophical musings is that science is the building of models for how the universe works; models constrained by observation and tested by their ability to make predictions for new observations, but models nonetheless. In this context, the above quote has significant implications for science. Models, even those of science, are by their very nature simplifications and as such are not one hundred per cent accurate. Consider the case of a map. Creating a 1:1 map is not only impractical[2] but even if you had one it would be one hundred per cent useless; just try folding a 1:1 scale map of Vancouver. A model with all the complexity of the original does not help us understand the original.  Indeed the whole purpose of a model is to eliminate details that are not essential to the problem at hand.

By their very nature, numerical models are always approximate, and this is probably what Box had in mind with his statement. One neglects small effects, like the gravitational influence of a mosquito. Even as one begins computing, one makes numerical approximations: replacing integrals with sums or vice versa, derivatives with finite differences, and so on. However, one wants to control errors and keep them to a minimum. Statistical analysis techniques, such as those Box developed, help estimate and control errors.
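
Both replacements can be sketched in a few lines; a minimal illustration (the function and step sizes are chosen arbitrarily):

```python
import math

# Derivative -> finite difference: f'(x) is approximated by (f(x+h) - f(x)) / h
def forward_diff(f, x, h=1e-5):
    return (f(x + h) - f(x)) / h

# Integral -> sum: midpoint rule over n subintervals
def midpoint_integral(f, a, b, n=1000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Both are approximate; the errors are controlled by h and n but never vanish.
print(forward_diff(math.sin, 1.0) - math.cos(1.0))      # small, not zero
print(midpoint_integral(math.sin, 0.0, math.pi) - 2.0)  # small, not zero
```

Shrinking h or increasing n reduces the error, up to the limits of machine arithmetic, but never eliminates it, which is part of what makes every numerical model approximate.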

To a large extent it is self-evident that models are approximate; so what? Again to quote George Box: “Since all models are wrong the scientist cannot obtain a ‘correct’ one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity”. What would he have thought of a model with twenty-plus parameters, like the standard model of particle physics? His point is a valid one. All measurements have experimental errors. If your fit is perfect, you are almost certainly fitting noise. Hence, adding more parameters to get a perfect fit is a fool’s errand. But even without experimental error, a large number of parameters frequently means something important has been missed. Has something been missed in the standard model of particle physics with its many parameters, or is the universe really that complicated?
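
The fitting-noise point is easy to demonstrate with a toy fit (the linear “law”, the noise level and the polynomial degrees here are all invented for illustration): ten noisy measurements of a straight line, fitted once with two parameters and once with ten.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)

def true_law(t):
    return 2.0 * t + 1.0

y = true_law(x) + rng.normal(0.0, 0.1, size=x.size)  # measurements with noise

def fit(degree):
    coeffs = np.polyfit(x, y, degree)
    fit_err = np.abs(y - np.polyval(coeffs, x)).max()  # closeness to the data
    grid = np.linspace(0.0, 1.0, 101)
    law_err = np.abs(np.polyval(coeffs, grid) - true_law(grid)).max()  # closeness to the law
    return fit_err, law_err

two_param, ten_param = fit(1), fit(9)
# The ten-parameter fit reproduces the noisy data essentially exactly
# (fit_err near zero), i.e. it is fitting the noise; between the data
# points it typically wanders much further from the underlying law.
print(two_param, ten_param)
```

The perfect fit is the give-away: with ten parameters for ten noisy points, the residuals vanish, which is precisely Box’s mark of mediocrity.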

There is an even more basic reason all models are wrong. This goes back at least as far as Immanuel Kant (1724 – 1804), who made the distinction between the observation of an object and the object in itself. One never has direct experience of things in themselves, the so-called noumenal world; what one experiences is the phenomenal world as conveyed to us by our senses. What we see is not even what has been recorded by the eye. The mind massages the raw observation into something it can understand: a useful but not necessarily accurate model of the world. Science then continues this process in a systematic manner, constructing models that describe observations but not necessarily the underlying reality.

Despite being by definition at least partially wrong, models are frequently useful. The scale-model map is useful to a tourist trying to find their way around Vancouver or to a general plotting strategy for his next battle. But if the maps are too far wrong, the tourist will get lost and fall into False Creek, and the general will go down in history as a failure. Similarly, the models used for weather prediction are useful even though they are certainly not one hundred per cent accurate; they do indicate when it is safe to plan a picnic or cut the hay, provided they are right more often than chance. And the standard model of particle physics, despite having many parameters and not including gravity, is a useful description of a wide range of observations. But to return to the main point: all models, even useful ones, are wrong, because they are approximations, and not even approximations to reality but to our observations of that reality. Where does that leave us? Well, let us save the last word for George Box: “Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful”.

To receive a notice of future posts follow me on Twitter: @musquod.

[1] Hence the foolishness of talking about theoretical breakthroughs in science. All breakthroughs arise from pondering about observations and observations testing those ponderings.

[2] Not even Google could produce that.


Theoretical physics, simplicity. Surely the two words do not go together. Theoretical physics has been the archetypal example of complicated since its invention. So what did Frank Wilczek (b. 1951) mean by that statement[1] quoted in the title? It is the scientist’s trick of taking an everyday word, such as simplicity, and giving it a technical meaning. In this case, the meaning is from algorithmic information theory. That theory defines complexity (Kolmogorov complexity[2]) as the minimum length of a computer program needed to reproduce a string of numbers. Simplicity, as used in the title, is the opposite of this complexity. Science, not just theoretical physics, is driven, in part but only in part, by the quest for this simplicity.

How is that, you might ask? It is best described by Greg Chaitin (b. 1947), a founder of algorithmic information theory. To quote: “This idea of program-size complexity is also connected with the philosophy of the scientific method. You’ve heard of Occam’s razor, of the idea that the simplest theory is best? Well, what’s a theory? It’s a computer program for predicting observations. And the idea that the simplest theory is best translates into saying that a concise computer program is the best theory. What if there is no concise theory, what if the most concise program or the best theory for reproducing a given set of experimental data is the same size as the data? Then the theory is no good, it’s cooked up, and the data is incomprehensible, it’s random. In that case the theory isn’t doing a useful job. A theory is good to the extent that it compresses the data into a much smaller set of theoretical assumptions. The greater the compression, the better! That’s the idea…”

In many ways this is quite nice: the best theory is the one that compresses the most empirical information into the shortest description or computer program. It provides an algorithmic method to decide which of two competing theories is better (but not an algorithm for generating the best theory). With this definition of best, a computer could do science: generate programs to describe data and check which is the shortest. It is not clear, with this definition, that Copernicus was better than Ptolemy; the two approaches to planetary motion had similar numbers of parameters and similar accuracy.

There are many interesting aspects of this approach. Consider compressibility and quantum mechanics. The uncertainty principle and the probabilistic nature of quantum mechanics put limits on the extent to which empirical data can be compressed. This is the main difference between classical and quantum mechanics. Given the initial conditions and the laws of motion, classical empirical data is compressible to just that input; quantum mechanical data is not. The time at which each individual atom in a collection of radioactive atoms decays is unpredictable, and the measured results are largely incompressible. Interpretations of quantum mechanics may make the theory deterministic, but they cannot make the empirical data more compressible.
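
The difference in compressibility can be illustrated with an ordinary general-purpose compressor (zlib here is only a crude, computable stand-in for program-size complexity, which is uncomputable in general; the periodic rule is an invented example):

```python
import random
import zlib

# Data generated by a simple deterministic rule (period 251): a "law" exists.
lawful = bytes((i * i) % 251 for i in range(10_000))

# A stand-in for quantum decay times: independent (pseudo-)random bytes.
random.seed(42)
lawless = bytes(random.randrange(256) for _ in range(10_000))

print(len(zlib.compress(lawful, 9)))   # far smaller than 10,000: highly compressible
print(len(zlib.compress(lawless, 9)))  # about 10,000: essentially incompressible
```

A genuinely random sequence cannot be compressed below its own length, which is the algorithmic-information way of saying that no theory shorter than the data exists for it.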

Compressibility highlights a significant property of initial conditions. While the data describing the motion of the planets can be compressed using Newton’s laws of motion and gravity, the initial conditions that started the planets on their orbits cannot be. This incompressibility tends to be a characteristic of initial conditions. Even the initial conditions of the universe, as reflected in the cosmic microwave background, have a large random, incompressible component: the cosmic variance. If it were not for quantum uncertainty, we could probably take incompressibility as the definition of initial conditions. For the universe, the two coincide, since the incompressibility of the initial conditions is due to quantum fluctuations, but that is not always the case.

The algorithmic information approach makes Occam’s razor, the idea that one should minimize assumptions, basic to science. If one considers each character in a minimal computer program to be a separate assumption, then the shortest program does indeed have the fewest assumptions. You might object that some of the characters in a program can be predicted from other characters; however, if that is true, the program can probably be made shorter. This is all a bit counterintuitive, since one generally does not take such a fine-grained approach to what counts as an assumption.

The algorithmic information approach to science, however, does have a major shortcoming. This definition of the best theory leaves out the importance of predictions. A good model must not only compress known data, it must predict new results that are not predicted by competing models. Hence, as noted in the introduction, simplicity is only part of the story.

The idea of reducing science to just a collection of computer programs is rather frightening. Science is about more than computer programs[3]. It is, and should be, a human endeavour. As people, we want models of how the universe works that humans, not just computers, can comprehend and share with others. A collection of bits on a computer drive does not do this.

To receive a notice of future posts follow me on Twitter: @musquod.

[1] From “This Explains Everything”, Ed, John Brockman, Harper Perennial, New York, 2013

[2] Also known as descriptive complexity, Kolmogorov–Chaitin complexity, algorithmic entropy, or program-size complexity.

[3] In this regard, I have a sinking feeling that I am fighting a rearguard action against the inevitable.


If there were only one credible interpretation of quantum mechanics, we could take it as a reliable representation of reality. But when there are many, the credibility of all of them suffers. The plethora of interpretations of quantum mechanics lends credence to the thesis that science tells us nothing about the ultimate nature of reality.

Quantum mechanics, in its essence, is a mathematical formalism with an algorithm for how to connect the formalism to observation or experiments. When relativistic extensions are included, it provides the framework for all of physics[1] and the underlying foundation for chemistry. For macroscopic objects (things like footballs), it reduces to classical mechanics through some rather subtle mathematics, but it still provides the underlying framework even there. Despite its empirical success, quantum mechanics is not consistent with our common sense ideas of how the world should work. It is inherently probabilistic despite the best efforts of motivated and ingenious people to make it deterministic. It has superposition and interference of the different states of particles, something not seen for macroscopic objects. If it is weird to us, just imagine how weird it must have seemed to the people who invented it. They were trained in the classical system until it was second nature and then nature itself said, “Fooled you, that is not how things are.” Some, like Albert Einstein (1879 – 1955), resisted it to their dying days.

The developers of quantum mechanics, in their efforts to come to grips with quantum weirdness, invented interpretations that tried to understand quantum mechanics in a way that was less disturbing to common sense and their classical training. In my classes in quantum mechanics, there were hand-waving discussions of the Copenhagen interpretation, but I could never see what they added to the mathematical formalism. I am not convinced my lecturers could either, although the term Copenhagen interpretation was uttered with much reverence. Then I heard a lecture by Sir Rudolf Peierls[2] (1907 – 1995) claiming that the conscious mind causes the collapse of the wave function. That was an interesting take on quantum mechanics, one also espoused by John von Neumann (1903 – 1957) and Eugene Wigner (1902 – 1995) for part of their careers.

So does consciousness play a crucial role in quantum mechanics? Not according to Hugh Everett III (1930 – 1982) who invented the many-worlds interpretation. In this interpretation, the wave function corresponds to physical reality, and each time a measurement is made the universe splits into many different universes corresponding to each possible outcome of the quantum measurement process. Physicists are nothing if not imaginative. This interpretation also offers the promise of eternal life.  The claim is that in all the possible quantum universes there must be one in which you will live forever. Eventually that will be the only one you will be aware of. But as with the Greek legend of Tithonus, there is no promise of eternal youth. The results may not be pretty.

If you do not like either of those interpretations of quantum mechanics, well, have I got an interpretation for you. It goes under the title of the relational interpretation. Here the wave function is simply the information a given observer has about the quantum system, and it may be different for different observers; nothing mystical here and no multiplicity of worlds. Then there is the theological interpretation. This I first heard from Stephen Hawking (b. 1942), although I doubt he believed it. In this interpretation, God uses quantum indeterminacy to hide his direct involvement in the unfolding of the universe. He simply manipulates the results of quantum measurements to suit his own goals. Well, He does work in mysterious ways, after all.

I will not bore you with all possible interpretations and their permutations. Life is too short for that, but we are still left with the overarching question: which interpretation is the one true interpretation? What is the nature of reality implied by quantum mechanics? Does the universe split into many? Does consciousness play a central role? Is the wave function simply information? Does God hide in quantum indeterminacy?

Experiment cannot sort this out, since all the interpretations pretty much agree on the results of experiments (even this is subject to debate), but science has one other criterion: parsimony. We eliminate unnecessary assumptions. When applied to interpretations of quantum mechanics, parsimony seems to favour the relational interpretation. But in fact parsimony, carefully applied, favours something else: the instrumentalist approach. That is, don’t worry about the interpretations; just shut up and calculate. All the interpretations carry additional assumptions not required by observation.

But what about the ultimate nature of reality? There is no theorem that says reality, itself, must be simple. So quantum mechanics implies very little about the ultimate nature of reality. I guess we will have to leave that discussion to the philosophers and theologians. More power to them.

To receive a notice of future posts follow me on Twitter: @musquod.

[1] Although quantum gravity is still a big problem.

[2] A major player in the development of quantum many body theory and nuclear physics.


In the philosophy of science, realism is used in two related ways. The first is that the interior constructs of a model refer to something that actually exists in nature; for example, that the quantum mechanical wave function corresponds to a physical entity. The second is that the properties of a system exist even when they are not being measured; the ball is in the box even when no one can see it (unless it is a relative of Schrödinger’s cat). The two concepts are related, since one can think of the ball’s presence or absence as part of one’s model for how balls (or cats) behave.

Despite our belief, shared even by young children, in the continued existence of the ball, and that cats are either alive or dead, there are reasons for doubting realism. The three main ones are the history of physics, the role of canonical (unitary) transformations in classical (quantum) mechanics, and Bell’s inequality. The second and third of these may seem rather abstruse, but bear with me.

Let’s start with the first, the history of physics. Here we follow in the footsteps of Thomas Kuhn (1922 – 1996), probably the first philosopher of science to actually look at the history of science to understand how science works. One of his conclusions was that the interior constructs of models (paradigms in his terminology) do not correspond (refer, in the philosophic jargon) to anything in reality. It is easy to see why. Consider a sequence of models in the history of physics: the Ptolemaic system, Newtonian mechanics, quantum mechanics, relativistic field theory (a combination of quantum mechanics and relativity) and finally quantum gravity. The Ptolemaic system ruled for a millennium and a half, from the second century to the seventeenth. By any standard, the Ptolemaic model was a successful scientific model, since it made correct predictions for the location of the planets in the night sky. Eventually, however, Newton’s dynamical model caused its demise. At the Ptolemaic model’s core were the concepts of geocentrism and uniform circular motion, and people believed these two aspects of the model corresponded to reality. But Newton changed all that. Uniform circular motion and geocentrism were out, and instantaneous gravitational attraction was in. Central to the Newtonian system were a fixed Euclidean space-time geometry and particle trajectories. The first of these was rendered obsolete by relativity and the second by quantum mechanics; at least the idea of a fixed number of particles survived, until quantum field theory. And if string theory is correct, all those models have the number of dimensions wrong. The internal aspects of well-accepted and successful models disappear when new models replace the old. There are other examples: the caloric theory of heat was successful at one time, but caloric vanished when the kinetic theory of heat took over. And on it goes. What is regarded as central to our understanding of how the world works goes poof when new models replace old.

On to the second reason for doubting realism: the role of transformations, canonical and unitary. In both classical and quantum mechanics there are mathematical transformations that change the internals of the calculations[1] but leave not only the observables but also the structure of the calculations invariant. For example, in classical mechanics we can use a canonical transformation to change coordinates without changing the physics. We can express the location of an object using the earth as a reference point or the sun. Now this is quite fun: the choice of coordinates is quite arbitrary. So you want a geocentric system (like Galileo’s opponents)? No problem. We write the equations of motion in that frame and everyone is happy. But you say the earth really does go around the sun. That is equivalent to the statement that planetary motion is more simply described in the heliocentric frame. We can go on from there and use coordinates as weird as you like, to match religious or personal preconceptions. In quantum mechanics the transformations have even more surprising implications. You would think something like the correlations between particles would be observable and a part of reality. But that is not the case. The correlations depend on how you do your calculation and can be changed at will with unitary transformations. It is thus with a lot of things that you might think are parts of reality but are, as we say, model dependent.
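
The arbitrariness of the coordinate choice is easy to make concrete (the circular orbits and the particular angles below are invented for illustration): shifting every position so that the earth sits at the origin changes all the internal numbers, yet leaves every observable relative distance exactly as it was.

```python
import math

def pos(radius, angle):
    """Position on a schematic circular orbit, heliocentric frame (sun at origin)."""
    return (radius * math.cos(angle), radius * math.sin(angle))

sun = (0.0, 0.0)
earth = pos(1.0, 0.3)   # 1 AU, arbitrary orbital phase
mars = pos(1.52, 1.1)   # 1.52 AU, arbitrary orbital phase

# Geocentric frame: the same physics, merely shifted so the earth is the origin.
def geocentric(p):
    return (p[0] - earth[0], p[1] - earth[1])

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Internal coordinates differ; observable separations do not.
print(dist(earth, mars), dist(geocentric(earth), geocentric(mars)))
```

Either frame predicts the same night sky; preferring the heliocentric one is a statement about simplicity of description, not about which coordinates are “real”.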

Finally, we come to Bell’s inequality as the third reason to doubt realism. The idea goes back to what is known as the Einstein-Podolsky-Rosen paradox (published in 1935). By looking at the correlations of coupled particles, Einstein, Podolsky and Rosen claimed that quantum mechanics is incomplete. John Bell (1928 – 1990), building on their work, developed a set of inequalities that allowed a precise experimental test of the Einstein-Podolsky-Rosen claim. The experimental tests have been performed and the quantum mechanical prediction confirmed. This ruled out all local realistic models, that is, local models in which a system has definite values of a property even when that property has not been measured. This is realism in the second sense defined above. There are claims, not universally accepted, that extensions of Bell’s inequalities rule out all realist models, local or non-local.
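
The arithmetic behind Bell’s argument fits in a few lines. For spin measurements on a singlet state, quantum mechanics predicts the correlation E(a, b) = -cos(a - b); the CHSH combination of four such correlations is bounded by 2 for any local realist model, while quantum mechanics reaches 2√2 (the angles below are the standard optimal choices):

```python
import math
from itertools import product

def E(a, b):
    """Quantum singlet-state correlation for detector angles a and b."""
    return -math.cos(a - b)

def chsh(a1, a2, b1, b2):
    return E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

# Quantum prediction at the optimal angles: |S| = 2*sqrt(2) > 2.
S = abs(chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4))

# Local realism: each particle carries definite answers (+1 or -1) for both
# settings, measured or not. Enumerating every assignment shows |S| <= 2.
bound = max(abs(A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2)
            for A1, A2, B1, B2 in product((-1, 1), repeat=4))
print(S, bound)  # 2.828..., 2
```

Experiments agree with the quantum value, which is why models that assign pre-existing values to unmeasured properties are the ones ruled out.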

So where does this leave us? Pretty much with the concept of realism in science in tatters. The internals of models change in unpredictable ways when science advances. Even within a given model, the internals can be changed with mathematical tricks, and for some definitions of realism, experiment has largely ruled it out. Thus we are left with our models: they describe aspects of reality but should never be mistaken for reality itself. Immanuel Kant (1724 – 1804), the great German philosopher, would not be surprised[2].

To receive a notice of future posts follow me on Twitter: @musquod.

[1] For the relation between the two types of transformations see: N.L. Balazs and B.K. Jennings, Unitary transformations, Weyl's association and the role of canonical transformations, Physica 121A (1983) 576–586.

[2] He made the distinction between the thing in itself and observations of it.


Yes, once!

Paradigm and paradigm shift are so overused and misused that the world would benefit if they were simply banned. Originally, Thomas Kuhn (1922–1996), in his 1962 book The Structure of Scientific Revolutions, used the word paradigm to refer to the set of practices that define a scientific discipline at a particular period of time. A paradigm shift occurs when the entire structure of a field changes, not when someone simply uses a different mathematical formulation. Perhaps it is just grandiosity, everyone thinking their latest idea is earth-shaking (or paradigm-shifting), but the idea has been so debased that almost any change is called a paradigm shift, down to the level of changing the color of one's socks.

The archetypal example, and I would suggest the only real example in the natural and physical sciences, is the paradigm shift from Aristotelian to Newtonian physics. This was not just a change in physics, from "perfect motion is circular" to "an object either is at rest or moves at a constant velocity, unless acted upon by an external force," but a change in how knowledge is defined and acquired. There is more here than a different description of motion; the very concept of what is important has changed. In Newtonian physics there is no place for perfect motion, only rules that describe how objects actually behave. Newtonian physics was driven by observation. Newton himself went further and claimed his results were derived from observation. While Aristotelian physics is broadly consistent with observation, it is driven more by abstract concepts like perfection. Aristotle (384 BCE – 322 BCE) would most likely have considered Galileo Galilei's (1564 – 1642) careful experiments beneath him. Socrates (c. 469 BCE – 399 BCE) certainly would have. Their epistemology was not based on careful observation.

While there have been major changes in the physical sciences since Newton, they do not reach the threshold needed to call them paradigm shifts, since they are all within the paradigm defined by the scientific method. I would suggest Kuhn was misled by the Aristotle-Newton example where, indeed, the two approaches are incommensurate: what constitutes a reasonable explanation is simply different for the two men. But would the same be true of Michael Faraday (1791 – 1867) and Niels Bohr (1885–1962), who were chronologically on opposite sides of the quantum mechanics cataclysm? One could easily imagine Faraday, transported in time, having a fruitful discussion with Bohr. While the quantum revolution was indeed cataclysmic, changing mankind's basic understanding of how the universe works, it was based on the same concept of knowledge as Newtonian physics: you make models based on observations and validate them through testable predictions. The pre-cataclysmic scientists understood the need for change due to failed predictions, even if, like Albert Einstein (1879 – 1955) or Erwin Schrödinger (1887 – 1961), they found quantum mechanics repugnant. The phenomenology was too powerful to ignore.

Sir Karl Popper (1902 – 1994) provided another ingredient missed by Kuhn: the idea that science advances by the bold new hypothesis, not by deducing models from observation. The Bohr model of the atom was a bold hypothesis, not a paradigm shift; a bold hypothesis refined by other scientists and tested in the crucible of careful observation. I would also suggest that Kuhn did not understand the role of simplicity in making scientific models unique. It is true that one can always make an old model agree with past observations by making it more complex[1]. This process frequently has the side effect of reducing the old model's ability to make predictions. It is to remedy these problems that a bold new hypothesis is needed. But to be successful, the bold new hypothesis should be simpler than the modified version of the original model and, more crucially, must make testable predictions that are confirmed by observation. But even then, it is not a paradigm shift; just a verified bold new hypothesis.

Despite the nay-saying, Kuhn's ideas did advance the understanding of the scientific method. In particular, they were a good antidote to the logical positivists, who wanted to eliminate the role of the model, or what Kuhn called the paradigm, altogether. Kuhn made the point that it is the framework that gives meaning to observations. Combined with Popper's insights, Kuhn's ideas paved the way for a fairly comprehensive understanding of the scientific method.

But back to the overused word paradigm: it would be nice if we could turn back the clock and restrict the term paradigm shift to those changes where the before and after are truly incommensurate, where there is no common ground to decide which is better. Or, if you like, the demarcation criterion for a paradigm shift is that the before and after are incommensurate[2]. That would rule out the change of sock color from being a paradigm shift. However, we cannot turn back the clock, so I will go back to my first suggestion: that the word be banned.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] This is known as the Duhem-Quine thesis.

[2] There are probably paradigm shifts, even in the restricted meaning of the word, if we go outside science. The French revolution could be considered a paradigm shift in the relation between the populace and the state.


Modern science has assumed many of the roles traditionally played by religion and, as a result, is often mistaken for just another religion, one among many. But the situation is rather more complicated, and many of the claims that science is not a religion come across as claims that science is The One True Religion. In the past, religion has supplied answers to the basic questions of how the universe originated, how people were created, what determines morality, and how humans relate to the rest of the universe. Science is slowly but surely replacing religion as the source of answers to these questions. The visible universe originated with the big bang, humans arose through evolution, morality arose through the evolution of a social ape, and humans are a mostly irrelevant part of the larger universe. One may not agree with science's answers, but they exist and influence even those who do not explicitly believe them.

More importantly, through answering questions like these, religion has formed the basis for people's worldview, the overall perspective from which they see and interpret the world. Religious beliefs and a person's worldview were frequently so entangled that they are often viewed as one and the same thing. In the past this was probably true, but in this modern day and age, science presents an alternative to religion as the basis for a person's worldview. Therefore science is frequently seen as a competing religion, not just the basis of a competing worldview. Despite this, there is a distinct difference between science and religion, and it has profound implications for how they function.

The prime distinction was recognized at least as far back as Thomas Aquinas (1225 – 1274). The idea is this: Science is based on public information while religion is based on private information, information that not even the NSA can spy on. Anyone can, if they wait long enough, observe an apple fall as Sir Isaac Newton (1642–1727) did, but no one can know by independent observation what Saint Paul (c. 5 – c. 67) saw in the third heaven. Anyone sufficiently proficient in mathematics can repeat Albert Einstein’s (1879 – 1955) calculations but no one can independently check Joseph Smith’s (1805 – 1844) revelations that are the foundation of Mormonism, although additional private inspiration may, or may not, support them.  As a result of the public nature of the information on which science is founded, science tends to develop consensuses which only change when new information becomes available. In contrast, religion, being based on private information, tends to fragment when not constrained by the sword or at least the law. Just look at the number of Christian denominations and independent churches. While not as fragmented as Christianity, most major religions have had at least one schism. Even secularism, the none-of-the-above of religion, has its branches, one for example belonging to the new atheists.

The consensus-forcing nature of the scientific method and the public information on which it is based lead some to the conclusion that science is based on objective reality.  But in thirty years of wandering around a physics laboratory, I have never had the privilege of meeting Mr. Objective Reality—very opinionated physicists, yes, but Mr. Objective Reality, no.  Rather, science is based on two assumptions:

  1. Meaningful knowledge can be extracted from observation. While this may seem self-evident, it has been derided by various philosophers from Socrates on down.
  2. What happened in the past can be used to predict what will happen in the future. This is a sophisticated version of the Mount Saint Helens fallacy, which had people refusing to leave the mountain before it erupted because it had not erupted in living memory.


Science and religion are thus both based on assumptions, but they differ in the public versus private nature of the information that drives their development. This difference in their underlying epistemology means that their competing claims cannot be systematically resolved; they are different paradigms. Both can, separately or together, be used as the basis of a person's worldview, and it is here that conflict arises. People react rather strongly when their worldview is challenged, and the competing epistemologies each claim to be the only firm foundation on which a worldview can be built.

To receive a notice of future posts follow me on Twitter: @musquod.



Simplicity plays a crucial, but frequently overlooked, role in the scientific method (see the posters in my previous post). Considering how complicated science can be, simplicity may seem far from a driving force in science. Is string theory really simple? If scientists need six, seven, or more years of training past high school, how can we consider science to be anything but antithetical to simplicity?

Good questions, but simple is relative. Consider the standard model of particle physics. First, it is widely agreed what the standard model is. Second, there are many alternatives to the standard model that agree with it where there is experimental data but disagree elsewhere. One can name many[1]: Little Higgs, Technicolor, Grand Unified Models (in many varieties), and Supersymmetric Grand Unified Models (also in many varieties). I have even attended a seminar where the speaker gave a general technique for generating extensions of the standard model that also have a dark matter candidate. So why do we prefer the standard model? It is not elegance. Very few people consider the standard model more elegant than its competitors. Indeed, elegance is one of the main motivations driving the generation of alternate models. The competitors also keep all the phenomenological successes of the standard model. So, to repeat the question, why do we prefer the standard model to its competitors? Simplicity, and only simplicity. All the pretenders have additional assumptions or ingredients that are not required by the current experimental data. At some point they may be required as more data becomes available, but not now. Thus we go with the simplest model that describes the data.

This is true across all disciplines and over time. The elliptic orbits of Kepler (1571–1630) were simpler than the epicycles of Ptolemy (c. 90 – c. 168) or the epicyclets of Copernicus (1473–1543). We draw straight lines through the data rather than 29th-order polynomials. If the data has bumps and wiggles, we frequently assume they are experimental error, as in the randomly[2] chosen graph to the left where the theory lines do not go through all the data points. No one would take me seriously if I fit every single bump and wiggle. Simplicity is more important than religiously fitting each data point.
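The point can be made concrete with a small numerical sketch (my own toy example, not from the post): fit the same noisy "measurements" of a linear law with a straight line and with a high-order polynomial. The wiggly fit hugs every bump, but the straight line predicts better outside the measured range.

```python
import numpy as np

# Toy data: the "true law" is a straight line, measured with noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 12)

def true_law(t):
    return 2.0 * t + 1.0

y = true_law(x) + rng.normal(0.0, 1.0, x.size)

# Two candidate models: a simple straight line, and a 9th-order
# polynomial that chases every bump and wiggle in the data.
line = np.polynomial.Polynomial.fit(x, y, deg=1)
wiggly = np.polynomial.Polynomial.fit(x, y, deg=9)

# The complicated model always describes the past data at least as well...
print("line residual  :", np.sum((line(x) - y) ** 2))
print("wiggly residual:", np.sum((wiggly(x) - y) ** 2))

# ...but fares far worse when predicting outside the measured range,
# because its extra terms were fitted to noise, not physics.
x_new = 15.0
print("line prediction error  :", abs(line(x_new) - true_law(x_new)))
print("wiggly prediction error:", abs(wiggly(x_new) - true_law(x_new)))
```

The simpler model loses the in-sample beauty contest but wins where it counts: on the next batch of data.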

Going from the sublime to the ridiculous, consider Russell's teapot. Bertrand Russell (1872–1970) argued as follows: "If I were to suggest that between the Earth and Mars there is a china teapot revolving about the sun in an elliptical orbit, nobody would be able to disprove my assertion provided I were careful to add that the teapot is too small to be revealed even by our most powerful telescopes. But if I were to go on to say that, since my assertion cannot be disproved, it is intolerable presumption on the part of human reason to doubt it, I should rightly be thought to be talking nonsense." But what feature of the scientific method rules out the orbiting teapot? Or invisible pink unicorns? Or any one of a thousand different mythical beings? Not observation! They fail the simplicity test. Like the various extensions of the standard model, they are discounted because they carry extra assumptions that are not required by the observational data. This is otherwise known as Occam's razor.

The argument for simplicity is rather straightforward. Models are judged by their ability to describe past observations and make correct predictions for future ones. As a matter of practical consideration, one should drop all features of a model that are not conducive to that end. While the next batch of data may force one to a more complicated model, there is no way to judge in advance which direction the complication will take. Hence we have all the extensions of the standard model waiting in the wings to see which, if any, the next batch of data will prefer – or rule out.

The crucial role of simplicity in choosing one model from among the many solves one of the enduring problems in the philosophy of science. Consider the following quote from Imre Lakatos (1922 – 1974), a leading philosopher of science from the last century: "But, as many skeptics pointed out, rival theories are always indefinitely many and therefore the proving power of experiment vanishes. One cannot learn from experience about the truth of any scientific theory, only at best about its falsehood: confirming instances have no epistemic value whatsoever" (emphasis in the original). Note the premise of the argument: rival theories are always indefinitely many. While rival theories may be indefinitely many, one, or at most a very few, are always chosen by the criterion of simplicity. We have the one standard model of particle physics, not indefinitely many, and his argument fails at the first step. Confirming instances, like finding the Higgs boson, do have epistemic value.

[1] This list is time dependent and may be out of date.

[2] Chosen randomly from one of my papers.


This essay makes a point that is only implicit in most of my other essays, namely that scientists are arro—oops, that is for another post. The point here is that science is defined not by how it goes about acquiring knowledge but rather by how it defines knowledge. The underlying claim is that the definitions of knowledge as used, for example, in philosophy are not useful, and that science has the one definition that has so far proven fruitful. No, not arrogant at all.

The classical concept of knowledge was described by Plato (428/427 BCE – 348/347 BCE) as having to meet three criteria: it must be justified, true, and believed. That description does seem reasonable. After all, can something be considered knowledge if it is false? Similarly, would we consider a correct guess knowledge? Guess right three times in a row and you are considered an expert, but do you have knowledge? Believed? I have more trouble with that one: believed by whom? Certainly, something that no one believes is not knowledge, even if true and justified.

The above criteria for knowledge seem like common sense, and the ancient Greek philosophers had a real knack for encapsulating the common sense view of the world in their philosophy. But common sense is frequently wrong, so let us look at those criteria with a more jaundiced eye. Let us start with the first criterion: it must be justified. How do we justify a belief? From the sophists of ancient Greece, to the post-modernists and the anything-goes hippies of the 1960s, and all their ilk in between, it has been demonstrated that what can be known for certain is vanishingly small.

René Descartes (1596 – 1650) argues in the beginning of his Discourse on the Method that all knowledge is subject to doubt, a process called methodological skepticism. To a large extent, he is correct. Then, to get to something that is certain, he came up with his famous statement: I think, therefore I am. For a long time this seemed to me like a sure argument. Hence, "I exist" seemed an incontrovertible fact. I then made the mistake of reading Nietzsche[1] (1844 – 1900). He criticizes the argument as presupposing the existence of "I" and "thinking", among other things. It has also been criticized by a number of other philosophers, including Bertrand Russell (1872 – 1970). To quote the latter: "Some care is needed in using Descartes' argument. 'I think, therefore I am' says rather more than is strictly certain. It might seem as though we are quite sure of being the same person to-day as we were yesterday, and this is no doubt true in some sense. But the real Self is as hard to arrive at as the real table, and does not seem to have that absolute, convincing certainty that belongs to particular experiences." Oh well, back to the drawing board.

The criteria for knowledge, as postulated by Plato, lead to knowledge either not existing or being of the most trivial kind. No belief can be absolutely justified, and there is no way to tell for certain whether any proposed truth is an incontrovertible fact. So where are we? If there are no incontrovertible facts, we must deal with uncertainty. In science we make a virtue of this necessity. We start with observations but, unlike the logical positivists, do not assume they are reality or correspond to any ultimate reality. Thus, following Immanuel Kant (1724 – 1804), we distinguish the thing-in-itself from its appearances. All we have access to are the appearances. The thing-in-itself is forever hidden.

But all is not lost. We make models to describe past observations. This is relatively easy to do. We then test our models by making testable predictions for future observations. Models are judged by their track record in making correct predictions: the more striking the prediction, the better. The standard model's prediction of the Higgs[2] boson is a prime example of science at its best. The standard model did not become a fact when the Higgs was discovered; rather, its standing as a useful model was enhanced. It is this reliance on the track record of successful predictions that is the demarcation criterion for science and, I would suggest, the hallmark for defining knowledge. The scientific models and the observations they are based on are our only true knowledge. However, to mistake them for descriptions of the ultimate reality or the thing-in-itself would be folly, not knowledge.


[1] Reading Nietzsche is always a mistake. He was a madman.

[2] To be buzzword compliant, I mention the Higgs boson.