Posts Tagged ‘Philosophy of science’

René Descartes (1596 – 1650) was an outstanding physicist, mathematician and philosopher. In physics, he laid the groundwork for Isaac Newton’s (1642 – 1727) laws of motion with his pioneering work on the concept of inertia. In mathematics, he developed the foundations of analytic geometry, as illustrated by the term Cartesian[1] coordinates. However, it is in his role as a philosopher that he is best remembered. Rather ironic, as his breakthrough method was a failure.

Descartes’s goal in philosophy was to develop a sound basis for all knowledge, based on ideas so obvious they could not be doubted. His touchstone was that anything he perceived clearly and distinctly as being true was true. The archetypal example of this was the famous “I think, therefore I am.”  Unfortunately, little else is as obvious as that famous quote, and even it can be––and has been––doubted.

Euclidean geometry provides the illusory ideal to which Descartes and other philosophers have aspired. You start with a few self-evident truths and derive a superstructure built on them.  Unfortunately, even Euclidean geometry fails that test. The infamous parallel postulate has been regarded as suspect since ancient times, and even the other Euclidean postulates have been questioned; extending a straight line indefinitely depends on the space being continuous, unbounded and infinite.

So how are we to take Euclid’s postulates and axioms?  Perhaps we should follow the idea of Sir Karl Popper (1902 – 1994) and consider them to be bold hypotheses. This casts a different light on Euclid and his work; perhaps he was the first outstanding scientist.  If we take his basic assumptions as empirical[2] rather than sure and certain knowledge, all we lose is the illusion of certainty. Euclidean geometry then becomes an empirically testable model for the geometry of space-time. The theorems, derived from the basic assumptions, are predictions that can be checked against observations, satisfying Popper’s demarcation criterion for science. Do the angles in a triangle add up to two right angles or not? If not, then one of the assumptions is false, probably the parallel postulate.

Back to Descartes: he criticized Galileo Galilei (1564 – 1642) for having “built without a foundation,” complaining that “without having considered the first causes of nature, he has merely sought reasons for particular effects.” In the end, that lack of a foundation turned out to be less of a hindrance than Descartes’s faulty one.  To a large extent, science’s lack of a foundation, such as Descartes wished to provide, has not proved a significant obstacle to its advance.

Like Euclid, Sir Isaac Newton had his basic assumptions—the three laws of motion and the law of universal gravity—but he did not believe that they were self-evident; he believed that he had inferred them by the process of scientific induction. Unfortunately, scientific induction was as flawed a foundation as the self-evident nature of the Euclidean postulates. Connecting the dots between a falling apple and the motion of the moon was an act of creative genius, a bold hypothesis, not some algorithmic derivation from observation.

It is worth noting that, at the time, Newton’s explanation had a strong competitor in Descartes’s theory that planetary motion was due to vortices, large circulating bands of particles that kept the planets in place.  Descartes’s theory had the advantage that it lacked the occult action at a distance that is fundamental to Newton’s law of universal gravitation.  In spite of that, today Descartes’s vortices are as unknown as his claim that the pineal gland is the seat of the soul; so much for what he perceived clearly and distinctly as being true.

Galileo’s approach of solving problems one at a time, rather than trying to solve all problems at once, has paid big dividends. It has allowed science to advance one step at a time, while Descartes’s approach has faded away as failed attempt followed failed attempt. We still do not have a grand theory of everything built on an unshakable foundation and probably never will. Rather, we have models of widespread utility. Even if they are built on a shaky foundation, surely that is enough.

Peter Higgs (b. 1929) follows in the tradition of Galileo. He has not, despite his Nobel Prize, succeeded where Descartes failed in producing a foundation for all knowledge; but through creativity, he has proposed a bold hypothesis whose implications have been empirically confirmed.  Descartes would probably claim that he has merely sought reasons for a particular effect: mass. The ultimate question about life, the universe and everything still remains unanswered, much to Descartes’s chagrin, but as scientists we are satisfied to solve one problem at a time and then move on to the next one.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] Cartesian, from Descartes’s Latinized name, Cartesius.

[2] As in the final analysis they are.


Since model building is the essence of science, the quote “all models are wrong” has a bit of a bite to it. It is from George E. P. Box (1919 – 2013), who was not only an eminent statistician but also an eminently quotable one.  Another quote from him: One important idea is that science is a means whereby learning is achieved, not by mere theoretical speculation on the one hand, nor by the undirected accumulation of practical facts on the other, but rather by a motivated iteration between theory and practice.  Thus he saw science as an iteration between observation and theory[1]. And what is theory but the building of erroneous, or at least approximate, models?

To amplify that last comment: the main point of my philosophical musings is that science is the building of models for how the universe works; models constrained by observation and tested by their ability to make predictions for new observations, but models nonetheless. In this context, the above quote has significant implications for science. Models, even those of science, are by their very nature simplifications and as such are not one hundred per cent accurate. Consider the case of a map. Creating a 1:1 map is not only impractical[2], but even if you had one it would be one hundred per cent useless; just try folding a 1:1 scale map of Vancouver. A model with all the complexity of the original does not help us understand the original.  Indeed, the whole purpose of a model is to eliminate details that are not essential to the problem at hand.

By their very nature, numerical models are always approximate, and this is probably what Box had in mind with his statement. One neglects small effects like the gravitational influence of a mosquito. Even as one begins computing, one makes numerical approximations, replacing integrals with sums or vice versa, derivatives with finite differences, etc. However, one wants to control errors and keep them to a minimum. Statistical analysis techniques, such as Box developed, help estimate and control errors.

To a large extent it is self-evident that models are approximate; so what? Again to quote George Box: Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity. What would he have thought of a model with twenty-plus parameters, like the standard model of particle physics? His point is a valid one. All measurements have experimental errors. If your fit is perfect, you are almost certainly fitting noise. Hence, adding more parameters to get a perfect fit is a fool’s errand. But even without experimental error, a large number of parameters frequently means something important has been missed. Has something been missed in the standard model of particle physics with its many parameters, or is the universe really that complicated?
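
Box’s warning about overparameterization is easy to demonstrate numerically. The following sketch (Python with NumPy; the data are invented for illustration, not taken from any experiment) fits the same noisy straight-line data with a two-parameter line and with a ten-parameter polynomial: the polynomial reproduces every point perfectly, which is another way of saying it has fit the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: a straight line plus Gaussian measurement noise.
x = np.linspace(0.0, 1.0, 10)
y_obs = 2.0 * x + 1.0 + rng.normal(scale=0.2, size=x.size)

# Two-parameter model: residuals stay at the noise level (~0.2).
line = np.polyfit(x, y_obs, deg=1)
line_rms = np.sqrt(np.mean((np.polyval(line, x) - y_obs) ** 2))

# Ten-parameter model (degree 9): goes through every point, residuals ~0,
# i.e. a "perfect" fit that has simply memorized the noise.
poly = np.polyfit(x, y_obs, deg=9)
poly_rms = np.sqrt(np.mean((np.polyval(poly, x) - y_obs) ** 2))

print(f"straight-line rms residual  : {line_rms:.3f}")
print(f"degree-9 polynomial residual: {poly_rms:.2e}")
```

The perfect fit comes at a price: between and beyond the measured points the high-order polynomial oscillates wildly, so its predictive power is worse than that of the simple line.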

There is an even more basic reason all models are wrong. This goes back at least as far as Immanuel Kant (1724 – 1804). He made the distinction between observation of an object and the object in itself. One never has direct experience of things, the so-called noumenal world; what one experiences is the phenomenal world as conveyed to us by our senses. What we see is not even what has been recorded by the eye.  The mind massages the raw observation into something it can understand; a useful but not necessarily accurate model of the world. Science then continues this process in a systematic manner to construct models to describe observations but not necessarily the underlying reality.

Despite being by definition at least partially wrong, models are frequently useful. The scale-model map is useful to tourists trying to find their way around Vancouver or to a general plotting strategy for his next battle. But if the maps are too far wrong, the tourist will get lost and fall into False Creek and the general will go down in history as a failure. Similarly, the models for weather prediction are useful although they are certainly not a hundred per cent accurate. They do indicate when it is safe to plan a picnic or cut the hay, provided they are right more often than chance. And the standard model of particle physics, despite having many parameters and not including gravity, is a useful description of a wide range of observations. But to return to the main point, all models, even useful ones, are wrong because they are approximations, and not even approximations to reality but to our observations of that reality. Where does that leave us? Well, let us save the last word for George Box: Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] Hence the foolishness of talking about theoretical breakthroughs in science. All breakthroughs arise from pondering about observations and observations testing those ponderings.

[2] Not even Google could produce that.


Theoretical physics, simplicity. Surely the two words do not go together. Theoretical physics has been the archetypal example of complicated since its invention. So what did Frank Wilczek (b. 1951) mean by that statement[1] quoted in the title? It is the scientist’s trick of taking an everyday word, such as simplicity, and giving it a technical meaning. In this case, the meaning is from algorithmic information theory. That theory defines complexity (Kolmogorov complexity[2]) as the minimum length of a computer program needed to reproduce a string of numbers. Simplicity, as used in the title, is the opposite of this complexity. Science, not just theoretical physics, is driven, in part but only in part, by the quest for this simplicity.

How is that, you might ask? This is best described by Greg Chaitin (b. 1947), a founder of algorithmic information theory. To quote: This idea of program-size complexity is also connected with the philosophy of the scientific method. You’ve heard of Occam’s razor, of the idea that the simplest theory is best? Well, what’s a theory? It’s a computer program for predicting observations. And the idea that the simplest theory is best translates into saying that a concise computer program is the best theory. What if there is no concise theory, what if the most concise program or the best theory for reproducing a given set of experimental data is the same size as the data? Then the theory is no good, it’s cooked up, and the data is incomprehensible, it’s random. In that case the theory isn’t doing a useful job. A theory is good to the extent that it compresses the data into a much smaller set of theoretical assumptions. The greater the compression, the better!—That’s the idea…

In many ways this is quite nice; the best theory is the one that compresses the most empirical information into the shortest description or computer program.  It provides an algorithmic method to decide which of two competing theories is best (but not an algorithm for generating the best theory). With this definition of best, a computer could do science: generate programs to describe data and check which is the shortest. It is not clear, with this definition, that Copernicus was better than Ptolemy. The two approaches to planetary motion had a similar number of parameters and accuracy.
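
Kolmogorov complexity itself is uncomputable, but an ordinary compressor gives a crude upper bound on it, which is enough to illustrate Chaitin’s point. The sketch below (Python; both data sets are invented purely for illustration) compares how far a lawful, patterned record of “observations” compresses with how far a random record of the same length does.

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Size after zlib compression: a crude stand-in for program-size complexity."""
    return len(zlib.compress(data, level=9))

# A "lawful" record: a strictly periodic signal, 10,000 bytes long.
# A short program (or a short compressed file) reproduces it exactly.
lawful = b"0110100110010110" * 625

# A "random" record of the same length: no description much shorter than
# the data itself exists, so compression gains almost nothing.
noise = os.urandom(10_000)

print("lawful record :", compressed_size(lawful), "bytes")  # a few dozen bytes
print("random record :", compressed_size(noise), "bytes")   # roughly 10,000 bytes
```

In this language, a good theory is the short compressed file, and incomprehensible data is the record that refuses to shrink.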

There are many interesting aspects of this approach. Consider compressibility and quantum mechanics. The uncertainty principle and the probabilistic nature of quantum mechanics put limits on the extent to which empirical data can be compressed. This is the main difference between classical mechanics and quantum mechanics. Given the initial conditions and the laws of motion, classically the empirical data is compressible to just that input. In quantum mechanics, it is not. The time at which each individual atom in a collection of radioactive atoms decays is unpredictable, and the measured results are largely incompressible. Interpretations of quantum mechanics may make the theory deterministic, but they cannot make the empirical data more compressible.

Compressibility highlights a significant property of initial conditions. While the data describing the motion of the planets can be compressed using Newton’s laws of motion and gravity, the initial conditions that started the planets on their orbits cannot be. This incompressibility tends to be a characteristic of initial conditions. Even the initial conditions of the universe, as reflected in the cosmic microwave background, have a large random incompressible component – the cosmic variance.  If it were not for quantum uncertainty, we could probably take the lack of compressibility as a definition of initial conditions. For the universe, the two are the same, since the lack of compressibility in the initial conditions is due to quantum fluctuations, but that is not always the case.

The algorithmic information approach makes Occam’s razor, the idea that one should minimize assumptions, basic to science. If one considers that each character in a minimal computer program is a separate assumption, then the shortest program does indeed have the fewest assumptions. But you might object that some of the characters in a program can be predicted from other characters. However, if that is true, the program can probably be made shorter. This is all a bit counterintuitive since one generally does not take such a fine-grained approach to what one considers an assumption.

The algorithmic information approach to science, however, does have a major shortcoming. This definition of the best theory leaves out the importance of predictions. A good model must not only compress known data, it must predict new results that are not predicted by competing models. Hence, as noted in the introduction, simplicity is only part of the story.

The idea of reducing science to just a collection of computer programs is rather frightening. Science is about more than computer programs[3]. It is, and should be, a human endeavour. As people, we want models of how the universe works that humans, not just computers, can comprehend and share with others. A collection of bits on a computer drive does not do this.

To receive a notice of future posts follow me on Twitter: @musquod.



[1] From “This Explains Everything”, ed. John Brockman, Harper Perennial, New York, 2013.

[2] Also known as descriptive complexity, Kolmogorov–Chaitin complexity, algorithmic entropy, or program-size complexity.

[3] In this regard, I have a sinking feeling that I am fighting a rearguard action against the inevitable.


If there were only one credible interpretation of quantum mechanics, then we could take it as a reliable representation of reality. But when there are many, it undermines the credibility of all of them. The plethora of interpretations of quantum mechanics lends credence to the thesis that science tells us nothing about the ultimate nature of reality.

Quantum mechanics, in its essence, is a mathematical formalism with an algorithm for how to connect the formalism to observation or experiments. When relativistic extensions are included, it provides the framework for all of physics[1] and the underlying foundation for chemistry. For macroscopic objects (things like footballs), it reduces to classical mechanics through some rather subtle mathematics, but it still provides the underlying framework even there. Despite its empirical success, quantum mechanics is not consistent with our common sense ideas of how the world should work. It is inherently probabilistic despite the best efforts of motivated and ingenious people to make it deterministic. It has superposition and interference of the different states of particles, something not seen for macroscopic objects. If it is weird to us, just imagine how weird it must have seemed to the people who invented it. They were trained in the classical system until it was second nature and then nature itself said, “Fooled you, that is not how things are.” Some, like Albert Einstein (1879 – 1955), resisted it to their dying days.

The developers of quantum mechanics, in their efforts to come to grips with quantum weirdness, invented interpretations that tried to understand quantum mechanics in a way that was less disturbing to common sense and their classical training. In my classes in quantum mechanics, there were hand-waving discussions of the Copenhagen interpretation, but I could never see what they added to the mathematical formalism. I am not convinced my lecturers could either, although the term Copenhagen interpretation was uttered with much reverence. Then I heard a lecture by Sir Rudolf Peierls[2] (1907 – 1995) claiming that the conscious mind caused the collapse of the wave function. That was an interesting take on quantum mechanics, one also espoused by John von Neumann (1903 – 1957) and Eugene Wigner (1902 – 1995) for part of their careers.

So does consciousness play a crucial role in quantum mechanics? Not according to Hugh Everett III (1930 – 1982) who invented the many-worlds interpretation. In this interpretation, the wave function corresponds to physical reality, and each time a measurement is made the universe splits into many different universes corresponding to each possible outcome of the quantum measurement process. Physicists are nothing if not imaginative. This interpretation also offers the promise of eternal life.  The claim is that in all the possible quantum universes there must be one in which you will live forever. Eventually that will be the only one you will be aware of. But as with the Greek legend of Tithonus, there is no promise of eternal youth. The results may not be pretty.

If you do not like either of those interpretations of quantum mechanics, well, have I got an interpretation for you. It goes under the title of the relational interpretation. Here the wave function is simply the information a given observer has about the quantum system and may be different for different observers; nothing mystical here and no multiplicity of worlds. Then there is the theological interpretation. This I first heard from Stephen Hawking (b. 1942), although I doubt he believed it. In this interpretation, God uses quantum indeterminacy to hide his direct involvement in the unfolding of the universe. He simply manipulates the results of quantum measurements to suit his own goals. Well, He does work in mysterious ways after all.

I will not bore you with all possible interpretations and their permutations. Life is too short for that, but we are still left with the overarching question: which interpretation is the one true interpretation? What is the nature of reality implied by quantum mechanics? Does the universe split into many? Does consciousness play a central role? Is the wave function simply information? Does God hide in quantum indeterminacy?

Experiment cannot sort this out since all the interpretations pretty much agree on the results of experiments (even this is subject to debate), but science has one other criterion: parsimony. We eliminate unnecessary assumptions. When applied to interpretations of quantum mechanics, parsimony seems to favour the relational interpretation. But, in fact, parsimony, carefully applied, favours something else: the instrumentalist approach. That is, don’t worry about the interpretations; just shut up and calculate. All the interpretations have additional assumptions not required by observations.

But what about the ultimate nature of reality? There is no theorem that says reality, itself, must be simple. So quantum mechanics implies very little about the ultimate nature of reality. I guess we will have to leave that discussion to the philosophers and theologians. More power to them.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] Although quantum gravity is still a big problem.

[2] A major player in the development of quantum many-body theory and nuclear physics.


In the philosophy of science, realism is used in two related ways. The first is that the interior constructs of a model refer to something that actually exists in nature; for example, the quantum mechanical wave function corresponds to a physical entity. The second is that properties of a system exist even when they are not being measured; the ball is in the box even when no one can see it (unless it is a relative of Schrödinger’s cat). The two concepts are related since one can think of the ball’s presence or absence as part of one’s model for how balls (or cats) behave.

Despite our and even young children’s belief in the continued existence of the ball, and that cats are either alive or dead, there are reasons for doubting realism. The three main ones are the history of physics, the role of canonical (unitary) transformations in classical (quantum) mechanics, and Bell’s inequality. The second and third of these may seem rather abstruse, but bear with me.

Let’s start with the first, the history of physics. Here, we follow in the footsteps of Thomas Kuhn (1922–1996). He was probably the first philosopher of science to actually look at the history of science to understand how science works. One of his conclusions was that the interior constructs of models (paradigms in his terminology) do not correspond (refer, in the philosophic jargon) to anything in reality. It is easy to see why. One can think of a sequence of models in the history of physics. Here we consider the Ptolemaic system, Newtonian mechanics, quantum mechanics, relativistic field theory (a combination of quantum mechanics and relativity) and finally quantum gravity. The Ptolemaic system ruled for a millennium and a half, from the second to the seventeenth century. By any standard, the Ptolemaic model was a successful scientific model since it made correct predictions for the location of the planets in the night sky. Eventually, however, Newton’s dynamical model caused its demise. At the Ptolemaic model’s core were the concepts of geo-centrism and uniform circular motion. People believed these two aspects of the model corresponded to reality. But Newton changed all that. Uniform circular motion and geo-centrism were out and instantaneous gravitational attraction was in. Central to the Newtonian system were the fixed Euclidean space-time geometry and particle trajectories. The first of these was rendered obsolete by relativity and the second by quantum mechanics; at least the idea of a fixed number of particles survived–until quantum field theory. And if string theory is correct, all those models have the number of dimensions wrong. The internal aspects of well-accepted and successful models disappear when new models replace the old. There are other examples. In the history of physics, the caloric theory of heat was successful at one time, but caloric vanished when the kinetic theory of heat took over. And on it goes. What is regarded as central to our understanding of how the world works goes poof when new models replace old.

On to the second reason for doubting realism–the role of transformations: canonical and unitary.  In both classical and quantum mechanics there are mathematical transformations that change the internals of the calculations[1] but leave not only the observables but also the structure of the calculations invariant. For example, in classical mechanics we can use a canonical transformation to change coordinates without changing the physics. We can express the location of an object using the earth as a reference point or the sun. Now this is quite fun; the choice of coordinates is quite arbitrary. So you want a geocentric system (like Galileo’s opponents), no problem. We write the equation of motion in that frame and everyone is happy. But you say the Earth really does go around the sun. That is equivalent to the statement: planetary motion is more simply described in the heliocentric frame. We can go on from there and use coordinates as weird as you like to match religious or personal preconceptions.  In quantum mechanics the transformations have even more surprising implications. You would think something like the correlations between particles would be observable and a part of reality. But that is not the case. The correlations depend on how you do your calculation and can be changed at will with unitary transformations. It is thus with a lot of things that you might think are parts of reality but are, as we say, model dependent.

Finally we come to Bell’s inequality as the third reason to doubt realism. The idea here goes back to what is known as the Einstein-Podolsky-Rosen paradox (published in 1935). By looking at the correlations of coupled particles Einstein, Podolsky, and Rosen claimed that quantum mechanics is incomplete.  John Bell (1928 – 1990), building on their work, developed a set of inequalities that allowed a precise experimental test of the Einstein-Podolsky-Rosen claim. The experimental test has been performed and the quantum mechanical prediction confirmed. This ruled out all local realistic models. That is, local models where a system has definite values of a property even when that property has not been measured. This is using realism in the second sense defined above. There are claims, not universally accepted, that extensions of Bell’s inequalities rule out all realist models, local or non-local.

So where does this leave us? Pretty much with the concept of realism in science in tatters. The internals of models change in unpredictable ways when science advances. Even within a given model, the internals can be changed with mathematical tricks, and for some definitions of realism, experiment has largely ruled it out.  Thus we are left with our models that describe aspects of reality but should never be mistaken for reality itself. Immanuel Kant (1724 – 1804), the great German philosopher, would not be surprised[2].

To receive a notice of future posts follow me on Twitter: @musquod.


[1] For the relation between the two types of transformations see: N.L. Balazs and B.K. Jennings, Unitary transformations, Weyl’s association and the role of canonical transformations, Physica 121A (1983) 576–586.

[2] He made the distinction between the thing in itself and observations of it.


Yes, once!

Paradigm and paradigm shift are so overused and misused that the world would benefit if they were simply banned.  Originally, Thomas Kuhn (1922–1996) in his 1962 book, The Structure of Scientific Revolutions, used the word paradigm to refer to the set of practices that define a scientific discipline at any particular period of time. A paradigm shift is when the entire structure of a field changes, not when someone simply uses a different mathematical formulation. Perhaps it is just grandiosity, everyone thinking their latest idea is earth shaking (or paradigm shifting), but the idea has been so debased that almost any change is called a paradigm shift, down to the level of changing the color of one’s socks.

The archetypal example, and I would suggest the only real example in the natural and physical sciences, is the paradigm shift from Aristotelian to Newtonian physics. This was not just a change in physics, from “perfect motion is circular” to “an object either is at rest or moves at a constant velocity, unless acted upon by an external force,” but a change in how knowledge is defined and acquired. There is more here than a different description of motion; the very concept of what is important has changed. In Newtonian physics there is no place for perfect motion, only rules to describe how objects actually behave. Newtonian physics was driven by observation. Newton, himself, went further and claimed his results were derived from observation. While Aristotelian physics is broadly consistent with observation, it is driven more by abstract concepts like perfection.  Aristotle (384 BCE – 322 BCE) would most likely have considered Galileo Galilei’s (1564 – 1642) careful experiments beneath him.  Socrates (c. 469 BCE – 399 BCE) certainly would have. Their epistemology was not based on careful observation.

While there have been major changes in the physical sciences since Newton, they do not reach the threshold needed to call them paradigm shifts, since they are all within the paradigm defined by the scientific method. I would suggest Kuhn was misled by the Aristotle–Newton example where, indeed, the two approaches are incommensurate: what constitutes a reasonable explanation is simply different for the two men. But would the same be true of Michael Faraday (1791 – 1867) and Niels Bohr (1885–1962), who were chronologically on opposite sides of the quantum mechanics cataclysm?  One could easily imagine Faraday, transported in time, having a fruitful discussion with Bohr. While the quantum revolution was indeed cataclysmic, changing mankind’s basic understanding of how the universe worked, it was based on the same concept of knowledge as Newtonian physics. You make models based on observations and validate them through testable predictions.  The pre-cataclysmic scientists understood the need for change due to failed predictions, even if, like Albert Einstein (1879 – 1955) or Erwin Schrödinger (1887 – 1961), they found quantum mechanics repugnant. The phenomenology was too powerful to ignore.

Sir Karl Popper (1902 – 1994) provided another ingredient missed by Kuhn: the idea that science advances by the bold new hypothesis, not by deducing models from observation. The Bohr model of the atom was a bold hypothesis, not a paradigm shift; a bold hypothesis refined by other scientists and tested in the crucible of careful observation. I would also suggest that Kuhn did not understand the role of simplicity in making scientific models unique. It is true that one can always make an old model agree with past observations by making it more complex[1]. This process frequently has the side effect of reducing the old model’s ability to make predictions. It is to remedy these problems that a bold new hypothesis is needed. But to be successful, the bold new hypothesis should be simpler than the modified version of the original model and, more crucially, must make testable predictions that are confirmed by observation. But even then, it is not a paradigm shift; just a verified bold new hypothesis.

Despite the nay-saying, Kuhn’s ideas did advance the understanding of the scientific method. In particular, they were a good antidote to the logical positivists, who wanted to eliminate the role of the model, or what Kuhn called the paradigm, altogether. Kuhn made the point that it is the framework that gives meaning to observations. Combined with Popper’s insights, Kuhn’s ideas paved the way for a fairly comprehensive understanding of the scientific method.

But back to the overused word paradigm: it would be nice if we could turn back the clock and restrict the term paradigm shift to those changes where the before and after are truly incommensurate, where there is no common ground to decide which is better. Or, if you like, the demarcation criterion for a paradigm shift is that the before and after are incommensurate[2]. That would rule out the change of sock color from being a paradigm shift. However, we cannot turn back the clock, so I will go back to my first suggestion that the word be banned.

To receive a notice of future posts follow me on Twitter: @musquod.

 


[1] This is known as the Duhem-Quine thesis.

[2] There are probably paradigm shifts, even in the restricted meaning of the word, if we go outside science. The French revolution could be considered a paradigm shift in the relation between the populace and the state.


Modern science has assumed many of the roles traditionally played by religion and, as a result, is often mistaken for just another religion; one among many. But the situation is rather more complicated and many of the claims that science is not a religion come across as a claim that science is The One True Religion. In the past, religion has supplied answers to the basic questions of how the universe originated, how people were created, what determines morality, and how humans relate to the rest of the universe. Science is slowly but surely replacing religion as the source of answers to these questions. The visible universe originated with the big bang, humans arose through evolution, morality arose through the evolution of a social ape and humans are a mostly irrelevant part of the larger universe. One may not agree with science’s answers but they exist and influence even those who do not explicitly believe them.

More importantly, through answering questions like these, religion has formed the basis for people’s worldview, their overall perspective from which they see and interpret the world. Religious beliefs and a person’s worldview were frequently so entangled that they were often viewed as one and the same thing. In the past this was probably true, but in this modern day and age, science presents an alternative to religion as the basis for a person’s worldview. Therefore, science is frequently seen as a competing religion, not just the basis of a competing worldview. Despite this, there is a distinct difference between science and religion, and it has profound implications for how they function.

The prime distinction was recognized at least as far back as Thomas Aquinas (1225 – 1274). The idea is this: Science is based on public information while religion is based on private information, information that not even the NSA can spy on. Anyone can, if they wait long enough, observe an apple fall as Sir Isaac Newton (1642–1727) did, but no one can know by independent observation what Saint Paul (c. 5 – c. 67) saw in the third heaven. Anyone sufficiently proficient in mathematics can repeat Albert Einstein’s (1879 – 1955) calculations but no one can independently check Joseph Smith’s (1805 – 1844) revelations that are the foundation of Mormonism, although additional private inspiration may, or may not, support them.  As a result of the public nature of the information on which science is founded, science tends to develop consensuses which only change when new information becomes available. In contrast, religion, being based on private information, tends to fragment when not constrained by the sword or at least the law. Just look at the number of Christian denominations and independent churches. While not as fragmented as Christianity, most major religions have had at least one schism. Even secularism, the none-of-the-above of religion, has its branches, one for example belonging to the new atheists.

The consensus-forcing nature of the scientific method and the public information on which it is based lead some to the conclusion that science is based on objective reality.  But in thirty years of wandering around a physics laboratory, I have never had the privilege of meeting Mr. Objective Reality—very opinionated physicists, yes, but Mr. Objective Reality, no.  Rather, science is based on two assumptions:

  1. Meaningful knowledge can be extracted from observation. While this may seem self-evident, it has been derided by various philosophers from Socrates on down.
  2. What happened in the past can be used to predict what will happen in the future. This is a sophisticated version of the Mount Saint Helens fallacy that had people refusing to leave the mountain before it erupted because it had not erupted in living memory.

 

Science and religion are, thus, both based on assumptions but differ in the public versus private nature of the information that drives their development. This difference in their underlying epistemology means that their competing claims cannot be systematically resolved; they are different paradigms.  Both can, separately or together, be used as a basis of a person’s worldview, and it is here that conflict arises. People react rather strongly when their worldview is challenged, and the competing epistemologies both claim to be the only firm foundation on which a worldview can be built.

To receive a notice of future posts follow me on Twitter: @musquod.

 


Simplicity plays a crucial, but frequently overlooked, role in the scientific method (see the posters in my previous post). Considering how complicated science can be, simplicity may seem to be far from a driving source in science. Is string theory really simple? If scientists need at least six, seven or more years of training past high school, how can we consider science to be anything but antithetical to simplicity?

Good questions, but simple is relative. Consider the standard model of particle physics. First, it is widely agreed upon what the standard model is. Second, there are many alternatives to the standard model that agree with the standard model where there is experimental data but disagree elsewhere. One can name many[1]: Little Higgs, Technicolor, Grand Unified Models (in many varieties), and Super Symmetric Grand Unified Models (also in many varieties). I have even attended a seminar where the speaker gave a general technique to generate extensions to the standard model that also have a dark matter candidate. So why do we prefer the standard model? It is not elegance. Very few people consider the Standard Model more elegant than its competitors. Indeed, elegance is one of the main motivations driving the generation of alternate models. The competitors also keep all the phenomenological success of the standard model. So, to repeat the question, why do we prefer the standard model to the competitors? Simplicity and only simplicity. All the pretenders have additional assumptions or ingredients that are not required by the current experimental data. At some point they may be required as more data is made available but not now.  Thus we go with the simplest model that describes the data.

This is true across all disciplines and over time. The elliptic orbits of Kepler (1571–1630) were simpler than the epicycles of Ptolemy (c. 90 – c. 168) or the epicyclets of Copernicus (1473–1543). We draw straight lines through the data rather than 29th-order polynomials. If the data has bumps and wiggles, we frequently assume they are experimental error, as in a randomly[2] chosen graph from one of my papers in which the theory lines do not go through all the data points. No one would take me seriously if I fit every single bump and wiggle. Simplicity is more important than religiously fitting each data point.

Going from the sublime to the ridiculous, consider Russell’s teapot.  Bertrand Russell (1872–1970) argued as follows: If I were to suggest that between the Earth and Mars there is a china teapot revolving about the sun in an elliptical orbit, nobody would be able to disprove my assertion provided I were careful to add that the teapot is too small to be revealed even by our most powerful telescopes. But if I were to go on to say that, since my assertion cannot be disproved, it is intolerable presumption on the part of human reason to doubt it, I should rightly be thought to be talking nonsense. But what feature of the scientific method rules out the orbiting teapot? Or invisible pink unicorns? Or any one of a thousand different mythical beings? Not observation! But they fail the simplicity test. Like the various extensions to the standard model, they are discounted because there are extra assumptions that are not required by the observational data.  This is otherwise known as Occam’s razor.

The argument for simplicity is rather straightforward. Models are judged by their ability to describe past observations and make correct predictions for future ones. As a matter of practical consideration, one should drop all features of a model that are not conducive to that end. While the next batch of data may force one to a more complicated model, there is no way to judge in advance which direction the complication will take. Hence we have all the extensions of the standard model waiting in the wings to see which, if any, the next batch of data will prefer – or rule out.

The crucial role of simplicity in choosing one model from among the many solves one of the enduring problems in the philosophy of science. Consider the following quote from Imre Lakatos (1922 – 1974), a leading philosopher of science from the last century: But, as many skeptics pointed out, rival theories are always indefinitely many and therefore the proving power of experiment vanishes.  One cannot learn from experience about the truth of any scientific theory, only at best about its falsehood: confirming instances have no epistemic value whatsoever (emphasis in the original). Note the premise of the argument: rival theories are always indefinitely many. While rival theories may be infinitely many, one, or at most a very few, are always chosen by the criterion of simplicity.  We have the one standard model of particle physics, not infinitely many, and his argument fails at the first step. Confirming instances, like finding the Higgs boson, do have epistemic value.


[1] This list is time dependent and may be out of date.

[2] Chosen randomly from one of my papers.


This essay makes a point that is only implicit in most of my other essays–namely that scientists are arro—oops that is for another post. The point here is that science is defined not by how it goes about acquiring knowledge but rather by how it defines knowledge. The underlying claim is that the definitions of knowledge as used, for example, in philosophy are not useful and that science has the one definition that has so far proven fruitful. No, not arrogant at all.

The classical concept of knowledge was described by Plato (428/427 BCE – 348/347 BCE) as having to meet three criteria: it must be justified, true, and believed. That description does seem reasonable. After all, can something be considered knowledge if it is false? Similarly, would we consider a correct guess knowledge? Guess right three times in a row and you are considered an expert –but do you have knowledge? Believed, I have more trouble with that: believed by whom? Certainly, something that no one believes is not knowledge even if true and justified.

The above criteria for knowledge seem like common sense, and the ancient Greek philosophers had a real knack for encapsulating the common sense view of the world in their philosophy. But common sense is frequently wrong, so let us look at those criteria with a more jaundiced eye. Let us start with the first criterion: it must be justified. How do we justify a belief? From the sophists of ancient Greece, to the post-modernists and the anything-goes hippies of the 1960s, and all their ilk in between, it has been demonstrated that what can be known for certain is vanishingly small.

René Descartes (1596 – 1650) argues in the beginning of his Discourse on the Method that all knowledge is subject to doubt: a process called methodological skepticism. To a large extent, he is correct. Then, to get to something that is certain, he came up with his famous statement: I think, therefore I am.  For a long time this seemed to me like a sure argument. Hence, “I exist” seemed an incontrovertible fact. I then made the mistake of reading Nietzsche[1] (1844—1900). He criticizes the argument as presupposing the existence of “I” and “thinking” among other things. It has also been criticized by a number of other philosophers including Bertrand Russell (1872 – 1970). To quote the latter: Some care is needed in using Descartes’ argument. “I think, therefore I am” says rather more than is strictly certain. It might seem as though we are quite sure of being the same person to-day as we were yesterday, and this is no doubt true in some sense. But the real Self is as hard to arrive at as the real table, and does not seem to have that absolute, convincing certainty that belongs to particular experiences. Oh well, back to the drawing board.

The criteria for knowledge, as postulated by Plato, lead to knowledge either not existing or being of the most trivial kind. No belief can be absolutely justified and there is no way to tell for certain if any proposed truth is an incontrovertible fact.  So where are we? If there are no incontrovertible facts we must deal with uncertainty. In science we make a virtue of this necessity. We start with observations, but unlike the logical positivists we do not assume they are reality or correspond to any ultimate reality. Thus following Immanuel Kant (1724 – 1804) we distinguish the thing-in-itself from its appearances. All we have access to are the appearances. The thing-in-itself is forever hidden.

But all is not lost. We make models to describe past observations. This is relatively easy to do. We then test our models by making testable predictions for future observations. Models are judged by their track record in making correct predictions–the more striking the prediction the better. The standard model of particle physics’ prediction of the Higgs[2] boson is a prime example of science at its best. The standard model did not become a fact when the Higgs was discovered; rather, its standing as a useful model was enhanced.  It is the reliance on the track record of successful predictions that is the demarcation criterion for science and, I would suggest, the hallmark for defining knowledge. The scientific models and the observations they are based on are our only true knowledge. However, to mistake them for descriptions of the ultimate reality or the thing-in-itself would be folly, not knowledge.

 



[1] Reading Nietzsche is always a mistake. He was a madman.

[2] To be buzzword compliant, I mention the Higgs boson.


A colleague of mine is an avid fan of the New York Yankees baseball team. At a meeting a few years ago, when the Yankees had finished first in the American league regular season, I pointed out to him that the result was not statistically significant. He did not take kindly to the suggestion. He actually got rather angry! A person, who in his professional life would scorn anyone for publishing a one sigma effect, was crowing about a one sigma effect for his favorite sports team. But then most people do ignore the effect of statistical fluctuations in sports.

In sports, there is a random element in who wins or loses. The best team does not always win. In baseball, where two teams will frequently play each other four games in a row over three or four days, it is relatively uncommon for one team to win all four games. Similarly, a team at the top of the standings does not always beat a team lower down.  As they say in sports: on any given day, anything can happen. Indeed it can, and frequently does.[1]

Let us return to American baseball. Each team plays 162 games during the regular season. If the results were purely statistical, with each team having a 50% chance of winning any given game, then we would expect a normal distribution of the results with a spread of sigma = 6.3 games. The actual spread, or standard deviation, for the last few seasons is closer to 11 games. Thus slightly more than half the spread in games won and lost is due to statistical fluctuations. Moving from the collective spread to the performance of individual teams, if a team wins the regular season by six games, or one sigma, as with the Yankees above, there is a one in three chance that it is purely a statistical fluke. For a two-sigma effect, a team would have to win by twelve games, and by eighteen games for a three-sigma effect. The latter would give over 99% confidence that the winner won justly, not due to a statistical fluctuation. When was the last time any team won by eighteen games? For particle physics we require an even higher standard–a five-sigma effect to claim a discovery. Thus a team would have to lead by 30 games to meet this criterion. Now, my colleague from the first paragraph suggested that by including more seasons the results become more significant.  He was right, of course. If the Yankees finished ahead by six games for thirty-four seasons in a row, that would be a five-sigma effect. From this we can also see why sports results are never published in Physical Review with its five-sigma threshold for a discovery–there has yet to be such a discovery. To make things worse for New York Yankees fans, they have already lost their chance for an undefeated season this year.
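
The numbers quoted above follow from the binomial standard deviation for coin-flip games; here is a minimal check (Python, using only the figures in this paragraph):

```python
import math

games = 162   # regular-season games per team
p = 0.5       # treat every game as a coin flip

# Spread in wins expected from pure chance: sqrt(N p (1 - p)).
sigma = math.sqrt(games * p * (1 - p))
print(f"coin-flip spread: {sigma:.2f} games")   # 6.36, i.e. the ~6.3 games quoted above

# Leads of 6, 12, 18 and 30 games correspond to roughly 1, 2, 3 and 5 sigma.
for lead in (6, 12, 18, 30):
    print(f"lead of {lead:2d} games is about {lead / sigma:.1f} sigma")
```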

In other sports the statistics are even worse. In the National Hockey League (NHL), teams play eighty-two games and the spread in wins and losses expected from pure chance is sigma = 4.5. The actual spread for last year was 6.3 games. The signal due to the difference in the individual teams’ ability is all in the remaining 1.8 games. Perhaps there is more parity in the NHL than in Major League Baseball. Or perhaps there is not enough statistics to tell. Speaking of not telling: last year the Vancouver Canucks finished with the best record for the regular season, two games ahead of the New York Rangers and three games ahead of the St. Louis Blues. Only a fool or a Vancouver Canucks fan would think this ordering was significant and not just a statistical fluctuation. In the National Football League last year, 14 of the 32 teams were within two sigma of the top. Again, much of the spread was statistical. It was purely a statistical fluke that the New England Patriots did not win the Super Bowl as they should have.

Playoffs are even worse (this is why the Canucks have never won a Stanley Cup). Consider a best-of-seven series. Even if the two teams are equal, we would expect the series to go only four games in one of every eight (two cubed[2]) series.  When a series goes the full seven games, one might as well flip a coin. Rare events, like one team winning the first three games and losing the last four, are expected to happen once in every sixty-four series, and considering the number of series being played it is not surprising we see them occasionally.
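
The one-in-eight and one-in-sixty-four figures come from simple coin-flip counting; a short sketch (Python, assuming evenly matched teams and independent games):

```python
from math import comb

p = 0.5  # each game is assumed to be an independent 50/50 event

# Probability a best-of-seven series ends after exactly g games:
# either team takes 3 of the first g-1 games and then wins game g.
for g in range(4, 8):
    prob = 2 * comb(g - 1, 3) * p ** g
    print(f"series ends in {g} games: {prob:.4f}")
# 4 games: 0.1250 (one in eight), 5: 0.2500, 6: 0.3125, 7: 0.3125

# One team wins the first three games and then drops four straight:
print("up 3-0, then swept in reverse:", 2 * p ** 7)   # 1/64, once in every 64 series
```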

Probably the worst example of playoff madness is the American college basketball tournament called, appropriately enough, March Madness. Starting with 64 teams, or 68 depending on how you count, the playoffs proceed through a single-elimination tournament. With over 70 games, it is not surprising that strange things happen. One of the strangest would be that the best team wins.  To win the title, the best team would have to win six straight games. If the best team has on average a 70% chance of winning each game, it would have only a 12% chance of winning the tournament. Perhaps it would be better if they just voted on who is best.
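
The 12% figure is just six straight wins at 70% each; a one-line check (Python, treating the games as independent and the 70% as an assumed per-game probability):

```python
p_game = 0.70   # assumed chance the best team wins any single game
rounds = 6      # wins needed to take a single-elimination field of 64

p_title = p_game ** rounds
print(f"chance the best team wins the title: {p_title:.1%}")  # about 11.8%
```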

But you say they would never decide a national championship based on a vote. Consider American college football. Now that is a multi-million dollar enterprise! Nobel Laureates do not get paid as much as US college football coaches. They do not generate as much money either. So what is more important to American universities–sports or science?

In the past, the US college national football champions were decided by a vote of some combination of sports writers, coaches and computers. Now that combination only decides who will play in the championship game. The national champion is ultimately decided by who wins that one final game. Is that better than the old system? More exciting, but as they say: on any given day anything can happen. Besides, sports is more about deciding winners and losers than about who is best.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] With the expected frequency of course.

[2] Not two to the fourth power because one of the two teams has to win the first game and that team has to win the next three games.
