Posts Tagged ‘Philosophy of science’

If there were only one credible interpretation of quantum mechanics, then we could take it as a reliable representation of reality. But when there are many, it undermines the credibility of all of them. The plethora of interpretations of quantum mechanics lends credence to the thesis that science tells us nothing about the ultimate nature of reality.

Quantum mechanics, in its essence, is a mathematical formalism with an algorithm for how to connect the formalism to observation or experiments. When relativistic extensions are included, it provides the framework for all of physics[1] and the underlying foundation for chemistry. For macroscopic objects (things like footballs), it reduces to classical mechanics through some rather subtle mathematics, but it still provides the underlying framework even there. Despite its empirical success, quantum mechanics is not consistent with our common sense ideas of how the world should work. It is inherently probabilistic despite the best efforts of motivated and ingenious people to make it deterministic. It has superposition and interference of the different states of particles, something not seen for macroscopic objects. If it is weird to us, just imagine how weird it must have seemed to the people who invented it. They were trained in the classical system until it was second nature and then nature itself said, “Fooled you, that is not how things are.” Some, like Albert Einstein (1879 – 1955), resisted it to their dying days.
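The interference mentioned above can be made concrete with a toy two-state calculation (an illustrative sketch of my own, not part of the original post). Under the Born rule, probabilities come from squaring amplitudes, so an equal superposition can reach a detector with certainty even though each component alone would do so only half the time:

```python
import numpy as np

# Two orthogonal basis states and their equal superposition.
psi1 = np.array([1.0, 0.0])                # state |0>
psi2 = np.array([0.0, 1.0])                # state |1>
superpos = (psi1 + psi2) / np.sqrt(2)      # (|0> + |1>)/sqrt(2)

# Measure along the diagonal direction (|0> + |1>)/sqrt(2).
direction = np.array([1.0, 1.0]) / np.sqrt(2)

# Born rule: probability = |overlap|^2.
p1 = abs(np.dot(direction, psi1)) ** 2            # ≈ 0.5
p2 = abs(np.dot(direction, psi2)) ** 2            # ≈ 0.5
p_super = abs(np.dot(direction, superpos)) ** 2   # ≈ 1.0, not 0.5

print(p1, p2, p_super)
```

A classical 50/50 mixture of the two states would give probability 0.5; the superposition gives 1 because the amplitudes add before being squared. That is the interference macroscopic footballs never show.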

The developers of quantum mechanics, in their efforts to come to grips with quantum weirdness, invented interpretations that tried to understand quantum mechanics in a way that was less disturbing to common sense and their classical training. In my classes in quantum mechanics, there were hand-waving discussions of the Copenhagen interpretation, but I could never see what they added to the mathematical formalism. I am not convinced my lecturers could either, although the term Copenhagen interpretation was uttered with much reverence. Then I heard a lecture by Sir Rudolf Peierls[2] (1907 – 1995) claiming that the conscious mind caused the collapse of the wave function. That was an interesting take on quantum mechanics, which was also espoused by John von Neumann (1903 – 1957) and Eugene Wigner (1902 – 1995) for part of their careers.

So does consciousness play a crucial role in quantum mechanics? Not according to Hugh Everett III (1930 – 1982), who invented the many-worlds interpretation. In this interpretation, the wave function corresponds to physical reality, and each time a measurement is made the universe splits into many different universes corresponding to each possible outcome of the quantum measurement process. Physicists are nothing if not imaginative. This interpretation also offers the promise of eternal life. The claim is that among all the possible quantum universes there must be one in which you will live forever. Eventually that will be the only one you are aware of. But as with the Greek legend of Tithonus, there is no promise of eternal youth. The results may not be pretty.

If you do not like either of those interpretations of quantum mechanics, well, have I got an interpretation for you. It goes under the title of the relational interpretation. Here the wave function is simply the information a given observer has about the quantum system and may be different for different observers; nothing mystical here and no multiplicity of worlds. Then there is the theological interpretation. This I first heard from Stephen Hawking (b. 1942), although I doubt he believed it. In this interpretation, God uses quantum indeterminacy to hide his direct involvement in the unfolding of the universe. He simply manipulates the results of quantum measurements to suit his own goals. Well, He does work in mysterious ways after all.

I will not bore you with all possible interpretations and their permutations. Life is too short for that, but we are still left with the overarching question: which interpretation is the one true interpretation? What is the nature of reality implied by quantum mechanics? Does the universe split into many? Does consciousness play a central role? Is the wave function simply information? Does God hide in quantum indeterminacy?

Experiment cannot sort this out since all the interpretations pretty much agree on the results of experiments (even this is subject to debate), but science has one other criterion: parsimony. We eliminate unnecessary assumptions. When applied to interpretations of quantum mechanics, parsimony seems to favour the relational interpretation. But, in fact, parsimony, carefully applied, favours something else: the instrumentalist approach. That is: don't worry about the interpretations, just shut up and calculate. All the interpretations have additional assumptions not required by observations.

But what about the ultimate nature of reality? There is no theorem that says reality, itself, must be simple. So quantum mechanics implies very little about the ultimate nature of reality. I guess we will have to leave that discussion to the philosophers and theologians. More power to them.

To receive a notice of future posts follow me on Twitter: @musquod.

[1] Although quantum gravity is still a big problem.

[2] A major player in the development of quantum many body theory and nuclear physics.


In the philosophy of science, realism is used in two related ways. The first way is that the interior constructs of a model refer to something that actually exists in nature; for example, the quantum mechanical wave function corresponds to a physical entity. The second way is that properties of a system exist even when they are not being measured; the ball is in the box even when no one can see it (unless it is a relative of Schrödinger's cat). The two concepts are related since one can think of the ball's presence or absence as part of one's model for how balls (or cats) behave.

Despite our and even young children’s belief in the continued existence of the ball and that cats are either alive or dead, there are reasons for doubting realism. The three main ones are the history of physics, the role of canonical (unitary) transformations in classical (quantum) mechanics, and Bell’s inequality. The second and third of these may seem rather obtuse, but bear with me.

Let’s start with the first, the history of physics. Here, we follow in the footsteps of Thomas Kuhn (1922–1996). He was probably the first philosopher of science to actually look at the history of science to understand how science works. One of his conclusions was that the interior constructs of models (paradigms in his terminology) do not correspond (refer, in the philosophic jargon) to anything in reality. It is easy to see why. One can think of a sequence of models in the history of physics. Here we consider the Ptolemaic system, Newtonian mechanics, quantum mechanics, relativistic field theory (a combination of quantum mechanics and relativity) and finally quantum gravity. The Ptolemaic system ruled for a millennium and a half, from the second to the seventeenth century. By any standard, the Ptolemaic model was a successful scientific model since it made correct predictions for the location of the planets in the night sky. Eventually, however, Newton’s dynamical model caused its demise. At the Ptolemaic model’s core were the concepts of geocentrism and uniform circular motion. People believed these two aspects of the model corresponded to reality. But Newton changed all that. Uniform circular motion and geocentrism were out and instantaneous gravitational attraction was in. Central to the Newtonian system were a fixed Euclidean space-time geometry and particle trajectories. The first of these was rendered obsolete by relativity and the second by quantum mechanics; at least the idea of a fixed number of particles survived–until quantum field theory. And if string theory is correct, all those models have the number of dimensions wrong. The internal aspects of well-accepted and successful models disappear when new models replace the old. There are other examples. In the history of physics, the caloric theory of heat was successful at one time, but caloric vanished when the kinetic theory of heat took over. And on it goes. What is regarded as central to our understanding of how the world works goes poof when new models replace old.

On to the second reason for doubting realism–the role of transformations: canonical and unitary. In both classical and quantum mechanics there are mathematical transformations that change the internals of the calculations[1] but leave not only the observables but also the structure of the calculations invariant. For example, in classical mechanics we can use a canonical transformation to change coordinates without changing the physics. We can express the location of an object using the earth or the sun as a reference point. Now this is quite fun; the choice of coordinates is quite arbitrary. So you want a geocentric system (like Galileo’s opponents)? No problem. We write the equations of motion in that frame and everyone is happy. But you say the earth really does go around the sun. That is equivalent to the statement: planetary motion is more simply described in the heliocentric frame. We can go on from there and use coordinates as weird as you like to match religious or personal preconceptions. In quantum mechanics the transformations have even more surprising implications. You would think something like the correlations between particles would be observable and a part of reality. But that is not the case. The correlations depend on how you do your calculation and can be changed at will with unitary transformations. It is thus with a lot of things that you might think are parts of reality but are, as we say, model dependent.
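The claim that unitary transformations change the internals while leaving observables alone can be checked numerically (an illustrative sketch of my own, not from the post): rotate the state vector and the observable together with a random unitary matrix, and the expectation value, the thing an experiment can access, is untouched even though every component of the state has changed.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random normalized state and a random Hermitian observable (3 levels).
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)
m = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (m + m.conj().T) / 2                     # Hermitian, so eigenvalues are real

# A random unitary matrix from a QR decomposition.
U, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

# Transform the state and the observable together.
psi_new = U @ psi
A_new = U @ A @ U.conj().T

# The components (the "internals") change; the expectation value does not.
before = np.vdot(psi, A @ psi).real
after = np.vdot(psi_new, A_new @ psi_new).real
print(np.isclose(before, after))   # True
```

Every entry of `psi_new` differs from `psi`, yet nothing measurable has moved; the description, not the physics, has changed.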

Finally we come to Bell’s inequality as the third reason to doubt realism. The idea here goes back to what is known as the Einstein-Podolsky-Rosen paradox (published in 1935). By looking at the correlations of coupled particles, Einstein, Podolsky, and Rosen claimed that quantum mechanics is incomplete. John Bell (1928 – 1990), building on their work, developed a set of inequalities that allowed a precise experimental test of the Einstein-Podolsky-Rosen claim. The experimental test has been performed and the quantum mechanical prediction confirmed. This ruled out all local realistic models, that is, local models where a system has definite values of a property even when that property has not been measured. This is using realism in the second sense defined above. There are claims, not universally accepted, that extensions of Bell’s inequalities rule out all realist models, local or non-local.
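The quantum prediction that violates Bell's inequality (in its CHSH form) is easy to reproduce (an illustrative sketch; it assumes the standard textbook result that the spin correlation for a singlet pair at analyzer angles a and b is E = -cos(a - b)):

```python
import numpy as np

def correlation(a, b):
    """Quantum correlation of a spin-singlet pair for analyzer angles a, b."""
    return -np.cos(a - b)

# Standard CHSH angle choices (radians).
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

# Any local realistic model obeys |S| <= 2 (the CHSH inequality).
S = (correlation(a1, b1) - correlation(a1, b2)
     + correlation(a2, b1) + correlation(a2, b2))
print(abs(S))   # 2*sqrt(2) ≈ 2.83 > 2, the quantum violation
```

The quantum value 2√2 exceeds the bound of 2 that any local realistic assignment of definite pre-existing values must satisfy, which is exactly what the experiments confirmed.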

So where does this leave us? Pretty much with the concept of realism in science in tatters. The internals of models change in unpredictable ways when science advances. Even within a given model, the internals can be changed with mathematical tricks, and for some definitions of realism, experiment has largely ruled it out. Thus we are left with our models, which describe aspects of reality but should never be mistaken for reality itself. Immanuel Kant (1724 – 1804), the great German philosopher, would not be surprised[2].


[1] For the relation between the two types of transformations see: N.L. Balazs and B.K. Jennings, Unitary transformations, Weyl’s association and the role of canonical transformations, Physica 121A (1983) 576–586.

[2] He made the distinction between the thing in itself and observations of it.


Yes, once!

Paradigm and paradigm shift are so overused and misused that the world would benefit if they were simply banned. Originally Thomas Kuhn (1922–1996), in his 1962 book The Structure of Scientific Revolutions, used the word paradigm to refer to the set of practices that define a scientific discipline at any particular period of time. A paradigm shift is when the entire structure of a field changes, not when someone simply uses a different mathematical formulation. Perhaps it is just grandiosity, everyone thinking their latest idea is earth-shaking (or paradigm-shifting), but the idea has been so debased that almost any change is called a paradigm shift, down to the level of changing the color of one’s socks.

The archetypal example, and I would suggest the only real example in the natural and physical sciences, is the paradigm shift from Aristotelian to Newtonian physics. This was not just a change in physics from “perfect motion is circular” to “an object either is at rest or moves at a constant velocity, unless acted upon by an external force” but a change in how knowledge is defined and acquired. There is more here than a different description of motion; the very concept of what is important has changed. In Newtonian physics there is no place for perfect motion, only rules to describe how objects actually behave. Newtonian physics was driven by observation. Newton, himself, went further and claimed his results were derived from observation. While Aristotelian physics is broadly consistent with observation, it is driven more by abstract concepts like perfection. Aristotle (384 BCE – 322 BCE) would most likely have considered Galileo Galilei’s (1564 – 1642) careful experiments beneath him. Socrates (c. 469 BCE – 399 BCE) certainly would have. Their epistemology was not based on careful observation.

While there have been major changes in the physical sciences since Newton, they do not reach the threshold needed to call them paradigm shifts since they are all within the paradigm defined by the scientific method. I would suggest Kuhn was misled by the Aristotle-Newton example where, indeed, the two approaches are incommensurate: what constitutes a reasonable explanation is simply different for the two men. But would the same be true of Michael Faraday (1791 – 1867) and Niels Bohr (1885–1962), who were chronologically on opposite sides of the quantum mechanics cataclysm? One could easily imagine Faraday, transported in time, having a fruitful discussion with Bohr. While the quantum revolution was indeed cataclysmic, changing mankind’s basic understanding of how the universe worked, it was based on the same concept of knowledge as Newtonian physics. You make models based on observations and validate them through testable predictions. The pre-cataclysmic scientists understood the need for change due to failed predictions, even if, like Albert Einstein (1879 – 1955) or Erwin Schrödinger (1887 – 1961), they found quantum mechanics repugnant. The phenomenology was too powerful to ignore.

Sir Karl Popper (1902 – 1994) provided another ingredient missed by Kuhn: the idea that science advances by the bold new hypothesis, not by deducing models from observation. The Bohr model of the atom was a bold hypothesis, not a paradigm shift; a bold hypothesis refined by other scientists and tested in the crucible of careful observation. I would also suggest that Kuhn did not understand the role of simplicity in making scientific models unique. It is true that one can always make an old model agree with past observations by making it more complex[1]. This process frequently has the side effect of reducing the old model’s ability to make predictions. It is to remedy these problems that a bold new hypothesis is needed. But to be successful, the bold new hypothesis should be simpler than the modified version of the original model and, more crucially, must make testable predictions that are confirmed by observation. But even then, it is not a paradigm shift; just a verified bold new hypothesis.

Despite the nay-saying, Kuhn’s ideas did advance the understanding of the scientific method. In particular, they were a good antidote to the logical positivists who wanted to eliminate the role of the model, or what Kuhn called the paradigm, altogether. Kuhn made the point that it is the framework that gives meaning to observations. Combined with Popper’s insights, Kuhn’s ideas paved the way for a fairly comprehensive understanding of the scientific method.

But back to the overused word paradigm: it would be nice if we could turn back the clock and restrict the term paradigm shift to those changes where the before and after are truly incommensurate, where there is no common ground to decide which is better. Or, if you like, the demarcation criterion for a paradigm shift is that the before and after are incommensurate[2]. That would rule out the change of sock color from being a paradigm shift. However, we cannot turn back the clock, so I will go back to my first suggestion that the word be banned.



[1] This is known as the Duhem-Quine thesis.

[2] There are probably paradigm shifts, even in the restricted meaning of the word, if we go outside science. The French revolution could be considered a paradigm shift in the relation between the populace and the state.


Modern science has assumed many of the roles traditionally played by religion and, as a result, is often mistaken for just another religion; one among many. But the situation is rather more complicated and many of the claims that science is not a religion come across as a claim that science is The One True Religion. In the past, religion has supplied answers to the basic questions of how the universe originated, how people were created, what determines morality, and how humans relate to the rest of the universe. Science is slowly but surely replacing religion as the source of answers to these questions. The visible universe originated with the big bang, humans arose through evolution, morality arose through the evolution of a social ape and humans are a mostly irrelevant part of the larger universe. One may not agree with science’s answers but they exist and influence even those who do not explicitly believe them.

More importantly, through answering questions like these, religion has formed the basis for people’s worldview, their overall perspective from which they see and interpret the world. Religious beliefs and a person’s worldview were frequently so entangled that they are often viewed as one and the same thing. In the past this was probably true, but in this modern day and age, science presents an alternative to religion as the basis for a person’s worldview. Therefore science is frequently seen as a competing religion, not just the basis of a competing worldview. Despite this, there is a distinct difference between science and religion, and it has profound implications for how they function.

The prime distinction was recognized at least as far back as Thomas Aquinas (1225 – 1274). The idea is this: Science is based on public information while religion is based on private information, information that not even the NSA can spy on. Anyone can, if they wait long enough, observe an apple fall as Sir Isaac Newton (1642–1727) did, but no one can know by independent observation what Saint Paul (c. 5 – c. 67) saw in the third heaven. Anyone sufficiently proficient in mathematics can repeat Albert Einstein’s (1879 – 1955) calculations but no one can independently check Joseph Smith’s (1805 – 1844) revelations that are the foundation of Mormonism, although additional private inspiration may, or may not, support them.  As a result of the public nature of the information on which science is founded, science tends to develop consensuses which only change when new information becomes available. In contrast, religion, being based on private information, tends to fragment when not constrained by the sword or at least the law. Just look at the number of Christian denominations and independent churches. While not as fragmented as Christianity, most major religions have had at least one schism. Even secularism, the none-of-the-above of religion, has its branches, one for example belonging to the new atheists.

The consensus-forcing nature of the scientific method and the public information on which it is based leads some to the conclusion that science is based on objective reality. But in thirty years of wandering around a physics laboratory, I have never had the privilege of meeting Mr. Objective Reality—very opinionated physicists, yes, but Mr. Objective Reality, no. Rather, science is based on two assumptions:

  1. Meaningful knowledge can be extracted from observation. While this may seem self-evident, it has been derided by various philosophers from Socrates on down.
  2. What happened in the past can be used to predict what will happen in the future. This is a sophisticated version of the Mount Saint Helens fallacy that had people refusing to leave that mountain before it erupted because it had not erupted in living memory.


Science and religion are, thus, both based on assumptions but differ in the public versus private nature of the information that drives their development. This difference in their underlying epistemology means that their competing claims cannot be systematically resolved; they are different paradigms.  Both can, separately or together, be used as a basis of a person’s worldview and it is here that conflict arises. People react rather strongly when their worldview is challenged and the competing epistemologies both claim to be the only firm basis on which a worldview can be based.




Simplicity plays a crucial, but frequently overlooked, role in the scientific method (see the posters in my previous post). Considering how complicated science can be, simplicity may seem to be far from a driving source in science. Is string theory really simple? If scientists need at least six, seven or more years of training past high school, how can we consider science to be anything but antithetical to simplicity?

Good questions, but simple is relative. Consider the standard model of particle physics. First, it is widely agreed upon what the standard model is. Second, there are many alternatives to the standard model that agree with it where there is experimental data but disagree elsewhere. One can name many[1]: Little Higgs, Technicolor, Grand Unified Models (in many varieties), and Supersymmetric Grand Unified Models (also in many varieties). I have even attended a seminar where the speaker gave a general technique to generate extensions of the standard model that also have a dark matter candidate. So why do we prefer the standard model? It is not elegance. Very few people consider the standard model more elegant than its competitors. Indeed, elegance is one of the main motivations driving the generation of alternate models. The competitors also keep all the phenomenological success of the standard model. So, to repeat the question, why do we prefer the standard model to its competitors? Simplicity and only simplicity. All the pretenders have additional assumptions or ingredients that are not required by the current experimental data. At some point they may be required as more data becomes available, but not now. Thus we go with the simplest model that describes the data.

This is true across all disciplines and over time. The elliptic orbits of Kepler (1571–1630) were simpler than the epicycles of Ptolemy (c. 90 – c. 168) or the epicyclets of Copernicus (1473–1543). There it is. We draw straight lines through the data rather than 29th-order polynomials. If the data has bumps and wiggles, we frequently assume they are experimental error, as in the randomly[2] chosen graph to the left where the theory lines do not go through all the data points. No one would take me seriously if I fit every single bump and wiggle. Simplicity is more important than religiously fitting each data point.
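The straight-line-versus-29th-order-polynomial point can be illustrated with synthetic data (a sketch of my own; a 15th-order polynomial on 16 points stands in for the 29th-order fit). The wiggly fit passes through every noisy point but pays for it when judged against the underlying trend:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy data drawn from a true straight line y = 2x + 1.
x = np.linspace(0, 1, 16)
y = 2 * x + 1 + rng.normal(scale=0.1, size=x.size)

# A simple fit and a wiggly high-order fit to the same data.
line = np.polynomial.Polynomial.fit(x, y, deg=1)
wiggly = np.polynomial.Polynomial.fit(x, y, deg=15)

# Judge both fits against the true line on a dense grid.
x_new = np.linspace(0, 1, 200)
y_true = 2 * x_new + 1
err_line = np.max(np.abs(line(x_new) - y_true))
err_wiggly = np.max(np.abs(wiggly(x_new) - y_true))
print(err_line < err_wiggly)   # True: the simple model tracks reality better
```

The high-order polynomial fits each bump and wiggle religiously, and between the data points it oscillates wildly; the straight line, with its unnecessary assumptions dropped, generalizes better.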

Going from the sublime to the ridiculous, consider Russell’s teapot. Bertrand Russell (1872–1970) argued as follows: “If I were to suggest that between the Earth and Mars there is a china teapot revolving about the sun in an elliptical orbit, nobody would be able to disprove my assertion provided I were careful to add that the teapot is too small to be revealed even by our most powerful telescopes. But if I were to go on to say that, since my assertion cannot be disproved, it is intolerable presumption on the part of human reason to doubt it, I should rightly be thought to be talking nonsense.” But what feature of the scientific method rules out the orbiting teapot? Or invisible pink unicorns? Or any one of a thousand different mythical beings? Not observation! But they fail the simplicity test. Like the various extensions to the standard model, they are discounted because there are extra assumptions that are not required by the observational data. This is otherwise known as Occam’s razor.

The argument for simplicity is rather straightforward. Models are judged by their ability to describe past observations and make correct predictions for future ones. As a matter of practical consideration, one should drop all features of a model that are not conducive to that end. While the next batch of data may force one to a more complicated model, there is no way to judge in advance which direction the complication will take. Hence we have all the extensions of the standard model waiting in the wings to see which, if any, the next batch of data will prefer – or rule out.

The crucial role of simplicity in choosing one model from among the many solves one of the enduring problems in the philosophy of science. Consider the following quote from Imre Lakatos (1922 – 1974), a leading philosopher of science from the last century: “But, as many skeptics pointed out, rival theories are always indefinitely many and therefore the proving power of experiment vanishes. One cannot learn from experience about the truth of any scientific theory, only at best about its falsehood: confirming instances have no epistemic value whatsoever” (emphasis in the original). Note the premise of the argument: rival theories are always indefinitely many. While rival theories may be infinitely many, one or at most a very few are always chosen by the criterion of simplicity. We have the one standard model of particle physics, not infinitely many, and his argument fails at the first step. Confirming instances, like finding the Higgs boson, do have epistemic value.

[1] This list is time dependent and may be out of date.

[2] Chosen randomly from one of my papers.


This essay makes a point that is only implicit in most of my other essays–namely that scientists are arro—oops that is for another post. The point here is that science is defined not by how it goes about acquiring knowledge but rather by how it defines knowledge. The underlying claim is that the definitions of knowledge as used, for example, in philosophy are not useful and that science has the one definition that has so far proven fruitful. No, not arrogant at all.

The classical concept of knowledge was described by Plato (428/427 BCE – 348/347 BCE) as having to meet three criteria: it must be justified, true, and believed. That description does seem reasonable. After all, can something be considered knowledge if it is false? Similarly, would we consider a correct guess knowledge? Guess right three times in a row and you are considered an expert – but do you have knowledge? Believed, I have more trouble with that: believed by whom? Certainly, something that no one believes is not knowledge even if true and justified.

The above criteria for knowledge seem like common sense, and the ancient Greek philosophers had a real knack for encapsulating the common sense view of the world in their philosophy. But common sense is frequently wrong, so let us look at those criteria with a more jaundiced eye. Let us start with the first criterion: it must be justified. How do we justify a belief? From the sophists of ancient Greece, to the post-modernists and the anything-goes hippies of the 1960s, and all their ilk in between, it has been demonstrated that what can be known for certain is vanishingly small.

René Descartes (1596 – 1650) argues in the beginning of his Discourse on the Method that all knowledge is subject to doubt: a process called methodological skepticism. To a large extent, he is correct. Then, to get to something that is certain, he came up with his famous statement: I think, therefore I am. For a long time this seemed to me like a sure argument. Hence, “I exist” seemed an incontrovertible fact. I then made the mistake of reading Nietzsche[1] (1844 – 1900). He criticizes the argument as presupposing the existence of “I” and “thinking” among other things. It has also been criticized by a number of other philosophers, including Bertrand Russell (1872 – 1970). To quote the latter: “Some care is needed in using Descartes’ argument. ‘I think, therefore I am’ says rather more than is strictly certain. It might seem as though we are quite sure of being the same person to-day as we were yesterday, and this is no doubt true in some sense. But the real Self is as hard to arrive at as the real table, and does not seem to have that absolute, convincing certainty that belongs to particular experiences.” Oh well, back to the drawing board.

The criteria for knowledge, as postulated by Plato, lead to knowledge either not existing or being of the most trivial kind. No belief can be absolutely justified and there is no way to tell for certain if any proposed truth is an incontrovertible fact.  So where are we? If there are no incontrovertible facts we must deal with uncertainty. In science we make a virtue of this necessity. We start with observations, but unlike the logical positivists we do not assume they are reality or correspond to any ultimate reality. Thus following Immanuel Kant (1724 – 1804) we distinguish the thing-in-itself from its appearances. All we have access to are the appearances. The thing-in-itself is forever hidden.

But all is not lost. We make models to describe past observations. This is relatively easy to do. We then test our models by making testable predictions for future observations. Models are judged by their track record in making correct predictions–the more striking the prediction the better. The standard model of particle physics prediction of the Higgs[2] boson is a prime example of science at its best. The standard model did not become a fact when the Higgs was discovered; rather, its standing as a useful model was enhanced. It is the reliance on the track record of successful predictions that is the demarcation criterion for science and, I would suggest, the hallmark for defining knowledge. The scientific models and the observations they are based on are our only true knowledge. However, to mistake them for descriptions of the ultimate reality or the thing-in-itself would be folly, not knowledge.


[1] Reading Nietzsche is always a mistake. He was a madman.

[2] To be buzzword compliant, I mention the Higgs boson.


A colleague of mine is an avid fan of the New York Yankees baseball team. At a meeting a few years ago, when the Yankees had finished first in the American League regular season, I pointed out to him that the result was not statistically significant. He did not take kindly to the suggestion. He actually got rather angry! A person who in his professional life would scorn anyone for publishing a one-sigma effect was crowing about a one-sigma effect for his favorite sports team. But then most people do ignore the effect of statistical fluctuations in sports.

In sports, there is a random element in who wins or loses. The best team does not always win. In baseball, where two teams frequently play each other four games in a row over three or four days, it is relatively uncommon for one team to win all four games. Similarly, a team at the top of the standings does not always beat a team lower down. As they say in sports: on any given day, anything can happen. Indeed it can, and frequently does.[1]

Let us return to American baseball. Each team plays 162 games during the regular season. If the results were purely statistical, with each team having a 50% chance of winning any given game, we would expect a normal distribution of results with a spread of sigma = 6.3 games. The actual spread, or standard deviation, for the last few seasons is closer to 11 games. Thus slightly more than half the spread in games won and lost is due to statistical fluctuations. Moving from the collective spread to the performance of individual teams: if a team wins the regular season by six games, or one sigma, as with the Yankees above, there is a one in three chance that it is purely a statistical fluke. For a two-sigma effect, a team would have to win by twelve games, or by eighteen games for a three-sigma effect. The latter would give over 99% confidence that the winner won justly and not due to a statistical fluctuation. When was the last time any team won by eighteen games? In particle physics we require an even higher standard–a five-sigma effect to claim a discovery. Thus a team would have to lead by about thirty games to meet this criterion. Now, my colleague from the first paragraph suggested that by including more seasons the results become more significant. He was right, of course. If the Yankees finished ahead by six games for thirty-four seasons in a row, that would be a five-sigma effect. From this we can also see why sports results are never published in Physical Review with its five-sigma threshold for a discovery–there has yet to be such a discovery. To make things worse for New York Yankees fans, they have already lost their chance at an undefeated season this year.
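The coin-flip arithmetic above is easy to check. Here is a minimal Python sketch (my own illustration, not from the original post; the function name is mine) that computes the binomial spread and cross-checks it with a simulation:

```python
import math
import random

# Standard deviation in wins for a season of pure coin-flip games:
# sigma = sqrt(n * p * (1 - p)), with n games and win probability p.
def season_sigma(n_games, p=0.5):
    return math.sqrt(n_games * p * (1 - p))

print(season_sigma(162))  # ~6.36 games for a 162-game MLB season

# Monte Carlo cross-check: simulate many all-luck seasons and
# measure the spread in wins directly.
random.seed(1)
wins = [sum(random.random() < 0.5 for _ in range(162)) for _ in range(20000)]
mean = sum(wins) / len(wins)
spread = math.sqrt(sum((w - mean) ** 2 for w in wins) / len(wins))
print(round(spread, 1))  # close to the analytic 6.3-6.4 games
```

The same formula with 82 games gives the hockey figure of about 4.5 games quoted below.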

In other sports, the statistics are even worse. In the National Hockey League (NHL), teams play eighty-two games and the spread in wins and losses expected from pure chance is sigma = 4.5 games. The actual spread last year was 6.3 games. The signal due to the differences in the individual teams' abilities is all in the remaining 1.8 games. Perhaps there is more parity in the NHL than in Major League Baseball. Or perhaps there are not enough statistics to tell. Speaking of not telling: last year the Vancouver Canucks finished with the best record in the regular season, two games ahead of the New York Rangers and three games ahead of the St. Louis Blues. Only a fool or a Vancouver Canucks fan would think this ordering was significant and not just a statistical fluctuation. In the National Football League last year, 14 of the 32 teams were within two sigma of the top. Again, much of the spread was statistical. It was purely a statistical fluke that the New England Patriots did not win the Super Bowl as they should have.

Playoffs are even worse (this is why the Canucks have never won a Stanley Cup). Consider a best-of-seven series. Even if the two teams are evenly matched, we would expect the series to end in four games only once in every eight (two cubed[2]) series. When a series goes the full seven games, one might as well flip a coin. Rare events, like one team winning the first three games and losing the last four, are expected to happen once in every sixty-four series, and considering the number of series being played it is not surprising we see them occasionally.
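These series probabilities follow from simply enumerating equally likely coin-flip outcomes, as in this Python sketch (my own illustration; the helper name is hypothetical):

```python
from fractions import Fraction
from itertools import product

# Enumerate all 2**7 equally likely game sequences between evenly matched
# teams and record at which game a best-of-seven series is decided.
def series_length_probs():
    probs = {4: Fraction(0), 5: Fraction(0), 6: Fraction(0), 7: Fraction(0)}
    for seq in product("AB", repeat=7):
        a = b = 0
        for game, winner in enumerate(seq, start=1):
            a += winner == "A"
            b += winner == "B"
            if a == 4 or b == 4:  # series decided at this game
                probs[game] += Fraction(1, 2 ** 7)
                break
    return probs

probs = series_length_probs()
print(probs[4])                  # 1/8: a four-game sweep, as claimed
print(2 * Fraction(1, 2 ** 7))   # 1/64: up three games, then four straight losses
```

The enumeration also shows that five-, six-, and seven-game series occur with probabilities 1/4, 5/16, and 5/16 respectively.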

Probably the worst example of playoff madness is the American college basketball tournament called, appropriately enough, March Madness. Starting with 64 teams, or 68 depending on how you count, the playoffs proceed through a single-elimination tournament. With close to 70 games, it is not surprising that strange things happen. One of the strangest would be that the best team wins. To win the title, the best team would have to win six straight games. If the best team has on average a 70% chance of winning each game, it would have only a 12% chance of winning the tournament. Perhaps it would be better if they just voted on who is best.
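The 12% figure is just six straight wins at 70% each. A few lines of Python (my illustration, under the post's assumption of a flat per-game win probability) show how sensitive the title chance is to that assumption:

```python
# Chance the best team wins six straight single-elimination games,
# assuming a flat per-game win probability (70% in the post's example).
for p_game in (0.6, 0.7, 0.8):
    p_title = p_game ** 6
    print(f"p_game = {p_game:.0%}: title chance = {p_title:.1%}")
# At 70% per game the title chance is only about 11.8%.
```

Even a team with an 80% chance in every game wins the title barely a quarter of the time.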

But, you say, they would never decide a national championship based on a vote. Consider American college football. Now that is a multi-million-dollar enterprise! Nobel laureates do not get paid as much as US college football coaches. They do not generate as much money either. So what is more important to American universities–sports or science?

In the past, the US college national football champion was decided by a vote of some combination of sports writers, coaches and computers. Now that combination only decides who will play in the championship game. The national champion is ultimately decided by who wins that one final game. Is that better than the old system? More exciting, but as they say: on any given day, anything can happen. Besides, sports is more about deciding winners and losers than about who is best.

To receive a notice of future posts follow me on Twitter: @musquod.

[1] With the expected frequency of course.

[2] Not two to the fourth power because one of the two teams has to win the first game and that team has to win the next three games.


Dedicated to Johanna[1]

There are two observations about women in physics and mathematics that are at odds with each other. The first is that there are relatively few women in science. In a typical seminar or conference presentation, I have counted that just over ten percent of the audience is female. The second is that, despite their relatively small numbers, the women are by no means second-rate scholars. The first person ever to win two Nobel Prizes was a woman–Marie Curie (1867–1934). But I do not have to go far-far away and long-long ago to find first-rate women scientists. I just have to go down the corridor–well, actually down the corridor and up a flight of stairs, since my office is in the ground-floor administrative ghetto while the real work gets done on the second floor. Since women are demonstrably capable, why are there so few of them in the mathematical sciences?

A cynic could say they are too bright to waste their time on such dead-end fields, but as a physicist I could never admit the validity of that premise. So why are there so few women in physics and mathematics? It is certainly true that in the past these subjects were considered too hard or inappropriate for women. Despite her accomplishments and two Nobel Prizes, Madame Curie was never elected to the French Academy of Sciences. Since she was Polish as well as a woman, the reason may have been as much xenophobia as misogyny.

Another interesting example of a successful woman scientist is Caroline Herschel (1750–1848). While not as famous as her brother William (1738–1822), she still made important discoveries in astronomy, including eight comets and three nebulae. The comment from Wikipedia is in many ways typical: Caroline was struck with typhus, which stunted her growth, and she never grew past four foot three; due to this deformation, her family assumed that she would never marry and that it was best for her to remain a house servant. Instead, she became a significant astronomer in collaboration with William. Not considered attractive enough to marry and not wanting to be a servant, she made lasting contributions to astronomy. If she had been considered beautiful, we would probably never have heard of her! Sad.

Sophie Germain (1776–1831) is another interesting example. She overcame family opposition to study mathematics. Not being allowed to attend the lectures of Joseph Lagrange (1736–1813), she obtained copies of his lecture notes from other students and submitted assignments under an assumed male name. Lagrange, to his credit, became her mentor when he found out that the outstanding student was a woman. She also used a pseudonym in her correspondence with Carl Gauss[2] (1777–1855). After her death, Gauss commented that Germain proved to the world that even a woman can accomplish something worthwhile in the most rigorous and abstract of the sciences, and for that reason would well have deserved an honorary degree. High praise from someone like Gauss, but why even a woman? It reminds one of the quote from Voltaire (1694–1778) regarding the mathematician Émilie du Châtelet (1706–1749): a great man whose only fault was being a woman. Fault? And so it goes. Even outstanding women are not allowed to stand on their own merits but are denigrated for being women.

But what about today–does this negative perception still continue? While I have observed that roughly ten percent of attendees at physics lectures tend to be female, the distribution is not uniform. There tend to be more women from countries like Italy and France. I once asked a German colleague if she thought Marie Curie as a role model played a part in the larger (or is that less small?) number of female physicists from those countries. She said no, that it had more to do with physics not being as prestigious in those countries. Cynical, but probably true; through prejudice and convention, women are relegated to roles of less prestige rather than those reflecting their interests and abilities.

My mother is probably an example of that. The only outlet she had for her mathematical ability was tutoring her own and the neighbours' children and filling out the family income tax forms. From my vantage point, she was probably as good at mathematics as many of my colleagues. One wonders how far she could have gone given the opportunity–a B.Sc.? A Ph.D.? One will never know. The social conventions and financial considerations made it impossible. Her sisters became school teachers while she married a small-time farmer and raised five children. It is a good thing she did, because otherwise I would not exist.

To receive a notice of future posts follow me on Twitter: @musquod.

[1] A fellow graduate student who died many years ago of breast cancer.

[2] Probably the greatest mathematician that ever existed.


Is science merely fiction?

Friday, February 8th, 2013

Hans Vaihinger (1852 – 1933) was a German philosopher who introduced the idea of "as if" into philosophy. His book, Die Philosophie des Als Ob (The Philosophy of 'As If'), was published in 1911 but written more than thirty years earlier. He seems to have survived the publish-or-perish paradigm for thirty years.

In his book, Vaihinger argued that we can never know the true underlying reality of the world but only construct systems which we assume match the underlying reality. We proceed as if they were true.  A prime example is Newtonian mechanics. We know that the underlying assumptions are false—the fixed Euclidean geometry for example—but proceed as if they were true and use them to do calculations. The standard model of particle physics also falls into this category. We know that at some level it is false but we use it anyway since it is useful. Vaihinger himself used the example of electrons and protons as things not directly observed but assumed to exist. They are, in short, useful fictions.

Vaihinger's approach is a good response to Ernst Mach's (1838 – 1916) refusal to believe in atoms because they could not be seen. In the end, Mach lost that fight, but not without casualties. His positivism had a negative effect on physics and in many ways was a contributing factor in Ludwig Boltzmann's (1844 – 1906) suicide. The philosophy of 'as if' is the antithesis of positivism, which holds closely to observation and rejects things like atoms that cannot be directly seen. Even as late as the early twentieth century, some respectable physics journals insisted that atoms be referred to as mathematical fictions. Vaihinger would say to proceed as if they were true and not worry about their actual existence. Indeed, calling them mathematical fictions is not far from the philosophy of 'as if'.

The ideas of Vaihinger had precursors. Vaihinger drew on Jeremy Bentham's (1748 – 1832) work Theory of Fictions. Bentham was the founder of modern utilitarianism and a major influence on John Stuart Mill (1806 – 1873), among others. 'As if' is very much a form of utilitarianism: if a concept is useful, use it.

The idea of 'as if' was further developed in what is known as fictionalism. According to fictionalism, statements that appear to be descriptions of the world should be understood as cases of 'make believe,' or pretending to treat something as literally true (a 'useful fiction' or 'as if'). Possible worlds or concepts, regardless of whether they really exist or not, may be usefully discussed. In the extreme case, science is only a useful discussion of fictions; i.e., science is fiction.

The core problem goes back at least to Plato (424/423 BCE – 348/347 BCE) with the allegory of the cave (from The Republic). There, he talks about prisoners who are chained in a cave and can only see the wall of the cave. A fire behind them casts shadows on the wall, and the prisoners perceive these shadows as reality since this is all they know. Plato then argues that philosophers are like a prisoner who is freed from the cave and comes to understand that the shadows on the wall are not reality at all. Unfortunately, Plato (and many philosophers after him) then goes off in the wrong direction. They take ideas in the mind (Plato's ideals) as the true reality. Instead of studying reality, they study the ideals, which are reflections of a reflection. While there is more to idealism than this, it is the chasing after a mirage or, rather, the image reflected in a mirage.

Science takes the other tack and says we may only be studying reflections on a wall, or a mirage, but let us do the best job we can of studying those reflections. What we see is indeed, at best, a pale reflection of reality. The colours we perceive are as much a property of our eyes as of any underlying reality. Even the number of dimensions we perceive may be wrong. String theory seems to have settled on ten or eleven as the correct number of dimensions, but even that is still in doubt. Thus, science can be thought of as 'as if' or fictionalism.

But that is far too pessimistic, even for a cynic like me. The correct metaphor for science is the model. What we build in science are not fictions but models. Like fictions and ‘as if,’ these are not reality and should never be mistaken for such, but models are much more than fictions. They capture a definite aspect of reality and portray how the universe functions. So while we scientists may be studying reflections on a wall, let us do so with the confidence that we are learning real but limited knowledge of how the universe works.

To receive a notice of future posts follow me on Twitter: @musquod.


I like talking about science. I like talking about religion. I even like talking about the relationship and boundaries between the two. These are all fascinating subjects, with many questions that are very much up for debate, so I am very pleased to see that CERN is participating in an event in which scientists, philosophers, and theologians talk together about the Big Bang and other questions.

But this quote, at least as reported by the BBC, simply doesn’t make any sense:

Co-organiser Canon Dr Gary Wilton, the Archbishop of Canterbury’s representative in Brussels, said that the Higgs particle “raised lots of questions [about the origins of the Universe] that scientists alone can’t answer”.

“They need to explore them with theologians and philosophers,” he added.

The Higgs particle does no such thing; it is one aspect of a model that describes the matter we see around us. If there is a God, CERN’s recent observations tell us that God created a universe in which the symmetry between the photon and the weak bosons is probably broken via the Higgs Mechanism. If there is not, they tell us that a universe exists anyway in which the symmetry between the photon and the weak bosons is probably broken via the Higgs Mechanism. It doesn’t raise any special questions about the origins of the universe, any more than the existence of the electron does.

There are many interesting philosophical questions to ask about the relationships between models of scientific observations on the one hand, and notions of absolute Truth on the other. You can also talk about what happened before the times we can make scientific observations about, whether there are “other universes” with different particles and symmetries, and so on. Theologians and philosophers have much to say about these issues.

But in regard to searches for the Higgs boson in particular, the people we need to explore questions with are mostly theoretical physicists and statisticians.