## Posts Tagged ‘Philosophy of science’

### In Defense of Jargon

Friday, May 4th, 2012

Jargon: even the name has a harsh ring to it. Can anyone but an author love a title like[1]: Walking near a Conformal Fixed Point: the 2-d O(3) Model at theta near pi as a Test Case? “How can anyone take science seriously when it uses so much jargon?” said the teamster[2] as he told his helper to fasten the traces to the whiffletree and check the tugs and hames straps. Jargon is everywhere and not unique to science. While you may not understand what the teamster is talking about, my father would have understood instantly and then gone to get a jag of wood.

But back to jargon. To the uninitiated the above title, like the teamster’s words, seems like so much gobbledygook. But to the initiated, those working in the field, it is a precise statement and easily understood. Trying to put the title, or the teamster’s words, in a form understandable to the layperson would have been a fool’s errand. In making it understandable to a more general audience, the precision would have been lost and we would probably never have gotten that jag of wood. That would have been unfortunate, as Nova Scotian winters can be cold.

One of the principles of all good writing is to tailor the communication to the intended audience. When I am helping put together a report for TRIUMF, the instructions to the authors always include a statement about the intended audience. Even then, the good authors frequently ask me to make the description of the intended audience more precise. Life gets more complicated when a document has more than one intended audience. Then it is necessary to have a layered document where introductory sections are understandable by an intelligent layperson while the later sections are directed at the specialist. One is reminded of the old joke about the structure of a good seminar: the speaker starts at a low level understandable by anyone and then, as the seminar progresses, he becomes more technical and less understandable, so that by the end even the speaker does not know what he is talking about. Well, perhaps that is getting a little carried away, but one can err on either side, by making the writing too technical for the audience or not technical enough.

Similarly, the reader has to realize that the writing may not be directed at him or her. We, as people with technical expertise, have to be careful not to judge non-technical writing too harshly because it does not capture all the subtle nuances we are aware of. Including them would lose the layperson. It is a fine line between not confusing the layman and misleading him. When I am reading an article directed at a general audience, on a topic I am an expert in, I find I have to translate the layman’s language back to the technical language before I can understand it. That is as it should be.

Conversely, in fields we are not experts in, we should not criticize technical writing as being too filled with jargon. This latter mistake is made frequently by politicians and commentators who criticize technical writing out of ignorance. Few have the wisdom of the former Canadian Prime Minister, Pierre Elliott Trudeau, who said on opening TRIUMF, “I do not know what a cyclotron is, but I am glad Canada has one.” It is a rare politician who has the confidence to admit ignorance. As an undergraduate student, I picked up a copy of Rose’s book: Elementary Theory of Angular Momentum. That is when I learned one should be leery of books with elementary in the title[3]. If that is an elementary book, I would hate to have to read an advanced one. It is a good book but I, at that stage in my career, was not the intended audience.

Words only have meaning within the context they are used.  When used with a person possessing a similar background, the context does not have to be spelled out. Thus, in conversation with a colleague I have worked with for some time a lot is understood without being stated explicitly. Jargon speeds up communication and makes it less prone to misunderstanding. On the other hand, with people who are not acquainted with the field, we have to spell out the background assumptions and suppress the details that are only of interest to the expert.

In the end, it is quite unfortunate that jargon has been abused and hence has received a bad name.  In technical writing, jargon or technical terms are not only acceptable but necessary. So press on and employ jargon­—but only where appropriate.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.

[1] First title on the lattice archive the day I checked to get an example.

[2] The kind that drives horses.

[3] Books with elementary in the title are usually advanced while those with advanced in the title are usually elementary.

### Is Science Consistent with Evolution?

Friday, April 27th, 2012

The evolutionary argument against naturalism

Alvin Plantinga (b. 1932), professor emeritus of philosophy at the University of Notre Dame, is a leading theistic philosopher and opponent of evolution. He has proposed a specious, yet nonetheless intriguing, argument against evolution. It is intriguing for several reasons: First, because on the face of it, it is plausible. Second, because it is typical of a whole class of specious arguments. Finally, because it highlights the difference between how scientists and philosophers approach a problem.

The argument runs as follows: “The naturalist can be reasonably sure that the neurophysiology underlying belief formation is adaptive, but nothing follows about the truth of the beliefs depending on that neurophysiology. In fact, he’d have to hold that it is unlikely, given unguided evolution, that our cognitive faculties are reliable. It’s as likely, given unguided evolution, that we live in a sort of dream world as that we actually know something about ourselves and our world” (original emphasis). In other words, if people in fact evolved, they could not trust their cognitive faculties to give them the truth and hence, do science. He goes on to argue that it is only possible to trust our cognitive faculties if people are created in God’s image.

It is amusing that unbelievers argue the opposite; namely that the existence of a God means science is impossible since he/she/it could override the rules of nature at will and there would be no reason to assume constant laws. Both are correct to this extent: Absolute knowledge is impossible,[1] independent of God’s existence.  But back to Plantinga’s argument; it hinges on the concept of truth, or equivalently, reliability. But what is truth? A profound question—or a meaningless one. The difference between profound and meaningless is often vanishingly small.

At one level, the idea of truth is simple: Does the testimony of the person on the witness stand agree with what happened? Or perhaps the simpler question: Does the testimony agree with what the person thinks happened? The second is a less stringent requirement. But from this simple concept, the grand metaphysical concept of TRUTH is generated. Whatever this grand metaphysical concept is, science is not concerned with it. Is it TRUTH™ that colds are caused by viruses? The reductionist, at least if he believes in string theory, would say no. Colds, like all other phenomena, are caused by how strings vibrate in eleven dimensions. Viruses are just a wimpy low-energy approximation to the real TRUTH™.

In science, we build models for how the universe works, which usually have a limited range of validity. Think of classical mechanics, which is only valid for velocities much less than the speed of light. Is classical mechanics the TRUTH™? No, certainly not; it fails in various places. But it is certainly useful. Science is a natural extension of the model building the unconscious mind does all the time, which is necessary for us to survive in a hostile world. The surprising thing is not that beings who evolved created science, but rather that they did not do it sooner. Plantinga’s problem is that he does not understand what science is or how it works—seeking effective models rather than the TRUTH™, whatever that may be. He should have known better, since by the Duhem–Quine thesis, no model can be falsified in isolation. Arguing that the current models have deficiencies is never enough. You have to provide better ones with more predictive power.
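As an aside not in the original post, the limited range of validity of classical mechanics can be made quantitative: expanding the relativistic kinetic energy for velocities much less than the speed of light recovers the classical formula plus small corrections.

```latex
% Relativistic kinetic energy and the Lorentz factor
T = (\gamma - 1)\,m c^2, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
% Expanding for v \ll c:
\gamma \approx 1 + \frac{v^2}{2c^2} + \frac{3v^4}{8c^4} + \cdots
\quad\Longrightarrow\quad
T \approx \frac{1}{2} m v^2 \left(1 + \frac{3v^2}{4c^2} + \cdots\right)
```

The leading correction is of order v²/c², negligible at everyday speeds; the classical model is useful in its domain even though it is not the TRUTH™.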

In the same manner that Plantinga’s argument relies on the grand metaphysical concept of TRUTH™, many arguments in philosophy rely on similar word definitions. A prime example is the ontological argument for God’s existence. First proposed by Anselm of Canterbury (1033 – 1109), the argument goes as follows: Define God as the greatest possible being we can conceive. If the greatest possible being exists in the mind, it must also exist in reality. If it only exists in the mind, a greater being is possible—one which exists in the mind and in reality. Note that this argument hinges on the definition of greatest. My daughter believes that anything, no matter how great, can be made greater by being pink. Thus the greatest being is pink. If I define non-existence as being greater than existence,[2] the ontological argument becomes an argument for God’s nonexistence. Evil is another word that is frequently made into a grand metaphysical concept, EVIL™, and used to justify various philosophical positions. The concept of actions I do not like is then taken a step further and personified in the concept of the devil.

While our concepts and word definitions may reflect reality, they do not constrain it. In the end, models founded on observation take precedence over philosophical arguments based on word definitions and phenomenologically unconstrained speculations. If such philosophical arguments disagree with scientific models, so much the worse for them. Thor showing up for Thursday afternoon tea at the Empress Hotel would make all arguments regarding his existence moot[3].  One observation is worth a thousand philosophical arguments.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.

[2] See Ecclesiastes chapter 4 for why this definition may be reasonable.

[3] You can tell it is Thor because he would be carrying a large hammer and one of the goats pulling his chariot would be limping.

### The Role of Mathematics and Rational Arguments in Science

Friday, April 20th, 2012

Mathematics is a tool used by scientists to help them construct models of how the universe works and make precise predictions that can be tested against observation. That is really all there is to it, but I had better add some more or this will be a really short essay.

For an activity to be science, it is neither necessary, nor sufficient, for it to involve math. Astrology uses very precise mathematics to calculate the planetary positions, but that does not make it science any more than using a hammer makes one a carpenter (Ouch, my finger!). Similarly, not using math does not necessarily mean one is not doing science any more than not using a hammer means one is not a carpenter. Carl Linnaeus’s (1707 – 1778) classification of living things and Charles Darwin’s (1809 – 1882) work on evolution are prime examples of science being done with minimal mathematics (and yes, they are science). The ancient Greek philosophers, Plato and Aristotle among them, would have considered the use of math in describing observations as strange and perhaps even pathological. Following their lead, Galileo was criticized for using math to describe motion. Yet since his time, the development of physics, in particular, has been joined at the hip to mathematics.

The foundation of mathematics itself is a whole different can of worms. Is it simply a tautology, with symbols manipulated according to well-defined rules? Or is it synthetic a priori information? Is 2+2=4 a profound statement about the universe or simply the definition of 4? Bertrand Russell (1872 – 1970) argued the latter and then showed 3+1=4. Are mathematical theorems invented or discovered? There are ongoing arguments on the topic, but who knows? I certainly don’t. Fortunately, it does not matter for our purposes. All we need to know about mathematics, from the point of view of science, is that it helps us make more precise predictions. It works, so we use it. That’s all.
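Russell’s point about definitions can be made concrete with a sketch (added here, not in the original post): in Peano arithmetic each numeral is defined as the successor S of the previous one, and 2+2=4 follows by simply unfolding the definitions.

```latex
% Definitions: 1 := S(0),\; 2 := S(1),\; 3 := S(2),\; 4 := S(3)
% Addition:    a + 0 = a, \qquad a + S(b) = S(a + b)
2 + 2 = 2 + S(1) = S(2 + 1) = S(2 + S(0)) = S(S(2 + 0)) = S(S(2)) = S(3) = 4
```

On this reading the equation is true by definition; whether the definitions themselves say something about the universe is exactly the question the paragraph leaves open.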

I could end this essay here, but it is still quite short. Luckily, there is more. Mathematics is so entwined with parts of science that it has become its de facto language. That is certainly true of physics, where the mathematics is an integral part of our thinking. When two physicists discuss, the equations fly. This is still using mathematics as a tool, but a tool that is fully integrated into the process of science. This has a serious downside. People who do not have a strong background in mathematics are to some extent alienated from science. They can have, at best, a superficial understanding of it from studying the translation of the mathematics into common language. Something is always lost in a translation. In translating topics like quantum mechanics—or indeed most of modern particle physics—that loss is large; hence nonsense like the “God Particle”. There is no “God Particle” in the mathematics, only some elegant equations and, really, considering their importance, quite simple equations. One hears questions like: How do you really understand quantum mechanics? The answer is clear: study the mathematics. That is where the real meat of the topic lies and where the understanding is—not in some dreamed-up metaphysics like the many-worlds interpretation.

Closely related to mathematics are logical and rational arguments. Logic may or may not give rise to mathematics, but for science, all we require from logic is that it be useful. Rational arguments are a different story. Like mathematics, they are useful only to the extent they help us make better predictions. But that is where the resemblance stops. Rational arguments masquerade as logic, but often become rationalizations: seductive, but specious.  Unlike mathematics, rational arguments are not sufficiently constrained by their rules to be 100% reliable. Indeed, one can say that the prime problem with much of philosophy is the unreliability of seemingly rational arguments. Philosophers using supposedly rational arguments come to wildly different conclusions: compare Plato, Descartes, Hume, and Kant. This is perhaps the main difference between science and philosophy: philosophers trust rational arguments, while scientists insist they be very tightly constrained by observation; hence the success of science.

In science, we start with an idea and develop it using rational arguments and mathematics. We check it with our colleagues and convince ourselves using entirely rational arguments that it must be correct, absolutely, 100%. Then the experiment is performed. Damn—another beautiful theory slain by an ugly fact. Philosophy is like science, but without the experiment[1]. Perhaps the real definition of a rational argument, as compared to a rationalization, is one that produces results that agree with observations. Mathematics, logic, and rational arguments are just a means to an end, producing models that allow us to make precise predictions. And in the end, it is only the success of the predictions that count.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.

[1] I believe this observation comes from one of the Huxleys but I cannot find the reference.

### The Argument from Design

Friday, April 13th, 2012

Central to the scientific method is a process for deciding between conflicting models of how the universe operates. It is very instructive to apply this process to the argument from design for the existence of a higher intelligence in the universe. The argument from design is commonly associated with William Paley (1743 – 1805) and, for those who like big words, is also called the teleological argument for God’s existence. A counter argument is given in Richard Dawkins’ book: The Blind Watchmaker. The basic argument from design is, however, much older than Paley; it goes back to the ancient Greeks. Needless to say, Dawkins’ book has failed to lay the argument to rest. If one checks the current state of the arguments on the topic[1], they typically are of the form: Anyone who does not recognize design in the universe is in denial, and the counter argument is: Those who see design in the universe are delusional. Needless to say, neither argument is particularly convincing. So what can the scientific method add to resolving the impasse? Quite a bit, actually.

Let’s begin by looking at the actual form of the argument. It was stated succinctly by Cicero (106 BCE – 43 BCE): “When you see a sundial or a water-clock, you see that it tells the time by design and not by chance. How then can you imagine that the universe as a whole is devoid of purpose and intelligence, when it embraces everything, including these artifacts themselves and their artificers?” This analogy was expanded upon, most famously, by Paley (quoted from Wikipedia):

[S]uppose I found a watch upon the ground, and it should be inquired how the watch happened to be in that place, I should hardly think … that, for anything I knew, the watch might have always been there. Yet why should not this answer serve for the watch as well as for [a] stone [that happened to be lying on the ground]?… For this reason, and for no other; namely, that, if the different parts had been differently shaped from what they are, if a different size from what they are, or placed after any other manner, or in any order than that in which they are placed, either no motion at all would have been carried on in the machine, or none which would have answered the use that is now served by it.

So what about the watch and how do we know that it was designed? We begin with one of the mantras of this series of essays: The meaning is in the model. To understand the watch and its creation, our mind, either consciously or unconsciously, develops a model for its origin. The watch is deduced to have been made by humans, not by non-human agencies, and humans do things by design. Thus, by a two-step process we arrive at design. Now, the watch is fairly obvious, but what about that pointed rock on the ground? Is it due to design or natural causes? Is it simply a broken rock or is it an arrowhead? Here the question of design is strictly one of whether it was made by humans or not. If the indications on the rock show signs of human manufacture, it is considered due to design, and if not, then due to accident.

The typical theist would claim that the universe and everything in it is designed. Thus, we cannot compare something designed with something that was not designed, the very technique that was useful in deciding whether the watch was humanly designed. So how do we tell if something is designed or not? Use the methodology from science, of course.

In science, there are two distinct steps with any model: first the model must be constructed, and then it must be tested. Model construction is a creative activity and does rely on analogy and pattern recognition. Thus, in the initial stage, the argument from design is on good grounds. Now for the crux of the matter: the crucial test is neither how good the analogy is, nor how striking the apparent pattern, but rather whether the argument from design passes the tests of parsimony and also makes successful predictions for observations. The scientific method defines three criteria for judging models: the successful description of past observations, the ability to make correct predictions for future observations, and simplicity. Being able to describe past observations is just the price to play the game and, with sufficient ingenuity, can usually be done. The definitive test of a scientific model is the ability to make predictions for novel phenomena. By predictions, I mean definite predictions that can be falsified. Not the kind of predictions made by Nostradamus that after the fact can be claimed to have been fulfilled, but rather definite predictions that can be tested, like it will rain tomorrow at TRIUMF between 3:00 and 4:00 pm.

Finally, there is simplicity. Yes, there is always simplicity or parsimony. By simplicity, I mean the elimination of assumptions that do not help the model make predictions. Today, common descent for living things is pretty much established and is challenged mainly by models that grossly violate the simplicity principle. A prime example is the omphalos hypothesis of Philip Gosse (1810 – 1888). He stated that the world was created six thousand years ago, but in a manner that cannot be distinguished from one that is much older. As pointed out in a previous essay, that hypothesis can only be eliminated by an appeal to parsimony. As for design, natural selection is one way of generating the design of living things without the need for external intelligence and, at least at the small scale, natural selection is observed to be happening. So, can an external intelligence as suggested by the argument from design, or the idea of intelligent design, add anything useful to this? Or can they both be eliminated, like the omphalos hypothesis, by the appeal to parsimony? The challenge to the proponents of the argument from design (and similarly for intelligent design) is to make precise testable predictions, not postdictions, that distinguish it from natural selection.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.

[1] This post was partly motivated by such an exchange on Huffington Post.

### The Role of the Crucial Experiment

Friday, April 6th, 2012

The idea of a crucial experiment that decisively confirms a model goes back at least to Francis Bacon (1561 – 1626), who used the term instantia crucis. Later, the term experimentum crucis was coined by Robert Hooke (1635 – 1703) and used by Isaac Newton (1642 – 1727), in particular with regard to his theory of light. In contrast, Pierre Duhem (1861 – 1916) strongly disagreed with the possibility of crucial experiments. Somewhat in anticipation of Thomas Kuhn’s (1922 – 1996) paradigms, Duhem realized that scientific theories or models do not stand alone, but rather come coupled with auxiliary assumptions. Was what Galileo saw through the telescope features of the heavens, or only of his telescope, as some of his detractors claimed? One has to consider the combined heavens-telescope system to decide. When the detector is as complex as the ATLAS detector at CERN, the question is even more apropos.

Karl Popper (1902 – 1994) refined the idea of the crucial experiment to one that falsifies a given model. But the Duhem–Quine thesis, a variation of Duhem’s idea, makes the point that falsification, at least in its naïve form, falls victim to the same holistic argument: we can never test a single model in isolation. So is the idea of a crucial experiment just a will-o’-the-wisp that vanishes on more careful evaluation?

We can think of many examples: Sir Arthur Eddington’s measurement of the bending of starlight by the sun, the discovery of high-temperature superconductors, the measurement of the three-degree microwave background, the Michelson–Morley experiment, and so on. Did none of these play a critical role in the history of science? I would suggest they did, but not in the simple manner suggested by Bacon or Popper.

Consider the Michelson–Morley experiment in 1887. Scientists did not do a Chicken Little impersonation and run around claiming the sky was falling or, in this case, that Newton (Newton’s laws of motion) and Maxwell (electromagnetism) were wrong. Rather, they started trying to understand what the explanation could be. This led to ideas like ether drag (the earth entraining the ether) or Lorentz–Fitzgerald contraction (the idea that objects shorten in the direction of motion). The latter idea was developed and expanded upon by Lorentz and Poincaré, who developed the math for special relativity. Einstein claimed he was unaware of the Michelson–Morley experiment, but he was certainly aware of Lorentz’s early attempts to understand that experiment. Thus, the Michelson–Morley experiment started a chain of events that inexorably led to special relativity, not in one easy step, but eventually and inevitably. If special relativity had been proposed thirty years sooner, it would have been treated as a curiosity, like the Copernican model when it was first proposed.

As another example, consider the measurement of the bending of light by the sun. The general theory of relativity and classical mechanics differ by a factor of two. Eddington’s 1919 experiment gave a result closer to general relativity and hence contributed to the early acceptance of general relativity (not that people are not still trying to test it; that is as it should be). A more striking example was the discovery of the three-degree kelvin cosmic microwave background. Before then, there were two models, both with strong support: the steady-state model and the big bang model. While the microwave background was a big boost for the big bang model, the steady-state model did not give up without a struggle. There were various attempts to describe the microwave background in the steady-state model, but they were too little, too late. Like the Michelson–Morley experiment, the discovery of the microwave background started a chain reaction that led to the acceptance of one model and the rejection of another.

Perhaps the best way of thinking of crucial experiments is not that they prove (that ugly word) one model better than another, but that they serve as a catalyst. Or perhaps one can think of a super-cooled fluid that, when slightly disturbed, suddenly solidifies. The same phenomenon is seen with people. A group is sitting at lunch; when one gets up to go, they all go, but only if the circumstances are right. Consider the discovery of the J/Ψ particle. The time was right and the background had been prepared so that when it was discovered, the particle physics community solidified around the quark model. Similarly, you can consider Galileo turning his telescope on the heavens as providing the catalyst for the acceptance of the heliocentric model.

Like models, experimental results do not exist in isolation. Rather, they build on each other and are given meaning by the prevailing models. The role of crucial experiments should be seen in relation to that milieu. They do not single-handedly overturn or confirm the status quo, but rather start chains of events that lead to, or act as tipping points for, the establishment of new paradigms. Thus, crucial experiments do exist, but not in the naïve manner envisioned by Bacon, Hooke, or Popper.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.

��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������
��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������
����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������s���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������
������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������

### The Accumulative Nature of Science

Friday, March 9th, 2012

Is science accumulative? Is the Pope a Catholic? Some things are truly self-evident. The accumulative nature of science is one of them. But the different ways science is accumulative do hold some surprises. Consider home improvement. We can add onto the top, build out from the side, fix the broken window, or build down from the foundations. Science is accumulative in all these directions. Think of classical mechanics and planetary motion. After Isaac Newton (1642 – 1727) introduced his three laws of motion, various people, most notably Joseph-Louis Lagrange (1736 – 1813) and William Hamilton (1805 – 1865), developed more mathematically sophisticated treatments of classical motion. They used Newton’s work as a starting point and built new stories onto the superstructure. Pierre-Simon Laplace (1749 – 1827) added onto Newton’s work in other ways. He found an error in Newton’s calculation of planetary stability and added the nebular hypothesis to describe the origin of the solar system. The first of these corrected the structure Newton had built—replaced the broken window if you like—while the second, the nebular hypothesis, added a new room on the side. It extended Newton’s ideas beyond where they were originally applied. The discovery of Uranus by William Herschel (1738 – 1822) can also be considered a sideways extension to the planetary system; the discovery left the original work intact but extended it outward.

These advances all left the paradigm of classical mechanics intact but built on the foundation Newton had laid. But quantum mechanics was a whole different story. It left the superstructure intact but changed the foundation, like the magician’s trick of pulling the tablecloth off the table while leaving the dishes in place. The advent of quantum mechanics did not require the recalculation of planetary orbits. The work of Newton, Laplace, Lagrange, and Hamilton could still be applied as before, but only to a fixed range of phenomena. Quantum mechanics kept all the successes of classical mechanics but put them on a new foundation.

Now, quantum mechanics is frequently seen as a complete overthrow of classical mechanics, and if you are looking at the metaphysics, that is true. However, no one should take metaphysics seriously anyway. From the point of view of the person calculating planetary orbits, nothing changed when Schrodinger introduced his eponymous equation. Schrodinger built on the work of Hamilton just as much as Hamilton built on the work of Newton. (Quantum mechanics is built on Hamilton’s formulation of classical mechanics.) Whereas Hamilton added to the superstructure, Schrodinger helped replace the foundation. Both added to the existing structure rather than demolishing it, and the smoke went up the chimney just the same[1]. Or rather, the planets went round the sun just the same.

Replacing the foundation is largely synonymous with Thomas Kuhn’s idea of paradigm change. This is the reductionist’s dream, and the foundations in various fields of science are indeed frequently replaced: quantum gravity will replace quantum field theory, which replaced quantum mechanics, which in turn replaced classical mechanics. But only the foundation was replaced; the superstructure was left intact. A similar process happened with this sequence: indivisible atoms, atomic structure, nuclear structure, nucleon structure, and the standard model.

Thus, we see how science advances: fixing errors (Laplace), refining formalisms (Hamilton, Lagrange), extending to new areas (Laplace, Herschel), and replacing the foundations (Schrodinger). But these are all extensions to the existing knowledge. When we forget this, mistakes are made. When quantum chromodynamics was introduced, it changed the foundation of nuclear physics but left most of the previous understanding of nuclear physics intact. The overzealous proponents of quantum chromodynamics did not understand this and claimed that nuclear physics would have to be largely redone. But that was nonsense; we made a few minor changes and carried on. Science is amazing in that it can easily change the foundation without major damage to the superstructure. Try doing that with a skyscraper or even a two-story house.

The reason this all works is that science is modular, with fairly well-defined interfaces between the models. Consider chemistry. On one side, quantum chemistry is closely related to physics and shares a common formalism: quantum mechanics. In the middle, chemistry developed independently of physics and did not depend on the quantum chemistry foundation. But then, parts of biology and applied science use chemistry as a foundation to build on; interlinked, but each progressing separately.

So science oozes onwards in all directions: upward, downward, sideways, and inward. It discards what is no longer useful—yet, for the most part, the older models provide the scaffolding to support the new, and the more recent insights are obtained without destroying the older ones. And science unfolds as it should, building knowledge one room at a time.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.

[1] From a children’s song by Fred Chandler, 1901.

### Creativity: as Important in the Sciences as in the Arts

Friday, March 2nd, 2012

In these essays, I have discussed various aspects of the scientific method based on model building and testing against experiment. One aspect I have avoided until now is how the models are constructed. This is a logically distinct process from how models are tested. We have seen that models are tested by requiring them to make predictions that can be checked against observation. We can then rank models on parsimony and their ability to make successful predictions. But how are models constructed in the first place? Francis Bacon (1561 – 1626) and Isaac Newton (1642 – 1727) would have told you that models were deduced from observation by a method called induction. In this approach, the construction and testing of models are seen as the same process, not two distinct ones. But induction is not valid, and the making of models is logically independent of their testing. This is a key point Karl Popper (1902 – 1994) made when he introduced the idea of falsification.

Let us turn to a master, Albert Einstein (1879 – 1955), and see what he says:

The supreme task of the physicist is . . . the search for those most general, elementary laws from which the world picture is to be obtained through pure deduction. No logical path leads to these elementary laws; it is instead just the intuition that rests on an empathic understanding of experience. In this state of methodological uncertainty one can think that arbitrarily many, in themselves equally justified systems of theoretical principles were possible; and this opinion is, in principle, certainly correct. But the development of physics has shown that of all the conceivable theoretical constructions a single one has, at any given time, proved itself unconditionally superior to all others. No one who has really gone deeply into the subject will deny that, in practice, the world of perceptions determines the theoretical system unambiguously, even though no logical path leads from the perceptions to the basic principles of the theory.

Very curious: no logical path leads to these elementary laws. Paul Feyerabend (1924 – 1994) would have agreed. He wrote a book entitled Against Method: Outline of an Anarchistic Theory of Knowledge, in which he argued that there is no scientific method, but that knowledge advances through chaos. However, what Bacon and Newton missed, Feyerabend glimpsed through a haze, and Einstein understood, is that model construction is a creative, not algorithmic, process: the intuition that rests on an empathic understanding of experience. Science does not function by deducing models from observations. Rather, we construct models and compare their predictions with what is observed. A falling apple inspired Newton, rising water in a bath inspired Archimedes, and a dream inspired Kekulé (the structure of benzene). Or at least, that is how the stories go. For model construction, Feyerabend is correct; anything goes—dreams, divine inspiration, pure luck, and especially hard work. Creativity by its very nature is chaotic and erratic.

We start with observations and ask, “What is the simplest model that could account for these observations?” Once a model is constructed, it is tested by the much more algorithmic or deterministic process of making predictions and checking them against observation. Now, one of the necessary constraints in building scientific models is simplicity. Without simplicity, we can get nowhere: an infinite set of models describes any finite set of measurements. That is why we cannot ask questions such as, “What model do these observations imply?” There are infinitely many such models.
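
To make the point concrete, here is a toy sketch (the data points and both model formulas are invented for this illustration): two different models that reproduce the same finite set of observations exactly, yet disagree about every new measurement.

```python
# Three invented observations that both models must reproduce.
observations = [(0, 0), (1, 1), (2, 4)]

def model_a(x):
    # The simple model: y = x**2.
    return x ** 2

def model_b(x):
    # A needlessly elaborate model that fits the same data:
    # the extra term vanishes at x = 0, 1, 2.
    return x ** 2 + x * (x - 1) * (x - 2)

# Both models account for every observation we have...
assert all(model_a(x) == y and model_b(x) == y for x, y in observations)

# ...but they predict different things for the next measurement.
print(model_a(3), model_b(3))  # 9 versus 15
```

Adding further terms that vanish on the observed points manufactures as many fitting models as we like, which is why simplicity, and not the data alone, must pick the model.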

Yet, simplicity comes at a price. For the sake of argument, take simplicity to be defined in terms of Kolmogorov complexity. This is a measure of the computational resources needed to specify the model. There is a theorem that says the Kolmogorov complexity cannot be determined algorithmically. If we accept the above identification of simplicity, it then follows that scientific models cannot be constructed algorithmically from observations. So much for Francis Bacon, Newton, and induction. The identification is probably not exact, but it is nevertheless sufficiently close to reality to be indicative. Model building cannot be algorithmic; rather, it is creative.
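
Kolmogorov complexity itself cannot be computed, but a general-purpose compressor gives a rough, computable stand-in for “length of the shortest description.” The sketch below (the data is invented for the illustration) shows patterned data admitting a far shorter description than patternless data of the same length:

```python
import os
import zlib

# Compressed size is a crude, computable proxy for "how short a
# description does this data have?" (true Kolmogorov complexity is
# uncomputable).
regular = b"0123456789" * 100   # 1000 bytes of obvious pattern
random_ish = os.urandom(1000)   # 1000 bytes with (almost surely) no pattern

short_description = len(zlib.compress(regular))
long_description = len(zlib.compress(random_ish))

# The patterned data compresses to a small fraction of its size; the
# patternless data barely compresses at all—its shortest known
# "description" is essentially the data itself.
print(short_description, long_description)
```

The analogy to model building: a good model is a short description of the observations, and there is no algorithm guaranteed to find the shortest one.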

The creative aspect of science is obscured by two things: the analytic aspect and the accumulative aspect. The analytic aspect of testing models tends to obscure the creativity in constructing them. We are so blinded by the dazzling mathematical virtuosity of Newton that we fail to see how creative the development of his three laws was. Aristotle got it all wrong, but Newton got it right—and he did it through creativity, not just mathematics.

The accumulative nature of science gives a sense of inevitability that makes the creativity less obvious. The sense of creativity can also feel lost in the noise of a thousand lesser persons. To quote Bertrand Russell:

In science, men have discovered an activity of the very highest value in which they are no longer, as in art, dependent for progress upon the appearance of continually greater genius, for in science, the successors stand upon the shoulders of their predecessors; where one man of supreme genius has invented a method, a thousand lesser men can apply it. No transcendent ability is required in order to make useful discoveries in science; the edifice of science needs its masons, bricklayers, and common labourers as well as its foremen, master builders, and architects. In art, nothing worth doing can be done without genius; in science even a very moderate capacity can contribute to a supreme achievement.

While we common labourers may not be creative geniuses, the foremen, master builders, and architects are. When it comes to creativity, Isaac Newton, Charles Darwin, and Albert Einstein do not take a back seat to writers like William Shakespeare, Charles Dickens, or James Joyce, nor to painters like Michelangelo, Vincent van Gogh, or Pablo Picasso.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.

### Most Exciting New Results are Wrong!

Friday, February 24th, 2012

Giovanni Schiaparelli (1835 – 1910) is mainly remembered for his discovery of “canali” on Mars. What a fate, to be remembered only for discovering something that does not exist. I suppose it could be worse; he could be remembered only as the uncle of the fashion designer Elsa Schiaparelli (1890 – 1973). But he was not alone in seeing canals on Mars. The first recorded instance of the use of the word “canali” was by Angelo Secchi (1818 – 1878) in 1858. The canals were also seen by William Pickering (1858 – 1938) and most famously by Percival Lowell (1855 – 1916). That’s right, the very Lowell after whom the Lowell Observatory on Mars Hill Road in Arizona is named. Unfortunately, after about 1910, better telescopes failed to find the canals and they faded away. Either that or Marvin the Martian filled them in while fleeing from Bugs Bunny. However, they did provide part of the backdrop for H.G. Wells’s The War of the Worlds. But it is interesting that the canals were observed by more than one person before being shown to be optical illusions.

Another famous illusion (delusion?) was the n-ray, discovered by Professor Blondlot (1844 – 1930) in 1903. These remarkable rays, named after his birthplace, Nancy, France, could be refracted by aluminum prisms to show spectral lines. One of their more amazing properties was that they were only observed in France, not England or Germany. About 120 scientists in 300 papers claimed to have observed them (note the infallibility of peer review). But then Prof. Robert Wood (1868 – 1955), at the instigation of the magazine Nature, visited the laboratory. By judiciously and surreptitiously removing and reinserting the aluminum prism, he was able to show that the effect was physiological, not physical. And that was the end of n-rays, and also of poor Prof. Blondlot’s reputation.

Probably the most infamous example of nonsense masquerading as science is Homo piltdownensis, otherwise known as the Piltdown man. This was the English answer to the Neanderthal man and the Cro-Magnon man discovered on the continent. A sculpted elephant bone, found nearby, was even jokingly referred to as a cricket bat. Seems appropriate. While there was some scepticism of the find, the powers that be declared it a breakthrough, and it was only forty years later that someone had the brilliant idea that it might be a fake. Once the signs of faking were looked for, they were easily found. What we see here is an unholy combination of fraud, delusion, and people latching onto something that confirmed their preconceived ideas.

These examples are not unique. Most exciting new results are wrong[1]: polywater, the 17 keV neutrino, cold fusion, superheavy element 118, pentaquarks, and the science on almost any evening newscast. Cancer has been cured so often it is a surprise that any cancer cells are left. So, why so many exciting wrong results? First, to be exciting means, almost by definition, that the results have a low prior probability of being correct. The discovery of a slug eating my garden plants is not exciting; annoying, but not exciting. It is what we expect to happen. But a triceratops in my garden, that would be exciting and almost certainly specious (pink elephants are another matter). It is the unexpected result that is exciting and gets hyped, one could even say over-hyped. There is pressure to get exciting results out quickly and widely distributed so you get the credit; a pressure not to check as carefully as one should; a pressure to ensure priority by not checking with one’s peers.
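
The effect of a low prior probability can be put in numbers with Bayes’ theorem. In this sketch the reliability figures are invented for illustration, but the moral is general: the same experiment that makes a mundane claim quite believable still leaves an exciting claim probably wrong.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(result is real | experiment reports it), by Bayes' theorem."""
    p_report = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_report

# Same experimental reliability (95% sensitivity, 5% false positives),
# different prior plausibility of the claim being tested.
mundane = posterior(prior=0.50, sensitivity=0.95, false_positive_rate=0.05)
exciting = posterior(prior=0.01, sensitivity=0.95, false_positive_rate=0.05)

print(round(mundane, 2))   # 0.95: the mundane result is probably right
print(round(exciting, 2))  # 0.16: the exciting result is probably wrong
```

This is why a surprising result needs independent confirmation before belief: each successful replication raises the posterior, but a single report of a low-prior claim barely moves it.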

Couple the low prior probability and the desire for fame with the ubiquity of error and you have the backdrop to most exciting new results being wrong. Not all exciting new results are wrong, of course. Consider the discovery of high-temperature superconductors (high = liquid-nitrogen temperatures). This had crackpot written all over it. The highest transition temperature recorded earlier was about 30 kelvin. But with high-temperature superconductors, that jumped to 90 kelvin in 1986 and then shortly afterwards to 127 kelvin. Surely something was wrong, but it wasn’t, and a Nobel Prize was awarded in 1987. The speed at which the Nobel Prize was awarded was also the subject of some disbelief. Why was the result accepted so quickly? Reproducibility. The results were made public and quickly and widely reproduced. It was not just the cronies of the discoverers who could reproduce them.

The lesson here is to distrust every exciting new science result: canals on Mars, n-rays, high-temperature superconductors, faster-than-light propagation of neutrinos (which coincidentally just released some new interesting information), the Higgs boson, and so on. Wait until they have been independently confirmed, and then wait until they have been independently confirmed again. There is a pattern with these results that turn out to be wrong. In almost every example given above, the first attempts at reproducing the wrong results succeeded. People are in a hurry to get on the bandwagon; they want to be first to reproduce the results. But after the initial excitement fades, sober second thought kicks in. People have time to think about how to do the experiment better, time to be more careful. In the end, it is this third generation of experiments that finally tells the tale. Yeah, I know I should not have been sucked in by pentaquarks, but they agreed with my preconceived ideas and the second-generation experiments did see them in almost the right place. Damn. Oh well, I did get a widely cited paper out of it.

Once burnt, twice shy. So scientists become very leery of the next big thing. Here again, science is different from the law. In the law, there is a presumption of innocence until proven guilty. In other words, the prosecution must prove guilt; the suspect does not need to prove innocence. In science, the burden of proof is the other way around. The suspect—in this case, the exciting new result—must prove innocence. It is the duty of the proponents of the exciting new result to demonstrate its validity or usefulness. It is the duty of their peers to look for holes because, as the above examples indicate, the holes are frequently there.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.

### Error Control in Science

Friday, February 17th, 2012

Scientists are subject to two contradictory criticisms. The first is that they are too confident of their results, to the point of arrogance. The second is that they are too obsessed with error control–all this nonsense about double-blind trials, sterilized test tubes, lab coats, and the like. It evidently has not occurred to the critics that the reason scientists are confident of their results is that they have obsessed over error control. Or, conversely, they obsess over error control so they can be confident of their results.

Now, most people outside science do not realize that a scientist’s day job is error control. There is this conception of scientists having brilliant ideas, going into immaculate labs where they effortlessly confirm their results to the chagrin of their competitors. That is, of course, when they are not plotting world domination like Pinky and the Brain. But scientists neither spend their time plotting world domination (be with you in a minute, Brain) nor doing effortless experiments. Rather, they are thinking about what might be wrong: how do I control that error? As for theorists, they must be part of a wicked and adulterous generation[1] because they are always seeking after a sign–a minus sign, that is.

So what do scientists do to control errors? There are very few arrows in their quiver. Really only three: care in doing the experiment or calculation, care in doing the experiment or calculation, and care in doing the experiment or calculation. Well, actually, there are two others: peer review and independent repetition. Let’s take the first three first: care, care, and care. As previously noted, scientists are frequently criticized here. Why do double-blind studies when we have Aunt Martha’s word for it that Fyffe’s Patented Moose Juice cured her lumbago? Well, actually, testimonials are notoriously unreliable. A book[2] I have gives examples from the early 1900s of testimonials for consumption cures, along with the dates on which the people giving the testimonials died of consumption. The death was frequently quite close to the date of the testimonial. So no, I will not trust Aunt Martha’s testimonial[3]. To quote Robert L. Park: The most important discovery of modern medicine is not vaccines or antibiotics, it is the randomized double-blind test, by means of which we know what works and what doesn’t. This has now carried over into subatomic physics, where blind analyses are common. By blind, I mean that the people doing the analysis cannot tell how close they are to the expected answer (the theoretically predicted answer or the result of a previous experiment) until most of the analysis has been completed. Otherwise, as one of my experimental colleagues said: data points are like sheep, they travel in flocks. Even small biases can influence the results. Blind analysis is just one example of the extremes scientists go to in order to ensure that their results are reliable. All this rigmarole that scientists go through is one of the reasons life expectancy increased by about 30 years between 1900 and 2000, perhaps the major reason. The lack of this care is the reason I distrust alternative medicine.

We now move on to the other two aspects of error control: peer review and independent replication of results. Both of these depend on the results being made public. Since these are crucial to error control, results that have not been made available for scrutiny should be treated with suspicion. Peer review has been discussed in the previous post and is just the idea that new results should be run past the people who are most knowledgeable so they can check for errors.

Replication is, in the end, the most important part of error control. Scientists are human: they make mistakes, they are deluded, and they cheat. It is only through attempted replication that errors, delusions, and outright fraud can be caught. And it is very good at catching them. In the next post, I will go into examples, but it is good practice not to trust any exciting new result until it has been independently confirmed. However, replication and reproducibility are not simple concepts. I go outdoors and it is nice and sunny; I go out twelve hours later and it is dark and cold. The initial observation is not reproduced. I look up, I see stars. An hour later I go out and the stars are in different places. And the planets, over time, wander hither, thither, and yon. In a very real sense the observations are not reproduced. It is only within the context of a model or paradigm that we can understand what reproducible means. The models, either Ptolemaic or Newtonian, tell us where to look for the planets, and we can reproducibly check that they are where the models say they should be at any given time. Reproducibility is always checking against a model prediction.

Replication is also not just doing the same things over and over again. Then you would make the same mistakes and get the same results over and over again. You do things differently, guided by the model being tested, to see if the effect observed is an artifact of the experimental procedures or real. Is there really a net energy gain, or have you just measured a hot spot in your flask? The presence of the hot spot can be reproduced, but put in a stirrer to test the idea of energy gain and, damn, the effect went away. Another beautiful model was slain by an ugly observation. Oh well, happens all the time.

So science advances, we keep testing our previous results in new and inventive ways. The wrong results fall by the wayside and are forgotten. The correct ones pile up and we progress. To err is human, to control errors–science.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.

[1] Matthew 12:39, 16:4

[2] Pseudoscience and the Paranormal, Terence Hines, Prometheus Books, Buffalo (1988).

[3] My grandfather died of consumption.

### Peer Review: A Cornerstone of Science

Friday, February 10th, 2012

Ah yes, peer review; one of the more misunderstood parts of the scientific method. Peer review is frequently treated as an incantation to separate the wheat from the chaff. What has been peer reviewed is good; what hasn’t is bad. But life is never so simple. In the late 1960s, Joseph Weber (1919 – 2000) published two Physical Review Letters where he claimed to have detected gravitational waves. Although there are a few holdouts who believe he did, the general consensus is that he did not, since his results have not been reproduced. Rather, it is generally believed that his results were an experimental artifact. His results were peer reviewed and accepted at a “prestigious” journal, but that does not guarantee that they are correct. Even the Nobel committee occasionally makes mistakes, most notably giving the award to the discoverer of lobotomies.

Conversely, consider the case of Alfred Wegener (1880 – 1930). In 1912 he proposed the idea of continental drift. To say the least, it was not enthusiastically received. It did not help that Wegener was a meteorologist, not a geologist. The theory was largely rejected by his peers in geology. For example, the University of Chicago geologist Rollin T. Chamberlin said: “If we are to believe in Wegener’s hypothesis we must forget everything which has been learned in the past 70 years and start all over again.” In 1926, the American Association of Petroleum Geologists (AAPG) held a special symposium on the hypothesis of continental drift and rejected it. After that, the hypothesis was strictly on the fringe until the late 1950s and early ’60s, when it finally became mainstream.

Thus, we see that peer review cannot be relied on to give the definitive final answer. So what use is peer review? The problem is that, as pointed out in previous posts, in science there is no one person who can serve as the ultimate authority; observation plays that role. In school, the teacher knows more than the student and can be considered the final authority. In university, the professor plays that role, sometimes with gusto. But when it comes to research, frequently it is the researchers themselves who are the world experts. So how can research be judged, and how do we make decisions about that research? And decisions do have to be made. We cannot publish everything—the useful results would get lost in the noise. We must maintain the collective wisdom that has been laboriously developed. Similarly, decisions have to be made on who gets research grants. Do we use a random number generator? Ok, no snide remarks, I admit that it does occasionally look like we do. As there is no single human to serve as the final authority, we turn to the people who know the most about the topic, namely the person’s peers. If we want a decision related to sheep farming, we consult sheep farmers; if about nuclear physics, we consult nuclear physicists. Peer review is simply the idea that when we have to make a decision, we consult those people most likely to be able to make an informed one. Is it perfect? No. Is there a better process? Perhaps, but no one seems to know what it is.

Peer review is also used as a bulwark against bull…, oops, material, that is of questionable validity. The expression “that has not been peer reviewed” is used as a euphemism for “that is complete and utter crap and I am not going to waste my time dealing with it.” In this case it tends to come across as closed-minded: Not peer reviewed? It’s nonsense! Needless to say, cranks take great exception and tend to regard peer review as a new priesthood that stifles innovation. And indeed, as noted above, sometimes peer review does get it wrong. There is always this tension between accepting nonsense and rejecting the next big thing. As the case of continental drift illustrates, it is sometimes only in retrospect, when we have more data, that we can tell what the correct answer is. However, it is better to reject or delay the acceptance of something that has a good chance of being wrong than to have the literature overrun with wrong results (think lobotomies). However, contrary to popular conception, Copernicus and Wegener are the exception, not the rule. That is why Copernicus is still used as the example of the suppression of ideas half a millennium later—there are just not that many good examples. And I might add that both Copernicus and Wegener were initially rejected for good reasons and were accepted once sufficient supporting data came to light. Most people whom the peer review process deems cranks are indeed cranks. Never heard of Immanuel Velikovsky (1895 – 1979)? Well, there is a reason. The few who were right are remembered, but the multitudes who were wrong are, like Velikovsky, forgotten.

Peer review is one of the cornerstones of science and is an essential part of its error control process. At every level in science we use peers to check for errors. Within well-run collaborations, results are reviewed by peers within the collaboration before being submitted for publication. I will get my peers to read my papers before submission. Even the editing of these posts before they are put online can be considered peer review. Then there is the formal peer review a paper receives when it is submitted to a journal. In many ways this is the least important peer review, because it is after a paper is published that it receives its most vigorous peer review. I can be quite sure there is no fundamental flaw in special relativity, not because Einstein was a genius, not because it was published in a prestigious journal, but because after it was published many very clever people tried very hard to find flaws in it and failed. Any widely read scientific paper will be subject to this thorough scrutiny by the author’s peers. That is the reason we can have confidence in the results of science and why secrecy is the enemy of scientific progress. Given enough eyeballs, all bugs are shallow[1].

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.