Byron Jennings | TRIUMF | Canada


Creativity: as Important in the Sciences as in the Arts

Friday, March 2nd, 2012

In these essays, I have discussed various aspects of the scientific method based on model building and testing against experiment. One aspect I have avoided until now is how the models are constructed. This is a logically distinct process from how models are tested. We have seen that models are tested by requiring them to make predictions that can be checked against observation. We can then rank models on parsimony and their ability to make successful predictions. But how are models constructed in the first place? Francis Bacon (1561 – 1626) and Isaac Newton (1642 – 1727) would have told you that models were deduced from observation by a method called induction. In this approach, the construction and testing of models is seen as one process, not two distinct processes. But induction is not valid, and the making of models is logically independent of their testing. This is a key point Karl Popper (1902 – 1994) made when he introduced the idea of falsification.

Let us turn to a master, Albert Einstein (1879 – 1955), and see what he says:

The supreme task of the physicist is . . . the search for those most general, elementary laws from which the world picture is to be obtained through pure deduction. No logical path leads to these elementary laws; it is instead just the intuition that rests on an empathic understanding of experience. In this state of methodological uncertainty one can think that arbitrarily many, in themselves equally justified systems of theoretical principles were possible; and this opinion is, in principle, certainly correct. But the development of physics has shown that of all the conceivable theoretical constructions a single one has, at any given time, proved itself unconditionally superior to all others. No one who has really gone deeply into the subject will deny that, in practice, the world of perceptions determines the theoretical system unambiguously, even though no logical path leads from the perceptions to the basic principles of the theory.

Very curious: no logical path leads to these elementary laws. Paul Feyerabend (1924 – 1994) would have agreed. He wrote a book entitled Against Method: Outline of an Anarchistic Theory of Knowledge, in which he argued that there is no scientific method, but that knowledge advances through chaos. However, what Bacon and Newton missed, Feyerabend glimpsed through a haze, and Einstein understood, is that model construction is a creative, not algorithmic, process: the intuition that rests on an empathic understanding of experience. Science does not function by deducing models from observations. Rather, we construct models and compare their predictions with what is observed. A falling apple inspired Newton, rising water in a bath inspired Archimedes, a dream inspired Kekulé (the structure of benzene). Or at least, that is how the stories go. For model construction, Feyerabend is correct; anything goes—dreams, divine inspiration, pure luck, and especially hard work. Creativity by its very nature is chaotic and erratic.

We start with observations and ask, “what is the simplest model that could account for these observations?” Once a model is constructed, it is tested by the much more algorithmic or deterministic process of making predictions and checking them against observation. Now, one of the necessary constraints in building scientific models is simplicity. Without simplicity, we can get nowhere: infinitely many models describe any finite set of measurements. That is why we cannot ask questions such as, “what model do these observations imply?” There are infinitely many such models.

Yet, simplicity comes at a price. For the sake of argument, take simplicity to be defined in terms of Kolmogorov complexity. This is a measure of the computational resources needed to specify the model. There is a theorem that says the Kolmogorov complexity cannot be determined algorithmically. If we accept the above identification of simplicity, it then follows that scientific models cannot be constructed algorithmically from observations. So much for Francis Bacon, Newton, and induction. The identification is probably not exact, but nevertheless sufficiently close to reality to be indicative. Model building cannot be algorithmic, but rather, is creative.
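For readers who want the formal statement behind this argument, here is one standard way to write it down. This formalization is my addition, not part of the original post; U, p, and f are the usual symbols for a universal machine, a program, and a candidate algorithm.

```latex
% Kolmogorov complexity of a string x, relative to a fixed universal machine U:
% the length of the shortest program p that makes U output x.
K_U(x) = \min \{\, |p| : U(p) = x \,\}

% Incomputability theorem alluded to above: there is no total computable function f
% with f(x) = K_U(x) for all x, so K_U cannot be determined algorithmically.
```

Read this way, the argument is: if “simplest model” means “lowest K”, then no algorithm can pick the simplest model out of the data.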

The creative aspect of science is obscured by two things: the analytic aspect and the accumulative aspect. The analytic aspect of testing models tends to obscure the creativity in constructing them. We are so blinded by the dazzling mathematical virtuosity of Newton that we fail to see how creative the development of his three laws was. Aristotle got it all wrong, but Newton got it right—and he did it through creativity, not just math.

The accumulative nature of science gives a sense of inevitability that makes the creativity less obvious. The sense of creativity can also feel lost in the noise of a thousand lesser persons. To quote Bertrand Russell:

In science, men have discovered an activity of the very highest value in which they are no longer, as in art, dependent for progress upon the appearance of continually greater genius, for in science, the successors stand upon the shoulders of their predecessors; where one man of supreme genius has invented a method, a thousand lesser men can apply it. No transcendent ability is required in order to make useful discoveries in science; the edifice of science needs its masons, bricklayers, and common labourers as well as its foremen, master builders, and architects. In art, nothing worth doing can be done without genius; in science even a very moderate capacity can contribute to a supreme achievement.

While we common labourers may not be creative geniuses, the foremen, master builders, and architects are. When it comes to creativity, Isaac Newton, Charles Darwin, and Albert Einstein do not take a back seat to writers like William Shakespeare, Charles Dickens, or James Joyce, nor to painters like Michelangelo, Vincent van Gogh, or Pablo Picasso.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.


Most Exciting New Results are Wrong!

Friday, February 24th, 2012

Giovanni Schiaparelli (1835 – 1910) is mainly remembered for his discovery of “canali” on Mars. What a fate, to be remembered only for discovering something that does not exist. I suppose it could be worse; he could be remembered only as the uncle of the fashion designer Elsa Schiaparelli (1890 – 1973). But he was not alone in seeing canals on Mars. The first recorded instance of the use of the word “canali” was by Angelo Secchi (1818 – 1878) in 1858. The canals were also seen by William Pickering (1858 – 1938) and most famously by Percival Lowell (1855 – 1916). That’s right, the very Lowell after whom the Lowell Observatory on Mars Hill Road in Arizona is named. Unfortunately, after about 1910, better telescopes failed to find them and they faded away. Either that or Marvin the Martian filled them in while fleeing from Bugs Bunny. However, they did provide part of the backdrop for H.G. Wells’s The War of the Worlds. But it is interesting that the canals were observed by more than one person before being shown to be optical illusions.

Another famous illusion (delusion?) was the n-ray, discovered by Professor Blondlot (1844 – 1930) in 1903. These remarkable rays, named after his birthplace, Nancy, France, could be refracted by aluminum prisms to show spectral lines. One of their more amazing properties was that they were only observed in France, not England or Germany. About 120 scientists in 300 papers claimed to have observed them (note the infallibility of peer review). But then Prof. Robert Wood (1868 – 1955), at the instigation of the magazine Nature, visited the laboratory. By judiciously and surreptitiously removing and reinserting the aluminum prism, he was able to show that the effect was physiological, not physical. And that was the end of n-rays and also of poor Prof. Blondlot’s reputation.

Probably the most infamous example of nonsense masquerading as science is Homo piltdownensis, otherwise known as the Piltdown man. This was the English answer to the Neanderthal man and the Cro-Magnon man discovered on the continent. A sculptured elephant bone, found nearby, was even jokingly referred to as a cricket bat. Seems appropriate. While there was some scepticism of the find, the powers that be declared it to be a breakthrough, and it was only forty years later that someone had the brilliant idea that it might be a fake. Once the signs of faking were looked for, they were easily found. What we see here is an unholy combination of fraud, delusion, and people latching onto something that confirmed their preconceived ideas.

These examples are not unique. Most exciting new results are wrong[1]: polywater, the 17 keV neutrino, cold fusion, superheavy element 118, pentaquarks, and the science on almost any evening newscast. Cancer has been cured so often it is a surprise that any cancer cells are left. So, why so many exciting wrong results? First, to be exciting means, almost by definition, that the results have a low prior probability of being correct. The discovery of a slug eating my garden plants is not exciting; annoying, but not exciting. It is what we expect to happen. But a triceratops in my garden, that would be exciting and almost certainly specious (pink elephants are another matter). It is the unexpected result that is exciting and gets hyped. One could even say overhyped. There is pressure to get exciting results out quickly and widely distributed so that you get the credit; a pressure not to check as carefully as one should; a pressure to ensure priority by not checking with one’s peers.

Couple the low prior probability and the desire for fame with the ubiquity of error and you have the backdrop to most exciting new results being wrong. Not all exciting new results are wrong, of course. For example, the discovery of high-temperature superconductors (high = liquid-nitrogen temperatures). This had crackpot written all over it. The highest temperature recorded earlier was about 30 K. But with high-temperature superconductors, that jumped to 90 K in 1986 and then shortly afterwards to 127 K. Surely something was wrong, but it wasn’t, and a Nobel Prize was awarded in 1987. The speed at which the Nobel Prize was awarded was also the subject of some disbelief. Why was the result accepted so quickly? Reproducibility. The results were made public and quickly and widely reproduced. It was not just the cronies of the discoverers who could reproduce them.
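To put rough numbers on the “low prior plus ubiquitous error” argument, here is a minimal Bayesian sketch. The figures are hypothetical and chosen purely for illustration; the calculation is my addition, not anything from the post.

```python
# Hypothetical numbers for illustration only.
prior = 0.01               # prior probability that an exciting claim is true
p_confirm_if_true = 0.8    # chance an experiment "sees" a real effect
p_confirm_if_false = 0.1   # chance of a spurious positive (error, bias, fraud)

# Bayes' theorem: P(true | one positive result)
p_confirmed = prior * p_confirm_if_true + (1 - prior) * p_confirm_if_false
p_true_given_confirmed = prior * p_confirm_if_true / p_confirmed
print(f"P(claim true | one positive result) = {p_true_given_confirmed:.2f}")
# ~0.07: a single exciting positive result is still most likely wrong.

# An independent replication updates the posterior again.
prior2 = p_true_given_confirmed
p_confirmed2 = prior2 * p_confirm_if_true + (1 - prior2) * p_confirm_if_false
p_true_after_replication = prior2 * p_confirm_if_true / p_confirmed2
print(f"P(claim true | two independent positives) = {p_true_after_replication:.2f}")
# ~0.39: still not certain, which is why further generations of experiments matter.
```

Even with a fairly reliable experiment, a claim that started out unlikely remains more likely wrong than right after one positive result, which is why the independent confirmations discussed next matter so much.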

The lesson here is to distrust every exciting new science result: canals on Mars, n-rays, high-temperature superconductors, faster-than-light propagation of neutrinos (which coincidentally just released some new interesting information), the Higgs boson and so on. Wait until they have been independently confirmed and then wait until they have been independently confirmed again. There is a pattern with these results that turn out to be wrong. In almost every example given above, the first attempts at reproducing the wrong results succeeded. People are in a hurry to get on the bandwagon; they want to be first to reproduce the results. But after the initial excitement fades, sober second thought kicks in. People have time to think about how to do the experiment better, time to be more careful. In the end, it is this third generation of experiments that finally tells the tale. Yah, I know I should not have been sucked in by pentaquarks, but they agreed with my preconceived ideas and the second-generation experiments did see them in almost the right place. Damn. Oh well, I did get a widely cited paper out of it.

Once burnt, twice shy. So scientists become very leery of the next big thing. Here again, science is different from the law. In the law, there is a presumption of innocence until proven guilty. In other words, the prosecution must prove guilt; the suspect does not need to prove innocence. In science, the burden of proof is the other way around. The suspect—in this case, the exciting new result—must prove innocence. It is the duty of the proponents of the exciting new result to demonstrate its validity or usefulness. It is the duty of their peers to look for holes, because as the above examples indicate, the holes are frequently there.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.


[1] For more information on the examples, Google is your friend.


Error Control in Science

Friday, February 17th, 2012

Scientists are subject to two contradictory criticisms. The first is that they are too confident of their results, to the point of arrogance. The second is that they are too obsessed with error control–all this nonsense about double-blind trials, sterilized test tubes, lab coats and the like. It evidently has not occurred to the critics that the reason scientists are confident of their results is that they have obsessed over error control. Or, conversely, they obsess over error control so that they can be confident of their results.

Now, most people outside science do not realize that a scientist’s day job is error control. There is this conception of scientists having brilliant ideas, going into immaculate labs where they effortlessly confirm their results to the chagrin of their competitors. That is, of course, when they are not plotting world domination like Pinky and the Brain. But scientists neither spend their time plotting world domination (be with you in a minute, Brain) nor doing effortless experiments. Rather, they are thinking about what might be wrong: how do I control that error? As for theorists, they must be a part of a wicked and adulterous generation[1] because they are always seeking after a sign–a minus sign, that is.

So what do scientists do to control errors? There are very few arrows in their quiver. Really only three: care in doing the experiment or calculation, care in doing the experiment or calculation, and care in doing the experiment or calculation. Well, actually there are two others: peer review and independent repetition. Let’s take the first three first: care, care, and care. As previously noted, scientists are frequently criticized here. Why do double-blind studies when we have Aunt Martha’s word for it that Fyffe’s Patented Moose Juice cured her lumbago? Well, actually, testimonials are notoriously unreliable. A book[2] I have gives examples from the early 1900s of testimonials for cures for consumption, along with the dates on which the people giving them died of consumption. The death was frequently quite close to the date of the testimonial. So no, I will not trust Aunt Martha’s testimonial[3]. To quote Robert L. Park: The most important discovery of modern medicine is not vaccines or antibiotics, it is the randomized double-blind test, by means of which we know what works and what doesn’t. This has now carried over into subatomic physics, where blind analyses are common. By blind, I mean that the people doing the analysis cannot tell how close they are to the expected answer (the theoretically predicted answer or the results of a previous experiment) until most of the analysis has been completed. Otherwise, as one of my experimental colleagues said: data points are like sheep, they travel in flocks. Even small biases can influence the results. Blind analysis is just one example of the extremes scientists go to in order to ensure that their results are reliable. All this rigmarole that scientists go through is one of the reasons life expectancy increased by about 30 years between 1900 and 2000, perhaps the major reason. The lack of this care is the reason I distrust alternative medicine.

We now move on to the other two aspects of error control: peer review and independent replication of results. Both of these depend on the results being made public. Since these are crucial to error control, results that have not been made available for scrutiny should be treated with suspicion. Peer review has been discussed in the previous post and is just the idea that new results should be run past the people who are most knowledgeable so they can check for errors.

Replication is, in the end, the most important part of error control. Scientists are human: they make mistakes, they are deluded, and they cheat. It is only through attempted replication that errors, delusions, and outright fraud can be caught. And it is very good at catching them. In the next post, I will go into the examples, but it is a good practice not to trust any exciting new result until it has been independently confirmed. However, replication and reproducibility are not simple concepts. I go outdoors and it is nice and sunny; I go out twelve hours later and it is dark and cold. The initial observation is not reproduced. I look up, I see stars. An hour later I go out and the stars are in different places. And the planets, over time, they wander hither, thither and yon. In a very real sense the observations are not reproduced. It is only within the context of a model or paradigm that we can understand what reproducible means. The models, either Ptolemaic or Newtonian, tell us where to look for the planets, and we can reproducibly check that they are where the models say they should be at any given time. Reproducibility is always checking against a model prediction.

Replication is also not just doing the same things over and over again. Then you would make the same mistakes and get the same results over and over again. You do things differently, guided by the model being tested, to see if the effect observed is an artifact of the experimental procedures or real. Is there really a net energy gain, or have you just measured a hot spot in your flask? The presence of the hot spot can be reproduced, but put in a stirrer to test the idea of energy gain and, damn, the effect went away. Another beautiful model was slain by an ugly observation. Oh, well, happens all the time.

So science advances, we keep testing our previous results in new and inventive ways. The wrong results fall by the wayside and are forgotten. The correct ones pile up and we progress. To err is human, to control errors–science.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.


[1] Matthew 12:39,16:4

[2] Pseudoscience and the Paranormal, Terence Hines, Prometheus Books, Buffalo (1988).

[3] My grandfather died of consumption.


Peer Review: A Cornerstone of Science

Friday, February 10th, 2012

Ah yes, peer review; one of the more misunderstood parts of the scientific method. Peer review is frequently treated as an incantation to separate the wheat from the chaff. What has been peer reviewed is good; what hasn’t is bad. But life is never so simple. In the late 1960s, Joseph Weber (1919 – 2000) published two Physical Review Letters where he claimed to have detected gravitational waves. Although there are a few holdouts who believe he did, the general consensus is that he did not, since his results have not been reproduced. Rather, it is generally believed that his results were an experimental artifact. His results were peer reviewed and accepted at a “prestigious” journal, but that does not guarantee that they are correct. Even the Nobel committee occasionally makes mistakes, most notably giving the award to the discoverer of lobotomies.

Conversely, consider the case of Alfred Wegener (1880 – 1930). In 1912 he proposed the idea of continental drift. To say the least, it was not enthusiastically received. It did not help that Wegener was a meteorologist, not a geologist. The theory was largely rejected by his peers in geology. For example, the University of Chicago geologist Rollin T. Chamberlin said, If we are to believe in Wegener’s hypothesis we must forget everything which has been learned in the past 70 years and start all over again. In 1926, the American Association of Petroleum Geologists (AAPG) held a special symposium on the hypothesis of continental drift and rejected it. After that, the hypothesis was strictly on the fringe until the late 1950s and early ‘60s when it finally became mainstream.

Thus, we see that peer review cannot definitively be relied on to give the final answer. So what use is peer review? The problem is that, as pointed out in previous posts, in science there is no one person who can serve as the ultimate authority; rather, observation is the authority. At school, the teacher knows more than the student and can be considered the final authority. In university, the professor plays that role, sometimes with gusto. But when it comes to research, frequently it is the researcher him/herself who is the world expert. So how can research be judged, and how do we make decisions about that research? And decisions do have to be made. We cannot publish everything—the useful results would get lost in the noise. We must maintain the collective wisdom that has been laboriously developed. Similarly, decisions have to be made on who gets research grants. Do we use a random number generator? Ok, no snide remarks, I admit that it does occasionally look like we do. As there is no single human to serve as the final authority, we turn to the people who know the most about the topic, namely the peers of the person. If we want a decision related to sheep farming, we consult sheep farmers; if about nuclear physics, we consult nuclear physicists. Peer review is simply the idea that when we have to make a decision, we consult those people most likely to be able to make an informed decision. Is it perfect? No. Is there a better process? Perhaps, but no one seems to know what it is.

Peer review is also used as a bulwark against bull…, oops, material, that is of questionable validity. The expression “that has not been peer reviewed” is used as a euphemism for “that is complete and utter crap and I am not going to waste my time dealing with it.” In this case it tends to come across as closed-minded: Not peer reviewed? It’s nonsense! Needless to say, cranks take great exception and tend to regard peer review as a new priesthood that stifles innovation. And indeed, as noted above, sometimes peer review does get it wrong. There is always this tension between accepting nonsense and rejecting the next big thing. As the case of continental drift illustrates, it is sometimes only in retrospect, when we have more data, that we can tell what the correct answer is. However, it is better to reject or delay the acceptance of something that has a good chance of being wrong than to have the literature overrun with wrong results (think lobotomies). However, contrary to popular conception, Copernicus and Wegener are the exception, not the rule. That is why Copernicus is still used as the example of the suppression of ideas half a millennium later—there are just not that many good examples. And I might add that both Copernicus and Wegener were initially rejected for good reasons and were accepted once sufficient supporting data came to light. Most people whom the peer review process deems to be cranks are indeed cranks. Never heard of Immanuel Velikovsky (1895 – 1979)? Well, there is a reason. The few who were right are remembered, but the multitudes that were wrong are, like Velikovsky, forgotten.

Peer review is one of the cornerstones of science and is an essential part of its error control process. At every level in science we use peers to check for errors. Within well-run collaborations, results are reviewed by peers within the collaboration before being submitted for publication. I will get my peers to read my papers before submission. Even the editing of these posts before they are put online can be considered peer review. Then there is the formal peer review a paper receives when it is submitted to a journal. In many ways this is the least important peer review, because it is after a paper is published that it receives its most vigorous peer review. I can be quite sure there is no fundamental flaw in special relativity, not because Einstein was a genius, not because it was published in a prestigious journal, but because after it was published many very clever people tried very hard to find flaws in it and failed. Any widely read scientific paper will be subject to this thorough scrutiny by the author’s peers. That is the reason we can have confidence in the results of science and why secrecy is the enemy of scientific progress. Given enough eyeballs, all bugs are shallow[1].

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.

 


The Role of Authority in Science and in Law

Friday, February 3rd, 2012

In the thirteenth century, Western Europe rediscovered the teachings of ancient Greece. Two friars played a lead role in this: the Dominican Saint Thomas Aquinas (1225 – 1274) and the Franciscan Roger Bacon (1214/1220 –1292).  Aquinas combined the teaching of Aristotle with Christianity. His teachings became the orthodoxy in both Christianity and natural philosophy until the scientific revolution in the seventeenth century. Aquinas took Aristotle as an authority and, in turn, was taken as an authority by those who followed him. To some extent this has continued down to the present day, at least in the Catholic Church. The scientific revolution was, to a large extent, the overturning of Aristotelian philosophy as repackaged by Aquinas.

Bacon took a different tack and extracted something different from the study of Aristotle. This something different was an early version of the scientific method. He applied mathematics to describe observations and advocated using observation to test models. Bacon described a repeating cycle of observation, hypothesis, and experimentation, and the need for independent verification. Bacon was largely ignored and, unlike Aquinas, was not declared a saint. Galileo Galilei (1564 – 1642), if not directly influenced by Bacon, was in many ways following his tradition, both in his use of mathematics and in stressing the importance of observations. The difference between Aquinas and Bacon is the contrast between appealing to authority and finding out for oneself. In this contest, the appeal to authority lost rather decisively, but it was a long, tough fight. People generally prefer a given answer, even if it is wrong, to the tough process of extracting the correct answer.

In spite of all that, appeal to authority is frequently necessary. The legal system in most democracies, for example, is based on the idea of appeal to authority. Parliament may make the laws, but it is the courts that decide what they mean. Frequently, the courts even have the authority to override laws based on the constitution. This is true in many countries but most famously in the United States of America. In these countries, what the Supreme Court says is the law. What a law actually means is commonly a matter of interpretation, as evidenced by split decisions where one judge holds one opinion and another judge the opposite. Perhaps the interpretations are even arbitrary, as they sometimes change over time despite the authority given to precedent. But a decision is required and there are no objective criteria, so the majority rules.

Now, it is worth commenting that the laws of nature and the laws of man are completely different beasts, and it is unfortunate that they are given the same name. The so-called laws of nature are descriptive. They describe regularities that have been observed in nature. They have no prescriptive value. In contrast, the laws of man are prescriptive, not descriptive. Certainly, the laws against smoking marijuana are not descriptive in British Columbia, and neither were the laws against drinking during US prohibition. The laws describe what the government thinks should happen, with prescribed punishments for those who disobey. However, there is no penalty for breaking the law of gravity because, as far as we know, it can’t be done. If someone actually did it, it would cease to be a law and there would be a Nobel Prize, not a penalty. Like the laws of man, the laws of God—for example, the Ten Commandments—are prescriptive, not descriptive, with penalties given for breaking them. You can break the laws of man and the laws of God, but not the laws of physics.

In science, things are different than in the courts of law. In the latter, we are concerned with the meaning of a law that some group of people has written. This, by its very nature, has a subjective component. In science, we are trying to discover regularities in how the universe operates. In this, we have two objective criteria: parsimony and the ability to make correct predictions for observations. As pointed out in the previous post, idolizing a person is a mistake, even if that person is Isaac Newton. Appeal to observation trumps appeal to a human authority, but in the short term, even in science, appeal to a human authority is often necessary. Life is too short and the amount to know too large to discover it all for oneself. Thus, one relies on authorities. I consult the literature rather than trying to do experiments myself. We consult other people for expertise that we do not have ourselves. We rely on the collective wisdom of the community as reflected in the published literature. When we require decisions, we must rely on the proximate authority of peers in a process called peer review. This process is relied on to maintain the collective wisdom and will be discussed in more detail in the next post. In the meantime, we conclude this post by paraphrasing William Lyon Mackenzie King[1] (1874 – 1950): Appeal to authority if necessary, but not necessarily appeal to authority.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.


[1] The longest serving Canadian Prime Minister.


The Role of the Individual in Science and Religion

Friday, January 27th, 2012

In 1915, Lady Hope (1842 – 1922)[1] published a claim that Charles Darwin (1809 – 1882) had, on his deathbed, recanted his views on evolution and God. This story, published thirty-three years after Darwin’s death, was strongly denied by his family but has made the rounds of various creationist publications and web sites to this day. Now my question is: why would anyone care? It may be of interest to historians, but nothing Darwin wrote, said, or did has any consequences for evolution today. The theory itself and the evidence supporting it have moved far beyond Darwin. But this story does serve to highlight the different role of individuals in science as compared to religion or even philosophy.

I have always considered it strange that philosophy places such importance on reading the works of long dead people—Aristotle, Descartes, etc. In science, Newton’s ideas trumped those of both Aristotle and Descartes, yet very few scientists today read Newton’s works. His ideas have been taken, clarified, reworked, and simplified. The same thing applies to the scientific writings of other great and long dead scientists. Nothing is gained by going to the older sources. Science advances and the older writings lose their pedagogical value. This is because in science, the ultimate authority is not a person, but observation.

A given person may play an important role, but there is always someone else close on his heels. Natural selection was first suggested, not by Darwin, but by Patrick Matthew (1790 – 1874) in 1831, and perhaps by others even earlier. Alfred Russel Wallace’s (1823 – 1913) and Darwin’s works were presented together to the Linnean Society in July 1858[2]. And so it goes: Henri Poincaré (1854 – 1912) and Hendrik Lorentz (1853 – 1928) were nipping at Einstein’s heels when he published his work on special relativity. Someone gets priority, but it is observation that ultimately should be given the credit for new models.

When the ultimate role of observation is forgotten, science stagnates. Take, for example, British physics after Isaac Newton (1642 – 1727). It fell behind the progress on the continent because the British physicists were too enamoured of Newton. But the most egregious example is Aristotle (384 BC – 322 BC). The adoration of Aristotle delayed the development of knowledge for close to two millennia. Galileo and his critic, Fortunio Liceti (1577 – 1657), disputed which of them was the better Aristotelian, as if that were the crucial issue. Even today, post-docs all too frequently worry about what the supervisor means rather than thinking for themselves: But he is a great man, so his remark must be significant[3]. Actually, he puts his pants on one leg at a time like anyone else.

Then there is the related problem of rejecting results due to their origins or the associated ideology. The most notorious example is the Nazi rejection of non-Aryan science; for example, relativity because Einstein was a Jew. One sees a similar thing in politics, where ideas are rejected as being socialist, fascist, atheist, Islamic, Christian, or un-American, thus avoiding the real issue of the validity of the idea: Darwinism[4] is atheistic, hence it must be condemned. Yeah? And your mother wears army boots.

In science, people are considered great because of the greatness of the models they develop or the experimental results they obtain. In religion, it is the other way around. Religions are considered great based on the greatness of their founder. Jesus Christ is central to Christianity: and if Christ has not been raised, then our preaching is vain, your faith also is vain (1 Corinthians 15:14). Islam is based on the idea: There is no God but Allah and Mohammad is his prophet. Many other major religions (or philosophies of life) are founded on one person: Moses (Judaism), Buddha (Buddhism), Confucius (Confucianism), Lao Tzu (Taoism), Guru Nanak (Sikhism), Zoroaster (Zoroastrianism), Bahá’u’lláh (Bahá’í Faith) and Joseph Smith (Mormonism). Even at an operational level, certain people have an elevated position and are considered authorities: for example, the Pope in the Catholic Church, or the Grand Ayatollahs in Shi’ite Islam. Because of this basic difference between science and religion, an attack on the founder of a religion is an attack on its core, while an attack on a scientist is an irrelevancy. If Joseph Smith (1805 – 1844) was a fraud, then Mormonism collapses. Yet nothing in evolution depends on Darwin, nor anything in classical mechanics on Newton. But we can understand the upset of the Islamic community when Mohammad is denigrated: it is an attack on their whole religious framework, which depends on Mohammad’s unique role.

The difference in the role of the individual in science and religion is due to their different epistemologies. In science, everything is public—both the observations and the models built on them. In contradistinction, the inspiration or revelation of religion is inherently private, a point noted by Saint Thomas Aquinas (1225 – 1274). You too can check Einstein’s calculations or Eddington’s experiment; you do not have to rely on either Einstein or Eddington. Now it may take years of work and a lot of money, but in principle it can be done. But you cannot similarly check the claims of Jesus’s divinity, even with years of study, but must take it on faith or as the result of private revelation.

Unlike in science, in religion, old is better than new. If a physical manuscript of St. Paul’s writing dating from the first century were discovered, it would have a profound effect on Christianity. But a whole suitcase of newly discovered works in Newton’s or Darwin’s handwriting would have no effect on the progress of science. This is because religion is based on following the teachings of the inspired leader, while science is based on observation.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.


[1] Otherwise known as Elizabeth Reid, née Cotton.

[2] The president of the Linnean Society remarked in May 1859 that the year had not been marked by any revolutionary discoveries.

[3] I have heard that very comment.

[4] Note also the attempt to associate evolution with one person.


The Interpretation of Quantum Mechanics

Friday, January 20th, 2012

When I first started dabbling in the dark side and told people I was working on the philosophy of science, the most common response from my colleagues was: Oh, the foundations of quantum mechanics? Actually not. For the most part, I find the foundations of quantum mechanics rather boring. Perhaps that is because my view of science has a strong instrumentalist tinge, but the foundations of quantum mechanics have always seemed to me to be trying to fit a quantum reality into a classical framework: the proverbial triangular peg in a hexagonal hole. Take wave-particle duality, for example. Waves and particles are classical idealizations. The classical point particle does not exist, even within the context of classical mechanics. It should come as no surprise that when the classical framework breaks down, the concepts from classical mechanics are no longer valid. What quantum mechanics is telling us is only that the classical concepts of waves and particles are no longer valid. Interesting, but nothing to get excited about.

The problem with the uncertainty principle is similar. This principle states that we cannot simultaneously measure the position and motion of a particle. Now, classically, the state of a particle is given by its location and motion (i.e., its momentum). Quantum mechanically, the state is given by the wave function or, if you prefer, by a distribution in the location-motion space[1]. Now, the problem is not that the location and motion cannot be measured simultaneously, but that the particle does not simultaneously have a well-defined position and motion, since its state is given by a distribution. This causes realists, at least classical realists, to have fits. In quantum mechanics, the position is only known when it is directly measured, i.e., properties of the system only exist when they are being looked at. This is a distinctly antirealist point of view. Again, this is trying to force a classical framework on a quantum system. If anything is real in quantum systems, it is wave functions, not individual observables. But see below.
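For reference, the usual quantitative form of this statement (a standard textbook relation, not quoted from the post) bounds the spreads of the two distributions:

```latex
% Heisenberg uncertainty relation: the standard deviations of the position and
% momentum distributions of any single quantum state satisfy
\sigma_x \, \sigma_p \ge \frac{\hbar}{2}
% so no state assigns arbitrarily sharp values to both position and momentum at once,
% which is the sense in which position and motion are not simultaneously well defined.
```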

Quantum mechanics is definitely weird; it goes against our common sense, our intuition. The main problem is that, while classical mechanics is deterministic, quantum mechanics is probabilistic. To see why this is a problem, consider the classical-probability problem of rolling a die. I roll a fair die. The chance of it being 2 is 1/6; similarly for any value from 1 to 6. Now, once I look at the die, the probability distribution collapses. Let’s say I see a 2. The probability is now 1 that the value is 2 and zero for the other values. But for Alice, who has not seen me check, the probabilities are still all 1/6. I now tell her that the number is even. This collapses her probability distribution so that it is 1/3 for 2, 4, 6 and zero for 1, 3, 5. Now for Bob, who did not hear me telling Alice, the probabilities are still 1/6 for each of the numbers. Two important points arise from this. First, classical probabilities change discontinuously when measurements are made and, second, classical probabilities depend not just on the system but on the observer, i.e., probabilities are observer dependent.
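The die example can be written out as a tiny conditional-probability calculation. This sketch is my own illustration of the point, not something from the post; the condition helper is just ordinary conditioning on an event.

```python
from fractions import Fraction

def condition(dist, event):
    """Collapse a probability distribution onto an event (a set of outcomes)."""
    total = sum(p for outcome, p in dist.items() if outcome in event)
    return {outcome: (p / total if outcome in event else Fraction(0))
            for outcome, p in dist.items()}

# A fair die: probability 1/6 for each face.
uniform = {face: Fraction(1, 6) for face in range(1, 7)}

me = condition(uniform, {2})           # I looked: it is a 2
alice = condition(uniform, {2, 4, 6})  # Alice only knows it is even
bob = uniform                          # Bob knows nothing yet

print(me[2], alice[2], bob[2])  # 1, 1/3, 1/6
```

Same die, three observers, three different distributions: the distribution describes the observer's information, not the die alone.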

We should expect the same quantum mechanically. We should expect measurements to discontinuously change the probability distribution and the probability distribution to be observer dependent. The first is certainly true. Quantum mechanical measurements cause the wave function to collapse and consequently the probability distribution[2] also collapses. The second is not commonly realized or accepted, but it should be. The idea that the wave function is a property of the quantum system plus observer, not the quantum system in isolation, is not new. Indeed, it is a variant of the original Copenhagen interpretation of quantum mechanics. But frequently, it is denied. When this is done, one is usually forced to the conclusion that the mind or consciousness plays a large and mysterious role in the measurement process. Making the wave function, or the state description, observer dependent avoids this problem.  The wave function is then just the information the observer has about the quantum system. As Niels Bohr (1885 – 1962), one of the founders of quantum mechanics, said: It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.

Let us consider the wave function collapse in more detail. Consider an entanglement experiment. The idea is to have a system emit two particles such that if we know the properties of one, the properties of the other are also known. One of the two emitted particles is measured by Bob and the other by Alice.[3] Now, Alice is lazy, so she has her particle transported to her home laboratory. She also knows that once Bob has done his measurement, she does not have to measure her particle but only has to call Bob to get the answer. Bob is also lazy, but he does go to the lab and, if he feels like it, does the measurement and faithfully records it in his log book. One day when Alice calls, she gets no answer. It turns out Bob has died between the time he would have made the measurement and when he would have recorded it in his log book. Now Alice is very upset. Not that Bob has died—she never liked him anyway—but that she does not know if the momentous event of the wave function collapse has happened or not. Her particle has not arrived at her home yet, but there is no experiment she can do on it to determine if the wave function has collapsed or not. The universe may have split into many worlds but she can never know! Of course, if the wave function is a property of the observer-quantum system, there is no problem. The information Bob had on the wave function was lost when Bob died and Alice’s wave function is as it always was. Nothing to see here, move along.

So what is the interpretation of quantum mechanics? An important part seems to be that wave functions are the information the observer has on the quantum system, and are not a property of the quantum system alone. If you do not like that, well, there is always instrumentalism,[4] i.e. shut up and calculate.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.


[1] Technically, the phase space.

[2] The probability is the absolute value of the wave function squared.

[3] By convention it has to be Bob and Alice. I believe this is a quantum effect.

[4] Instrumentalism has no problem with quantum mechanics or, indeed, any other scientific model.


Shut Up and Calculate

Friday, January 13th, 2012

Andreas Osiander (1498 – 1552) was a Lutheran theologian who is best remembered today for his preface to Nicolaus Copernicus’s (1473 – 1543) book on heliocentric astronomy: De revolutionibus orbium coelestium. The preface, originally anonymous, suggested that the model described in the book was not necessarily true, or even probable, but was useful for computational purposes. Whatever motivated the Lutheran Osiander, it was certainly not keeping the Pope and the Catholic Church happy. It might have been theological, or it could have been the more general idea that one should not mix mathematics with reality.  Johannes Kepler (1571 – 1630), whose work provided a foundation for Isaac Newton’s theory of gravity, took Copernicus’s idea as physical and was criticized by no less than his mentor, Michael Maestlin (1550 – 1631) for mixing astronomy and physics. This was all part of a more general debate about whether or not the mathematical descriptions of the heavens should be considered merely mathematical tricks or if physics should be attached to them.

Osiander’s approach has been adopted by many others down through the history of science. Sir Isaac Newton—the great Sir Isaac Newton himself—did not like action at a distance and when asked about gravity said, “Hypotheses non fingo.” This can be roughly paraphrased into English as: shut up and calculate. He was following Osiander’s example. It was not until Einstein’s general theory of relativity that one could do better. Even then, one could take a shut up and calculate approach to the curved space-time of general relativity.

Although atoms were widely used in chemistry, they were not accepted by many in the physics community until after Einstein’s work on Brownian motion in 1905. Ernst Mach (1838 – 1916) opposed them because they could not be seen. Even in the early years of the twentieth century, Mach and his followers insisted that papers discussing atoms, published in some leading European physics journals, have an Osiander-like introduction. And so it continues: in his first paper on quarks, Murray Gell-Mann (b. 1929) introduced quarks as a mathematical trick. If Alfred Wegener (1880 – 1930) had used that approach to continental drift, it might not have taken fifty years for it to be accepted.

We see a trend: ideas that are considered heretical or at least unorthodox—heliocentrism, action at a distance, atoms, and quarks—are introduced first as mathematical tricks. Later, once people become used to the idea, they take on a physical reality, at least in people’s minds.

In one case, the trend went the other way. Maxwell’s equations describe electromagnetic phenomena very well. They are also wave equations. Now, physicists had encountered wave equations before and, every time, there was a medium for the waves. Not being content to shut up and calculate, they invented the ether as the medium for the waves. Lord Kelvin (1824 – 1907) even proposed that particles of matter were vortices in the ether. High school textbooks defined physics in terms of vibrations in the ether. And then it all went poof when Einstein published the special theory of relativity. Sometimes, it is best to just shut up and calculate.

Of course, the expression Shut up and calculate is applied most notably to quantum mechanics. In much the same vein as with the ether, physicists invented the Omphalos … oops, I mean the many-worlds interpretation, of quantum mechanics to try to give the mathematics a physical interpretation. At least Philip Gosse (1810 – 1888), with the Omphalos hypothesis, only had one universe pop into existence without any direct evidence of the pop. The proponents of the many-worlds interpretation have many universes popping into existence every time a measurement is made. Unless someone comes up with a subtle knife[1] so one can travel from one of these universes to another, they should not be taken any more seriously than the ether.

The shut up and calculate approach to science is known as instrumentalism—the idea that the models of science are only instruments that allow one to describe and predict observations. The other extreme is realism—the idea that the entities in the scientific models refer to something that is present in reality. Considering the history of science, the role of simplicity, and the implications of quantum mechanics[2] (a topic for another post), realism—at least in its naïve form—is not tenable. Every time there is a paradigm change or major advance in science, what changes is the nature of reality given in the models. For example, with the advent of special relativity, the fixed space-time that was a part of reality in classical mechanics vanished. But with an instrumentalist view, all that changes with a paradigm change is the range of validity of the previous models. Classical mechanics is still valid as an instrument to predict, for example, planetary motion. Indeed, even the caloric model of heat is still a good instrument to describe many properties of thermodynamics and the efficiency of heat engines. Instrumentalism thus circumvents one of the frequent charges against science: namely, that we claim to know how the universe works and then discover that we were wrong. This is only true if you take realism seriously and apply it to the internals of models.

The model building approach to science advocated in these posts is perhaps an intermediate between the extremes of instrumentalism and realism. The models are judged by their usefulness as instruments to describe past observations and make predictions for new ones; hence the tie-in to instrumentalism. The models are not reality any more than a model boat is, but they capture some not completely determined aspect of reality. Thus, the models are more than mere instruments, but less than complete reality.  In any event, one never goes wrong by shutting up and calculating.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod


[1] The Subtle Knife, the second novel in the His Dark Materials trilogy, was written by the English novelist Philip Pullman

[2] In particular Bell’s inequalities.


The Origins of Science

Friday, January 6th, 2012

The true origins of science are lost in the mists of time. Possibly it started when some Australopithecus observed that a stick with a knot at the end was more effective in warding off a rival for its[1] mate than one without a knot. Since then the use of the scientific method has occasionally intruded into mainstream life but until the seventeenth century was always beaten back into the ground by philosophers, theocrats, and the proponents of common sense: Of course the earth is flat[2] and no, stones do not fall from the sky.  But in the seventeenth century science “took” and began its path to mainstream acceptance. To be definite, I would take the date for the emergence of science to be that night in 1609 when Galileo first pointed his telescope to the heavens.  Two questions then present themselves: 1) Why was it so late in the advance of civilization that science arose and 2) Why did it arise when and where it did? The first question was addressed in a previous post and the second will be addressed here.

The date chosen for the beginning of science is rather arbitrary since science did not spring full blown out of nothing. There were precursors and aftershocks, but the early seventeenth century is as good a starting point as any. And it was not just in astronomy. In 1600, William Gilbert (1544 – 1603) published his work on magnetism, De Magnete, Magneticisque Corporibus, et de Magno Magnete Tellure, and in 1628 William Harvey (1578 – 1657) released his work on blood circulation, De Motu Cordis. Back in astronomy, Johannes Kepler (1571 – 1630) published his first book on elliptical planetary orbits in 1609 (1609 was indeed a propitious year) and a multi-volume astronomy textbook about ten years later. So back to the basic question: why so much activity then and there?

There are a number of different reasons. The first is a slow accumulation of ideas that suddenly reached a critical point and took off. Even Galileo Galilei’s father, Vincenzo Galilei, played a role. He helped put music theory on an empirical and mathematical basis, and influenced his son towards applied mathematics. Inventions also played a role; for Galileo to point his telescope at the heavens, the telescope had first to be invented. Besides the telescope, the invention of the printing press around 1440 by Johannes Gutenberg[3] played a key role. It greatly increased the ease with which new ideas could propagate. It played a key role in the Protestant Reformation and allowed Nicolaus Copernicus’s (1473 – 1543) ideas of a heliocentric planetary system to spread throughout Europe. It also played a key role in disseminating Galileo’s ideas.

But there is more than that. In the thirteenth century, Western Europe began to rediscover the ancient learning of the Greeks, especially Aristotle. This came by way of the Arab world which added original contributions (e.g. Arabic numerals) to the store of knowledge and also collected information from other sources, for example, India (e.g. zero). Building on that foundation, Western Europe built an academic tradition at universities and monasteries.  This mostly consisted of scholasticism and the worship of Aristotle but it did set the stage for intellectual debate and the pursuit of knowledge as an end in itself. In the end, science destroyed the scholasticism and the worship of Aristotle that had laid the foundation for its success.

The rediscovery of Greek learning in the thirteenth century had a surprising side effect. The Greeks were long on rational thought, but regarded the things of the world as changing and unpredictable, probably due to their belief in capricious gods and fickle Fates. Christian Europe believed in a supreme, omnipotent God. This led at least one part of the church to regard science, the study of how the world worked, as sacrilegious since it seemed to imply a limit on what God could do. But combine the Greek ideal of rationalism with the idea of an omnipotent being and suddenly things change. The very concept of perfection was taken to imply rationality. Hence, the perfect God must be rational and create a rational and ordered universe; namely, one in which it made sense to look for orderly laws. Indeed, in nineteenth century England, it was a common belief that God ruled through orderly natural laws. And of course, it was a scientist’s role to discover these laws.

Religion played another role. The Protestant Reformation was a shift from the authority of a man, namely the Pope, to the written word of the Bible. Science was also a shift from the authority of a person, Aristotle, to the unwritten word, namely the universe. The people of the time talked of God’s word and God’s work and considered both worthy of study; study without the need for a human intermediary.

The Protestant Reformation also destroyed a source of central authority—the Catholic Church. This, coupled with political fragmentation (especially in Germany), led to more change. There was no longer any central authority to suppress new ideas, yet enough rule of law to allow fairly rapid communication (again thanks in part to the printing press). For example, Galileo’s works were published by the Jewish publishing house of Elsevier in Protestant Holland while he was under house arrest in Italy by the Catholic Church.

It is also no accident that astronomy was one of the first sciences. It had “practical” applications: astrology (Kepler was a noted astrologer) and the calculation of religious holidays, most notably Easter. It was also sufficiently complicated that the motions of the planets could not be predicted trivially, but sufficiently simple to be amenable to treatment by the mathematics of the day. Hence, it became the Gold Standard of science.

As noted in the first paragraph, science has had from the beginning three main opponents (using anachronistic terms): the academic left, the religious right, and common sense. For Galileo, the academic left was represented by the natural philosophers, the religious right by the Catholic Church that the philosophers sicced on him, and common sense by those who “knew” heavier objects fell faster than light ones. At various times, different ones of these have predominated: editorials attacked the idea of rocks falling from the sky (meteorites) or rockets working in space where there was no air to react against. In the 1960s, the main opponents were the academic left, with the idea that scientific laws were mostly, if not entirely, cultural; postmodernism remains an opponent of science. But today, the main opposition to science comes from the religious right, with evolution being the main fall guy. But the same three opponents—the academic left, the religious right and common sense—have remained and will probably remain into the indefinite future as the main opponents of science. As it was in the beginning, is now, and ever shall be. World without end. Amen.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod


[1] Note, gender neutral pronoun

[2] In British Columbia and Switzerland it is crinkly rather than flat.

[3] Although the Koreans may have invented it earlier.


The Siren Call of Logical Positivism

Friday, December 30th, 2011

For every problem, there is a simple solution: neat, plausible and wrong.

Philosophers such as Rudolf Carnap (1891 – 1970) and the Vienna Circle considered logical positivism the received view of the scientific method. In the early-to-mid twentieth century it dominated discussions in the philosophy of science, but it is now widely viewed as seriously flawed, or, as A. J. Ayer (1910 – 1989), a former advocate, put it: “I suppose the most important [defect]…was that nearly all of it was false.” Pity. But it was good while it lasted. So, what is logical positivism? It is sometimes defined by the statement: only verifiable statements have meaning (note verifiable, not falsifiable). The doctrine included opposition to all metaphysics, especially ontology and synthetic a priori propositions. Metaphysics is rejected not as wrong but as having no meaning.

Logical positivism is a very nice idea: we work only with observations and what can be deduced directly from them. No need for theories, models or metaphysics. I can hear the cheering now, especially from my experimental colleagues. It arose partly in response to the revolutions in physics of the early twentieth century. Quantum mechanics and relativity completely upended the metaphysics and philosophy built around classical mechanics, so the logical positivists wanted to eliminate metaphysics to prevent this from happening again; a very laudable goal.

So what went wrong? As Ayer noted, almost everything. First, metaphysics tends to be like accents: something only the other person has. The very claim that metaphysics is not needed is itself a metaphysical claim. Second, observations are not simple. As optical illusions demonstrate, what we see is not necessarily what is there. The perceptual apparatus does a lot of processing before the results are presented to the conscious mind. The model of the universe presented to the conscious mind probably rests on more uncontrolled assumptions than any accepted scientific model, yet that is what the logical positivists took as gospel truth. In addition, there is Thomas Kuhn’s (1922 – 1996) claim that observations are model dependent. While that claim is disputable, it is clear that the interpretation of observations depends on the model, the paradigm or, if you prefer, the metaphysics: something beyond the observations themselves.

Third, as Sir Karl Popper (1902 – 1994) argued, scientific models in general cannot be verified, only falsified (and one can argue that even falsification is impossible; see the first post in this series). Thus “only verifiable statements have meaning” would exclude all of science from having meaning. Indeed, it would exclude the statement itself, since “only verifiable statements have meaning” cannot be verified.

Logical positivism: neat, plausible and wrong. Well, can anything be salvaged? Perhaps a little. Consider the statement: in science, only models that can be empirically tested are worth discussing. First, to avoid being overly broad, I restrict the statement to science; the criteria in mathematics are rather different, and I do not wish to make a general statement about knowledge, at least not here. Second, I have replaced statement with model because, by the Duhem-Quine thesis, individual statements cannot be tested in isolation: one can make almost any statement true by varying the supporting assumptions. In the end it is global models that are tested. Science is observationally based, hence the adjective empirical. I use tested to avoid complaints about the validity of verification or falsification; tested is neutral in that regard. Finally, meaningful has been replaced by worth discussing.

To see why, consider the composition of the sun. In the late nineteenth century it was regarded as something that would never be known. At that point the statement “The sun is composed mainly of hydrogen” would have been considered meaningless by the logical positivists, and certainly, at that time, discussion of the issue would have been futile. But with the discovery of spectroscopic lines, models for the composition of the sun became very testable, and the composition of the sun is now considered well understood. It went from not worth discussing to well understood, yet the composition of the sun itself did not change. I would consider the statement “The sun is composed mainly of hydrogen” to be meaningful even before it could be tested; meaningful, but not worth discussing.

My restatement above does, however, eliminate a lot of nonsense from discussion: the omphalos hypothesis, the flying spaghetti monster, and a good deal of metaphysics. But its implications are more wide-ranging than that. During my chequered career as a scientist, I have seen many pointless discussions of things that could not be tested: the d-state of the deuteron, off-shell properties, nuclear spectroscopic factors and various other technical quantities that appear in the equations physicists use. There was much heat but little light. It is important to keep track of which aspects of the models we produce are constrained by observation and which are not. Follow the logical positivists, not the yellow brick road, and keep careful track of what can actually be determined by measurement. What is behind the curtain is only interesting if the curtain can be pulled aside.

To conclude: Don’t waste your time discussing what can’t be empirically tested. That is all that’s left of logical positivism once the chaff has been blown away. And good advice it is—except for mathematicians. Either that or I have been lured to the rocks by the siren call of logical positivism and have another statement that is neat, plausible and wrong!

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod
