Posts Tagged ‘philosophy of science’

Does God exist?  This is one of the oldest questions in philosophy and is still much debated. The debate on the God particle is much more recent but searching for it has cost a large fortune and inspired people’s careers. But before we can answer the questions implied in the title, we have to decide what we mean when we say something exists. The approach here follows that of my previous essay that defines knowledge in terms of models that make successful predictions.

Let us start with a simple question: What does it mean when we say a tree exists? The evidence for the existence of trees falls into two categories: direct and indirect. Every autumn, I rake the leaves in my backyard. From this I deduce that the neighbour has a tree. This is indirect evidence. I develop a model that the leaves in my backyard come from a tree in the neighbour’s yard. This model is tested by checking the prediction that the leaves are coming from the direction of the neighbour’s yard. Observations have confirmed this prediction. Can I then conclude that a tree exists? Probably, but it would be useful to have direct evidence. To obtain this, I look into my neighbour’s yard. Yup, there is a tree. But not so fast: what my eye perceives is a series of impressions of light. The brain then uses that input to construct a model of reality, and that model includes the tree. The tree we see is so obvious that we frequently forget that it is the result of model construction, subconscious model construction, but model construction nonetheless. The model is tested when I walk into the tree and hurt myself.

Now consider a slightly more sophisticated example: atoms. The idea of atoms, in some form or other, dates back to ancient India and Greece, but the modern idea of atoms dates to John Dalton (1766 – 1844). He used the concept of atoms to explain why elements always interact in the ratios of small whole numbers. This is indirect evidence for the existence of atoms and was enough to convince the chemists, but not the physicists, of that time. Some, like Ernst Mach (1838 – 1916), refused to believe in what they could not see right up until the beginning of the last century[1]. But then Albert Einstein’s (1879 – 1955) famous 1905 paper[2] on Brownian motion (the motion of small particles suspended in a liquid) convinced even the most recalcitrant physicists that atoms exist. Einstein showed that Brownian motion could be easily understood as the result of the motion of discrete atoms. This was still indirect evidence but convincing to almost everyone. Atoms were only directly seen after the invention of the scanning tunneling microscope, and even then there was model dependence in interpreting the results. As with the tree, we claim that atoms exist because, as shown by Dalton, Einstein and others, they form an essential part of models that have a strong track record of successful predictions.

Now on to the God particle. What a name! The God particle has little in common with God, but the name does sound good in the title of this essay. Then again, calling it the Higgs boson is not without problems, as people other than Peter Higgs[3] (1929 – ) have claimed to have been the first to predict its existence. Back to the main point: why do we say the God particle exists? First, there is the indirect evidence. The standard model of particle physics has an enviable record of successful predictions. Indeed, many (most?) particle physicists would be happier if it had had some incorrect predictions. We could replicate most of the successful predictions of the standard model without the God particle, but only at the expense of making the model much more complicated. As with the recalcitrant physicists of old who rejected the atom, the indirect evidence for the God particle was not good enough for most modern-day particle physicists. Although few actually doubted its existence, like doubting Thomas, they had to see it for themselves. Thus, the Large Hadron Collider (LHC) and its detectors were built and direct evidence was found. Or was it? Would lines on a computer screen have convinced the logical positivists like Ernst Mach? Probably not, but the standard model predicted bumps in the cross-sections and the bumps were found. Given the accumulated evidence and its starring role in the standard model of particle physics, we confidently proclaim that the God particle, like the tree and the atom, exists. But remember that even for the tree our arguments were model dependent.

Having discussed the God particle, what about God? I would apply the same criteria to His/Her/Its existence as for the tree, the atom, or the God particle. As in those cases, the evidence can be direct or indirect. Indirect evidence for God’s existence would be, for example, the argument from design attributed to William Paley (1743 – 1805). This argument makes an analogy between the design in nature and the design of a watch. The question then is: Is this a good analogy? If we adopt the approach of science, this reduces to the question: Can the analogy be used to make correct predictions for observations? If it can, the analogy is useful; otherwise it should be discarded. There is also the possibility of direct evidence: Has God or His messengers ever been seen or heard? But as the previous examples show, nothing is ever really seen directly; everything depends on model construction. As optical illusions illustrate, what is seen is not always what is there. Even doubting Thomas may have been too ready to accept what he had seen. As with the tree, the atom or the God particle, the question comes back to: Does God form an essential part of a model with a track record of successful predictions?

So does God exist? I have outlined the method for answering this question and given examples of the method for trees, atoms and the God particle. Following the accepted pedagogical practice in nuclear physics, I leave the task of answering the question of God’s existence as an exercise for you, the reader.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] Yes, 1905 was the last century. I am getting old.

[2] He had more than one famous 1905 paper.

[3] Why do we claim Peter Higgs exists?  But, I digress.


For the antepenultimate[1] essay in this series, I will tackle the thorny issue of the relation between science and philosophy. Philosophy can be made as wide as you like, to include anything concerned with knowledge. In that regard, science could be considered a subset of philosophy. It is even claimed that science arose out of philosophy, but that is an oversimplification. Science owes at least as much to alchemy as to Aristotle. After all, both Isaac Newton (1642 – 1727) and Robert Boyle[2] (1627 – 1691) were alchemists, and the philosophers, including Francis Bacon, vehemently opposed Galileo. Here, I wish to restrict philosophy to what might be called western philosophy—the tradition started by the ancient Greeks and continued ever since in monasteries and the hallowed halls of academia.

Let us start this discussion with Thomas Kuhn (1922 – 1996). He observed that Aristotelian physics and Newtonian physics did not just differ in degree, but were entirely different beasts. He then introduced the idea of paradigms to denote such changes of perspective. However, Kuhn misidentified the fault line. It was not between Aristotelian physics and Newtonian physics, but rather between western philosophy and science. Indeed, I would say that science (along with its sister discipline, engineering) is demarcated by a common definition of what knowledge is (see below). In science, classical and quantum mechanics are very different, yet they share a common paradigm for the nature of knowledge and, hence, we can compare the two from common ground.

Bertrand Russell (1872 – 1970) in his A History of Western Philosophy makes a point similar to Kuhn’s. Russell claims that from the ancient Greeks up to the renaissance, philosophers would have been able to understand and discourse with each other. Plato (424 BCE – 348 BCE) and Machiavelli (1469 – 1527) would have been able to converse, if brought together. Similarly with Thomas Aquinas (1225 – 1274) and Martin Luther (1483 – 1546), provided Aquinas refrained from having Luther burnt at the stake. They shared a common paradigm, if not a common view. But with the advent of science, that changes. Neither Aristotle nor Aquinas would have understood Newton. The paradigm had shifted. This shift from philosophy to science is the best and, perhaps, the only real example of a paradigm shift in Kuhn’s original meaning. Like Kuhn, Russell misidentified the fault line. It was not between early and late western philosophy, but between philosophy and science. C.P. Snow (1905 – 1980) in his 1959 lecture, The Two Cultures, identifies a similar fault line, but between science and the humanities more generally.

So what are these two paradigms? Philosophy is concerned with using rational arguments[3] to understand the nature of reality. Science turns that on its head and defines rational arguments through observation. A rational argument is one that helps build models with increased predictive power. To doubt the Euclidean geometry of physical space-time or to suggest twins could age at different rates were at one time considered irrational ideas, beyond the pale. But now they are accepted due to observation-based modeling. Philosophy tends to define knowledge as that which is true and known to be true for good reason (with debate over what good reason is). Science defines knowledge in terms of observation and observationally constrained models, with no explicit mention of the metaphysical concept of truth. Science is concerned with serviceable, rather than certain, knowledge.

Once one realizes science and philosophy are distinct paradigms, a lot becomes clear: for example, why philosophers have had so much trouble coming to grips with what science is. Scientific induction as proposed by Francis Bacon (1561 – 1626) does not exist. David Hume (1711 – 1776) started the philosophy of science down the dead-end street to logical positivism. Immanuel Kant (1724 – 1804) thought Euclidean geometry was synthetic a priori information, and Karl Popper (1902 – 1994) introduced falsification, which is now largely dismissed by philosophers. Even today, the philosophic community as a whole does not understand what the scientific method is and tends toward the idea that it does not exist at all. All attempts, by either scientists or philosophers, to fit the square peg of science into the round hole of western philosophy have failed and will probably continue to do so into the indefinite future. Eastern philosophy is even more distant.

The different paradigms also explain the misunderstanding between science and philosophy. Alfred North Whitehead (1861 – 1947) claimed that all of modern philosophy is but footnotes to Plato. On the other hand, Carl Sagan (1934 – 1996) claimed Plato and his followers delayed the advance of knowledge by two millennia. The two statements are not in contradiction if you have a negative conception of philosophy. And indeed, many scientists do have a negative conception of philosophy; a short list includes Richard Feynman (1918 – 1988), Ernest Rutherford (1871 – 1937), Steven Weinberg (b. 1933), Stephen Hawking (b. 1942), and Lawrence Krauss (b. 1954). Feynman is quoted as saying: Philosophy of science is about as useful to scientists as ornithology is to birds. To a large extent, Feynman is correct. The philosophy of science has had little or no effect on the actual practice of science. It has, however, had a large impact on scientists’ self-image of what they do. Newton was influenced by Francis Bacon, Darwin by Hume, and just try suggesting to a room full of physicists that science is not based on falsification[4]. Even this essay is built around Kuhn’s concept of a paradigm (but most of Kuhn’s other ideas on science are, to put it bluntly, wrong).

This series of essays has been devoted to defining the scientific paradigm for what knowledge is.  The conclusion I have reached, as noted above, is that western philosophy and science are based on different paradigms for the nature of knowledge. But are they competing or complementary paradigms? My take is that the two paradigms are incompatible as well as incommensurate. Knowledge cannot be simultaneously defined by what is true in the metaphysical sense, and by model building.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] That is N2LP in the compact notation of effective field theorists.

[2] The son of the Earl of Cork and the father of modern chemistry.

[3] This is an oversimplification but sufficient for our purposes.

[4] Although I am a theorist, I did that experiment. Not pretty.


There is this myth that scientists are unemotional, rational seekers of truth. This is typified by the quote from Bertrand Russell: But if philosophy is to attain truth, it is necessary first and foremost that philosophers should acquire the disinterested intellectual curiosity which characterises the genuine man of science (emphasis added). But just get any scientist going on his pet theory or project, and any illusion of disinterest will vanish in a flash. I guess most scientists are not genuine men, or women, of science. Scientists, at least successful ones, are marked more by obsession than by disinterested intellectual curiosity. They are people who wake up at one in the morning and worry about factors of two or missed systematic errors in their experiments, people who convince themselves that their minor role is crucial to the great experiment, people who doggedly pursue a weakly motivated theory or experiment. In the end, most fade into oblivion, but some turn out spectacularly successful, and that motivates the rest to keep slugging along. It’s a lot like trying to win the lottery.

The obsession leads to a second myth—that of the mad scientist: cold, obsessed to the point of madness, and caring only about his next result. The scientist who has both a mistress and a wife, so that while the wife thinks he is with the mistress and the mistress thinks he is with the wife, he is down at the laboratory getting some work done. The myth is typified by the character Dr. Faustus, who sold his soul to the devil for knowledge, by Dr. Frankenstein from Mary Shelley’s book, or, in real life, by the likes of Josef Mengele. The mad scientist has also been a staple of movies and science fiction. But most real scientists are not that obsessed, and all successful people, regardless of their field—science, sports or business—are driven.

In terms of pettiness, Sir Isaac Newton (1642 – 1727) takes the cake. He carefully removed references to Robert Hooke (1635 – 1703) and Gottfried Leibniz (1646 – 1716) from versions of the Principia. In Newton’s defense, it can be said that the forger William Chaloner was the only person he had hanged, drawn, and quartered. I do not know of modern scientists taking things to that extreme, but there is a recorded case of one distinguished professor hitting another over the head with a teapot. According to the legend, the court ruled it justified. I guess it was the rational and disinterested thing to do. There is also an urban legend of a researcher urinating on his competitor’s equipment. The surprising thing is that these reports, even if not true, are at least credible.

In a similar vein, it has been suggested that many great scientists have suffered from autism or Asperger’s syndrome. These include Henry Cavendish (1731 – 1810), Charles Darwin (1809 – 1882), Paul Dirac (1902 – 1984), Albert Einstein (1879 – 1955), Isaac Newton (1642 – 1727), Charles Richter (1900 – 1985) and Nikola Tesla (1856 – 1943). Many of these diagnoses have been disputed, but they do suggest that some of the symptoms of autism were present in these scientists’ behaviour, for example, the single-mindedness with which they pursued their research.

So, are scientists disinterested, autistic, overly obsessed, and/or mad? Probably not more than any other group of people. But to be successful in any field—and especially in science—is demanding. To become a scientist requires a lot of work, dedication, and talent. Consider the years in university. Typically there are four years as an undergraduate. It is at least another four years for a Ph.D. and typically longer. Then to become an academic, you have to spend a few years as a Post-Doctoral Fellow. It is a minimum of ten years of hard work after high school to become an academic. In my case, it was thirteen years from high school to a permanent job. To become a scientist, you have to be driven. Even after you become a scientist, you have to be driven to stay at or near the top. It is not clear if scientists are driven more by a love of their field, or by paranoia. I have seen both and they are not mutually exclusive.

If scientists really were the bastions of rationality that they are sometimes portrayed to be, science would probably grind to a halt. Most successful ideas start out half-baked in some scientist’s mind. Only scientists willing to flog such half-baked ideas can become famous. To become successful, an idea must be pursued before there is any convincing evidence to support it. It is only after the work is done that there can be reason to believe it. Those who succeed in making their ideas mainstream are made into heroes; those who fail, into crackpots. Generally, it is a bit of a crapshoot.

While individual scientists are neither disinterested nor driven by logic rather than emotion, science as an enterprise is both. The error control methods of science, especially peer review and independent repetition, average out the biases and foibles of individual scientists to give reliable results. No one should be particularly surprised when results that have not undergone this vetting, particularly the latter, are found to be wrong[1]. However, in the final analysis, the enterprise of science reflects the personality of its ultimate judges: observation and parsimony. They are notoriously hard-hearted, disinterested, and unemotional.
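
For the quantitatively inclined, here is a minimal sketch of that averaging at work. It assumes (and this is an assumption, not a fact about real laboratories) that each lab’s systematic bias is an independent draw centred on zero; all the names and numbers are invented for illustration.

```python
import random

random.seed(1)

TRUE_VALUE = 10.0

def one_lab_measurement(bias_scale=0.5, noise_scale=0.2, n_trials=30):
    """Simulate one lab: a fixed systematic bias plus per-trial noise."""
    bias = random.gauss(0.0, bias_scale)  # the lab's personal foible
    trials = [TRUE_VALUE + bias + random.gauss(0.0, noise_scale)
              for _ in range(n_trials)]
    return sum(trials) / len(trials)

# One lab alone is stuck with its own bias...
single = one_lab_measurement()

# ...but independent repetition across many labs averages the biases out.
labs = [one_lab_measurement() for _ in range(100)]
community = sum(labs) / len(labs)

print(f"true value:        {TRUE_VALUE}")
print(f"single lab:        {single:.3f}")
print(f"community average: {community:.3f}")
```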

To receive a notice of future posts follow me on Twitter: @musquod.


[1] Hence, the recently noted medical research results that were wrong.


Cause and effect has been central to many arguments in science, philosophy and theology down through the ages, from Aristotle’s four causes[1] to the present time. It has frequently been used in philosophy and Christian apologetics in the form: The law of cause and effect is one of the most fundamental in all of science. But it has its naysayers as well. For example, Bertrand Russell (1872 – 1970): All philosophers, of every school, imagine that causation is one of the fundamental axioms or postulates of science, yet, oddly enough, in advanced sciences such as gravitational astronomy, the word “cause” never occurs. … The law of causality, I believe, like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm. You can accuse Russell of many things, but being mealy-mouthed is not one of them. Karl Pearson (1856 – 1936), who has been credited with inventing mathematical statistics, would have agreed with Russell. He never talked about causation, though, only correlation.

One of the people who helped elevate cause and effect to its exalted heights was David Hume (1711 – 1776). He was a leading philosopher of his day and known as one of the British empiricists (in contradistinction to the continental rationalists). Hume was one of the first to realize that the developing sciences had undermined Aristotle’s ideas on cause and effect, and he proposed an alternative in two parts. First, Hume defined cause as “an object, followed by another, and where all objects similar to the first are followed by objects similar to the second”. This accounts for the external impressions. His second definition, which defines a cause as “an object followed by another, and whose appearance always conveys the thought to that other”, captures the internal sensation involved. Hume believed both were needed. In thus trying to relate cause and effect directly to observations, Hume started the philosophy of science down two dead-end streets: one was the idea that cause and effect is central to science, and the other led to logical positivism.

Hume’s definitions are seriously flawed. Consider night and day. Day invariably follows night, and the two are thought of together, but night does not cause day in any sense of the word. Rather, both day and night are caused by the rotation of the earth or, if you prefer a geocentric frame, by the sun circling the earth. The true cause has no aspect of one thing following another or of one thing conveying thought of the other. And the cause does not have to resemble the effect in any way. One can find many other similar cases: it getting light does not cause the sun to rise, despite it getting light before the sun rises; it is the sun rising that causes it to get light. Trees losing their leaves does not cause winter; rather, the days getting shorter causes the trees to lose their leaves and is a harbinger of winter. The root cause is the tilt of the earth’s axis of rotation with respect to the ecliptic.

As just seen, cause and effect is much more complicated than Hume and his successors thought, but not nonexistent as its detractors maintain. In the words of the statisticians: correlation does not imply causation. However, it can give a strong hint. The cock crowing does not cause the sun to rise, but the correlation does suggest that the sun rising might just motivate, if not cause, the cock to crow. Similarly, consider lung cancer and smoking. Not all people who smoke get lung cancer, and not all people who get lung cancer smoke (or inhale second-hand smoke). Nevertheless, there is a correlation. It was this correlation that started people looking to see if there is a cause and effect relation. Here we have correlation giving a hint; a hint that needed to be followed up. And it was followed up: carcinogens were identified in tobacco smoke and the case was made convincing. A currently controversial topic is global warming and human activities. Here, as with smoking causing cancer, we have both correlation and a mechanism (the greenhouse effect of carbon dioxide and methane).
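
The leaves-and-winter example can be put in toy form. Here is a small sketch, with invented variables and numbers, in which a common cause z (shortening days) drives both x (leaf fall) and y (cold weather); the two end up strongly correlated even though neither causes the other:

```python
import random

random.seed(0)

# Invented toy: shortening days (z) both make the trees drop their
# leaves (x) and make the weather colder (y); x and y never interact.
n = 1000
z = [random.gauss(0.0, 1.0) for _ in range(n)]
x = [zi + random.gauss(0.0, 0.5) for zi in z]   # leaf fall
y = [zi + random.gauss(0.0, 0.5) for zi in z]   # cold weather

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

# Strongly correlated (about 0.8 here), yet neither causes the other:
# the correlation is entirely the work of the common cause z.
print(f"corr(leaf fall, cold weather) = {pearson(x, y):.2f}")
```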

Cause and effect went out of favor as a cornerstone of science about the time quantum mechanics was developed. Quantum mechanics is non-deterministic, with events occurring randomly. Within the context of quantum mechanics, there is no reason or cause for an atom to decay at one time and not at another. The rise of quantum mechanics and the decline in the prominence of cause and effect are probably, indeed, cause and effect. However, even outside quantum mechanics there are problems with cause and effect. Much of physics, as Russell observed, does not explicitly use cause and effect. The equations work equally well forwards or backwards, deriving the past from the present as much as the future from the past. Indeed, the equations of physics can even propagate spatially sideways rather than temporally forwards or backwards.
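
That forwards-backwards symmetry is easy to demonstrate numerically. The toy sketch below (a unit harmonic oscillator with an invented step size) integrates the equations of motion forward, flips the velocity, and integrates forward again, recovering the initial state:

```python
def verlet(x, v, dt, n_steps, acc=lambda x: -x):
    """Velocity-Verlet integration; the default force is a unit harmonic
    oscillator (m = 1, k = 1), so the acceleration is a(x) = -x."""
    for _ in range(n_steps):
        a = acc(x)
        x += v * dt + 0.5 * a * dt * dt
        v += 0.5 * (a + acc(x)) * dt
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = verlet(x0, v0, dt=0.01, n_steps=1000)    # run forward in time
x2, v2 = verlet(x1, -v1, dt=0.01, n_steps=1000)   # flip velocity: run back

# The initial state is recovered to numerical round-off: the equations
# derive the past from the present as readily as the future.
print(f"started at  ({x0:.6f}, {v0:.6f})")
print(f"returned to ({x2:.6f}, {-v2:.6f})")
```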

In spite of all that, the idea of cause and effect is useful. To understand its limitations and successes, we have to go back to one of my mantras: the meaning is in the model. Cause and effect is not something that can be immediately deduced from observation, as Hume implies, but neither is it a meaningless concept, as Russell said or the physics discussion above might seem to imply. Rather, when we develop our models for a particular situation, the idea of causation comes out of that model; it is part and parcel of the model. We believe that the post causes the shadow, and not the other way around, because of our model of the nature of light and vision. Similarly, the idea that the earth’s rotation causes day and night comes out of our model for light, vision and the solar system. The first chapter of Genesis indicates that this was not always considered obvious[2]. That smoking causes lung cancer is part of the biological model for cancer. That human activity causes global warming comes out of atmospheric modeling. But arising from a model does not make cause and effect any less real, nor the concept less useful. Identifying smoking as a cause of cancer has saved many lives, and identifying carbon dioxide and methane as the main causes of global warming will, hopefully, help save the world. Cause and effect may not be a cornerstone of science, but it is still a useful concept and certainly not a relic of a bygone age.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] Discussed in a previous post.

[2] Day and night were created before the sun.


Finding the Higgs boson will have no epistemic value whatsoever.  A provocative statement. However, if you believe that science is defined by falsification, it is a true one.  Can it really be true, or is the flaw in the idea of falsification?  Should we thumb our noses at Karl Popper (1902 – 1994), the philosopher who introduced the idea of falsification?

The Higgs boson, the last remaining piece of the standard model, is the object of an enormous search involving scientists from around the world.  The ATLAS collaboration alone has 3000 participants from 174 institutions in 38 different countries. Can only the failure of this search be significant? Should we send out condolence letters if the Higgs boson is found? Were the Nobel prizes for the W and Z bosons a mistake?

Imre Lakatos (1922 – 1974), a neo-falsificationist and follower of Popper, states it very cleanly and emphatically:

But, as many skeptics pointed out, rival theories are always indefinitely many and therefore the proving power of experiment vanishes. One cannot learn from experience about the truth of any scientific theory, only at best about its falsehood: confirming instances have no epistemic value whatsoever (emphasis in the original).

Yipes! What is going on? Can this actually be true? No! To see the flaw in Lakatos’s argument, let’s consider an avian metaphor—this time Cygnus, not Corvus. Consider the statement: All swans are white. (Here we go again.) Before 1492, Europeans would have considered this a valid statement. All the swans they had seen were white. Then Europeans started exploring North America. Again, the swans were white. Then they went on to South America and found swans with black necks (Cygnus melancoryphus), and finally to Australia, where the swans are black (Cygnus atratus). By the standards of the falsificationist, nothing was learned when white swans were found, but only when the black swans or partially black swans were found. With all due respect, or lack of same, that is nonsense. It is the same old problem: if you ask a stupid question, you get a stupid answer. Did we learn anything when white swans were found in North America? Yes. We learned that there were swans in North America and that they were white. Based on having white swans in Europe, we could not deduce the colour of swans in North America, or even that they existed. In Australia, we learned that swans existed there and were black. Thus, we learned a similar amount of information in both cases—nothing more and nothing less. The useful question is not, ‘Are all swans white?’ Rather, ‘On which continents do swans exist and what colour are they on each continent?’

Moving on from birds to model cars (after all, the standard model of particle physics is a model). What can we learn about a model car? Certainly not whether it is correct. Models are never an exact reproduction of reality. But we can ask, ‘Which part of the car is correctly described by the model? Is it the colour? Is it the shape of the headlights or bumper?’ The same type of question applies to models in science. The question is not, ‘Is the standard model of particle physics correct?’ We knew from its inception that it is not the answer to the ultimate question about life, the universe and everything. The answer to that is 42 (Deep Thought, from The Hitchhiker’s Guide to the Galaxy by Douglas Adams). We also know that the standard model is incomplete because it does not include gravity. Thus, the question never was, ‘Is this model correct?’ Rather, ‘What range of phenomena does it usefully describe?’ It has a long history of successful predictions and collates a lot of data. So, like the model car, it captures some aspect of reality, but not all.

Finding the Higgs boson helps define what part of reality the standard model describes. It tells us that the standard model still describes reality at the energy scale corresponding to the mass of the Higgs boson. But it also tells us more: it tells us that the mechanism for electroweak symmetry breaking, a fundamental part of the model, is adequately described by the mechanism that Peter Higgs (and others) proposed, and not by some more complex and exotic mechanism.

The quote from Lakatos, given above, misses a very important aspect of science: parsimony. The ambiguity noted there is eliminated by the appeal to simplicity. The standard model of particle physics describes a wide range of experimental observations. Philosophers call this phenomenological adequacy. But a lot of other models are phenomenologically adequate. The literature is filled with extensions to the standard model that agree with the standard model where the standard model has been experimentally tested. They disagree elsewhere, usually at higher energy. Why do we prefer the standard model to these pretenders? Simplicity, and only simplicity. And the standard model will reign supreme until one of the more complicated pretenders is demonstrated to be more phenomenologically adequate. In the meantime, I will be a heretic and proclaim that finding the Higgs boson would indeed confirm the standard model. Popper, Lakatos, and the falsificationists be damned.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.


One of the more interesting little conundrums in understanding science is the raven paradox. It was proposed by Carl Hempel (1905 – 1997) in the 1940s. Consider the statement: All ravens are black. In strict logical terms, this statement is equivalent to: Everything that is not black is not a raven. To verify the first, we look for ravens that are black. To verify the latter, we look for non-black objects that are not ravens. Thus finding a red (not black) apple (not a raven) confirms that: Everything that is not black is not a raven, and hence that: All ravens are black. Seems strange: to learn about the colour of birds, we study a basket of fruit.

While the two statements may be equivalent for ravens, they are not equivalent for snarks.  The statement: Everything that is not black is not a snark, is trivially true since snarks do not exist, except in Lewis Carroll’s imagination. However, the statement: All snarks are black, is rather meaningless since snarks of any colour do not exist (boojums are another matter). Hence, the equivalence of the two statements in the first paragraph relies on the hypothesis that ravens do exist.

One resolution of the paradox is referred to as the Bayesian solution. The ratio of ravens to non-black objects is as near to zero as makes no difference. Thus finding 20 black ravens is more significant than finding 20 non-black non-ravens: you have sampled a much larger fraction of the objects of interest. While it is not possible to check a significant fraction of the non-black objects in the universe, it may be possible to check a significant fraction of ravens, at least those which are currently alive.
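
To put rough numbers on the Bayesian solution, here is a toy calculation. The counts (100 ravens, a million non-black objects) and the rival hypothesis (20% of ravens are white) are invented purely for illustration:

```python
# Toy universe for the Bayesian solution. All numbers are invented.
# H0: all ravens are black.  H1: 20% of ravens are white.
N_RAVENS = 100
N_NONBLACK_NONRAVENS = 1_000_000   # apples, fire trucks, sunsets...

# Evidence from checking a raven and finding it black:
# P(black | raven, H0) = 1.0 while P(black | raven, H1) = 0.8
bf_black_raven = 1.0 / 0.8

# Evidence from checking a non-black object and finding it is not a raven:
# P(not raven | non-black, H0) = 1.0, since under H0 no raven is non-black.
white_ravens_h1 = 0.2 * N_RAVENS
p_not_raven_h1 = N_NONBLACK_NONRAVENS / (N_NONBLACK_NONRAVENS + white_ravens_h1)
bf_red_apple = 1.0 / p_not_raven_h1

print(f"Bayes factor for H0 per black raven:         {bf_black_raven:.5f}")
print(f"Bayes factor for H0 per non-black non-raven: {bf_red_apple:.5f}")
print(f"20 black ravens:         {bf_black_raven ** 20:.2f}")
print(f"20 non-black non-ravens: {bf_red_apple ** 20:.5f}")
```

The 20 black ravens shift the odds by a factor of roughly 90; the 20 apples shift them by a factor of about 1.0004, confirmation in name only.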

But the real solution to the problem seems to me to lie in a different direction. Finding a red apple confirms not only that all ravens are black but also that all ravens are green, or chartreuse, or even my daughter’s favourite colour, pink. The problem is that a given observation can confirm or support many different, and possibly contradictory, models. What we do in science is compare models and see which is better. We grade on a relative, not an absolute, scale. To quote Sir Karl Popper:

And we have learnt not to be disappointed any longer if our scientific theories are overthrown; for we can, in most cases, determine with great confidence which of any two theories is the better one. We can therefore know that we are making progress; and it is this knowledge that to most of us atones for the loss of the illusion of finality and certainty.

We do not want to know whether the statement: All ravens are black, is true, but rather whether it is more accurate than the statement: All ravens are green. A red apple confirms both statements, while a green apple confirms one and is neutral about the other. Thus the relative validity of the two statements cannot be checked by studying apples, but only by studying ravens to see what colour they are. Thus, the idea of comparing models leads to the intuitive result, whereas thinking in terms of absolute validity leads to nonsense: Here, check this stone to see if ravens are black. Crack, tinkle (the sound of broken glass as the stone misses the raven and goes through the neighbour’s window).

We can go farther. Consider the two statements: All ravens are black, and Some ravens are not black. The relative validity of these two statements cannot be checked by studying apples or even black ravens. Rather what is needed is a non-black raven. This is just the idea of falsification. Hence, falsification is just a special case of comparing models: A is correct, A is not correct.

In practice, not all ravens are black. There are purported instances of white ravens. Google says so, and Google is never wrong. Right? Thus, we have the statement: Most ravens are black. This statement does not imply anything about non-black objects; they may or may not be ravens. Curious… this whole raven paradox was based on a false statement, and, as with: All ravens are black, most absolute statements are false or, at least, not known for certain.

Even non-absolute statements can lead to trouble. Consider: Most ravens are black, and: Most ravens are green. So we merrily check ravens to see which is correct. But is it not possible that the green ravens blend in so well with the green foliage that we are not aware that they are there? Rather like the elephants in the kids’ joke that paint their toenails red so they can hide in cherry trees. Works like a charm. Who has seen an elephant in a cherry tree? We are back to the Duhem-Quine thesis that no idea can be checked in isolation. Ugh. So, why do we dismiss the idea of perfectly camouflaged green ravens and red-nailed elephants? Like any good conspiracy theory, they can only be eliminated by an appeal to simplicity. We eliminate the perfectly camouflaged green raven by parsimony, and as for the red apple, I ate it for lunch.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.


The development of science is often portrayed as a conflict between science and religion, between the natural and the supernatural. But it was equally, if not more so, a conflict with Aristotelian concepts: a change from Aristotle’s emphasis on why to a dominant role for how. To become the mainstream, science had to overcome resistance, first and foremost, from the academic establishment and only secondarily from the church. The former, represented by the disciples of Aristotle and the scholastic tradition, was at least as vociferous in condemning Galileo as the latter. Galileo, starting from when he was a student and for most of his career, was in conflict with the natural philosophers. (I decline to call them scientists.) His conflict with the church came mostly towards the end of his career, after he was fifty, and more seriously when he was nearing seventy. The church itself even relied on the opinions of the natural philosophers to justify condemning the idea that the earth moved. In the end, science and Galileo’s successors won out, and Aristotle’s natural philosophy was vanquished: the stationary earth, the perfect heavens (circular planetary orbits and perfectly spherical planets), nature abhorring a vacuum, the prime mover and so on. For most of these, it is so long and good riddance. So why do philosophers still spend so much time studying Aristotle? I really don’t know.

However, Aristotle did have a few good ideas whose loss is unfortunate. The baby was thrown out with the bath water, so to speak. One such concept, although much abused, is the classification of causes given by Aristotle. The four types of causes he identified are the formal, material, effective and final causes. He believed that these four causes were necessary and sufficient to explain any phenomenon. The formal cause is the plan, the material cause is what it is made of, the effective cause is the “how”, and the final cause is the “why”. If you think in terms of building a house, the formal cause is the blueprint, the material cause is what it is built of (the wood, brick, glass, etc.), the effective causes are the carpenters and their tools (are hammers obsolete?), and the final cause is the purpose the house was built for.

Aristotle and his medieval followers emphasized the final cause and pure thought. Science became established only by breaking away from the final cause and the tyranny of “why”.  The shift from concentrating on pure thought and the final cause (why) to concentrating on observations and effective causes (how) was the driving factor in the development of science.  Science has now so completely swept Aristotle aside that, at the present time, only the effective cause is considered a cause in the “cause and effect” sense.

However, in dealing with human activities, all four of these types of causes are useful. For example, consider TRIUMF, where I work. The formal cause is the five-year plan given in a brilliantly written (OK, I helped write it, and they pay my salary, so what else could I say?) 800-page book that lays out the program for the current five years and beyond. The material cause is what TRIUMF is built of (many tons of concrete shielding, among other things). The effective cause is the people and machines that make TRIUMF work. The final cause is TRIUMF’s purpose, as given in the mission and vision statements. A similar analysis can be done for any organization. The usefulness of the final cause concept is shown by its resurrection in good management practice under the heading of mission and/or vision statements.

Now, when we go from human activity to animal activity, we lose the formal cause. Consider a bird building a nest. The material cause is what the nest is built of, the effective cause is the bird itself, and the final cause is to provide a safe place to raise its young. But the formal cause does not exist. It is doubtful the bird has a blueprint for the nest; rather, the nest is built as the result of effective causes: the reflexive actions of the bird. No bird ever wrote an 800-page book outlining how to build a nest. Just as well, or the avian dinosaurs (otherwise known as birds) would have gone extinct along with the non-avian ones.

A similar analysis exists for simpler organisms. A recent study of yeast showed why (in the sense of the final cause) yeast cells clump together: to increase the efficiency of extracting nutrients from the surroundings. Thus, in dealing with human, animal or even yeast activities, science can and does answer the why or final cause question. In the case of the yeast, the effective cause would be the mechanism the yeast cells use to bond together, and the material cause the substances used for the bonding.

When we go from the animate to the inanimate, we lose, in addition to the formal cause, the final cause. Aristotle explained the falling of objects in terms of a final cause: the objects wanted to be at their natural place at the center of the universe, which Aristotle thought was the center of the earth. The reason they sped up as they fell was that they became jubilant at approaching their natural place (I am not making that up). Newton, in contrast, proposed an effective cause: gravity. There was no goal, i.e. no final cause, just an effective cause. A river does not flow with the aim of reaching the sea but just goes where gravity pulls. Similarly with evolution by natural selection: it has no aim but just goes where natural selection pulls. This freaks out those people who insist on formal and final causes. With much ingenuity, they have tried to rectify the situation by proposing formal and final causes: intelligent design and theistic evolution, respectively. Intelligent design posits that at least some of the structures found in living organisms are the result of intelligent design by an outside agent and not the result of natural selection, while theistic evolution posits that evolution was controlled by God to produce Homo sapiens. Neither has been found to increase the ability of models to make accurate predictions; hence they have no place in science. It is this lack of utility, not the role of a supernatural agent, that leads to their rejection as science.

To summarize: for the activities of living things, science can and does answer the why question and assigns a final cause. However, for non-living things science has not found the final cause concept to be useful and has eliminated it based on parsimony. Aristotle, his followers and disciples made the mistake of anthropomorphizing nature and assigning to it causes that are only appropriate to humans or, at best, living things.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.


The story is told (original source unknown) of an elderly woman who attended a talk on Copernicanism. She objected to the speaker, claiming that he was wrong and that the world was supported on the back of a giant elephant. “And what was the elephant supported by?” asked the speaker. “It stood on the back of a giant turtle,” replied the lady. Before the speaker could reply, she added: “Don’t ask what the turtle stands on—I have you there, smarty pants; it is turtles all the way down.” Now, what has this to do with science? Quite a bit, actually.

Einstein once said that the most incomprehensible thing about the universe is that it is comprehensible. But there are two reasons it is comprehensible. First, the human mind is very well adapted to finding (or creating) patterns and, second, the universe separates itself into different turtles, that is, bite-sized pieces, each characterized by a different scale or size, that can be studied and modeled independently of the other turtles or pieces. Typical scales might be the size of the observable universe, a typical galaxy, the solar system, the earth, people, the atom, the nucleus or the Planck length. One does not have to understand the universe as a whole; one can study it one turtle at a time.

For example, I attended a seminar on ab initio calculations in nuclear physics where the speaker started with nucleons and the nuclear potential and derived the properties of nuclei without any additional input. Ab initio means from the beginning, and a member of the audience objected quite strenuously (it had high entertainment value) that this was not ab initio because the speaker did not start with quantum chromodynamics (QCD), the assumed underlying model. More pedantically, he should have objected that the speaker should have started with the ultimate theory of everything.

Well, the ultimate theory of everything is not known, so where should one start? Obviously, where it is most convenient. In low-energy nuclear physics, this has been a matter of great debate. Historically, nuclear physicists dealt with nucleons (neutrons and protons) and the interactions between them, derived without any reference to an underlying theory (which was unknown at the time). Then along came QCD. The QCD practitioners claimed the nuclear physicists were nincompoops and were wasting their time since they did not start with QCD. This was one of the reasons for the global collapse of nuclear physics in the 1980s. Now it turns out that, by separating the scales using what are called effective field theories, the nuclear physicists were right all along. All the effects of QCD needed for low-energy nuclear physics can be taken into account by introducing a few phenomenological parameters to describe the nucleon-nucleon potential (and, for purists, many-body forces). Thus there is no need to handle all the complexities of QCD. The QCD practitioners can then calculate the parameters at their leisure. Or not, as the case may be. It really does not matter to the nuclear physicist; all he will ever need are his phenomenological parameters. In the same vein, condensed matter physicists do not have to sit twiddling their thumbs while the particle physicists derive the mass and charge of the electron; they just use phenomenological values. It is the same for other quantities, like the nuclear masses, that condensed matter physicists might need. They use phenomenological values and move on.

This separation into bite-sized pieces happens all the way up and down the set of turtles. Each scale has a different preferred model, connected to neighbouring scales and their models by a few parameters. If there are many parameters, you have chosen the wrong place to do the separation. In studying ecology, one does not need to know all the chemical interactions; in doing chemistry, one does not need to know all about the quantum mechanical underpinnings; in studying gases, one can determine the volume, pressure and temperature without worrying about the motion of the individual atoms making up the gas. No findings at the LHC will have any effect on biology, chemistry, nuclear physics, or QCD, except perhaps in developing new experimental or theoretical techniques; they are at vastly different scales. The LHC findings will, however, be crucial for determining the validity of, and extensions to, the standard model of particle physics.

Deriving the parameters needed at one scale in terms of the smaller scales is reductionism. Sweeping the details of the smaller scales into a few parameters is emergence. There is potentially interesting science at every scale. As always, where one does the division of scales is determined by simplicity and convenience. It is effective field theories (not turtles) all the way down, and you can do the separation anywhere you like, but if you do it in the wrong place you will be sorry. Cutting turtles in half is messy[1].
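
As a toy illustration of this sweeping-into-parameters, consider diffusion. The microscopic model is a random walk, and all its detail is swept into a single phenomenological parameter, the diffusion constant; the step size and counts below are invented for the sketch:

```python
import random

random.seed(42)

# Micro model: a 1-D random walk, stepping +s or -s each time step.
s = 1.0

# "Reductionism": derive the parameter needed at the larger scale,
# the diffusion constant D, from the micro model: <x^2> = 2 D t.
D = s * s / 2.0

# "Emergence": at the larger scale all microscopic detail is swept
# into D; we predict the spread after t steps without any walkers.
t = 10_000
predicted_rms = (2.0 * D * t) ** 0.5

# Check the macro prediction against the micro model directly.
def walk(n_steps):
    return sum(random.choice((-s, s)) for _ in range(n_steps))

measured_rms = (sum(walk(t) ** 2 for _ in range(500)) / 500) ** 0.5

print(f"macro model prediction: {predicted_rms:.1f}")
print(f"micro simulation:       {measured_rms:.1f}")
```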


Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.


[1] No turtles were injured in preparing this post.


On the Nature of Science

Friday, October 21st, 2011

There are two very peculiar things about the scientific method: first, how late in the development of civilization it became mainstream, and second, that there is no generally accepted definition of what it actually is, certainly not within the philosophical community.

Hints of the scientific method date back to the astronomy of ancient Babylon (c. 1000 BCE), to the philosophy of Thales (624 BCE – 546 BCE) of Miletus in ancient Greece, and to the experimentation of Frederick II (1194 – 1250) and Roger Bacon (c. 1214 – 1294) in Medieval Europe. But it was only when Galileo (1564 – 1642) turned his telescope on the heavens in 1609 that it “took”. It was only then that the scientific method was finally on the road to becoming a dominant part of everyday culture. When Kepler (1571 – 1630), and especially Newton (1642 – 1727), consolidated Galileo’s work, there was no turning back. As they say, the rest is history.

There have been various ideas put forth in the past for what science is: induction, verification, falsification, and various other ‘tions’. There have also been monstrosities like methodological naturalism: dogma masquerading as method. But all these have their critics, and justly so. In the end, the current consensus in the philosophical community—to the extent there is a consensus—is that the scientific method, as a unified concept, does not exist. Strange as it may seem, there is this general idea that there is no such thing as the scientific method, but that different fields of science use different, unrelated methods.

The problem is that the scientific method is not what people, especially the philosophical community, expected. The philosophical community has concentrated on things like knowledge, explanations, truth, facts, naturalism, realism, and other such abstruse metaphysical concepts. Yet they have missed the obvious—that science is something simpler, much simpler, namely model building[1]. This view of science allows us to understand the scientific method in a simple, unified manner, valid across the whole spectrum of scientific endeavours, and to see the shortcomings of other views of science. This model-building approach also allows us to minimize the metaphysics required. Unfortunately, it can never be completely eliminated.

Model building is not enough to specify the scientific method. You need two additional concepts: observations and parsimony. The models of science are constrained by observation and judged by their ability to make correct predictions about future observations. Like a model boat, scientific models cannot be proved right or wrong—what sense does it make to claim a model boat is right? But we can certainly say which of two model boats is a more accurate representation of the original. Similarly with scientific models: we can say which of two models is more accurate at making correct predictions for observations. We do not have induction, verification, or falsification, but rather comparison. As Sir Karl Popper (1902 – 1994) pointed out, we have replaced certainty with progress: models are becoming more accurate over time.

Now, observations by themselves are not able to uniquely determine a model. An infinite set of models makes the same set of predictions, the same way an infinite number of mathematical curves may be drawn through any finite set of points. But once it is accepted that science is about model building and making predictions for observables, it becomes clear that adding frills—frills that do not change the predictions—is counterproductive. Thus we use parsimony, or simplicity, to make our observationally constrained models unique. It is the combination of simplicity and observations that fully constrains scientific models.
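
A toy example, with invented numbers: five noisy points on a straight line can be fit exactly by a quartic, and observation alone cannot choose between the two models. Parsimony picks the line, and the line is also the better predictor of a new observation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Five noisy observations of an underlying straight line y = 2x + 1.
x = np.linspace(0.0, 1.0, 5)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, size=5)

# A degree-4 polynomial passes exactly through all five points, so
# observations alone cannot rule it out in favour of the straight line.
simple = np.polyfit(x, y, 1)
frilly = np.polyfit(x, y, 4)

# But on a new observation outside the fitted range, the simpler
# model typically predicts far better: the frills were fitting noise.
x_new = 1.5
print(f"truth:         {2.0 * x_new + 1.0:.2f}")
print(f"linear model:  {np.polyval(simple, x_new):.2f}")
print(f"quartic model: {np.polyval(frilly, x_new):.2f}")
```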

Models do more than allow one to make predictions; they provide structure and meaning to the observations. This is the point missed by the logical positivists, who wanted to go straight from the observations to the meaning. Thomas Kuhn (1922 – 1996) pointed out the folly of this with his idea of paradigms: the structures needed to give order to any field of endeavour. Thus we have the essence of the scientific method: observationally constrained model building, with the meaning in the model.

This is the first post to Quantum Diaries since I have been given a personal blog here. In this set of posts, I will be fleshing out these ideas based on the metaphor of science as model building. I have already put a number of posts on TRIUMF’s Quantum Diaries blog and they have been moved to my new area.  I would like to thank Quantum Diaries and TRIUMF for giving me this platform for my views on the nature of science and my distorted sense of humour. Also thanks to J. Gagné for editing the posts and turning my mishmash into something readable.

[1] Either that or they are still annoyed they earned less as philosophy graduate students than the science graduate students did.

