Byron Jennings | TRIUMF | Canada


Science: Mankind’s Greatest Achievement!

Friday, July 20th, 2012

How is that for the ultimate claim in the ultimate[1] essay in this series? Science: mankind’s greatest achievement. Can there be any doubt? In the four hundred years since science went mainstream, we have learned how the universe works, changed our conception of man’s place in it, and provided the knowledge to develop fantastic technology. We have big history: the inspiring story of the universe beginning with the primordial big bang and creating order out of chaos through self-interaction, and finally life arising and evolving in our corner of the universe. We have developed models that describe the universe from the largest visible scales down to sub-atomic sizes: astronomy, biology, chemistry, cosmology, medicine, physics, psychology, animate, inanimate, eater, and eatee. The models form a mosaic, overlapping and interlocking into a seamless whole. An amazingly complete picture. There is still much to know, but let us take credit as scientists that much is known. And yes, we should be glad to be living in a time when so much is known.

However, science has two shortcomings[2]: it does not offer the illusions of certainty or purpose. I once came across a last will and testament that began: I commit my body into the ground in the sure and certain knowledge it will be restored to me on the judgement day. Ah, for sure and certain knowledge. Well, the judgement day has not come yet, so we do not know if his sure and certain knowledge was valid, but the resurrection of the body is much less prominent in Christian apologetics than it used to be. When it comes to knowledge, science promises less but delivers more than its competitors in philosophy or theology. I would take Isaac Newton (1642 – 1727) over Rene Descartes (1596 – 1650), Immanuel Kant (1724 – 1804), Thomas Aquinas (1225 – 1274), or William Paley (1743 – 1805) any day of the week and all together. Their certain knowledge has largely vanished, but Newton’s uncertain and approximate knowledge is still being used in many practical applications. Ask any mechanical engineer.

In the Hitch Hiker’s Guide to the Galaxy, Douglas Adams (1952 – 2001) introduces the total perspective vortex. It was created by a husband whose wife kept telling him to put things in perspective. However, when anyone looked in the vortex, they realized how utterly insignificant they were in the vast stretches of the universe and invariably went insane and died. This proved that if life is going to exist in a Universe of this size, then the one thing it cannot afford to have is a sense of proportion. Ah yes, the human need for importance and purpose. I guess the best science can come up with for a purpose is entropy[3] generation. I am not sure that is any worse than what I had heard from a Christian apologist who claimed we were created by God to worship him. Personally, I would never worship so narcissistic a God.

Despite its shortcomings, perceived or real, science has a tremendous track record. But the best is still to come. Let us not make the mistake of the late nineteenth century physicists who thought all the important questions had been answered.  There are things that enquiring minds still want to know: What, if anything, was there before the big bang? How do you combine gravity and quantum mechanics? Is there a solution for global warming that is politically acceptable? Are there room temperature superconductors? How did life begin? How intelligent were the Neanderthals? How does the mind work? The last strikes me as the most interesting question: the final frontier[4].  It has the potential to open up a whole new front in the conflict between science and religion, or science and philosophy.  But it is interesting nonetheless. Answering these questions and others will take clever theoretical approaches, clever experiments, and clever approaches to funding. However, the techniques of science are up to the task.

But what is science? In the final analysis, it is a human activity, an exercise of the human mind. We construct models and paradigms because that is how our minds and brains have evolved to deal with the complexities of our experiences. Thus, the nature of science is tied closely to the last question asked above: How does the mind work? Ultimately, how science works and indeed, the very definition of knowledge, are questions for neuroscience and the empirical study of the mind.

I am taking a break from blogging for the rest of the summer but may have some more blogs in the fall. I have run out of interesting things to say (no snide comments that that happened a long time ago). I would like to thank people for their many comments. They have been quite informative. To receive notices of future posts, if and when they occur, follow me on Twitter: @musquod.

 


[1] That is the LP in the language of effective field theorists (LP=last post, not long playing as you old timers thought).

[2] Humility is not one of them.

[3] Entropy generation is the driving force behind evolution.

[4] Sorry Star Trek fans, it is the mind, not space.


Science and Religion: Competing Paradigms?

Friday, July 13th, 2012

The contentious relation between science and religion is the topic of this, the penultimate[1] post in the current series. Ever since science went mainstream, there have been futile attempts to erect a firewall between science and religion. Galileo got in trouble with the Catholic Church, not so much for saying the earth moved as for suggesting the church steer clear of scientific controversies. More recently, we have methodological naturalism (discussed in a previous post), a misidentification of why the supernatural is absent from science. Then there is the claim that science cannot answer the why question—but it can, when doing so helps make better models (also discussed in a previous post). For example, why do beavers build dams? This can be answered by science. And there is the ever popular non-overlapping magisteria (NOMA) of Stephen Jay Gould (1941 – 2002). NOMA claims that “the magisterium of science covers the empirical realm: … The magisterium of religion extends over questions of ultimate meaning and moral value.”

The empirical realm covers not just what can be directly observed but also what can be inferred from what is observed. For example, quarks, and even something as well known as electrons, are not directly observed but are inferred to exist. That would also be true for citizens of the spirit or netherworld. If they exist, they presumably have observable effects. If they have no observable effects, does it matter whether they exist or not? Similarly, a religion with no empirical content would be quite sterile: would prayer be meaningful if it had absolutely no observable effects?

Moral issues cannot be assigned purely to the religious sphere. The study of brain function impacts questions of free will and moral responsibility. Disease and brain injury can have very specific effects on behaviour, for example, a brain injury led to excessive swearing in one person. What about homosexuality? Is it biological or a lifestyle choice? Recent research has indicated a genetic component in homosexuality, thus mixing science with what some regard as a moral issue. Finally, what about when life begins and ends? Who decides who is dead and who is alive? And by what criteria?  Scientific or religious? This has huge implications for when to remove life support. The bigger fight is over abortion and the question of when independent life begins. Is it when the sperm fertilizes the egg? That is a scientific concept developed with the use of the microscope. That simple definition has problems when there are identical twins where the proto-fetus splits in two much later than at conception. In the other direction, both the sperm and the egg can be considered independent life. After all, the sperm has the ability to leave the donor’s body and survive for a period of time. The arguments one hears regarding when independent life begins are frequently an ungodly combination of scientific and theological arguments.

In the end, there is only one reality, however we choose to study or approach it.  Thus, any attempt to put a firewall between different approaches to reality will ultimately fail, be they based on science, religion, or philosophy.  At least the various religious fundamentalists recognize this, but their solution would take us back to the dark ages by subjugating science to particular religious dogmas. However, it does not follow that religion and science have to be in conflict. Since there is so much variation in religions, some are and some are not in conflict with any particular model developed by science. Still, it should be a major concern for theology that something like religion has not arisen naturally from scientific investigations.  While there are places God can hide in the models science produces, there is no place where He is made manifest. And it is not because He is excluded by fiat either (see the essay on methodological naturalism referenced above).

One should not make the same mistake as Andrew Dickson White (1832 – 1918) in setting science and religion in perpetual hostility. He was a co-founder of Cornell University and its first president. He was also embittered by the opposition from the church to the establishment of Cornell as a secular institute. The result was the book History of the Warfare of Science with Theology in Christendom (1896), a polemic against Christianity masquerading as a scholarly publication. This book, along with History of the Conflict between Religion and Science by John William Draper (1811 – 1882), introduced the conflict thesis: that the relation between science and religion is one of perpetual hostility. Against that, we note Newton, Galileo, and Kepler were all very religious, and much science was done by clergymen in nineteenth century England. White’s book, in particular, has many problems. One is that opposition to change is cast as science versus religion rather than recognized as, in large part, simple resistance to change. Even science is not immune to that—witness the fifty-year delay in the acceptance of continental drift. The historical interplay between science and religion is now recognized to be very complex, with the two sometimes in conflict, sometimes in concord, and most commonly, indifferent.

If we take a step back from the results of science and its relation to particular religious dogmas, and look instead at the relation between the scientific method and theology, we see a different picture. Like science and western philosophy, science and theology represent competing paradigms for the nature of knowledge. Science is based on observation and observationally constrained models; western philosophy on rational arguments; theology more on divine revelation and spiritual insight. This is, in many ways, a more serious conflict than between scientific results and particular religions. Particular religions can change, and frequently have changed, in response to new scientific orthodoxy, but it is much more difficult to change one’s conceptual framework or paradigm. Also, as Thomas Kuhn (1922 – 1996) and Paul Feyerabend (1924 – 1994) pointed out, different paradigms tend to be incommensurable. They provide different frameworks that make communication difficult. They also have conflicting methods for deciding questions, making cross-paradigm conflict resolution difficult, if not impossible. Hence, there will be tension between science and theology forever, with neither dominating.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] NLP in the notation of effective field theorists.


Science and Philosophy: Competing Paradigms

Friday, July 6th, 2012

For the antepenultimate[1] essay in this series, I will tackle the thorny issue of the relation between science and philosophy. Philosophy can be made as wide as you like, to include anything concerned with knowledge. In that regard, science could be considered a subset of philosophy. It is even claimed that science arose out of philosophy, but that is an oversimplification. Science owes at least as much to alchemy as to Aristotle. After all, both Isaac Newton (1642 – 1727) and Robert Boyle[2] (1627 – 1691) were alchemists, and the philosophers, including Francis Bacon, vehemently opposed Galileo. Here, I wish to restrict philosophy to what might be called western philosophy—the tradition that started with the ancient Greeks and has continued ever since in monasteries and the hallowed halls of academia.

Let us start this discussion with Thomas Kuhn (1922 – 1996). He observed that Aristotelian physics and Newtonian physics did not just differ in degree, but were entirely different beasts. He then introduced the idea of paradigms to denote such changes of perspective. However, Kuhn misidentified the fault line. It was not between Aristotelian physics and Newtonian physics, but rather between western philosophy and science. Indeed, I would say that science (along with its sister discipline, engineering) is demarcated by a common definition of what knowledge is (see below). In science, classical and quantum mechanics are very different, yet they share a common paradigm for the nature of knowledge and, hence, we can compare the two from common ground.

Bertrand Russell (1872 – 1970) in his A History of Western Philosophy makes a point similar to Kuhn’s. Russell claims that from the ancient Greeks up to the renaissance, philosophers would have been able to understand and discourse with each other. Plato (424 BCE – 348 BCE) and Machiavelli (1469 – 1527) would have been able to converse had they been brought together. Similarly with Thomas Aquinas (1225 – 1274) and Martin Luther (1483 – 1546), if Aquinas refrained from having Luther burnt at the stake. They shared a common paradigm, if not a common view. But with the advent of science, that changes. Neither Aristotle nor Aquinas would have understood Newton. The paradigm had shifted. This shift from philosophy to science is the best and, perhaps, the only real example of a paradigm shift in Kuhn’s original meaning. Like Kuhn, Russell misidentified the fault line. It was not between early and late western philosophy, but between philosophy and science. C.P. Snow (1905 – 1980) in his 1959 lecture, The Two Cultures, identifies a similar fault line, but between science and the humanities more generally.

So what are these two paradigms? Philosophy is concerned with using rational arguments[3] to understand the nature of reality. Science turns that on its head and defines rational arguments through observation. A rational argument is one that helps build models with increased predictive power. To doubt the Euclidean geometry of physical space-time or to suggest twins could age at different rates were at one time considered irrational ideas, beyond the pale. But now they are accepted due to observation-based modeling. Philosophy tends to define knowledge as that which is true and known to be true for good reason (with debate over what good reason is). Science defines knowledge in terms of observation and observationally constrained models, with no explicit mention of the metaphysical concept of truth. Science is concerned with serviceable, rather than certain, knowledge.

Once one realizes science and philosophy are distinct paradigms, a lot becomes clear. For example, it explains why philosophers have had so much trouble coming to grips with what science is. Scientific induction as proposed by Francis Bacon (1561 – 1626) does not exist. David Hume (1711 – 1776) started the philosophy of science down the dead-end street to logical positivism. Immanuel Kant (1724 – 1804) thought Euclidean geometry was synthetic a priori information, and Karl Popper (1902 – 1994) introduced falsification, which is now largely dismissed by philosophers. Even today, the philosophic community as a whole does not understand what the scientific method is and tends toward the idea that it does not exist at all. All attempts, by either scientists or philosophers, to fit the square peg of science into the round hole of western philosophy have failed and will probably continue to do so into the indefinite future. Eastern philosophy is even more distant.

The different paradigms also explain the misunderstanding between science and philosophy. Alfred Whitehead (1861 – 1947) claimed that all of modern philosophy is but footnotes to Plato. On the other hand, Carl Sagan (1934 – 1996) claimed Plato and his followers delayed the advance of knowledge by two millennia. The two statements are not in contradiction if you have a negative conception of philosophy. And indeed, many scientists do have a negative conception of philosophy; a short list includes Richard Feynman (1918 – 1988), Ernest Rutherford (1871 – 1937), Steven Weinberg (b. 1933), Stephen Hawking (b. 1942), and Lawrence Krauss (b. 1954). Feynman is quoted as saying: Philosophy of science is about as useful to scientists as ornithology is to birds. To a large extent, Feynman is correct. The philosophy of science has had little or no effect on the actual practice of science. It has, however, had a large impact on scientists’ self-image of what they do. Newton was influenced by Francis Bacon, Darwin by Hume, and just try suggesting to a room full of physicists that science is not based on falsification[4]. Even this essay is built around Kuhn’s concept of a paradigm (but most of Kuhn’s other ideas on science are, to put it bluntly, wrong).

This series of essays has been devoted to defining the scientific paradigm for what knowledge is. The conclusion I have reached, as noted above, is that western philosophy and science are based on different paradigms for the nature of knowledge. But are they competing or complementary paradigms? My take is that the two paradigms are incompatible as well as incommensurable. Knowledge cannot be simultaneously defined by what is true in the metaphysical sense and by model building.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] That is N2LP in the compact notation of effective field theorists.

[2] The son of the Earl of Cork and the father of modern chemistry.

[3] This is an oversimplification but sufficient for our purposes.

[4] Although I am a theorist, I did that experiment. Not pretty.


The Myth of the Rational Scientist

Friday, June 29th, 2012

There is this myth that scientists are unemotional, rational seekers of truth. This is typified by the quote from Bertrand Russell: But if philosophy is to attain truth, it is necessary first and foremost that philosophers should acquire the disinterested intellectual curiosity which characterises the genuine man of science (emphasis added). But just get any scientist going on his pet theory or project, and any illusion of disinterest will vanish in a flash. I guess most scientists are not genuine men, or women, of science. Scientists, at least successful ones, are marked more by obsession than disinterested intellectual curiosity. They are people who wake up at one in the morning and worry about factors of two or missed systematic errors in their experiments, people who convince themselves that their minor role is crucial to the great experiment, people who doggedly pursue a weakly motivated theory or experiment. In the end, most fade into oblivion, but some turn out spectacularly successful, and that motivates the rest to keep slugging along. It’s a lot like trying to win the lottery.

The obsession leads to a second myth—that of the mad scientist: cold, obsessed to the point of madness, and caring only about his next result. The scientist who has both a mistress and a wife so that, while the wife thinks he is with the mistress and the mistress thinks he is with the wife, he is down at the laboratory getting some work done. The myth is typified by the character Dr. Faustus, who sold his soul to the devil for knowledge, Dr. Frankenstein from Mary Shelley’s book, or in real life, by the likes of Josef Mengele. The mad scientist has also been a staple of movies and science fiction. But most real scientists are not that obsessed, and all successful people, regardless of their field—science, sports or business—are driven.

In terms of pettiness, Sir Isaac Newton (1642 – 1727) takes the cake. He carefully removed references to Robert Hooke (1635 – 1703) and Gottfried Leibniz (1646 – 1716) from versions of the Principia. In Newton’s defense, it can be said that the forger, William Chaloner, was the only person he had drawn and quartered. I do not know of modern scientists taking things to that extreme, but there is a recorded case of one distinguished professor hitting another over the head with a teapot. According to the legend, the court ruled it justified. I guess it was the rational and disinterested thing to do. There is also an urban legend of a researcher urinating on his competitor’s equipment. The surprising thing is that these reports, even if not true, are at least credible.

In a similar vein, it has been suggested that many great scientists have suffered from autism or Asperger’s syndrome. These include Henry Cavendish (1731 – 1810), Charles Darwin (1809 – 1882), Paul Dirac (1902 – 1984), Albert Einstein (1879 – 1955), Isaac Newton (1642 – 1727), Charles Richter (1900 – 1985) and Nikola Tesla (1856 – 1943). Many of these diagnoses have been disputed, but they do indicate that some of the symptoms of autism were present in these scientists’ behaviour, for example, the single-mindedness with which they pursued their research.

So, are scientists disinterested, autistic, overly obsessed, and/or mad? Probably not more than any other group of people. But to be successful in any field—and especially in science—is demanding. To become a scientist requires a lot of work, dedication, and talent. Consider the years in university. Typically there are four years as an undergraduate. It is at least another four years for a Ph.D. and typically longer. Then to become an academic, you have to spend a few years as a Post-Doctoral Fellow. It is a minimum of ten years of hard work after high school to become an academic. In my case, it was thirteen years from high school to a permanent job. To become a scientist, you have to be driven. Even after you become a scientist, you have to be driven to stay at or near the top. It is not clear if scientists are driven more by a love of their field, or by paranoia. I have seen both and they are not mutually exclusive.

If scientists really were the bastions of rationality that they are sometimes portrayed to be, science would probably grind to a halt. Most successful ideas start out half-baked in some scientist’s mind. Only scientists willing to flog such half-baked ideas can become famous. To become successful, an idea must be pursued before there is any convincing evidence to support it. It is only after the work is done that there can be reason to believe it.  Those who succeed in making their ideas mainstream are made into heroes, those that fail, into crackpots. Generally, it is a bit of a crapshoot.

While individual scientists are neither disinterested nor driven by logic rather than emotion, science as an enterprise is. The error control methods of science, especially peer review and independent repetition, average out the biases and foibles of individual scientists to give reliable results. No one should be particularly surprised when results that have not undergone this vetting, particularly the latter, are found to be wrong[1]. However, in the final analysis, the enterprise of science reflects the personality of its ultimate judges: observation and parsimony. They are notoriously hard-hearted, disinterested, and unemotional.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] Hence, the recently noted medical research results that were wrong.


The Myth of the Open Mind

Friday, June 22nd, 2012

The race of truly open-minded people is long extinct: To be open-minded, I will suspend belief that that tawny blob over there is a leopard. Pounce, chomp, chomp. Even today, natural selection is working to remove the truly open-minded from the gene pool: To be open-minded, I will suspend any judgment of whether jaywalking and texting at the same time is a good or bad idea. Splat, crumple, crumple. As I said, the race of truly open-minded people is long extinct, if it ever actually existed.

You may complain that I am misrepresenting the concept of open-mindedness. That is probably true. When most people accuse someone of being closed-minded, they mean little more than that the person does not agree with them. Be that as it may, in general, the related concepts of open-mindedness and freedom from preconceived ideas are vastly overrated. But what about in science? Surely in science it is necessary to keep an open mind and eliminate preconceived ideas?  Perhaps, but here is what Henri Poincaré said on the topic:

It is often said that experiments should be made without preconceived ideas. That is impossible. Not only would it make every experiment fruitless, but even if we wished to do so, it could not be done. Every man has his own conception of the world, and this he cannot so easily lay aside. We must, for example, use language, and our language is necessarily steeped in preconceived ideas. Only, they are unconscious preconceived ideas, which are a thousand times the most dangerous of all.

Let’s look at this in a bit more detail. Consider his statement that without preconceived ideas every experiment would be fruitless. I have served on many review panels and refereed many proposals. Not one of them was free of preconceived ideas or was truly open-minded. I guess such a proposal would begin: To be open-minded to all points of view and to avoid preconceived ideas and prejudice, we have used a random number generator to choose the beam species and energy. As I say, I have never seen a proposal like that, but I can easily imagine how it would be treated. Not kindly. Review committees are notoriously closed-minded. They demand that every proposal justify the work based on the current understanding in the field. The value of an experiment depends on how it relates to the current models in the area. The experiments at the Large Hadron Collider (LHC) are given meaning by the standard model of particle physics. Every experiment at TRIUMF has to be justified based on what it will tell us, how it fits into the nuclear models.

What about the acceptance of new ideas? Surely, there, we have to be open-minded. Certainly not! Extraordinary claims require extraordinary proof. This is not a statement of open-mindedness. The idea here goes back at least to Pierre-Simon Laplace (1749 – 1827): The weight of evidence for an extraordinary claim must be proportioned to its strangeness. We saw this closed-mindedness play out recently with respect to neutrinos traveling faster than the speed of light. The initial claim was roundly rejected; the proponents criticized for publishing such a preposterous idea. In this case, the closed-minded people were correct (they frequently are), as it was subsequently found that there was an experimental error.

Even if we wanted to be, we could not be open-minded. Frederick II (1194 – 1250) is said to have carried out an experiment where he had infants raised without people talking to them, to see what the natural language was. What he found was that infants treated this way died. Even independent of that experiment, we know most children are talked to and pick up language and other preconceived ideas from their caregivers. As Poincaré said, language is steeped in preconceived ideas. A truly open mind, free from preconceived ideas, is an impossibility.

Continuing Poincaré’s quote: Shall we say, that if we cause others [preconceived ideas] to intervene of which we are fully conscious, that we shall only aggravate the evil? I do not think so. I am inclined to think that they will serve as ample counterpoises — I was almost going to say antidotes. They will generally disagree, they will enter into conflict one with another, and ipso facto, they will force us to look at things under different aspects. This is enough to free us. He is no longer a slave who can choose his master. If you like, we should choose our preconceived ideas and choose them wisely. Then we are in charge, not them.

Open-mindedness and freedom from preconceived ideas are only positive in small doses. One has to be open-minded enough to accept the next breakthrough, but not so open-minded as to follow every will-o’-the-wisp. The real genius in science is in knowing when to be open-minded and when to be as stubborn as a mule. It is in knowing which ideas to hold onto and which ones to discard.

To receive a notice of future posts follow me on Twitter: @musquod.


Science and Exploring the Past

Friday, June 15th, 2012

Many years ago, when I was in a grade-eight math class, I was sitting looking out the windows at the dinosaurs playing. Ok, despite what my daughter thinks, I am not quite that old. What I was looking at was planes circling around in the distance. It turns out that a plane had crashed. It was a Handley Page HPR-7 Herald 211 operated by Eastern Provincial Airlines, and all eight people on board were killed. Now, it is sometimes claimed that science cannot explain the past. It is even argued that historical sciences like paleontology, archeology, and cosmology somehow use different methods of discovering the past than, say, those used to determine the cause of a plane crash, and that those are again different from the methods for discovering the laws of nature. In reality, the methods are all the same.

I suppose, in response to the plane crash, people could have sat around and made predictions for future plane crashes, but instead they used science to try to discover the past—what had caused the plane to crash. In this case, it turned out to not be so difficult. The Aviation Safety Network describes the cause thus: Failure of corroded skin area along the bottom centre line of the aircraft beneath stringer No.32 which resulted in structural failure of the fuselage and aerial disintegration. This was found out by a metallographic examination, which provided clear evidence of stress corrosion in the aluminum alloy. The planes of this type remaining in service were repaired to prevent them from crashing as well.

The approach to understanding why the Eastern Provincial Airline’s plane had crashed followed a similar approach to any other plane crash: you analyse the debris, gather records from the black box and whatever other information is available, and construct a model for what happened. You test the model by making predictions for future observations; for example, that corrosion will be found on other planes of the same type.  This sounds very much like the standard scientific method as proposed originally by Roger Bacon (1220 – 1292) and followed by scientists ever since: observe, hypothesize, test, rehypothesize, and repeat as necessary.

The same technique is used for any reconstruction of the past, be it plane crashes, the cause of Napoleon’s death, archeology, paleontology, evolution, or cosmology. The cause of Napoleon’s death is quite interesting as an exercise in forensic science. The original cause of death was suggested to be gastric cancer. But that is too mundane a cause of death for such an august figure. So the conspiracy advocates went to work and suggested he was poisoned with arsenic. How to test? Easy: look for arsenic in samples of hair. Well, that was done and arsenic was found. Case closed? Not quite. Were there other sources of arsenic than deliberate poisoning? Yes, the wallpaper in his room had arsenic in it. Also, further investigation revealed that he had been exposed to arsenic long before he went to St. Helena. In support of the cancer hypothesis, his father also died of stomach cancer. The current consensus is that the original diagnosis was correct. He died of stomach cancer. But notice the play of events: hypothesis—arsenic poisoning, testing—look for arsenic in hair samples, refine hypothesis—check for other sources of arsenic, etc. We can see here the classic process of science being played out in reconstructing the past.

We can continue this technique into the more distant past: When did humans evolve? Why did the dinosaurs die out? How did the earth form? How did the solar system form? What, if anything, preceded the big bang? All of these questions can be tackled using the standard methods of science. Observations of the present tell us about the past; counting tree rings tells us when the tree started to grow.

The interplay between what might be called natural history and natural laws is very intricate. We must interpret the past in order to extract the natural laws, and use the natural laws to interpret the past. All our models of science have, explicitly or implicitly, both an historical and a law component. In testing a model for how the universe works—i.e. to develop the laws—we conduct an experiment. Once the experiment is finished, it becomes history, and interpreting it is historical science. For example, why did the OPERA experiment claim to see faster-than-light propagation for neutrinos? Or is the bump seen in searches for the Higgs boson real or an artifact of the detector? Those investigations are as much forensic science as trying to decide why Napoleon died or the dinosaurs went extinct. Thus, all science is historical, and sometimes quite explicitly. Einstein abandoned the cosmological constant based on an alternate model for the history of the universe, namely that it is expanding rather than static.

So, we have science as a unified whole, encompassing the past, present, and future, the natural laws entangled with the natural history. But what about the dinosaurs I did not see out of the math-room windows? We can be quite sure they did not exist at that time and that Fred Flintstone did not have one as a pet (a saber-toothed pussy cat is another story). The study of evolution is much like that of plane crashes. You study the debris; in the case of evolution, that “debris” includes fossils and the current distribution of species. Consider the fossil Tiktaalik roseae, a tetrapod-like fish or a fish-like tetrapod, which was found a few years ago. One can engage in futile semantic arguments about whether it is a fish, or a tetrapod, or a missing link, or whether it is the work of the devil. However, the significant point is that a striking prediction has been confirmed by a peer-reviewed observation. Using evolution, a model of fossil formation, and a model of the earth’s geology, a prediction was made that a certain type of fossil would be found in a certain type of rock. Tiktaalik roseae dramatically fulfilled that prediction and provides information on the fish-tetrapod transition.

The cause of plane crashes, Napoleon’s death, evolution, and the extinction of dinosaurs can all be explored by using the same empirically-constrained model-building techniques as the rest of science.  There is only one scientific method.

To receive a notice of future posts follow me on Twitter: @musquod.


Cause and Effect: A Cornerstone of Science or a Myth?

Friday, June 8th, 2012

Cause and effect has been central to many arguments in science, philosophy, and theology down through the ages, from Aristotle’s four causes[1] down to the present time. It has frequently been used in philosophy and Christian apologetics in the form: The law of cause and effect is one of the most fundamental in all of science. But it has its naysayers as well. For example, Bertrand Russell (1872 – 1970): All philosophers, of every school, imagine that causation is one of the fundamental axioms or postulates of science, yet, oddly enough, in advanced sciences such as gravitational astronomy, the word “cause” never occurs. … The law of causality, I believe, like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm. You can accuse Russell of many things, but being mealy-mouthed is not one of them. Karl Pearson (1856 – 1936), who has been credited with inventing mathematical statistics, would have agreed with Russell. He never talked about causation, only correlation.

One of the people who helped elevate cause and effect to its exalted heights was David Hume (1711 – 1776). He was a leading philosopher of his day, known as one of the British empiricists (in contradistinction to the continental rationalists). Hume was one of the first to realize that the developing sciences had undermined Aristotle’s ideas on cause and effect, and he proposed an alternative in two parts: first, Hume defined cause as “an object, followed by another, and where all objects similar to the first are followed by objects similar to the second”. This accounts for the external impressions. His second definition, which defines a cause as “an object followed by another, and whose appearance always conveys the thought to that other”, captures the internal sensation involved. Hume believed both were needed. In thus trying to relate cause and effect directly to observations, Hume started the philosophy of science down two dead-end streets: one was the idea that cause and effect is central to science, and the other led to logical positivism.

Hume’s definitions are seriously flawed. Consider night and day. Day invariably follows night, and the two are thought of together, but night does not cause day in any sense of the word. Rather, both day and night are caused by the rotation of the earth or, if you prefer a geocentric frame, by the sun circling the earth. The true cause has no aspect of one thing following another or one causing thought of the other. And the cause does not have to in any way resemble the effect. One can find many other similar cases: it getting light does not cause the sun to rise, despite it getting light before the sun rises; it is the sun rising that causes it to get light. Trees losing their leaves does not cause winter; rather, the days getting shorter causes the trees to lose their leaves and is a harbinger of winter. The root cause is the tilt of the earth’s axis of rotation with respect to the ecliptic.

As just seen, cause and effect is much more complicated than Hume and his successors thought, but not nonexistent as its detractors maintain. In the words of the statistician: correlation does not imply causation. However, it can give a strong hint. The cock crowing does not cause the sun to rise, but the correlation does suggest that the sun rising might just motivate, if not cause, the cock to crow. Similarly, consider lung cancer and smoking. Not all people who smoke get lung cancer, and not all people who get lung cancer smoke (or inhale second-hand smoke). Nevertheless, there is a correlation. It was this correlation that started people looking to see if there is a cause and effect relation. Here we have correlation giving a hint; a hint that needed to be followed up. And it was followed up. Carcinogens were found in tobacco smoke, and the case was made convincing. A currently controversial topic is global warming and human activities. Here, as with smoking causing cancer, we have both correlation and a mechanism (the greenhouse effect of carbon dioxide and methane).

Cause and effect went out of favor as a cornerstone of science about the time quantum mechanics was developed. Quantum mechanics is non-deterministic, with events occurring randomly. Within the context of quantum mechanics, there is no reason or cause for an atom to decay at one time and not at another. The rise of quantum mechanics and the decline in the prominence of cause and effect are probably indeed cause and effect. However, even outside quantum mechanics there are problems with cause and effect. Much of physics, as Russell observed, does not explicitly use cause and effect. The equations work equally well forwards or backwards, deriving the past from the present as readily as the future from the past. Indeed, the equations of physics can even propagate spatially sideways rather than temporally forwards or backwards. A small demonstration of that reversibility follows.
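
To make the reversibility concrete, here is a small sketch (my own example, not from the original post) in Python: integrate a harmonic oscillator forward with the time-reversible leapfrog scheme, flip the velocity, run the same equations again, and the past is recovered from the present.

    # Sketch: the equations of motion run equally well in either time
    # direction. Unit mass, unit spring constant, so acceleration = -x.
    def leapfrog(x, v, dt, steps):
        for _ in range(steps):
            v += -x * dt / 2.0  # half kick
            x += v * dt         # drift
            v += -x * dt / 2.0  # half kick
        return x, v

    x0, v0 = 1.0, 0.0
    x, v = leapfrog(x0, v0, dt=0.01, steps=1000)  # forward in time
    x, v = leapfrog(x, -v, dt=0.01, steps=1000)   # velocity reversed
    print(x, -v)  # ~ (1.0, 0.0): the initial state, i.e. the past, recovered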

In spite of all that, the idea of cause and effect is useful. To understand its limitations and successes, we have to go back to one of my mantras: the meaning is in the model. Cause and effect is not something that can be immediately deduced from observation, as Hume implies, but neither is it a meaningless concept, as Russell said or the physics discussion above might seem to imply. Rather, when we develop our models for a particular situation, the idea of causation comes out of that model; it is part and parcel of the model. We believe that the post causes the shadow, and not the other way around, because of our model of the nature of light and vision. Similarly, the idea that the earth’s rotation causes day and night comes out of our model for light, vision, and the solar system. The first chapter of Genesis indicates that this was not always considered obvious[2]. That smoking causes lung cancer is part of the biological model for cancer. That human activities cause global warming comes out of atmospheric modeling. But arising from a model does not make cause and effect any less real, nor the concept less useful. Identifying smoking as a cause of cancer has saved many lives, and identifying carbon dioxide and methane as the main causes of global warming will, hopefully, help save the world. Cause and effect may not be a cornerstone of science, but it is still a useful concept and certainly not a relic of a bygone age.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] Discussed in a previous post.

[2] Day and night were created before the sun.


In Defense of Mickey Mouse Science

Friday, June 1st, 2012

“Give it to me—the real news”
“So I will”
“Well, Dadamashay, let me see what skill you have. Tell me the big new news of these days, making it ever so small.”
“Listen”[1]

When I was a graduate student, somewhat after the time of the Vikings in long boats, my thesis supervisor, Prof. Bhaduri[2], took me with him when he went on sabbatical to Copenhagen, a Mecca for nuclear physics at that time. When we were leaving there, his officemate gave him a small Mickey Mouse figurine so he would know what kind of physics to work on. Well another man might have been angry, And another man might have been hurt, But another man never would have[3] stressed during his seminar that he was using a Mickey Mouse model. Ah yes, Mickey Mouse science: the simple model or calculation that brings out salient features that are all too often lost or obscured in the complete calculation.

We all know what big science is: the big detectors at the Large Hadron Collider (CMS has a 12,500 ton steel yoke) or the Super-Kamiokande (50,000 tons of water). That is big science. Even theoretical physics does big science: the massive calculations of lattice quantum chromodynamics (QCD) or the nuclear shell model. Now, there have been attacks on big science, either the LHC or lattice QCD, as being inherently evil because it is so big. Would you believe, even books written on the topic? I strongly disagree with that view. Large science is an essential part of science. Big is needed to answer the questions we want answers to. However, there is more to science than that. We need the little to complement the big, the simple to complement the complex. As a post-doc, I was returning from a somewhat annoying conference with Gerry Brown[4] (b. 1926), one of the leading nuclear physicists of that generation, when he turned to me in exasperation and said that people did not realize how many hours of computer time went into his simple estimates. There is an interesting concept: using computer time to justify simple estimates, the simple complementing the complex. The purpose of computing is insight, not numbers[5], and the simple Mickey Mouse models are essential in generating that insight—even when they are justified by complex calculations.

The simple models are useful in a number of ways. First, they are useful in checking the results of complex computer calculations. I have learnt through bitter experience never to believe the result of a computer calculation until I have “understood” it (and not always then). That is, until, using some simple model or estimate, either explicitly or implicitly, I can reproduce the main trends of the results. In trying to do that, I have frequently found errors. Never trust a number you do not understand.
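
As an illustration of the kind of check I mean, here is a minimal Python sketch (my example, not from the original post): a brute-force numerical calculation of the large-amplitude pendulum period, sanity-checked against the simple small-angle estimate. If the full calculation did not reduce to the Mickey Mouse limit for small amplitudes, I would suspect the code, not the estimate.

    import math

    def pendulum_period(theta0, L=1.0, g=9.81, n=100000):
        # "Full" calculation: the exact period is 4*sqrt(L/g)*K(k), with
        # k = sin(theta0/2); evaluate the complete elliptic integral K(k)
        # by the midpoint rule.
        k = math.sin(theta0 / 2.0)
        h = (math.pi / 2.0) / n
        total = sum(1.0 / math.sqrt(1.0 - (k * math.sin((i + 0.5) * h)) ** 2)
                    for i in range(n))
        return 4.0 * math.sqrt(L / g) * total * h

    # Mickey Mouse check: for small amplitudes the full result must approach
    # the textbook estimate 2*pi*sqrt(L/g). If it does not, find the bug.
    simple = 2.0 * math.pi * math.sqrt(1.0 / 9.81)
    for theta0 in (0.01, 0.5, 1.5):
        print(theta0, pendulum_period(theta0), simple)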

Second, we want to understand what aspects of the model are important in reproducing the results and which are coincidental. Scientific models are designed to predict future observations, but which aspects of the model are crucial to that endeavour? It is through the use of simple models that we can most easily explore the dependence of the results on the assumptions. We calculate some nuclear cross-section. Is that bump significant? What, if anything, does the location of the bump tell us? What about the turn up near threshold? Is that an artifact? We want to know more than merely whether the calculation fits the data. It is here that the simple models come in. They give us insight into how the models can be improved and which assumptions are not necessary and can be eliminated.

Finally, and most importantly, it is the simple models that allow us, as people, to understand the results. It is not just for the layman that we need the simple models, but for the expert as well. A prime example would be the non-relativistic quark model. Its success in calculating the properties of the excited states of the proton was touted as proof of the quark model, but all it tested was the symmetries built into the calculations. The simple approximations to the non-relativistic quark model revealed its pretensions. But as a Mickey Mouse model, the non-relativistic quark model gave us insight into QCD that would have been difficult, if not impossible, to obtain otherwise.

I suppose one could hook up the computers directly to the experiments and have them generate models, test the models against new observations and then modify the experimental apparatus without any human intervention. However, I am not sure that would be science.  Science is ultimately a human activity and the models we produce are products of the human mind. It is not enough that the computer knows the answer.  We want to have some feeling for the results, to understand them. Without the simple models, Mickey Mouse science, that would not be possible: the big news made ever so small.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] Quoted from Rabindra Nath Tagore (1861 – 1941) in Fables. Also used as an inscription in R.K. Bhaduri’s book: Models of the Nucleon.
[2] A scholar and a gentleman.
[3] With apologies to Harry Chapin and the song Taxi.
[4] No, not the California politician.
[5] Quoted from Richard Hamming (1915 – 1998).


Science: The Art of the Appropriate Approximation

Friday, May 25th, 2012

There is this myth that science is exact. It is captured nicely in this quote from an old detective story:

In the sciences we must be exact—not approximately so, but absolutely so. We must know. It isn’t like carpentry. A carpenter may make a trivial mistake in a joint, and it will not weaken his house; but if the scientist makes one mistake the whole structure tumbles down. We must know. Knowledge is progress. We gain knowledge through observation and logic–inevitable logic. And logic tells us that while two and two make four, it is not only sometimes but all the time. – Jacques Futrelle, The Silver Box, 1907

Unless, of course, it is two litres of water and two litres of alcohol; then we get less than four litres. Note also the almost quaint idea that science is certain, not only exact, but certain. We must know. The view expressed in this quote is unfortunately not confined to century-old detective stories, but is part of the modern mythology of science. But in reality, science is much more like carpentry. A trivial mistake does not cause the whole to collapse, but I would not like to live in a house built by that man.

To the best of my knowledge, there has never been an exact calculation in all of physics. In principle, everything in the universe is connected. The earth and everything in it is connected by the gravitational field to the distant quasars. But, you say, surely that is negligible, which is precisely the point. It is certainly not exactly zero, but with equal certainty, it is not large enough to be usefully included in any calculation. I know of no terrestrial calculation that includes it. Even closer objects like Jupiter have a negligible effect. In the grand scheme, the planets are too far from the earth to have any earthly effect. Actually, it is not the gravitational field itself which is important, but the tidal forces, which are down an additional factor of the ratio of the radius of the earth to the distance to the planet in question. Hence, one does not expect astrology to be valid. The art of the appropriate approximation tells us so.
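
To put rough numbers on that claim (my own back-of-the-envelope values, not from the original post), the tidal acceleration from a body of mass M at distance d scales as 2GMR/d^3, where R is the radius of the earth:

    # Rough estimate with assumed textbook values.
    G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
    R_earth = 6.371e6   # radius of the earth, m

    def tidal(M, d):
        return 2.0 * G * M * R_earth / d**3

    moon = tidal(7.35e22, 3.84e8)      # ~1e-6 m/s^2
    jupiter = tidal(1.90e27, 6.3e11)   # ~6e-12 m/s^2, at closest approach
    print(moon, jupiter, moon / jupiter)

Jupiter comes out weaker than the moon by some five orders of magnitude, which is why the tides are real and horoscopes are not.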

Everywhere we turn in science, we see the need to make the appropriate approximations. Consider numerical calculations. Unless you are calculating the hypotenuse of a triangle with sides of 3 and 4 units, almost any numerical calculation will involve approximations. Irrational numbers are replaced with rational approximations, derivatives are replaced with finite differences, integrals with sums, and infinite sums with finite sums. Every one of these is an approximation—usually a valid approximation—but nevertheless an approximation. Mathematical constants are replaced by approximate values. Someone once asked me for assistance in debugging a computer program. I noticed that he had pi approximated to only about six digits. I suggested he put it in to fifteen digits (single precision on a CDC computer). That, amazingly enough, fixed the problem. Approximations, even seemingly harmless ones, can bite you.
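
Here is a minimal sketch (a hypothetical example of mine, echoing that debugging story) of how a truncated constant can bite. For any integer n, sin(n*pi) should vanish, but a six-digit pi is off by about 2.7e-6, and that error is multiplied by n:

    import math

    pi6 = 3.14159  # pi truncated to six digits
    n = 1_000_000

    print(math.sin(n * math.pi))  # ~1e-10: zero to within rounding
    print(math.sin(n * pi6))      # ~ -0.47: the truncation error, amplified a million-fold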

Even before we start programming and deciding on numerical techniques, it is necessary to make approximations. What effects are important and which can be neglected? Is the four-body force necessary in your nuclear many-body calculation? What about the five-body force? Can we approximate the problem using classical mechanics, or is a full quantum treatment necessary? Thomas Kuhn (1922 – 1996) claimed that classical mechanics is not a valid approximation to relativity because the concept of mass is different. Fortunately, computers do not worry about such details, and computationally, classical mechanics is frequently a good approximation to relativity. The calculation of the precession of the perihelion of Mercury does not require the full machinery of general relativity, but only the much simpler post-Newtonian limit. And on and on it goes, seeking the appropriate approximation.

Sometimes the whole problem is in finding the appropriate approximation. If we assume nuclear physics can be derived from quantum chromodynamics (QCD), then nuclear physics is reduced to finding the appropriate approximation to the full QCD calculation, which is by no means a simple task. Do we use an approximation to the nuclear force based on power counting, or the old-fashioned unitarity and crossing symmetry? (Don’t worry if you do not know what the words mean; they are just jargon, and the only important thing is that the approximations lead to very different looking potentials.) Do the results depend on which approach is used, or only the amount of work required to get the answer?

Similarly, in materials science, all the work is in identifying the appropriate approximation. The underlying forces are known: electricity and magnetism. The masses and charges of the particles (electrons and atomic nuclei) are known. It only remains to work out the consequences. Only, he says, only. Even in string theory, the current proposed theory of everything, the big question is how to find useful approximations to calculate observables. If that could be done, string theory would be in good shape. Most of science is the art of finding the appropriate approximation. Science may be precise, but it is not exact, and it is in finding the appropriate approximation that we take delight.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.


Measurement and the New SI Units

Friday, May 18th, 2012

The SI units will be changing again in the next few years. You would think that choosing the units of measurement would be an unemotional topic, but as I recall from Canada’s only partially successful attempt to convert to the metric system, that is far from the case. I remember one rather irrational editorial on the topic where the writer went on about how the changing definition of the metre was an indication that the people behind the metric system did not know what they were doing. Since this was in an English Canadian paper, he blamed the problem on the French for having blown the original definition. Ignorance profound. The writer would probably have been surprised to learn that the inch is defined as 2.54 centimetres except, of course, in the US, where there is a second inch (the surveyor’s inch) defined by the relation 39.37 inches equals one metre. Ah, the joy of traditional measurements. There are at least three different gallons in use, and as for barrels, there are more than you can shake a stick at. However, the petroleum barrel is defined as exactly 158.987294928 litres. I am sure you wanted to know that, and don’t forget the last decimal—the 8 is very important. As far as I can see, the only reason for using the traditional units is familiarity, and yes, I still use the inch and foot, but also the kilometre. And I believe it is also safe to say that the generation born after the country officially switched does the same. That is the joy of living in a country that has half converted to metric.

Measurements tend to be of two types. One is pure numbers, like the number of ducks in a row (or in a pond). The other is the measurement of a number with a dimension. Here we need a standard to compare against; a length of six feet only makes sense if we know what a foot is. In other words, we have a standard for it. Thus the need to define units so different people can compare their results, and so that when we buy a hogshead of beer, we know how much we are getting.

Editorial writers will have another chance to rant in a few years, as the General Conference on Weights and Measures is set to change the definitions of the basic metric or Standard International (SI) units again—this time, not the metre but the kilogram and other units. The history of how the definitions of the units have changed over time is quite interesting, involving not just changing technology but also changing tastes. The original metre was defined in terms of the distance from the equator to the North Pole. But this could not be determined sufficiently accurately, so the standard was shifted to a physical artifact: a rod kept in Paris with two marks on it. This was then shifted to the wavelength of light from a certain atomic transition and, finally, to fixing the speed of light. Similarly, for time, the second went from being defined in terms of the length of the day to being defined in terms of the frequency of an atomic transition. There is a trend from defining the units in terms of macroscopic quantities—the size of the earth, the length of the day, the length of a bar—to microscopic quantities, or more specifically, atomic properties. There is a simple reason for this, namely that it is in atomic systems that the most accurate measurements can be made. Unfortunately, it also makes the unit definitions esoteric and detached from everyday experience. Everyone can identify with the length of a foot, but it is not immediately clear what the speed of light has to do with distance. Telling my daughter it takes five nanoseconds for light to travel from her head to her foot doesn’t do much for her. There is also a trend, partly aesthetic, towards defining the base units by fixing the fundamental constants of nature.

A fundamental constant of nature, like the speed of light, starts its life as something that relates two apparently unrelated quantities. In the case of the speed of light, they are time and distance. But over time, it comes to be just a way of relating different units for measuring the same thing. Indeed, time units are sometimes used for distances and vice versa. This even happens in everyday life, such as when the distance from Vancouver to Seattle is given as three hours, meaning, of course, an average travel time. But in science, the relation is more definite, and defining the metre in terms of the speed of light makes it explicit that the fundamental constant, the speed of light, is just a conversion factor from one set of units to another, from seconds to metres (1 metre = 3.3 nanoseconds).
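
A small sketch of the constant-as-conversion-factor idea (my example, using the exact defined value of c):

    # With the speed of light fixed by definition, converting a distance
    # to a light-travel time is pure arithmetic.
    c = 299_792_458  # m/s, exact by definition

    def metres_to_nanoseconds(d):
        return d / c * 1e9

    print(metres_to_nanoseconds(1.0))  # ~3.34 ns: "1 metre = 3.3 nanoseconds"
    print(metres_to_nanoseconds(1.5))  # ~5 ns: light from head to foot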

The new proposal for the base SI units continues this trend of defining units by fixing fundamental constants. The degree Celsius is now defined in terms of the properties of water—the so-called triple point. In the proposed new system, it will be defined by fixing a fundamental constant, the Boltzmann constant. The Boltzmann constant relates degrees to energy. At the microscopic level, i.e. in statistical mechanics, temperature is just a measure of energy, and the new definition of the degree makes this explicit. Again, a fundamental constant turned into a conversion factor between different units—degrees and joules. The case of the kilogram is more subtle. It is currently defined by a physical artifact—the standard kilogram stored in Paris. The new proposal is to determine the kilogram by fixing another fundamental constant, Planck’s constant. This is another example of a fundamental constant becoming just a conversion factor between different units, in this case between time and energy units, or equivalently, distance and momentum units.

As a theorist, this new set of units makes it nice for me as I like to use what are called natural units in my calculations. These are given by setting the speed of light (c), Planck’s constant (ħ), Boltzmann’s constant (k) and π all equal to 1 (OK, usually not π, but I did see that legitimately done once). An interesting side effect of the new units is that they all have exact conversion from these natural units. There is another set of natural units called Planck units which are defined in terms of the gravitational strength and the strength of the electromagnetic force. (In the proposed change, the charge of the electron is used to define the electromagnetic units.) Ultimately, those may be the most elegant units but we are nowhere close to having the technology to make them the bases of the SI units.

Naturally, any change of units has the naysayers coming out of the woodwork. One of the criticisms of the new units is that, since the fundamental constants are fixed by definition, we can no longer study their time dependence. To some extent, this is true. For example, with the current definition of the kilogram, Planck’s constant changes every time atoms are lost or gained by the standard kilogram. This change will be lost with the new units. But this illustrates the absurdity of asking if a fundamental constant changes in isolation. All that is meaningful is whether the constant has changed with respect to some other quantity with the same dimensions. The new choice of units makes this explicit, which is a good thing.

There is much more to the new choice of units than I can cover here; the interested reader is referred to the relevant web pages: http://www.bipm.org/en/si/new_si/, http://royalsociety.org/events/2011/new-si/, or http://en.wikipedia.org/wiki/New_SI_definitions.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.
