
Byron Jennings | TRIUMF | Canada


Questioning the existence of God and the God Particle

Friday, June 7th, 2013

Does God exist?  This is one of the oldest questions in philosophy and is still much debated. The debate on the God particle is much more recent but searching for it has cost a large fortune and inspired people’s careers. But before we can answer the questions implied in the title, we have to decide what we mean when we say something exists. The approach here follows that of my previous essay that defines knowledge in terms of models that make successful predictions.

Let us start with a simple question: What does it mean when we say a tree exists? The evidence for the existence of trees falls into two categories: direct and indirect. Every autumn, I rake the leaves in my backyard. From this I deduce that the neighbour has a tree. This is indirect evidence. I develop a model that the leaves in my backyard come from a tree in the neighbour's yard. This model is tested by checking the prediction that the leaves are coming from the direction of the neighbour's yard. Observations have confirmed this prediction. Can I then conclude that a tree exists? Probably, but it would be useful to have direct evidence. To obtain this, I look into my neighbour's yard. Yup, there is a tree. But not so fast: what my eye perceives is a series of impressions of light. The brain then uses that input to construct a model of reality, and that model includes the tree. The tree we see is so obvious that we frequently forget that it is the result of model construction, subconscious model construction, but model construction nonetheless. The model is tested when I walk into the tree and hurt myself.

Now consider a slightly more sophisticated example: atoms. The idea of atoms, in some form or other, dates back to ancient India and Greece, but the modern idea of atoms dates to John Dalton (1766 – 1844). He used the concept of atoms to explain why elements always combine in ratios of small whole numbers. This is indirect evidence for the existence of atoms and was enough to convince the chemists, but not the physicists, of that time. Some, like Ernst Mach (1838 – 1916), refused to believe in what they could not see up until the beginning of the last century[1]. But then Albert Einstein's (1879 – 1955) famous 1905 paper[2] on Brownian motion (the motion of small particles suspended in a liquid) convinced even the most recalcitrant physicists that atoms exist. Einstein showed that Brownian motion could be easily understood as the result of the motion of discrete atoms. This was still indirect evidence but convincing to almost everyone. Atoms were only directly seen after the invention of the scanning tunneling microscope, and even then there was model dependence in interpreting the results. As with the tree, we claim that atoms exist because, as shown by Dalton, Einstein and others, they form an essential part of models that have a strong track record of successful predictions.

Now on to the God particle. What a name! The God particle has little in common with God, but the name does sound good in the title of this essay. Then again, calling it the Higgs boson is not without problems, as people other than Peter Higgs[3] (1920 – ) have claimed to have been the first to predict its existence. Back to the main point: why do we say the God particle exists? First, there is the indirect evidence. The standard model of particle physics has an enviable record of successful predictions. Indeed, many (most?) particle physicists would be happier if it had made some incorrect predictions. We could replicate most of the successful predictions of the standard model without the God particle, but only at the expense of making the model much more complicated. Like the recalcitrant physicists of old who rejected the atom, most modern-day particle physicists did not find the indirect evidence for the God particle good enough. Although few actually doubted its existence, like doubting Thomas they had to see it for themselves. Thus, the Large Hadron Collider (LHC) and its detectors were built, and direct evidence was found. Or was it? Would lines on a computer screen have convinced logical positivists like Ernst Mach? Probably not, but the standard model predicted bumps in the cross-sections and the bumps were found. Given the accumulated evidence and its starring role in the standard model of particle physics, we confidently proclaim that the God particle, like the tree and the atom, exists. But remember that even for the tree our arguments were model dependent.

Having discussed the God particle, what about God? I would apply the same criteria to His/Her/Its existence as for the tree, the atom, or the God particle. As in those cases, the evidence can be direct or indirect. Indirect evidence for God's existence would be, for example, the argument from design attributed to William Paley (1743 – 1805). This argument makes an analogy between the design in nature and the design of a watch. The question then is: is this a good analogy? If we adopt the approach of science, this reduces to the question: Can the analogy be used to make correct predictions for observations? If it can, the analogy is useful; otherwise it should be discarded. There is also the possibility of direct evidence: Has God or His messengers ever been seen or heard? But as the previous examples show, nothing is ever really seen directly; everything depends on model construction. As optical illusions illustrate, what is seen is not always what is there. Even doubting Thomas may have been too ready to accept what he had seen. As with the tree, the atom or the God particle, the question comes back to: Does God form an essential part of a model with a track record of successful predictions?

So does God exist? I have outlined the method for answering this question and given examples of the method for trees, atoms and the God particle. Following the accepted pedagogical practice in nuclear physics, I leave the task of answering the question of God’s existence as an exercise for you, the reader.

To receive a notice of future posts follow me on Twitter: @musquod.

[1] Yes, 1905 was the last century. I am getting old.

[2] He had more than one famous 1905 paper.

[3] Why do we claim Peter Higgs exists? But I digress.


Knowledge and the Higgs Boson

Friday, May 17th, 2013

This essay makes a point that is only implicit in most of my other essays–namely that scientists are arro—oops that is for another post. The point here is that science is defined not by how it goes about acquiring knowledge but rather by how it defines knowledge. The underlying claim is that the definitions of knowledge as used, for example, in philosophy are not useful and that science has the one definition that has so far proven fruitful. No, not arrogant at all.

The classical concept of knowledge was described by Plato (428/427 BCE – 348/347 BCE) as having to meet three criteria: it must be justified, true, and believed. That description does seem reasonable. After all, can something be considered knowledge if it is false? Similarly, would we consider a correct guess knowledge? Guess right three times in a row and you are considered an expert, but do you have knowledge? Believed: I have more trouble with that one. Believed by whom? Certainly, something that no one believes is not knowledge, even if true and justified.

The above criteria for knowledge seem like common sense, and the ancient Greek philosophers had a real knack for encapsulating the common sense view of the world in their philosophy. But common sense is frequently wrong, so let us look at those criteria with a more jaundiced eye. Let us start with the first criterion: it must be justified. How do we justify a belief? From the sophists of ancient Greece, to the post-modernists and the anything-goes hippies of the 1960s, and all their ilk in between, it has been demonstrated that what can be known for certain is vanishingly small.

René Descartes (1596 – 1650) argues in the beginning of his Discourse on the Method that all knowledge is subject to doubt: a process called methodological skepticism. To a large extent, he is correct. Then, to get to something that is certain, he came up with his famous statement: I think, therefore I am. For a long time this seemed to me like a sure argument. Hence, "I exist" seemed an incontrovertible fact. I then made the mistake of reading Nietzsche[1] (1844 – 1900). He criticizes the argument as presupposing the existence of "I" and "thinking", among other things. It has also been criticized by a number of other philosophers, including Bertrand Russell (1872 – 1970). To quote the latter: Some care is needed in using Descartes' argument. "I think, therefore I am" says rather more than is strictly certain. It might seem as though we are quite sure of being the same person to-day as we were yesterday, and this is no doubt true in some sense. But the real Self is as hard to arrive at as the real table, and does not seem to have that absolute, convincing certainty that belongs to particular experiences. Oh well, back to the drawing board.

The criteria for knowledge, as postulated by Plato, lead to knowledge either not existing or being of the most trivial kind. No belief can be absolutely justified and there is no way to tell for certain if any proposed truth is an incontrovertible fact.  So where are we? If there are no incontrovertible facts we must deal with uncertainty. In science we make a virtue of this necessity. We start with observations, but unlike the logical positivists we do not assume they are reality or correspond to any ultimate reality. Thus following Immanuel Kant (1724 – 1804) we distinguish the thing-in-itself from its appearances. All we have access to are the appearances. The thing-in-itself is forever hidden.

But all is not lost. We make models to describe past observations. This is relatively easy to do. We then test our models by making testable predictions for future observations. Models are judged by their track record in making correct predictions: the more striking the prediction, the better. The standard model of particle physics' prediction of the Higgs[2] boson is a prime example of science at its best. The standard model did not become a fact when the Higgs was discovered; rather, its standing as a useful model was enhanced. It is the reliance on the track record of successful predictions that is the demarcation criterion for science and, I would suggest, the hallmark for defining knowledge. The scientific models and the observations they are based on are our only true knowledge. However, to mistake them for descriptions of the ultimate reality or the thing-in-itself would be folly, not knowledge.


[1] Reading Nietzsche is always a mistake. He was a madman.

[2] To be buzzword compliant, I mention the Higgs boson.


Don’t take Sporting Results Seriously

Friday, April 5th, 2013

A colleague of mine is an avid fan of the New York Yankees baseball team. At a meeting a few years ago, when the Yankees had finished first in the American League regular season, I pointed out to him that the result was not statistically significant. He did not take kindly to the suggestion. He actually got rather angry! A person who in his professional life would scorn anyone for publishing a one-sigma effect was crowing about a one-sigma effect for his favourite sports team. But then most people do ignore the effect of statistical fluctuations in sports.

In sports, there is a random element in who wins or loses. The best team does not always win. In baseball, where two teams will frequently play each other four games in a row over three or four days, it is relatively uncommon for one team to win all four games. Similarly, a team at the top of the standings does not always beat a team lower down. As they say in sports: on any given day, anything can happen. Indeed it can, and frequently does.[1]

Let us return to American baseball. Each team plays 162 games during the regular season. If the results were purely statistical, with each team having a 50% chance of winning any given game, then we would expect an approximately normal distribution of the results with a spread of sigma = 6.3 games. The actual spread, or standard deviation, for the last few seasons is closer to 11 games. Thus slightly more than half of the spread in games won and lost is due to statistical fluctuations. Moving from the collective spread to the performance of individual teams: if a team wins the regular season by six games, or one sigma, as with the Yankees above, there is roughly a one in three chance that it is purely a statistical fluke. For a two-sigma effect, a team would have to win by twelve games, or by eighteen games for a three-sigma effect. The latter would give over 99% confidence that the winner won justly, not due to a statistical fluctuation. When was the last time any team won by eighteen games? For particle physics we require an even higher standard: a five-sigma effect to claim a discovery. Thus a team would have to lead by about 30 games to meet this criterion. Now, my colleague from the first paragraph suggested that by including more seasons the results become more significant. He was right, of course; significance grows as the square root of the number of seasons. If the Yankees finished ahead by six games for thirty-four seasons in a row, that would be a five-sigma effect. From this we can also see why sports results are never published in Physical Review with its five-sigma threshold for a discovery: there has yet to be such a discovery. To make things worse for New York Yankees' fans, they have already lost their chance for an undefeated season this year.
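The chance-only spread quoted above is just the standard deviation of a binomial distribution, sqrt(n·p·(1−p)). A quick sketch (the function name is mine) reproduces the numbers for both leagues discussed in this essay:

```python
import math

def season_sigma(n_games: int, p_win: float = 0.5) -> float:
    """Chance-only spread (standard deviation) of a team's win total,
    treating each game as an independent coin flip with probability p_win."""
    return math.sqrt(n_games * p_win * (1.0 - p_win))

print(f"MLB (162 games): sigma = {season_sigma(162):.2f} games")
print(f"NHL (82 games):  sigma = {season_sigma(82):.2f} games")
```

This gives about 6.36 games for a 162-game MLB season (the 6.3 quoted above) and about 4.53 games for an 82-game NHL season.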

In other sports the statistics are even worse. In the National Hockey League (NHL), teams play eighty-two games and the spread in wins expected from pure chance is sigma = 4.5 games. The actual spread for last year was 6.3 games. Since variances, not standard deviations, add, the spread attributable to real differences in the individual teams' abilities is only sqrt(6.3² – 4.5²) ≈ 4.4 games. Perhaps there is more parity in the NHL than in Major League Baseball. Or perhaps there are not enough statistics to tell. Speaking of not telling: last year the Vancouver Canucks finished with the best record for the regular season, two games ahead of the New York Rangers and three games ahead of the St. Louis Blues. Only a fool or a Vancouver Canucks fan would think this ordering was significant and not just a statistical fluctuation. In the National Football League last year, 14 of the 32 teams were within two sigma of the top. Again, much of the spread was statistical. It was purely a statistical fluke that the New England Patriots did not win the Super Bowl as they should have.

Playoffs are even worse (this is why the Canucks have never won a Stanley Cup). Consider a best-of-seven series. Even if the two teams are evenly matched, we would expect the series to end in a four-game sweep only once in every eight (two cubed[2]) series. When a series goes the full seven games, one might as well flip a coin. Rare events, like one team winning the first three games and losing the last four, are expected to happen once in every sixty-four series, and considering the number of series being played it is not surprising we see them occasionally.
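The series probabilities above come from counting coin-flip sequences; a sketch with exact fractions (variable names are mine):

```python
from fractions import Fraction

half = Fraction(1, 2)  # win probability per game for evenly matched teams

# Four-game sweep: either team (factor of 2) wins four straight.
# This is the footnote's point: the first game fixes which team must
# win the next three, so 2 * (1/2)^4 = (1/2)^3 = 1/8.
p_sweep = 2 * half**4

# One team wins the first three games, then loses four straight:
# a specific 7-game sequence, times 2 for either team: 1/64.
p_blown_3_0 = 2 * half**7

print(p_sweep)       # 1/8
print(p_blown_3_0)   # 1/64
```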

Probably the worst example of playoff madness is the American college basketball tournament called, appropriately enough, March Madness. Starting with 64 teams (or 68, depending on how you count), the playoffs proceed through a single-elimination tournament. With some 67 games, it is not surprising that strange things happen. One of the strangest would be if the best team won. To win the title, the best team would have to win six straight games. If the best team has, on average, a 70% chance of winning each game, it would have only a 12% chance of winning the tournament. Perhaps it would be better if they just voted on who is best.
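The 12% figure is simply six straight wins at the assumed 70% per-game probability:

```python
p_game = 0.70            # assumed per-game win probability for the best team
n_rounds = 6             # wins needed to take a 64-team single-elimination bracket
p_title = p_game ** n_rounds

print(f"{p_title:.1%}")  # 11.8%
```

Even a clearly dominant team is thus more likely than not to be eliminated before the title game.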

But you say they would never decide a national championship based on a vote. Consider American college football. Now that is a multi-million dollar enterprise! Nobel Laureates do not get paid as much as US college football coaches. They do not generate as much money either. So what is more important to American universities–sports or science?

In the past, the US college national football champions were decided by a vote of some combination of sports writers, coaches and computers. Now that combination only decides who will play in the championship game. The national champion is ultimately decided by who wins that one final game. Is that better than the old system? More exciting, but as they say: on any given day, anything can happen. Besides, sports are more about deciding winners and losers than about who is best.

To receive a notice of future posts follow me on Twitter: @musquod.

[1] With the expected frequency of course.

[2] Not two to the fourth power because one of the two teams has to win the first game and that team has to win the next three games.


Women in Physics and Mathematics

Friday, March 8th, 2013

Dedicated to Johanna[1]

There are two observations about women in physics and mathematics that are at odds with each other. The first is that there are relatively few women in science. In a typical seminar or conference presentation, I have counted that just over ten percent of the audience is female. The second is that, despite their relatively small numbers, the women who are present are by no means second-rate scholars. The first person ever to win two Nobel Prizes was a woman: Marie Curie (1867 – 1934). But I do not have to go far-far away and long-long ago to find first-rate women scientists. I just have to go down the corridor; well, actually down the corridor and up a flight of stairs, since my office is in the ground-floor administrative ghetto while the real work gets done on the second floor. Since women are demonstrably capable, why are there so few of them in the mathematical sciences?

A cynic could say they are too bright to waste their time on such dead-end fields, but as a physicist I could never admit the validity of that premise. So why are there so few women in physics and mathematics? It is certainly true that in the past these subjects were considered too hard or inappropriate for women. Despite her accomplishments and two Nobel Prizes, Madame Curie was never elected to the French Academy of Sciences. Since she was Polish as well as a woman, the reason may have been as much xenophobia as misogyny.

Another interesting example of a successful woman scientist is Caroline Herschel (1750 – 1848). While not as famous as her brother William (1738 – 1822), she still made important discoveries in astronomy, including eight comets and three nebulae. The comment from Wikipedia is in many ways typical: Caroline was struck with typhus, which stunted her growth, and she never grew past four foot three. Due to this deformation, her family assumed that she would never marry and that it was best for her to remain a house servant. Instead she became a significant astronomer in collaboration with William. Not considered attractive enough to marry and not wanting to be a servant, she made lasting contributions to astronomy. If she had been considered beautiful, we would probably never have heard of her! Sad.

Sophie Germain (1776–1831) is another interesting example. She overcame family opposition to study mathematics. Not being allowed to attend the lectures of Joseph Lagrange (1736–1813) she obtained copies of his lecture notes from other students and submitted assignments under an assumed male name. Lagrange, to his credit, became her mentor when he found out that the outstanding student was a woman. She also used a pseudonym in her correspondence with Carl Gauss[2] (1777–1855). After her death, Gauss made the comment: [Germain] proved to the world that even a woman can accomplish something worthwhile in the most rigorous and abstract of the sciences and for that reason would well have deserved an honorary degree. High praise from someone like Gauss, but why: even a woman? It reminds one of the quote from Voltaire (1694–1778) regarding the mathematician Émilie du Châtelet (1706–1749): a great man whose only fault was being a woman. Fault? And so it goes. Even outstanding women are not allowed to stand on their own merits but are denigrated for being women.

But what about today: does this negative perception still continue? While I have observed that roughly ten percent of attendees at physics lectures tend to be female, the distribution is not uniform. There tend to be more women from countries like Italy and France. I once asked a German colleague if she thought Marie Curie as a role model played a part in the larger (or is that less small?) number of female physicists from those countries. She said no; it had more to do with physics not being as prestigious in those countries. Cynical, but probably true; through prejudice and convention, women are relegated to roles of less prestige rather than those reflecting their interests and abilities.

My mother is probably an example of that. The only outlet she had for her mathematical ability was tutoring her own and the neighbours' children, and filling out the family income tax forms. From my vantage point, she was probably as good at mathematics as many of my colleagues. One wonders how far she could have gone given the opportunity: a B.Sc., a Ph.D.? One will never know. The social conventions and financial considerations made it impossible. Her sisters became school teachers while she married a small-time farmer and raised five children. It is a good thing she did, because otherwise I would not exist.

To receive a notice of future posts follow me on Twitter: @musquod.

[1] A fellow graduate student who died many years ago of breast cancer.

[2] Probably the greatest mathematician that ever existed.


Is science merely fiction?

Friday, February 8th, 2013

Hans Vaihinger (1852 – 1933) was a German philosopher who introduced the idea of “as if” into philosophy. His book, Die Philosophie des Als Ob (The Philosophy of ‘As If’), was published in 1911, but written more than thirty years earlier. He seems to have survived the publish or perish paradigm for thirty years.

In his book, Vaihinger argued that we can never know the true underlying reality of the world but only construct systems which we assume match the underlying reality. We proceed as if they were true.  A prime example is Newtonian mechanics. We know that the underlying assumptions are false—the fixed Euclidean geometry for example—but proceed as if they were true and use them to do calculations. The standard model of particle physics also falls into this category. We know that at some level it is false but we use it anyway since it is useful. Vaihinger himself used the example of electrons and protons as things not directly observed but assumed to exist. They are, in short, useful fictions.

Vaihinger’s approach is a good response to Ernst Mach’s (1838 – 1916) refusal to believe in atoms because they could not be seen. In the end, Mach lost that fight, but not without casualties. His positivism had a negative effect on physics and in many ways was a contributing factor in Ludwig Boltzmann’s (1844 – 1906) suicide. The philosophy of ‘as if’ is the antithesis of positivism, which holds closely to observation and rejects things like atoms that cannot be directly seen. Even as late as the early twentieth century, some respectable physics journals insisted that atoms be referred to as mathematical fictions. Vaihinger would say to proceed as if they were true and not worry about their actual existence. Indeed, calling them mathematical fictions is not far from the philosophy of ‘as if’.

The ideas of Vaihinger had precursors. Vaihinger drew on Jeremy Bentham’s (1748 – 1832) work Theory of Fictions. Bentham was the founder of modern utilitarianism and a major influence on John Stuart Mill (1806 – 1873), among others. ‘As if’ is very much a form of utilitarianism: if a concept is useful, use it.

The idea of ‘as if’ was further developed in what is known as fictionalism. According to fictionalism, statements that appear to be descriptions of the world should be understood as cases of ‘make believe’, or pretending to treat something as literally true (a ‘useful fiction’ or ‘as if’). Possible worlds or concepts, regardless of whether they really exist or not, may be usefully discussed. In the extreme case, science is only a useful discussion of fictions; i.e., science is fiction.

The core problem goes back at least to Plato (428/427 BCE – 348/347 BCE) with the parable of the cave (from The Republic). There, he talks about prisoners who are chained in a cave and can only see the wall of the cave. A fire behind them casts shadows on the wall, and the prisoners perceive these shadows as reality since this is all they know. Plato then argues that philosophers are like a prisoner who is freed from the cave and comes to understand that the shadows on the wall are not reality at all. Unfortunately, Plato, and many philosophers after him, then goes off in the wrong direction. They take ideas in the mind (Plato’s ideals) as the true reality. Instead of studying reality, they study the ideals, which are reflections of a reflection. While there is more to idealism than this, it is the chasing after a mirage or, rather, the image reflected in a mirage.

Science takes the other tack and says we may only be studying reflections on a wall or a mirage but let us do the best job we can of studying those reflections. What we see is indeed, at best, a pale reflection of reality. The colours we perceive are as much a property of our eyes as of any underlying reality. Even the number of dimensions we perceive may be wrong. String theory seems to have settled on eleven as the correct number of dimensions but that is still in doubt. Thus, science can be thought of as ‘as if’ or fictionalism.

But that is far too pessimistic, even for a cynic like me. The correct metaphor for science is the model. What we build in science are not fictions but models. Like fictions and ‘as if,’ these are not reality and should never be mistaken for such, but models are much more than fictions. They capture a definite aspect of reality and portray how the universe functions. So while we scientists may be studying reflections on a wall, let us do so with the confidence that we are learning real but limited knowledge of how the universe works.

To receive a notice of future posts follow me on Twitter: @musquod.


Publish or Perish?

Friday, January 11th, 2013

Here I sit with my feet up thinking deep thoughts, and some fool tells me I have to publish if I want to get paid. Curse you, Robert Boyle (1627 – 1692)! Don’t they know that publishing takes time away from thinking deep thoughts? After all, thinking deep thoughts is the sole reason for theoretical physicists to exist. Ah well, I suppose I must. But things were different before Robert Boyle. Alchemists were very careful about who would learn their secrets. A lot of the information was passed down orally to apprentices or written in code. After all, if you had learned the magic incantation for turning lead into gold, you did not want your competitors butting in and driving up the price of lead. But that all changed with Robert Boyle. He started the trend of publishing his results so others could build on what he had done. Perhaps also because he could not, himself, understand what his assistant, Robert Hooke (1635 – 1703), had done, and thought that others could if he made the results available. Thus he published, and started a trend.

Since the time of Robert Boyle, publishing has become the standard by which scientists are judged. One needs at least one publication for a Ph.D., 15 to 20 for a permanent job (in my specialty), and one very good one to get a Nobel Prize. Unfortunately, publishing does not directly correlate with how much you get paid. Now, my father cut down trees for a living and was paid for each ton of trees trucked to the pulp mill. Perhaps it could be similar with scientists: pay them by the ton of paper consumed rather than produced. At the end of the year, weigh up the paper used to publish their work and pay accordingly. A good journal would then be one that had a large circulation and used a lot of dead trees. Of course, then you might get a bunch of really prolific writers but a lack of deep thinkers. And never mind electronic publication: that throws a whole new element in. Guess we’ll have to throw the paid-by-the-ton scheme out the window.

The world of electronic publishing leads to an interesting digression: What is a publication? Everyone agrees that words printed on dead trees and circulated form a publication. But what about words that never appear on dead trees? With even Newsweek becoming an electronic-only publication, I guess that electronic publications must be considered legitimate. Going further, what about preprint servers like arXiv? It seems to me that arXiv largely replaces the need for the traditional journals. I always took the point of view that I would put my papers on arXiv so other scientists would read them, and then submit them to a regular journal so I could list them on my CV as peer-reviewed publications. I also used that electronic archive as my main source of information on what was going on in my field. The archival, printed journals I rarely looked at. If we had a rating and peer-review system for papers on the electronic archives, we could safely do away with the traditional journals. However, my boss does like to brag about the number of laboratory papers that make the cover of Nature.

But back to the main point: publishing is important. The first reason is that while publishing a lot of papers does not necessarily indicate that one is making a major contribution, no papers probably does indicate that one is sleeping rather than thinking deep thoughts. Thus, papers published should be considered the first indication of scientific productivity, and a baseline for your supervisor to keep paying you or not. The second reason (and the one that Boyle initiated when he didn’t understand his assistant’s work) is that peer review, in the broad sense, plays a major role in error control. It is one’s peers who will ultimately decide if one’s thoughts are deep or shallow, on track or not. The only way one’s peers can critique one’s work is if it is published and made available. The third reason is that science progresses by building on what has gone before, and for this we must thank Boyle. It is the published journals and the much-maligned archival journals that keep the record of what has been learned. While much that is published can safely be forgotten, the gems, like Einstein’s papers, are also there.

So if you wish to flourish as a scientist and not perish, it is best to publish, but only good papers, so as not to bog down the archives or kill too many trees. As for me, I wonder if I can count these blogs as publications on my CV. That would give me an additional sixty publications. Probably not. Anyway, when I retire in a few years, I will have a CV-burning party, so it really does not matter.

To receive a notice of future posts follow me on Twitter: @musquod.



Friday, December 7th, 2012

In any genealogy there are always things one wants to hide: the misfit relative, the children born on the wrong side of the blanket, or the relative Aunt Martha just does not like. As a genealogist, I know it takes a lot of effort to find these things out. Genealogies tend to be sanitized: the illegitimate grandchild becomes a legitimate child, the misfit relative somehow missed being included, and Uncle Ben aged 15 years during the 10 years between censuses and died at age one hundred although he was only eighty[1] according to the earliest records. The roots of science, like all family histories, have undergone a similar sanitization process. In a previous essay, I gave the sanitized version. In this essay, I give the unauthorized version. Aunt Martha would not be happy.

The Authorized Version has the origins of science tied closely to the Greek philosophical tradition, with science arising within and from that tradition. While that has some truth to it, it is not the whole truth. Two millennia passed from Aristotle to the rise of science, and there have been many rationalizations for this delay, many starring Christianity as the culprit. However, Christianity did not gain political strength until a few centuries after Aristotle’s death. So, if the cause was Christianity, it must have acted backwards in time: a miraculous, non-causal effect.

A lot happened between Aristotle and the rise of science: the rise and fall of the Roman Empire, the rise of Christianity and Islam, the Renaissance, the Reformation, and the printing press. A lot of what could be called knowledge was developed, though it produced no major gains in philosophy. The Romans were for the most part engineers, not philosophers, but to do engineering takes real knowledge. Let us pass on to the Arabs. They are generally considered a mere repository for Greek knowledge, which was then passed on to the West largely intact but with some added commentary. I suspect that is not correct, as I will argue shortly.

There are two contributions to the development of science that are frequently downplayed: astrology and alchemy. These are the ancestors that science wants to hide. We all know the story of Ptolemy and Copernicus, but the motivation for the development of astronomy was astrological and religious. From the ancient Babylonians to the present day, people have tried to divine the future by studying the stars. It is no accident that astronomy was one of the first sciences. It had practical applications: astrology (Kepler was a noted astrologer) and the calculation of religious holidays, most notably Easter. One of the reasons Copernicus’s book was not banned was that the church found it useful for calculating the date of Easter. The motions of the planets were also sufficiently complicated that they could not be predicted trivially, yet sufficiently simple to be amenable to treatment by the mathematics of the day. Hence, astronomy became the gold standard of science. Essentially, we dropped the motivation but kept the calculations.

Alchemy, the other problem ancestor, is even more interesting. The Arabs, those people who are considered to have produced nothing new, had within their ranks Jābir ibn Hayyān (721 – 815): chemist and alchemist, astronomer and astrologer, engineer, geographer, philosopher, physicist, pharmacist, and physician—in general, an all-around genius. He, along with Robert Boyle (1627 – 1691), is regarded as a founder of modern chemistry, but note how far in advance Jābir ibn Hayyān was—900 years. Certainly he took alchemy beyond the occult to the practical. Although the alchemists never succeeded in turning lead into gold, they did produce a lot of useful metallurgy and chemistry. It is indeed possible that along with his chemical pursuits, Jābir ibn Hayyān forged the foundation of science by going down to the laboratory and seeing how things actually worked. It is no surprise that the first two people to introduce something like science into Western Europe, Frederick II (1194 – 1250) and Roger Bacon (c. 1214 – 1294), were both very familiar with Arab scholarship and presumably with Jābir ibn Hayyān’s work. In addition, both Isaac Newton (1642 – 1727) and Robert Boyle were alchemists. A major role for alchemy in the development of science cannot be credibly denied.

It is perhaps wrong to think of astrology and alchemy as separate. To turn cabbage into sauerkraut, you need to know the phase of the moon[2], and the same probably holds true for turning lead into gold. Hence, most alchemists were also astrologers. But alchemy and astrology have always had a dark side of occultism, showmanship, and outright fraud. A typical, perhaps apocryphal, example would be Dr. Johann Georg Faust (c. 1480 – c. 1540). He was killed around 1540 when his laboratory exploded. Or his laboratory exploded when the devil came to collect his soul. This person is presumably the origin of the legend of Dr. Faust, the man who sold his soul to the devil for knowledge. It is interesting that this legend arose in the late sixteenth century, just as science was beginning to rise from obscurity. The general population’s suspicion of learning, there from the beginning, has perhaps never really gone away.

The philosophers and theologians, the beautiful people, had their jobs in the monasteries and universities but science owes more to the people who sold their soul, or at least their health, to the devil for knowledge. These were the people who actually went down to the laboratories and did the dirty work to see how the world actually works.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] My genealogy of the Musquodoboit Valley has all these and more (http://www.rootsweb.ancestry.com/~canns/mus.pdf).

[2] Or so my uncle claimed.



Friday, November 2nd, 2012

Thomas Edison (1847 – 1931) was a genius. He was also the ultimately practical person devoted to producing inventions with commercial applications. His quote on airships from 1897 is typical: I am not, however, figuring on inventing an airship. I prefer to devote my time to objects which have some commercial value, as the best airships would only be toys. Fortunately the Wright brothers liked playing with toys and indeed the airplane was just a toy for many years after it was first invented. But just ask Boeing, Airbus, or even Bombardier if airplanes are still toys. Progress requires both the practical people, like Edison, and the people who play with toys, like the Wright brothers.

Let’s pick on Edison again. The practical Edison patented something known as the Edison effect, but did nothing more with it. The effect was this: if a second electrode is put in a light bulb, an electrical current will flow when a voltage is applied in the right direction. This led to the diode, which improved radio reception and, in the hands of people who liked playing with toys, led to the vacuum tube. The vacuum tube is now largely obsolete but began the electronics revolution. Again, we see that progress depends on the people who like playing with toys as well as the people concerned with immediate practical applications. The practical use of an observation, like the Edison effect, is frequently not immediately obvious.

With the light bulb, Edison played a different role. The light bulb is at the end of the chain of discovery. It relies on all the impractical work of people like Michael Faraday (1791 – 1867) and James Maxwell (1831 – 1879), who developed the ideas needed for the practical generation and transmission of electrical power. Without the power grid that their discoveries made possible, the light bulb would have only been a toy.

The discovery of radium is another example of a pure research project leading to practical results. At one time, radium was used extensively to treat cancer. To quote Madame Marie Curie[1] (1867 – 1934): We must not forget that when radium was discovered no one knew that it would prove useful in hospitals. The work was one of pure science. And this is a proof that scientific work must not be considered from the point of view of the direct usefulness of it. It must be done for itself, for the beauty of science, and then there is always the chance that a scientific discovery may become like the radium, a benefit for humanity.

An even more striking example of how serendipitously science advances technology is the modern computer. It relies on transistors, which are very much quantum devices. The early development of quantum mechanics was driven by the study of atomic physics. So, I could just imagine Ernest Rutherford (1871 – 1937), an early experimenter in atomic physics, thinking: I want to help develop a computing device so I will scatter some alpha particles. Not bloody likely! The implications of pure research are simply unknowable. However, I doubt the Higgs boson will ever have practical applications. The energy scale is simply too far removed from the everyday scales.

But pure research contributes to society in another way. A prime example is the cyclotron. It was invented in 1932 for use in the esoteric study of nuclear physics. Initially, cyclotrons were found only in top physics departments and laboratories. Now they are in the basements of many hospitals, where they are used to make rare isotopes for medical imaging and treatment. The techniques developed for pure research frequently find their way into practical use. The idea is captured nicely in the term space age technology. While standing on the moon did not produce any real benefits to mankind, the technology developed in the enterprise did; hence the term.

Of course, I cannot leave this topic without bringing up the World Wide Web. The initial development was done at CERN in support of particle physics. I remember a colleague getting all excited about this new software development, but initially it was something only a geek like her could love. The links were denoted by numbers that had to be typed in; there was no clicking on links. Then the National Center for Supercomputing Applications (NCSA) at the University of Illinois Urbana-Champaign developed a browser, Mosaic, with a graphical interface and embedded pictures. This browser was released in 1993 and looked much like any browser today. The rest is history. But two other things were needed to make the World Wide Web a hit. The first was computers (those things that were developed from Rutherford scattering alpha particles) with sufficient capabilities to run the more powerful browsers and, of course, the internet itself. The internet was initially just an academic network, but the World Wide Web provided the impetus to drive it into most homes. Here again we see a combination of efforts: academic at CERN and NCSA, and commercial at the internet providers.

Thus, we see pure research providing the raw material for technological development. The raw material is either the models, like quantum mechanics, or the inventions, like cyclotrons. These are then used by practical people like Edison to generate useful technology. However, there is also a cultural component: satisfying our curiosity. While the spinoffs may be the main reason politicians and taxpayers support pure science, they are not the motivation driving the scientists who work in pure science. In my own case, I went into physics to understand how the universe works. To a large extent that desire has been fulfilled, not so much by my own efforts but by learning what others have discovered. More generally, the driving force in pure science is curiosity about how the universe works and the joy of discovery. Like Christopher Columbus (1451 – 1506), Robert Scott (1868 – 1912) or Captain James Kirk (b. 2233), pure scientists are exploring new worlds and going where no man, or woman, has gone before.

[1] The first person to win two Nobel prizes.



Friday, October 5th, 2012

Many years ago, I served on a committee responsible for recommending funding levels for research grants. After the awards were announced, a colleague commented that all we did was count the number of publications and award grants in proportion to that number. So, I checked and did a scatter plot. Boy, did they scatter. The correlation between the grant size and the number of publications was not that strong. I then tried citations; again a large scatter. Well, perhaps the results really were random—nah, that could not happen; I was on the committee after all.

I did not do a multivariable analysis, but there were no simple correlations between what might be called quantitative indicators and the size of the research grant. This supports the conclusions of the Expert Panel on Science Performance and Research Funding: Mapping research funding allocation directly to quantitative indicators is far too simplistic, and is not a realistic strategy[1]. Trying to do that is making the mistake of the logical positivists, who wanted to attach significance directly to the measurements. As I have argued in previous essays, the meaning is always in the model, and logical positivism leads to a dead end.

In deciding funding levels, the situation is too complicated for the use of a simple algorithm. Consider the number of publications. There are different types of publications: letters, regular journal articles, review articles, conference contributions, etc. Publications are of different lengths. Should one count pages rather than publications? Or is one letter worth two regular journal papers, letters being shorter and considered by some to be more important than regular articles? But, in reality, one wants to see a mix of the different types of publications. A review article might indicate standing in the field, but one also wants to see original papers. Is a paper in a prestigious journal worth more than one in a more mundane journal? What is a prestigious journal anyway? There is also the question of multi-author papers. One gets suspicious if all the papers are with more senior or well-known authors, but a list of nothing but single-author papers is also a warning sign. Generally, co-authoring papers with junior collaborators is a good thing. In some fields, all papers include all members of the collaboration, so the number of coauthors carries very little information. The order of authors on a publication may or may not be important. And on it goes. Expert judgment is, as always, required to sort out what it all means.

Citations are an even bigger can of worms. Even in a field as small as sub-atomic theoretical physics there are distinct variations in the pattern of citations among the subfields: string theory, particle phenomenology, and nuclear physics. For example, the lifetime for citations in particle phenomenology is significantly shorter than in nuclear physics. Then there is the question of self-citations: citations to one’s own work or, more subtly, to close collaborators. And what about review articles? Is a citation to a review article as important as one to an article on original research? Review articles frequently collect more citations; my most cited paper is a review article. A person can, with a bit of effort, sort this all out. Setting up an algorithm would be damn near impossible. A person could even, gasp, read some of the papers and form an independent opinion of their validity. But that could introduce biases. Hence, numbers are important but they must be interpreted. This leads to the conclusion: Quantitative indicators should be used to inform rather than replace expert judgment in the context of science assessment for research funding allocation.[2]

The other problem with simple algorithms is the feedback loop. With a simple algorithm, researchers naturally change their behaviour to maximize their grants. For example, if we judge on the number of publications, people split papers up, publish weak papers, or publish what is basically the same thing several times. I have done that myself. None of these improve the quality of the work being done. Expert judgment can generally spot these a mile away. After all, the experts have used these tricks themselves.

More generally, there is the problem of trying to reduce everything to questions that have nice quantitative answers. Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise[3]. There seems to be an argument that since science normally uses quantitative methods, administration should follow suit so it can have the success of science. It is like the medieval argument that since most successful farmers had three cows, the way to make farmers successful was to give them all three cows. But the wrong question can never give the right answer. It is far better to ask the right question and then work on getting a meaningful answer. What we want to do at a science laboratory, or in funding science generally, is to advance our understanding of how the universe works to the maximum extent possible and use the findings for the benefit of society. The real question is: how do we do this? That is neither an easy question to answer nor one that can be easily quantified. But not being quantifiable does not make it a meaningless question. There are various metrics an informed observer can use to make intelligent judgments. But it is very important that administrators avoid the siren call of logical positivism and not try to attach meaning directly to a few simple measurements.

[1] Quote from: Informing research choices: indicators and judgment. Expert Panel on Science Performance and Research Funding. Council of Canadian Academies (2012).

[2] Ibid

[3] Tukey, J. W. (1962). The future of data analysis. Annals of Mathematical Statistics 33(1), 1-67.



Friday, September 7th, 2012

This essay is in part motivated by an excellent Quantum Diaries post, Art and Science: Both or Neither, written by a fellow TRIUMFonian, Jordan Pitcher. He explores the incommensurability of art and science. In response, this essay is about related matters of art and taste, such as whether the original Bugs Bunny cartoons represent the pinnacle of the cartoon trade (beyond debate) or whether my essays are worth reading (debatable).

Now, science is about learning how the universe operates, but there is more to life than that; he says while listening to: Oh, give me the beat boys and free my soul, I wanna get lost in your rock and roll, And drift away[1]. I do not know if that is art but it is certainly not science. This brings us to the aesthetic side of life, including music, art, literature, theatre, movies, and cartoons (see above). With art, my house is full of the pictures my mother-in-law painted and it ain’t half bad stuff. I particularly like the autumn scenes. Similarly, I have my tastes in literature (I particularly like a good Rex Stout (1886 – 1975) detective story—and don’t say that is not literature); theatre (Wicked was not bad); movies (I took my daughter to The Smurfs[2] but can skip movies with no loss); and the appropriate use of grammar (ain’t, ain’t half bad[3]). That brings me to fonts: Is comic sans really that bad? And I have seen major disagreements on the relative merits of sans serif versus serif—a plague on both your houses. And do not forget food; a fried mackerel would go real good about now.

I lived through the culture wars of the 1960s and it turned me off culture wars: meaningless arguments over personal preferences. There were great debates about whether rock and roll was legitimate music or the work of the devil. There was even a debate about whether the lyrics of one particular song (Louie Louie) were obscene, but no one could tell what the lyrics actually were, so the argument was moot. Similarly, is abstract art really art, or can art only be more realistic works like those of Rubens (1577 – 1640)? Hmm, perhaps that is not the best example, but there can be no doubt that the arts enrich life. Again, perhaps Rubens might not be the best example.

A central characteristic of science is that it has mechanisms, namely comparison to observation and parsimony, to determine uniquely which model or approach is best. But there are no similar criteria to decide if Beethoven (1770 – 1827) is really better than the Beastie Boys (1981 – 2012). I do not particularly like either (Oh, give me the beat). Considering Beethoven’s staying power and the Beastie Boys’ record sales, I guess I am in a minority. So it is across the whole scope of the arts; some people like one thing and some another. One should not mistake personal preference for objective reality, but give me that beat.

It is also not a scientist versus humanist kind of thing. The likes and dislikes cut across that divide. Perhaps the likes of scientists may be tilted in a somewhat different direction, but the spread in each group is large. There is, instead, a large upbringing and cultural influence on what one likes and dislikes. My Asian-born daughter loves sushi, but my Nova Scotia-raised relatives would not touch it with a three metre (roughly ten foot) pole. The choice of preferred music, art, food, etc. depends at least in part on what one was exposed to while growing up. There is probably even a genetic component to what one likes and dislikes. I inherited my liking for and ability in mathematics from my mother; similarly my inability to write coherently. The latter plagued my time in school and university. What one sees has a genetic component, as in colour blindness. There are also studies suggesting some women have four rather than three types of colour sensors. It would be strange if inherited differences such as these did not affect our aesthetic tastes. Indubitably, some of the differences in our tastes are indeed in our brains—either acquired or inherited.

The one downside of all the differences in taste is that some denizens of the art world think that since the arts have no or only weak objective standards, science cannot have any either. This leads to nonsense like the claim that science is purely cultural. Conversely, there is the equally ridiculous perception that the arts should have objective standards like science. Salt herring is an acquired taste (shudder).

So let us recognize that science and the arts are indeed very different in how they make judgments and celebrate the diversity permitted by the subjectivity in the arts. After all, life would be very boring if all we had to read was Margaret Atwood (b. 1939) or Farley Mowat, (b. 1921).

To receive a notice of future posts follow me on Twitter: @musquod.



[1] From the song Drift Away written by Mentor Williams

[2] I particularly liked Azrael although my sister says I am like brainy smurf (not a compliment).

[3] See A Dictionary of Modern English Usage (1926), by Henry Fowler (1858–1933)