
Byron Jennings | TRIUMF | Canada


In Defense of Scientism and the Joys of Self-Publishing.

Friday, April 17th, 2015

As long-time readers of Quantum Diaries know, I have been publishing here for a number of years, and this is my 85th and last post[1]. A couple of years ago, I collected the then-current set of essays, titled it “In Defense of Scientism” after the title of one of the essays, and sent it off to a commercial publisher. Six months later, I got an e-mail from the editor complaining that he had lost the file and only found it by accident, and he somehow inferred that this was my fault. After that experience, it was no surprise he did not publish it.

With all the talk of self-publishing these days, I thought I would give it a try. It is easy, at least compared to finding the Higgs boson! There are a variety of options offering different levels of control, so one can pick and choose – like off an à la carte menu. The simplest form of self-publishing is to go to a large commercial publisher. The one I found would, for $50.00 USD up front and $12.00 a year, supply print-on-demand and e-books to a number of suppliers. Not sure that I could recover the costs from the revenue – and being a cheapskate – I decided not to go that route. There are also commission-based alternatives with no upfront costs, but I decided to interact directly with three (maybe four, if I can jump over the humps the fourth has put up) companies. One of the companies treated its print-on-demand and digital distribution arms as distinct, even to the point of requiring different reimbursement methods. That is the disadvantage of doing it yourself: sorting it all out. The advantage of working directly with the suppliers is more control over the detailed formatting and distribution.

From then on, things got fiddly[2] – reimbursement, for example. Some companies would only pay by electronic funds transfer, others only by check. The weirdest example was one company that did electronic funds transfers unless the book was sold in Brazil or Mexico; in those cases payment is by check, but only after $100.00 has accumulated. One company verified, during account setup, that the funds transfer worked by transferring a small amount, in my case 16 cents. And then of course there are special rules if you earn any money in the USA: a 30% withholding tax unless you can document that a tax treaty allows you to get around it. The USA is the only country that requires this. Fine; being an academic, I am used to jumping through hoops.

Next was the question of an International Standard Book Number (ISBN). They are not required but are recommended. That is fine, since in Canada you can get them for free. Just as well, since each version of the book needs a different number: the paperback needs a different number from the electronic version, and each electronic format requires its own number. As I said, it is a good thing they are free. Along with the ISBN, I got a reminder that the Library of Canada requires one copy of each book that sells more than four copies, two copies if it goes over a hundred, and of course a separate electronic copy if you publish electronically. Fun, fun, fun[3]. There are other implications of getting your own ISBN. Some of the publishers would supply an ISBN free of charge, but would then put the book out under their own imprint and, in some cases, give wider distribution to those books. But again, getting your own number ultimately gives you more control.

With all this research in hand, it was time to create and format the content. I had the content from the four years’ worth of Quantum Diary posts and all I had to do was put it together and edit for consistency. Actually, Microsoft Word worked quite well with various formatting features to help. I then gave it to my wife to proofread. That was a mistake; she is still laughing at some of the typos. At least there is now an order of magnitude fewer errors. I should also acknowledge the many editorial comments from successive members of the TRIUMF communications team.

The next step was to design the book cover. There comes a point in every researcher’s career when they need support and talent beyond their own. Originally, I had wanted to superimpose a picture of a model boat on a blackboard of equations. With that vision in mind, I set about the hallways to seek out and enlist the talent of a few staff members who could make it happen. After normal working hours, of course. A co-op communications student suggested that the boat be drawn on the blackboard rather than superimposing a picture. The equations were already on a blackboard and are legitimate. The boat was hand-drawn by a talented lady in accounting, who drew it first onto an overhead transparency[4] and then projected it onto a blackboard. A co-op student on the communications team produced the final cover layout according to the various colour codes and margin bleeds dictated by each publisher. For both my own sanity and yours, I won’t go into all the details. In the end, I rather like how the cover turned out.

For print-on-demand, they wanted separate PDFs for the cover and for the interior. They sent very detailed instructions, so that was no problem; it only took about three tries to get it correct. The electronic version was much more problematic. I wonder if the companies that produce both paper and digital versions get it right. I suspect not. There is a free program that converts from Word to EPUB format, but the results have some rather subtle errors, like messing up the table of contents. I ended up using one of the digital publishers’ conversion services, provided free of charge. If you buy a copy and it looks messed up, I do not want to hear about it.[5] One company (the fourth mentioned above) added a novel twist. I jumped through all the hoops related to banking information for wire transfers, did the USA tax paperwork, and then went to upload the content. Ah, I needed to download a program to upload the content. That should not have been a problem, but it ONLY runs on their hardware. The last few times I used their hardware it died prematurely, so they can stuff it.

Now, several months after I started the publishing process, I have jumped through all the hoops! All I have to do is lie back and let the money roll in so I can take early retirement. Well, at my age, early retirement is no longer a priori possible, but at least I hope to earn enough money to buy the people who helped me prepare the book a coffee. So everyone, please rush out and buy a copy. Come on, at least one of you.

As a final point, you may wonder why there is a drawing of a boat on the cover of a book about the scientific method. Sorry, to find out you will have to read the book. But I will give you a hint. It is not that I like to go sailing. I get seasick.

To receive a notice of my blogs and other writing follow me on Twitter: @musquod.

[1] I know, I have promised this before, but this time trust me. I am not like Lucy in the Charlie Brown cartoons pulling the football away.

[2] Epicurus, who made the lack of hassle the greatest good, would not have approved.

[3] Reminds me of an old Beach Boys song.

[4] An old overhead projector was found in a closet.

[5] Hey! We got through an entire conversation about formatting and word processing software without mentioning LaTeX despite me having been the TRIUMF LaTeX guru before I went over to the dark side and administration.


Operationalism and Operational Definitions

Tuesday, February 3rd, 2015

Physicists frequently stray into the field of philosophy; notable examples include Thomas Kuhn (1922 – 1996) and Henri Poincaré (1854 – 1912). This is perhaps because physicists frequently work in areas far removed from everyday experience and, to communicate their ideas successfully, must deal with the underlying assumptions explicitly. Although less well known today than Kuhn and Poincaré, Percy Bridgman (1882 – 1961) also falls into this category. In physics, he is noted for his work on high-pressure physics, winning the Nobel Prize in 1946. In philosophy, he is credited with coining the term OPERATIONAL DEFINITION and promoting the idea of operationalism. These ideas are laid out in his 1927 book THE LOGIC OF MODERN PHYSICS. If nothing else, it shows the folly of using MODERN in book titles. Nonetheless, it is an interesting little book and, in its time, was quite influential.

In his book, Bridgman introduces several interesting ideas, for example, that when one explores new areas in science, one should not be surprised that the supporting concepts have to change. Hence we should not be surprised when classical concepts fail in the relativistic or quantum domains. This illustrates why interpretations of quantum mechanics that explain it in terms of classical concepts are poorly motivated. A related idea is that an explanation is the description of a given phenomenon in terms of familiar concepts. Of course, with this definition, what qualifies as a valid explanation depends on what the explainee is familiar with. If one cannot succeed using established concepts, one must explain the new idea using familiar, albeit far-removed, concepts. But what happens when even this does not work? Bridgman suggests that the solution is to introduce new concepts and become familiar with them. Seems reasonable to me. Thus quantum mechanics can be explained in terms of the (familiar to me) concept of the wave function; no need for many worlds and the like.

While it is natural to think of high speed (relativity) or small size (quantum mechanics) as new areas of science, Bridgman includes increased precision as well. He talks about the penumbra of uncertainty that surrounds all measurements and that is penetrated by increasing the precision of the measurements. Thus the idea of the distinct high-energy and precision frontiers, commonly discussed in modern particle physics planning exercises, goes back at least to 1927.

Bridgman was also a phenomenologist to the core. He did not believe that a priori knowledge could constrain what could happen; in his words: experience is determined only by experience. C.I. Lewis (1883 – 1964), in his 1929 book Mind and the World-Order, agrees. Similar ideas, in books of about the same time, indicate the concerns of that age.

Despite these interesting sidelights, the main idea in THE LOGIC OF MODERN PHYSICS is that concepts are defined by how they are measured, that is, by the measurement operation – hence the term operationalism. So why was he interested in operational definitions? It was to avoid the problem in classical mechanics where concepts like distance and time were taken for granted. It then came as a shock when those concepts proved to be rather complex once special relativity was invented. To avoid such shocks in the future, Bridgman proposed the idea of operational definitions. For example, to measure length you go down to the local Canadian Tire® store (in the USA it would be Walmart®), buy a tape measure, and use it to measure length. Thus the concept of length is defined by Canadian Tire®, oops, I mean by a tape measure. What if I measure length by surveying techniques that make use of triangulation? Bridgman claimed that that is a distinctly different concept, covered by the same term only for convenience. Here at TRIUMF, distance and location are also measured using laser tracking. This is again a different concept from the original concept of length. Things get even more complicated when we talk about the distance to stars, which uses yet another operation. Bridgman suggests that length loses its meaning at lengths less than the size of the electron because such lengths cannot be measured. Today we would say they can be measured, but length in that case is simply a parameter, in a mathematical formula, describing the scattering of particles. Hence we do not have one concept of length or distance but many, although they agree numerically in regions where the techniques overlap.

Bridgman then goes on to consider various other concepts and how they might be defined operationally. He seems to have been very much influenced by Albert Einstein (1879 – 1955) and Einstein’s discussion of the synchronization of clocks (which actually goes back to Poincaré). The possible operational definitions of velocity are particularly interesting. In contradistinction to the definition given by Einstein based on clocks synchronized and distances measured in a fixed inertial frame, Bridgman suggests that the velocity of a car could also be defined by counting mileposts that the car passes to determine distance and using the clock on the car dashboard to measure time. This velocity can become infinite and would be useful to a person going to a distant solar system who is interested in how many of his years it takes to get there. For most purposes Einstein’s definition is more convenient and hence it is the one in textbooks though other definitions remain possible.
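
Bridgman’s milepost-and-dashboard definition corresponds to what relativists now call proper velocity, γv, which grows without bound as the ordinary velocity approaches the speed of light. Here is a small sketch (my own illustration, not from Bridgman’s book) comparing the two operational definitions:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def coordinate_velocity(v):
    """Einstein's definition: distance and time both measured with
    synchronized clocks in a single inertial frame. Bounded by c."""
    return v

def milepost_velocity(v):
    """Bridgman's alternative: distance from mileposts fixed beside the
    road, time from the traveller's own dashboard clock. This is the
    proper velocity gamma * v, which is unbounded."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * v

v = 0.995 * C
print(coordinate_velocity(v) / C)  # 0.995: never exceeds 1
print(milepost_velocity(v) / C)    # about 9.96: already "faster than light"
```

At 99.5% of the speed of light, the milepost definition already gives roughly ten times the textbook value, which is exactly why it would appeal to a traveller counting off distant solar systems in years of their own time.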

And on it goes. In some cases the definitions seem quite forced. Nevertheless, three groups of people picked up on the idea of operational definitions. One group was the logical positivists: they tried to avoid theory and were pleased when a physicist gave definitions directly in terms of observables. The second group was the psychologists, who wanted a more secure foundation for their subject. The third group was in quality control and business management, where Walter Shewhart (1891 – 1967) and W. Edwards Deming (1900 – 1993) adopted the idea.

However, the concept, as the be-all and end-all of meaning, had its problems. Like logical positivism, it missed the idea that the meaning is in the model. While we may have different ways to measure length, there is a common idea behind them all. We can consider this common idea to be an abstraction from the different operationally defined concepts, or we can take the operational definitions as approximations to the abstract idea. One could say that operationally there is no difference between the two approaches.

Ultimately, operational definitions are useful. They tie concepts tightly to observations where they are less likely to be dislodged by future discoveries or new models. They also help eliminate fuzzy thinking. A lot of the concepts that do not have operational definitions are, in general, poorly defined. Who knows, I might even take the concept of scientific realism seriously if someone gave me an operational definition of it.

To receive a notice of my future posts and my pending book, In Defense of Scientism, follow me on Twitter: @musquod.


String Theory and the Scientific Method

Friday, January 9th, 2015

It seems some disagreements are interminable: the Anabaptists versus the Calvinists, capitalism versus communism, the Hatfields versus the McCoys, or string theorists versus their detractors. It is the latter I will discuss here, although the former may be more interesting. This essay is motivated[1] by a comment in the December 16, 2014 issue of Nature by George Ellis and Joe Silk. The comment takes issue with attempts by some string theorists and cosmologists to redefine the scientific method by eliminating the need for experimental testing and relying on elegance or similar criteria instead. I have a lot of sympathy with Ellis and Silk’s point of view, but believe that it is up to scientists to define what science is, and that hoping for deliverance by outside people, like philosophers, is doomed to failure.

To understand what science is and what science is not, we need a well-defined model for how science behaves. Providing that well-defined model is the motivation behind each of my essays. The scientific method is quite simple: build models of how the universe works based on observation and simplicity, then test them by comparing their predictions against new observations. Simplicity is needed since observations underdetermine the models (see, for example, Willard Quine’s (1908 – 2000) essay “Two Dogmas of Empiricism”). Note also that what we do is build models: the standard model of particle physics, the nuclear shell model, string theory, etc. Quine refers to the internals of the models as myths and fictions. Henri Poincaré (1854 – 1912) talks of conventions, and Hans Vaihinger (1852 – 1933) of the philosophy of as if, otherwise known as fictionalism. Thus it is important to remember that our models, even the so-called theory of everything, are only models and not reality.

It is the feedback loop of observation, model building, and testing against new observation that defines science and gives it its successes. Let me repeat: the feedback loop is essential. To see why, consider the example of astrology and why scientists reject it. Its practitioners consider it to be the very essence of elegance. Astrology uses careful measurements of current planetary locations and mathematics to predict their future locations, but it is based on an epistemology that places more reliance on the eloquence of ancient wisdom than on observation. Hence there is no attempt to test astrological predictions against observations. That would go against their fundamental principles of eloquence and the superiority of received knowledge to observation. Just as well, since astrological predictions routinely fail. Astrology’s failures provide a warning to those who wish to replace prediction and simplicity with other criteria. The testing of predictions against observation and simplicity are hard taskmasters, and it would be nice to escape their tyranny, but that path is fraught with danger, as astrology illustrates. The feedback loop from science has even been picked up by the business management community and has been built into the very structure of the management standards (see ISO Annex SL, for example). It would be a shame if management became more scientific than physics.

But back to string theory. Gravity has always been a tough nut to crack. Isaac Newton (1643 – 1727) proposed the decidedly inelegant idea of instantaneous action at a distance, and it served well until 1905 and the development of the special theory of relativity. Newton’s theory of gravity and special relativity are inconsistent since the latter rules out instantaneous action at a distance. In 1916, Albert Einstein (1879 – 1955), with an honorable mention to David Hilbert (1862 – 1943), proposed the general theory of relativity to solve the problem. In 1919, the prediction of the general theory of relativity for the bending of light by the sun was confirmed by an observation by Arthur Eddington (1882 – 1944). Notice the progression: conflict between two models, proposed solution, confirmed prediction, and then acceptance.

Like special relativity and Newtonian gravity, general relativity and quantum mechanics are incompatible with one another. This has led to attempts to generate a combined theory. Currently string theory is the most popular candidate, but it seems to be stuck at the stage general relativity was in 1917, or maybe even 1915: a complicated (some would say elegant, others messy) mathematical theory, but unconfirmed by experiment. Although progress is definitely being made, string theory may stay where it is for a long time. The problem is that the natural scale of quantum gravity is the Planck mass, and this scale is beyond what we can explore directly by experiment. However, there is one place quantum gravity may have left observable traces, and that is in its role in the early Universe. There are experimental hints that may indicate a signature in the cosmic microwave background radiation, but we must await further experimental results. In the meantime, we must accept that current theories of quantum gravity are doubly uncertain. Uncertain, in the first instance, because, like all scientific models, they may be rendered obsolete by a new understanding, and uncertain, in the second instance, because they have not been experimentally verified through testable predictions.

Let’s now turn to the question of multiverses. This is an even worse dog’s breakfast than quantum gravity. The underlying problem is the fine-tuning of the fundamental constants needed in order for life as we know it to exist. What is needed for life, as we do not know it, to exist is unknown. There are two popular ideas for why the Universe is fine-tuned. One is that the constants were fine-tuned by an intelligent designer to allow for life as we know it. This explanation has the problem that by itself it can explain anything but predict nothing. An alternative is that there are many possible universes, all existing, and we are simply in the one where we can exist. This explanation has the problem that by itself it can explain anything but predict nothing. It is ironic that to avoid an intelligent designer, a solution based on an equally dubious just-so story is proposed. Since we are into just-so stories, perhaps we can compromise by having the intelligent designer choose one of the multiverses as the one true Universe. This leaves the question of who the one true intelligent designer is. As an old farm boy, I find the idea that Audhumbla, the cow of the Norse creation myth, is the intelligent designer to be the most elegant. Besides, the idea of elegance as a deciding criterion in science has a certain bovine aspect to it. Who decides what constitutes elegance? Everyone thinks their own creation is the most elegant. This is only possible in Lake Wobegon, where all the women are strong, all the men are good-looking, and all the children are above average (A PRAIRIE HOME COMPANION – Garrison Keillor (b. 1942)). Not being in Lake Wobegon, we need objective criteria for what constitutes elegance. Good luck with that one.

Some may think the discussion in the last paragraph is frivolous, and quite by design it is.  This is to illustrate the point that once we allow the quest for knowledge to escape from the rigors of the scientific method’s feedback loop all bets are off and we have no objective reason to rule out astrology or even the very elegant Audhumbla. However, the idea of an intelligent designer or multiverses can still be saved if they are an essential part of a model with a track record of successful predictions. For example, if that animal I see in my lane is Fenrir, the great gray wolf, and not just a passing coyote, then the odds swing in favor of Audhumbla as the intelligent designer and Ragnarok is not far off. More likely, evidence will eventually be found in the cosmic microwave background or elsewhere for some variant of quantum gravity. Until then, patience (on both sides) is a virtue.

Though the mills of science grind slowly;
Yet they grind exceeding small;
Though with patience they stand waiting,
With exactness grind they all.[2]

[1] I have already broken my new year’s resolution to post no more philosophy of science blogs but this is the last, I promise.

[2] With apologies to Henry Wadsworth Longfellow (1807 – 1882)


Not all philosophy is useless.

Friday, December 5th, 2014

In this, the epilogue to my philosophic musings, I locate my view of the scientific method within the landscape of various philosophical traditions and also tie it into my current interest, project management. As strange as it may seem, this triumvirate of the scientific method, philosophy, and management meets in the philosophic tradition known as pragmatism and in the work of W. Edwards Deming (1900 – 1993), a scientist and management guru who was strongly influenced by the pragmatic philosopher C.I. Lewis (1883 – 1964), who in turn strongly influenced business practices. And I do mean strongly in both cases. The thesis of this essay is that Lewis, the pragmatic philosopher, has had influence in two directions: on business practice and on the philosophy of science. Surprisingly, my views on the scientific method are very much in this pragmatic tradition and not crackpot.

The pragmatic movement was started by Charles S. Peirce (1839 – 1914) and further developed by William James (1842 – 1910) and John Dewey (1859 – 1952). The basic idea of philosophic pragmatism is given by Peirce in his pragmatic maxim as: “To ascertain the meaning of an intellectual conception one should consider what practical consequences might result from the truth of that conception—and the sum of these consequences constitute the entire meaning of the conception.” Another aspect of the pragmatic approach to philosophic questions was that the scientific method was taken as given, with no need for justification from the outside; i.e., the scientific method was used as the definition of knowledge.
How does this differ from the workaday approach to defining knowledge? Traditionally, going back at least to Plato (428/427 or 424/423 BCE – 348/347 BCE) knowledge has been defined as:
1) Knowledge – justified true belief
This leaves open the question of how belief is justified, and since no justification is ever 100% certain, we can never be sure the belief is true. That is a definite problem. No wonder the philosophic community has spent two and a half millennia in fruitless efforts to make sense of it.

A second definition of knowledge predates this and is associated with Protagoras (c. 490 B.C. – c. 420 B.C.) and the sophists:
2) Knowledge – what you can convince people is true
Essentially, the argument is that since we cannot know that a belief is true with 100% certainty, what matters is what we can convince people of. This same basic idea shows up in the work of modern philosophers of science as the idea that scientific belief is basically a social phenomenon and what is important is what the community convinces itself is true. This was part of Thomas Kuhn’s (1922 – 1996) thesis.

While we cannot know what is true, we can know what is useful. Following the lead of scientists, the pragmatists effectively defined knowledge as:
3) Knowledge – information that helps you predict and modify the future
If we take predicting and modifying the future as the practical consequence of information, this definition of knowledge is consistent with the pragmatic maxim. The standard model of particle physics is not knowledge by the strict application of definition 1) since it is not completely true; however it is knowledge by definition 3 since it helps us predict and modify the future. The scientific method is built on definition 3). The modify clause is included in the definition since the pragmatists insisted on that aspect of knowledge. For example, C.I. Lewis said that without the ability to act there is no knowledge.

The third definition of knowledge given above does not correspond to what many people think of as knowledge, so Dewey suggested using the term “warranted assertions” instead: the validity of the standard model is a warranted assertion. Fortunately, this terminology never caught on. In contrast, James’s pragmatic idea of “truth’s cash value”, derided at the time, has caught on. In a recent book on risk management, “How to Measure Anything,” Douglas W. Hubbard spends a lot of space on what is essentially the cash value of information. In business, that is what is important. The pragmatists were, perhaps, just a bit ahead of their time. Hubbard, whether he knows it or not, is a pragmatist.
Dewey coined the term “instrumentalism” to describe the pragmatic approach. An idea or a belief is like a hand: an instrument for coping. A belief has no more metaphysical status than a fork. When your fork proves inadequate to the task of eating soup, it makes little sense to argue about whether there is something inherent in the nature of forks or something inherent in the nature of soup that accounts for the failure. You just reach for a spoon. However, most pragmatists did not consider themselves to be instrumentalists, but rather used the pragmatic definition of knowledge to define what is meant by real.

Now I turn to C.I. Lewis. He is alternately regarded as the last of the classical pragmatists or the first of the neo-pragmatists. He was quite influential in his day as a professor at Harvard from 1920 until his retirement in 1953. In particular, his 1929 book “Mind and the World-Order” had a big influence on epistemology and, surprisingly, on ISO management standards. One can see many of the ideas developed by Kuhn already present in the work of C.I. Lewis, for example, the role of theory in interpreting observation. Or as Deming, influenced by Lewis, expressed it: “There is no knowledge without theory.” As a theorist, I like that. At the time, this was quite radical. The logical positivists took the opposite tack and tried to eliminate theory from their epistemology. Lewis and Kuhn argued this was impossible. The idea that theory is necessary for knowledge was not new to Lewis; it is also present in the works of Henri Poincaré (1854 – 1912), who was duly referenced by Lewis.

Another person Lewis influenced was Willard V. O. Quine (1908 – 2000), although Quine and Lewis did not agree. Quine is perhaps best known outside the realm of pure philosophy for the Duhem-Quine thesis, namely that it is impossible to test a scientific hypothesis in isolation because an empirical test of the hypothesis requires one or more background assumptions. This was the death knell of any naïve interpretation of Sir Karl Popper’s (1902 – 1994) idea that science is based on falsification. But Quine’s main opponents were the logical positivists; Popper was just collateral damage. Quine published a landmark paper in 1951, “Two Dogmas of Empiricism”. I would regard this paper as the high point in the discussion of the scientific method by a philosopher, and it is reasonably readable (unlike Lewis’s “Mind and the World-Order”). Besides the Duhem-Quine thesis, the other radical idea is that observation underdetermines scientific models and that simplicity and conservatism are necessary to fill the gap. This idea also goes back to Poincaré and his idea of conventionalism – much of what is regarded as fact is only convention.

To a large extent my ideas match well with the ideas in “Two Dogmas of Empiricism.” Quine summarizes it nicely as: “The totality of our so-called knowledge or beliefs, from the most casual matters of geography and history to the profoundest laws of atomic physics or even of pure mathematics and logic, is a man-made fabric which impinges on experience only along the edges.” and “The edge of the system must be kept squared with experience; the rest, with all its elaborate myths or fictions, has as its objective the simplicity of laws.” Amen.

Unfortunately, after the two dogmas of empiricism were brought to light, the philosophy of science regressed. In a recent discussion of simplicity in science that I came across, there was no mention of Quine’s work, nor of his correct identification of the role of simplicity: to relieve the underdetermination of models by observation. Philosophers found no use for his ideas and have gone back to definition 1) of knowledge. Sad.

Where philosophers have dropped the ball, it was picked up by people in, of all places, management. Two people influenced by Lewis were Walter A. Shewhart (1891 – 1967) and Edwards Deming. It is said that Shewhart read Lewis’s book fourteen times and Deming read it nine times. Considering how difficult that book is, it probably required that many readings just to comprehend it. Shewhart is regarded as the father of statistical process control, a key aspect of quality control. He also invented the control chart, a key component of statistical process control. Shewhart’s 1939 book “Statistical Method from the Viewpoint of Quality Control” is a classic in the field, but it devotes a large part to showing how his ideas are consistent with Lewis’s epistemology. In this book, Shewhart introduced the Shewhart cycle, which was modified by Deming (and is sometimes called the Deming cycle). Under its current name, Plan-Do-Check-Act (the PDCA cycle), it forms the basis of the ISO management standards.


The original Shewhart cycle as given in Shewhart’s book.

What is this cycle? Here it is as captured from Shewhart’s book. This is the first place where production is seen as part of a cycle, and in the included caption Shewhart explicitly relates it to the scientific method as given by Lewis. Deming added another step to the cycle, the act step, which strikes me as unnecessary; it can easily be incorporated in the specification or plan stage (as it is in Shewhart’s diagram). But Deming was influenced by Lewis, who regarded knowledge without the possibility of acting as impossible, hence the act step. This idea has become ingrained in the ISO management standards as the slogan “continual improvement” (Clause 10 in the standards). To see the extent to which Deming was guided by Lewis’s ideas, just look at Deming’s 1993 book “The New Economics.” He summarizes his approach in what he calls a system of profound knowledge. This has four parts: knowledge of system, knowledge of variation, theory of knowledge and knowledge of psychology. The one that seems out of place is the third; why include theory of knowledge? Deming believed that this was necessary for running a company, and he explicitly refers to Lewis’s 1929 book. Making the reading of Lewis’s book mandatory for business managers would certainly have the desirable effect of cutting down the number of managers. To be fair to Deming, he does suggest starting in about the middle of the book. We have two unbroken chains: 1) Peirce, Lewis, Shewhart, Deming, ISO management standards and 2) Peirce, Lewis, Quine, my philosophical musings. It reminds one of James Burke’s TV program “Connections”.

Popper may be the person many scientists think of to justify how they work, but Quine would probably be better; and Quine’s teacher, C.I. Lewis, through Deming, has provided the philosophic foundation for business management. Within the context of definition 3) for knowledge, both science and business have been very successful. Your reading of this essay required both. In contradistinction, standard western philosophy based on definition 1) has largely failed; philosophers still do not know how to acquire knowledge. However, not all philosophy is useless; some of it is pragmatic.

Share

Good Management is Science

Friday, October 10th, 2014

Management done properly satisfies Sir Karl Popper’s (1902 – 1994) demarcation criterion for science, i.e. it uses models that make falsifiable, or at least testable, predictions. That was brought home to me by a book[1] by Douglas Hubbard on risk management in which he advocated observationally constrained (falsifiable or testable) models for risk analysis, evaluated through Monte Carlo calculations. Hmm, observationally constrained models and Monte Carlo calculations; sounds like a recipe for science.

Let us take a step back. The essence of science is modeling how the universe works and checking the assumptions of the model and its predictions against observations. The predictions must be testable. According to Hubbard, the essence of risk management is modeling processes and checking the assumptions of the model and its predictions against observations. The predictions must be testable. What we are seeing here is a common paradigm for knowledge in which modeling and testing against observation play a key role.

The knowledge paradigm is the same in project management. A project plan, with its resource loaded schedules and other paraphernalia, is a model for how the project is expected to proceed. To monitor a project you check the plan (model) against actuals (a fancy euphemism for observations, where observations may or may not correspond to reality). Again, it reduces back to observationally constrained models and testable predictions.

The foundations of science and good management practices are tied even closer together. Consider the PDCA (Plan-Do-Check-Act) cycle for process management that is present, either implicitly or explicitly, in essentially all the ISO standards related to management. It was originated by Walter Shewhart (1891 – 1967), an American physicist, engineer and statistician, and popularized by Edwards Deming (1900 – 1993), an American engineer, statistician, professor, author, lecturer and management consultant. Engineers are into everything. The actual idea of the cycle is based on the ideas of Francis Bacon (1561 – 1626) but could equally well be based on the work of Roger Bacon[2] (1214 – 1294). Hence, it should probably be called the Double Bacon Cycle (no, that sounds too much like a breakfast food).

But what is this cycle? For science, it is: plan an experiment to test a model, do the experiment, check the model results against the observed results, and act to change the model in response to the new information from the check stage, or devise more precise tests if the predictions and observations agree. For process management, replace experiment with production process. As a result, you have a model for how the production process should work, and doing the process allows you to test the model. The check stage is where you see if the process performed as expected, and the act stage allows you to improve the process if the model and actuals do not agree. The key point is the check step. It is necessary if you are to improve the process; otherwise you do not know what is going wrong or, indeed, even if something is going wrong. It is only possible if the plan makes predictions that are falsifiable or at least testable. Popper would be pleased.

There is another interesting aspect of the ISO 9001 standard. It is based on the idea of processes. A process is defined as an activity that converts inputs into outputs. Well, that sounds rather vague, but the vagueness is an asset, kind of like degrees of freedom in an effective field theory. Define them as you like, but if you choose them incorrectly you will be sorry. The real advantage of effective field theory, and of the flexible definition of process, is that you can study a system at any scale you like. In effective field theory, you study processes that operate at the scale of the atom, the scale of the nucleus or the scale of the nucleon, and tie them together with a few parameters. Similarly with processes: you can study the whole organization as a process, or drill down and look at sub-processes at any scale you like; for CERN or TRIUMF that would be down to the last magnet. It would not be useful to go further and study accelerator operations at the nucleon scale. At a given scale, different processes are tied together by their inputs and outputs, and these are also used to tie together processes at different scales.

As a theoretical physicist who has gone over to the dark side and into administration, I find it amusing to see the techniques and approaches from science being borrowed for use in administration, even Monte Carlo calculations. The use of similar techniques in science and administration goes back to the same underlying idea: all true knowledge is obtained through observation and its use to build better testable models, whether in science or other walks of life.

[1] The Failure of Risk Management: Why It’s Broken and How to Fix It by Douglas W. Hubbard (Apr 27, 2009)

[2] Roger Bacon described a repeating cycle of observation, hypothesis, and experimentation.

Share

Is the Understandability of the Universe a Mirage?

Friday, September 5th, 2014

Isaac Asimov (1920 – 1992) “expressed a certain gladness at living in a century in which we finally got the basis of the universe straight”. Albert Einstein (1879 – 1955) claimed: “The most incomprehensible thing about the world is that it is comprehensible”. Indeed, there is a general consensus in science that not only is the universe comprehensible but that it is mostly well described by our current models. However, Daniel Kahneman counters: “Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance”.

Well, that puts a rather different perspective on Asimov’s and Einstein’s claims.  So who is this person that is raining on our parade? Kahneman is a psychologist who won the 2002 Nobel Prize in economics for his development of prospect theory. A century ago everyone quoted Sigmund Freud (1856 – 1939) to show how modern they were. Today, Kahneman seems to have assumed that role.[1]

Kahneman’s Nobel Prize winning prospect theory, developed with Amos Tversky (1937 –1996), replaced expected utility theory. The latter assumed that people made economic choices based on the expected utility of the results, that is they would behave rationally. In contrast, Kahneman and company have shown that people are irrational in well-defined and predictable ways. For example, it is understood that the phrasing of a question can (irrationally) change how people answer, even if the meaning of the question is the same.

Kahneman’s book, Thinking, Fast and Slow, really should be required reading for everyone. It explains a lot of what goes on (gives the illusion of comprehension?) and provides practical tips for thinking rationally. For example, when I was on a visit in China, the merchants would hand me a calculator to type in what I would pay for a given item. Their response to the number I typed in was always the same: You’re joking, right?  Kahneman would explain that they were trying to remove the anchor set by the first number entered in the calculator. Anchoring is a common aspect of how we think.

Since, as Kahneman argues, we are inherently irrational, one has to wonder about the general validity of the philosophic approach to knowledge, an approach based largely on rational argument. Science overcomes our inherent irrationality by constraining our rational arguments with frequent, independently repeated observations. Much as with project management: we tend to be irrationally overconfident of our ability to estimate resource requirements. Estimates of project resource requirements not constrained by real-world observations lead to projects being over budget and delivered past deadlines. Even Kahneman was not immune to this trap of being overly optimistic.

Kahneman’s cynicism has been echoed by others. For example, H.L. Mencken (1880 –1956) said:  “The most common of all follies is to believe passionately in the palpably not true. It is the chief occupation of mankind”. Are the cynics correct? Is our belief that the universe is comprehensible, and indeed mostly understood, a mirage based on our unlimited ability to ignore our ignorance? A brief look at history would tend to support that claim.  Surely the Buddha, after having achieved enlightenment, would have expressed relief and contentment for living in a century in which we finally got the basis of the universe straight. Saint Paul, in his letters, echoes the same claim that the universe is finally understood. René Descartes, with the method laid out in the Discourse on the Method and Principles of Philosophy, would have made the same claim.  And so it goes, almost everyone down through history believes that he/she comprehends how the universe works. I wonder if the cow in the barn has the same illusion. Unfortunately, each has a different understanding of what it means to comprehend how the universe works, so it is not even possible to compare the relative validity of the different claims. The unconscious mind fits all it knows into a coherent framework that gives the illusion of comprehension in terms of what it considers important. In doing so, it assumes that what you see is all there is.  Kahneman refers to this as WYSIATI (What You See Is All There Is).

To a large extent the understandability of the universe is a mirage based on WYSIATI – our ignorance of our ignorance. We understand as much as we are aware of and capable of understanding, blissfully ignoring the rest. We do not know how quantum gravity works, if there is intelligent life elsewhere in the universe[2], or for that matter what the weather will be like next week. While our scientific models correctly describe much about the universe, they are, in the end, only models, and leave much beyond their scope, including the ultimate nature of reality.

To receive a notice of future posts follow me on Twitter: @musquod.

[1] Let’s hope time is kinder to Kahneman than it was to Freud.

[2] Given our response to global warming, one can debate if there is intelligent life on earth.

Share

Higgs versus Descartes: this round to Higgs.

Friday, August 1st, 2014

René Descartes (1596 – 1650) was an outstanding physicist, mathematician and philosopher. In physics, he laid the groundwork for Isaac Newton’s (1642 – 1727) laws of motion by pioneering work on the concept of inertia. In mathematics, he developed the foundations of analytic geometry, as illustrated by the term Cartesian[1] coordinates. However, it is in his role as a philosopher that he is best remembered. Rather ironic, as his breakthrough method was a failure.

Descartes’s goal in philosophy was to develop a sound basis for all knowledge based on ideas that were so obvious they could not be doubted. His touchstone was that anything he perceived clearly and distinctly as being true was true. The archetypical example of this was the famous “I think, therefore I am.” Unfortunately, little else is as obvious as that famous quote, and even it can be, and has been, doubted.

Euclidean geometry provides the illusory ideal to which Descartes and other philosophers have aspired. You start with a few self-evident truths and derive a superstructure built on them. Unfortunately, even Euclidean geometry fails that test. The infamous parallel postulate has been regarded since ancient times as a bit suspicious, and even the other Euclidean postulates have been questioned; extending a straight line indefinitely depends on the space being continuous, unbounded and infinite.

So how are we to take Euclid’s postulates and axioms? Perhaps we should follow the idea of Sir Karl Popper (1902 – 1994) and consider them to be bold hypotheses. This casts a different light on Euclid and his work; perhaps he was the first outstanding scientist. If we take his basic assumptions as empirical[2] rather than as sure and certain knowledge, all we lose is the illusion of certainty. Euclidean geometry then becomes an empirically testable model for the geometry of space-time. The theorems derived from the basic assumptions are predictions that can be checked against observations, satisfying Popper’s demarcation criterion for science. Do the angles in a triangle add up to two right angles or not? If not, then one of the assumptions is false, probably the parallel postulate.
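
That last test can even be run in a few lines of code. The sketch below is a toy illustration of my own (nothing from Euclid): it computes the angle sum of a triangle drawn on a sphere, a space where the parallel postulate fails, and compares it with the Euclidean prediction of two right angles.

```python
import math

def angle_at(p, q, r):
    """Spherical angle at vertex p of the great-circle triangle p-q-r."""
    def tangent(a, b):
        # Direction of the great circle from a toward b: the component
        # of b orthogonal to a, normalized, in the tangent plane at a.
        dot = sum(x * y for x, y in zip(a, b))
        t = [y - dot * x for x, y in zip(a, b)]
        norm = math.sqrt(sum(x * x for x in t))
        return [x / norm for x in t]
    u, v = tangent(p, q), tangent(p, r)
    return math.acos(max(-1.0, min(1.0, sum(x * y for x, y in zip(u, v)))))

# Octant triangle: the north pole plus two equator points 90 degrees apart.
A, B, C = (0, 0, 1), (1, 0, 0), (0, 1, 0)
total = angle_at(A, B, C) + angle_at(B, C, A) + angle_at(C, A, B)
print(total, math.pi)  # three right angles versus the Euclidean prediction of two
```

For this triangle the angles sum to three right angles, not two; measuring such deviations against observation is exactly how one would falsify the Euclidean model of physical space.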

Back to Descartes. He criticized Galileo Galilei (1564 – 1642): “without having considered the first causes of nature, he has merely sought reasons for particular effects; and thus he has built without a foundation”. In the end, that lack of a foundation turned out to be less of a hindrance than Descartes’s faulty one. To a large extent, science’s lack of a foundation, such as Descartes wished to provide, has not proved a significant obstacle to its advance.

Like Euclid, Sir Isaac Newton had his basic assumptions – the three laws of motion and the law of universal gravitation – but he did not believe that they were self-evident; he believed that he had inferred them by the process of scientific induction. Unfortunately, scientific induction was as flawed a foundation as the self-evident nature of the Euclidean postulates. Connecting the dots between a falling apple and the motion of the moon was an act of creative genius, a bold hypothesis, and not some algorithmic derivation from observation.

It is worth noting that, at the time, Newton’s explanation had a strong competitor in Descartes’s theory that planetary motion was due to vortices, large circulating bands of particles that keep the planets in place. Descartes’s theory had the advantage that it lacked the occult action at a distance that is fundamental to Newton’s law of universal gravitation. In spite of that, today Descartes’s vortices are as unknown as his claim that the pineal gland is the seat of the soul; so much for what he perceived clearly and distinctly as being true.

Galileo’s approach of solving problems one at a time, rather than trying to solve all problems at once, has paid big dividends. It has allowed science to advance one step at a time, while Descartes’s approach has faded away as failed attempt followed failed attempt. We still do not have a grand theory of everything built on an unshakable foundation and probably never will. Rather, we have models of widespread utility. Even if they are built on a shaky foundation, surely that is enough.

Peter Higgs (b. 1929) follows in the tradition of Galileo. He has not, despite his Nobel Prize, succeeded where Descartes failed in producing a foundation for all knowledge; but through creativity, he has proposed a bold hypothesis whose implications have been empirically confirmed. Descartes would probably claim that he has merely sought reasons for a particular effect: mass. The answer to the ultimate question of life, the universe and everything still remains unanswered, much to Descartes’s chagrin, but as scientists we are satisfied to solve one problem at a time and then move on to the next one.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] Cartesian from Descartes Latinized name Cartesius.

[2] As in the final analysis they are.

Share

‘Essentially, all models are wrong, but some are useful’

Friday, July 4th, 2014

Since model building is the essence of science, this quote has a bit of a bite to it. It is from George E. P. Box (1919 – 2013), who was not only an eminent statistician but also an eminently quotable one. Another quote from him: One important idea is that science is a means whereby learning is achieved, not by mere theoretical speculation on the one hand, nor by the undirected accumulation of practical facts on the other, but rather by a motivated iteration between theory and practice. Thus he saw science as an iteration between observation and theory.[1] And what is theory but the building of erroneous, or at least approximate, models?

To amplify that last comment: The main point of my philosophical musings is that science is the building of models for how the universe works; models constrained by observation and tested by their ability to make predictions for new observations, but models nonetheless. In this context, the above quote has significant implications for science. Models, even those of science, are by their very nature simplifications and as such are not one hundred per cent accurate. Consider the case of a map. Creating a 1:1 map is not only impractical[2] but even if you had one it would be one hundred per cent useless; just try folding a 1:1 scale map of Vancouver. A model with all the complexity of the original does not help us understand the original.  Indeed the whole purpose of a model is to eliminate details that are not essential to the problem at hand.

By their very nature, numerical models are always approximate, and this is probably what Box had in mind with his statement. One neglects small effects like the gravitational influence of a mosquito. Even as one begins computing, one makes numerical approximations, replacing integrals with sums or vice versa, derivatives with finite differences, etc. However, one wants to control errors and keep them to a minimum. Statistical analysis techniques, such as Box developed, help estimate and control errors.
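
As a small illustration of that last point (a toy Python sketch of my own, not anything from Box), replace the derivative of sin(x) with a forward finite difference and watch the error shrink, but never vanish, as the step size h decreases:

```python
import math

def forward_diff(f, x, h):
    """Approximate f'(x) by the forward difference (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)  # the true derivative of sin(x) at x = 1
errors = []
for h in (0.1, 0.01, 0.001):
    approx = forward_diff(math.sin, 1.0, h)
    errors.append(abs(approx - exact))
    print(f"h = {h:>6}: error = {errors[-1]:.2e}")
```

The error is roughly proportional to h: the approximation is wrong, but controllably so, which is all one can ask of a numerical model.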

To a large extent it is self-evident that models are approximate; so what? Again to quote George Box: Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity. What would he have thought of a model with twenty plus parameters, like the standard model of particle physics? His point is a valid one. All measurements have experimental errors. If your fit is perfect you are almost certainly fitting noise. Hence, adding more parameters to get a perfect fit is a fool’s errand. But even without experimental error, a large number of parameters frequently means something important has been missed. Has something been missed in the standard model of particle physics with its many parameters or is the universe really that complicated?
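
Box’s warning about overparameterization can be made concrete with a toy Python example (entirely made-up data, nothing to do with the standard model of particle physics): eight noisy points drawn from a straight line, fitted once with an “economical” two-parameter line and once with a perfect-fit interpolating polynomial.

```python
import random

# Made-up data: a straight line y = 2x plus noise.
random.seed(1)
xs = [float(i) for i in range(8)]
ys = [2 * x + random.gauss(0, 0.5) for x in xs]

def lagrange_fit(xs, ys):
    """Perfect-fit model: the degree-7 interpolating polynomial (8 parameters).
    It reproduces every data point exactly -- noise included."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

def linear_fit(xs, ys):
    """Economical model: a two-parameter least-squares straight line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return lambda x: intercept + slope * x

perfect = lagrange_fit(xs, ys)  # zero residual: it has fitted the noise
simple = linear_fit(xs, ys)

# Predict at a new point the models never saw; the true value is 2 * 8.5 = 17.
x_new, truth = 8.5, 17.0
print(abs(simple(x_new) - truth), abs(perfect(x_new) - truth))
```

Typically the economical line predicts far better than the zero-residual polynomial; the extra parameters have bought a perfect fit to the noise and nothing else. That is Box’s fool’s errand in miniature.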

There is an even more basic reason all models are wrong. This goes back at least as far as Immanuel Kant (1724 – 1804). He made the distinction between observation of an object and the object in itself. One never has direct experience of things, the so-called noumenal world; what one experiences is the phenomenal world as conveyed to us by our senses. What we see is not even what has been recorded by the eye.  The mind massages the raw observation into something it can understand; a useful but not necessarily accurate model of the world. Science then continues this process in a systematic manner to construct models to describe observations but not necessarily the underlying reality.

Despite being by definition at least partially wrong, models are frequently useful. The scale-model map is useful to tourists trying to find their way around Vancouver, or to a general plotting the strategy for his next battle. But if the maps are too far wrong, the tourist will get lost and fall into False Creek, and the general will go down in history as a failure. Similarly, the models for weather prediction are useful although they are certainly not one hundred per cent accurate. They do indicate when it is safe to plan a picnic or cut the hay, provided they are right more often than chance. And the standard model of particle physics, despite having many parameters and not including gravity, is a useful description of a wide range of observations. But to return to the main point: all models, even useful ones, are wrong because they are approximations, and not even approximations to reality but to our observations of that reality. Where does that leave us? Well, let us leave the last word to George Box: Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] Hence the foolishness of talking about theoretical breakthroughs in science. All breakthroughs arise from pondering about observations and observations testing those ponderings.

[2] Not even Google could produce that.

Share

“Theoretical Physics is a Quest for Simplicity”

Friday, June 6th, 2014

Theoretical physics, simplicity. Surely the two words do not go together. Theoretical physics has been the archetypal example of complicated since its invention. So what did Frank Wilczek (b. 1951) mean by that statement[1] quoted in the title? It is the scientist’s trick of taking a well-defined word, such as simplicity, and giving it a technical meaning. In this case, the meaning is from algorithmic information theory. That theory defines complexity (Kolmogorov complexity[2]) as the minimum length of a computer program needed to reproduce a string of numbers. Simplicity, as used in the title, is the opposite of this complexity. Science, not just theoretical physics, is driven, in part but only in part, by the quest for this simplicity.

How is that, you might ask? This is best described by Greg Chaitin (b. 1947), a founder of algorithmic information theory. To quote: This idea of program-size complexity is also connected with the philosophy of the scientific method. You’ve heard of Occam’s razor, of the idea that the simplest theory is best? Well, what’s a theory? It’s a computer program for predicting observations. And the idea that the simplest theory is best translates into saying that a concise computer program is the best theory. What if there is no concise theory, what if the most concise program or the best theory for reproducing a given set of experimental data is the same size as the data? Then the theory is no good, it’s cooked up, and the data is incomprehensible, it’s random. In that case the theory isn’t doing a useful job. A theory is good to the extent that it compresses the data into a much smaller set of theoretical assumptions. The greater the compression, the better!—That’s the idea…

In many ways this is quite nice; the best theory is the one that compresses the most empirical information into the shortest description or computer program.  It provides an algorithmic method to decide which of two competing theories is best (but not an algorithm for generating the best theory). With this definition of best, a computer could do science: generate programs to describe data and check which is the shortest. It is not clear, with this definition, that Copernicus was better than Ptolemy. The two approaches to planetary motion had a similar number of parameters and accuracy.
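
Though true Kolmogorov complexity is uncomputable, a general-purpose compressor gives a crude, hands-on stand-in for Chaitin’s picture (a toy Python sketch of my own): lawful data compresses enormously, while random data barely compresses at all.

```python
import random
import zlib

random.seed(0)

# "Lawful" data: a simple repeating pattern, the analogue of orbits
# governed by a compact law.
structured = bytes(range(256)) * 40
# "Random" data: the analogue of incompressible measurement records.
random_data = bytes(random.randrange(256) for _ in range(256 * 40))

len_s = len(zlib.compress(structured, 9))
len_r = len(zlib.compress(random_data, 9))

# The pattern compresses to a tiny "theory"; the random data does not.
print(len(structured), len_s, len_r)
```

In Chaitin’s terms, the first data set has a good theory (a short program reproduces it) and the second is “cooked up”: its shortest description is about as long as the data itself.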

There are many interesting aspects of this approach. Consider compressibility and quantum mechanics. The uncertainty principle and the probabilistic nature of quantum mechanics put limits on the extent to which empirical data can be compressed. This is the main difference between classical mechanics and quantum mechanics. Given the initial conditions and the laws of motion, classically the empirical data is compressible to just that input; in quantum mechanics, it is not. The time at which each individual atom in a collection of radioactive atoms decays is unpredictable, and the measured results are largely incompressible. Interpretations of quantum mechanics may make the theory deterministic, but they cannot make the empirical data more compressible.

Compressibility highlights a significant property of initial conditions. While the data describing the motion of the planets can be compressed using Newton’s laws of motion and gravity, the initial conditions that started the planets on their orbits cannot be. This incompressibility tends to be a characteristic of initial conditions. Even the initial conditions of the universe, as reflected in the cosmic microwave background, have a large random non-compressible component – the cosmic variance. If it wasn’t for quantum uncertainty, we could probably take the lack of compressibility as a definition of initial conditions. For the universe, the two are the same since the lack of compressibility in the initial conditions is due to quantum fluctuations, but that is not always the case.

The algorithmic information approach makes Occam’s razor, the idea that one should minimize assumptions, basic to science. If one considers that each character in a minimal computer program is a separate assumption, then the shortest program does indeed have the fewest assumptions. But you might object that some of the characters in a program can be predicted from other characters. However, if that is true the program can probably be made shorter. This is all a bit counterintuitive since one generally does not take such a fine grained approach to what one considers an assumption.

The algorithmic information approach to science, however, does have a major shortcoming. This definition of the best theory leaves out the importance of predictions. A good model must not only compress known data, it must predict new results that are not predicted by competing models. Hence, as noted in the introduction, simplicity is only part of the story.

The idea of reducing science to just a collection of computer programs is rather frightening. Science is about more than computer programs[3]. It is, and should be, a human endeavour. As people, we want models of how the universe works that humans, not just computers, can comprehend and share with others. A collection of bits on a computer drive does not do this.

To receive a notice of future posts follow me on Twitter: @musquod.



[1] From “This Explains Everything”, Ed, John Brockman, Harper Perennial, New York, 2013

[2] Also known as descriptive complexity, Kolmogorov–Chaitin complexity, algorithmic entropy, or program-size complexity.

[3] In this regard, I have a sinking feeling that I am fighting a rearguard action against the inevitable.

Share

Mathematics: Invented or Discovered?

Friday, May 2nd, 2014

The empirical sciences, like physics and chemistry, are partially invented and partially discovered. Although the empirical observations are surely discovered, the models that describe them are invented through human ingenuity. But what about mathematics which is based on pure thought? Are its results invented or discovered?

Not surprisingly, there are different views on this topic. Some people maintain that mathematical results are invented; others claim that they are discovered. Is there a universe of mathematical results just waiting to be discovered, or are mathematical results invented by the mathematician and would they disappear, like a fairy tale, when mathematicians vanish in the heat death of the universe, when all the available energy is used up? Invented or discovered? Perhaps some results are invented and others discovered. There is, however, a third view, namely that mathematics is a game played by manipulating symbols according to well-defined rules. At some level this is probably true. All those who prefer Monopoly®, put up your hands!

What are the foundations of logic? Bertrand Russell (1872 – 1970) and Alfred Whitehead (1861 – 1947) tried to derive mathematics from logic. The result was the book Principia Mathematica (1910), a real tour de force. Their derivation still required axioms or assumptions beyond pure logic, and it has been questioned on other grounds. An alternative to this approach is set theory, in particular set theory based on the Zermelo–Fraenkel axioms with the axiom of choice. And an alternative to that is category theory. Whatever all that is; it is certainly very technical. The quest for the foundations of mathematics, and even of logic, like the quest for the Holy Grail, is probably never ending. But the question remains: were logic and set (category) theory themselves invented or discovered?

Let us look at things more simply. Historically, mathematics probably arose empirically: two stones plus two stones equals one stone plus three stones. Then it was realized that this holds for any tokens, stones, bushels of wheat or sheep.  The generalization from specific examples to the generic 2+2=1+3 could be considered an early example of the scientific method: generalizing from specific examples to a general rule. But one plus one does not always equal two. Consider a litre of liquid plus a litre of liquid. If one is water and the other alcohol, the result is less than two litres if they are put in the same container. Adding one litre of water to one litre of concentrated sulfuric acid is even more interesting[1].

Multiplication is also easy to demonstrate with counters. Division is a bit more problematic, but if we think of dividing a bushel of wheat into equal parts, the idea of fractions is quite natural. Dividing a sheep is messier. Subtraction, however, leads to a problem: negative numbers. Naively, we cannot have fewer than zero stones, but subtraction can lead to that idea. So were negative numbers invented or discovered? We can finesse the problem by saying that negative numbers correspond to what we owe. If I have minus three stones, it means I owe someone three stones.
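The debt interpretation of negative numbers can be made concrete with a toy ledger; a minimal sketch:

```python
# A ledger where a negative balance means stones owed.
balance = 2    # start with two stones
balance -= 5   # hand over five: subtraction pushes the count below zero

print(balance)  # -3: "I owe someone three stones"

# Receiving three stones settles the debt exactly.
assert balance + 3 == 0
```

The bookkeeping works whether or not minus three stones "exist"; the debt reading is just a way of attaching meaning to the symbol.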

Thus, thinking of stones and bushels of wheat, we can understand the rational numbers, numbers written as the ratio of two whole numbers. The Pythagoreans in ancient Greece would have claimed that is all there is. Then came the thorny problem of the square root of two. This arises in connection with the Pythagorean theorem. Some poor sod showed that the square root of two could not be written as the ratio of two whole numbers and was thus irrational. He was thrown into the sea for his efforts[2]. The square root of two does not exist in the universe of numbers discovered using stones, sheep, and bushels of wheat. Is it possible to have the square root of two stones? Was it invented to make the Pythagorean theorem work, or was it discovered?
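The irrationality claim can be probed empirically, if not proved, by exhaustive search over exact fractions. This is only a brute-force check up to an arbitrary denominator, not the classical proof by contradiction:

```python
from fractions import Fraction

# Look for a fraction p/q with q up to 500 whose square is exactly 2.
# Since sqrt(2) lies between 1 and 2, it suffices to try p in (q, 2q).
hits = [
    Fraction(p, q)
    for q in range(1, 501)
    for p in range(q, 2 * q)
    if Fraction(p, q) ** 2 == 2
]
assert hits == []  # no rational with small denominator squares to 2
```

Exact `Fraction` arithmetic matters here: floating-point equality tests would mislead, since 1.4142135623730951 squared is indistinguishable from 2 at machine precision.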

The example of the square root of minus one is even more perplexing. We can think of the square root of two as an extra number inserted between 1.414 and 1.415, but there is no place on the number line to insert the square root of minus one. So again the question arises: Was it invented or discovered? Perhaps it is best to say it was assumed: assume the square root of minus one can be treated like a normal number[3] and see what happens. A lot of good things, as it turned out, but does that mean it exists in any real sense? Perhaps it is just a useful fiction.
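The "assume it behaves like a normal number" attitude is exactly how Python's built-in complex type works; a small sketch of footnote [3] in code:

```python
import cmath

# Treat the square root of minus one as just another number.
i = complex(0, 1)       # written 1j in Python
assert i * i == -1      # its defining property
assert i + i == 2 * i   # it obeys the ordinary rules of arithmetic

# One of the "good things": every negative number now has a square root.
r = cmath.sqrt(-4)
assert r ** 2 == -4
```

Whether `1j` "exists" is left to the philosophers; the arithmetic simply works.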

Nevertheless, mathematics has developed, discovering or inventing new results. As a phenomenologist, I would say we do not have enough information to decide whether mathematics was invented or discovered. If we could contact extraterrestrial mathematicians, it would be interesting to see whether their mathematics was the same as ours or different. If it was different, that would be a strong indication that mathematics is invented. Or, less black and white, the degree of difference between terrestrial and extraterrestrial mathematics would tell us the extent to which mathematics is discovered or invented.

In any event, mathematics is a very interesting game, whether based on set theory or category theory, whether discovered or invented, and certainly more profitable than Monopoly®[4] in the long run.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] Do not try this at home.

[2] At least that is the legend.

[3] √(-1) + √(-1) = 2√(-1), etc.

[4] On the other hand, oligopoly, as any large multinational will tell you, is very profitable.