Posts Tagged ‘Philosophy of science’

Physicists frequently stray into the field of philosophy; notable examples include Thomas Kuhn (1922 – 1996) and Henri Poincaré (1854 – 1912). This is perhaps because physicists frequently work in areas far removed from everyday experience and, to communicate their ideas successfully, must deal explicitly with the underlying assumptions. Although less well known today than Kuhn and Poincaré, Percy Bridgman (1882 – 1961) also falls into this category. In physics, he is noted for his work on high-pressure physics, winning the Nobel Prize in 1946. In philosophy, he is credited with coining the term OPERATIONAL DEFINITION and promoting the idea of operationalism. These ideas are laid out in his 1927 book THE LOGIC OF MODERN PHYSICS. If nothing else, it shows the folly of using MODERN in book titles. Nonetheless, it is an interesting little book and, in its time, was quite influential.

In his book, Bridgman introduces several interesting ideas, for example, that when one explores new areas in science, one should not be surprised that the supporting concepts have to change. Hence we should not be surprised when classical concepts fail in the relativistic or quantum domains. This illustrates why interpretations of quantum mechanics that explain it in terms of classical concepts are poorly motivated. A related idea is that an explanation is the description of a given phenomenon in terms of familiar concepts. Of course, with this definition, what qualifies as a valid explanation depends on what the explainee is familiar with. If one cannot succeed using established concepts, one must explain the new idea using familiar, albeit far removed, concepts. But what happens when even this does not work? Bridgman suggests that the solution is to introduce new concepts and become familiar with them. Seems reasonable to me. Thus quantum mechanics can be explained in terms of the (familiar to me) concept of the wave function; no need for many worlds and the like.

While it is natural to think of high speed (relativity) or small size (quantum mechanics) as new areas of science, Bridgman includes increased precision as well. He talks about the penumbra of uncertainty that surrounds all measurements and that is penetrated by increasing the precision of the measurements. Thus the idea of the distinct high-energy and precision frontiers, commonly discussed in modern particle physics planning exercises, goes back at least to 1927.

Bridgman was also a phenomenologist to the core. He did not believe that a priori knowledge could constrain what could happen; in his words: Experience is determined only by experience. C.I. Lewis (1883 – 1964), in his 1929 book Mind and the World-Order, agrees. Such similar ideas, appearing in books of about the same time, indicate the concerns of that age.

Despite these interesting sidelights, the main idea in THE LOGIC OF MODERN PHYSICS is that concepts are defined by how they are measured, that is, by the measurement operation, hence the term operationalism. So why was he interested in operational definitions? It was to avoid the problem in classical mechanics where concepts like distance and time were taken for granted. It then came as a shock when the concepts proved to be rather complex when special relativity was invented. To avoid such shocks in the future, Bridgman proposed the idea of operational definitions. For example, to measure length you go down to the local Canadian Tire® store (in the USA it would be Walmart®), buy a tape measure and use it to measure length. Thus the concept of length is defined by Canadian Tire®, oops, I mean by a tape measure. What if I measure length by surveying techniques that make use of triangulation? Bridgman claimed that that is a distinctly different concept and is covered by the same term only for convenience. Here at TRIUMF, distance and location are also measured using laser tracking. This is again a different concept from the original concept of length. Things get even more complicated when we talk about the distance to stars, which uses yet another operation. Bridgman suggests that length loses its meaning at lengths less than the size of the electron because such lengths cannot be measured. Today we would say they can be measured, but length in that case is simply a parameter, in a mathematical formula, describing the scattering of particles. Hence we do not have one concept of length or distance but many, although they are the same numerically in regions where the techniques overlap.

Bridgman then goes on to consider various other concepts and how they might be defined operationally. He seems to have been very much influenced by Albert Einstein (1879 – 1955) and Einstein’s discussion of the synchronization of clocks (which actually goes back to Poincaré). The possible operational definitions of velocity are particularly interesting. In contradistinction to the definition given by Einstein based on clocks synchronized and distances measured in a fixed inertial frame, Bridgman suggests that the velocity of a car could also be defined by counting mileposts that the car passes to determine distance and using the clock on the car dashboard to measure time. This velocity can become infinite and would be useful to a person going to a distant solar system who is interested in how many of his years it takes to get there. For most purposes Einstein’s definition is more convenient and hence it is the one in textbooks though other definitions remain possible.
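The two definitions of velocity are easy to compare numerically. The sketch below is my own illustration, not from Bridgman's book: mileposts give the distance in the road's frame, while the dashboard clock records the traveller's proper time, which runs slow by the Lorentz factor gamma, so the milepost velocity is gamma times the Einstein velocity and can exceed the speed of light.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def einstein_velocity(distance_m, coordinate_time_s):
    """Einstein's definition: distance and time both measured
    in a single fixed inertial frame (the road's frame)."""
    return distance_m / coordinate_time_s

def milepost_velocity(distance_m, coordinate_time_s):
    """Bridgman's alternative: count mileposts for distance (road frame),
    but read time off the dashboard clock, which records the traveller's
    proper time. Proper time runs slow by the Lorentz factor gamma, so
    this velocity equals gamma * v and grows without bound as v -> c."""
    v = distance_m / coordinate_time_s
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return distance_m / (coordinate_time_s / gamma)

# A trip of 8 light-years at 0.8 c takes 10 years of road-frame time.
LY = 9.4607e15   # metres per light-year
d = 8 * LY
t = 10 * LY / C  # seconds
print(einstein_velocity(d, t) / C)  # ~0.8
print(milepost_velocity(d, t) / C)  # ~1.33, i.e. 0.8 * gamma: "faster than light"
```

This is exactly the quantity the interstellar traveller cares about: distance covered per year of their own ageing.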

And on it goes. In some cases the definitions seem quite forced. Nevertheless, three groups of people picked up on the idea of operational definitions. One group was the logical positivists. They tried to avoid theory and were pleased when a physicist gave definitions directly in terms of observables. The second group was the psychologists, who wanted a more secure foundation for their subject. The third group was in quality control and business management, where Walter Shewhart (1891 – 1967) and W. Edwards Deming (1900 – 1993) adopted the idea.

However the concept, as the be-all and end-all of meaning, had its problems. Like logical positivism, it missed the idea that the meaning is in the model. While we may have different ways to measure length, there is a common idea behind them all. We can consider this common idea to be an abstraction from the different operationally defined concepts, or we can take the operational definitions as approximations to the abstract idea. One could say that operationally there is no difference between the two approaches.

Ultimately, operational definitions are useful. They tie concepts tightly to observations, where they are less likely to be dislodged by future discoveries or new models. They also help eliminate fuzzy thinking. Concepts that lack operational definitions are, in general, poorly defined. Who knows, I might even take the concept of scientific realism seriously if someone gave me an operational definition of it.

To receive a notice of my future posts and my pending book, In Defense of Scientism, follow me on Twitter: @musquod.


Will Self’s CERN

Friday, January 16th, 2015

“It doesn’t look to me like the rose window of Notre Dame. It looks like a filthy big machine down a hole.” — Will Self

Like any documentary, biography, or other educational program on the radio, Will Self’s five-part radio program Self Orbits CERN is partially a work of fiction. It is based, to be sure, on a real walk through the French countryside along the route of the Large Hadron Collider, on the quest for a promised “sense of wonder”. And it is based on real tours at CERN and real conversations. But editorial and narrative choices have to be made in producing a radio program, and in that sense it is exactly the story that Will Self wants to tell. He is, after all, a storyteller.

It is a story of a vast scientific bureaucracy that promises “to steal fire from the gods” through an over-polished public relations team, with day-to-day work done by narrow, technically-minded savants who dodge the big philosophical questions suggested by their work. It is a story of big ugly new machines whose function is incomprehensible. It is the story of a walk through thunderstorms and countryside punctuated by awkward meetings with a cast of characters who are always asked the same questions, and apparently never give a satisfactory answer.

Self’s CERN is not the CERN I recognize, but I can recognize the elements of his visit and how he might have put them together that way. Yes, CERN has secretariats and human resources and procurement, all the boring things that any big employer that builds on a vast scale has to have. And yes, many people working at CERN are specialists in the technical problems that define their jobs. Some of us are interested in the wider philosophical questions implied by trying to understand what the universe is made of and how it works, but some of us are simply really excited about the challenges of a tiny part of the overall project.

“I think you understand more than you let on.” — Professor Akram Khan

The central conflict of the program feels a bit like it was engineered by Self, or at least made inevitable by his deliberately cultivated ignorance. Why, for example, does he wait until halfway through the walk to ask for the basic overview of particle physics that he feels he’s missing, unless it adds to the drama he wants to create? By the end of the program, he admits that asking for explanations when he hasn’t learned much background is a bit unfair. But the trouble is not whether he knows the mathematics. The trouble, rather, is that he’s listened to a typical, very short summary of why we care about particle physics, and taken it literally. He has decided in advance that CERN is a quasi-religious entity that’s somehow prepared to answer big philosophical questions, and never quite reconsiders the discussion based on what’s actually on offer.

If his point is that particle physicists who speak to the public are sometimes careless, he’s absolutely right. We might say we are looking for how or why the universe was created, when really we mean we are learning what it’s made of and the rules for how that stuff interacts, which in turn lets us trace what happened in the past almost (but not quite) back to the moment of the Big Bang. When we say we’re replicating the conditions at that moment, we mean we’re creating particles so massive that they require the energy density that was present back then. We might say that the Higgs boson explains mass, when more precisely it’s part of the model that gives a mechanism for mass to exist in models whose symmetries forbid it. Usually a visit to CERN involves several different explanations from different people, from the high-level and media-savvy down to the technical details of particular systems. Most science journalists would put this information together to present the perspective they wanted, but Self apparently takes everything at face value, and asks everyone he meets for the big picture connections. His narrative is edited to literally cut off technical explanations, because he wants to hear about beauty and philosophy.

Will Self wants the people searching for facts about the universe to also interpret them in the broadest sense, but this is much harder than he implies. As part of a meeting of the UK CMS Collaboration at the University of Bristol last week, I had the opportunity to attend a seminar by Professor James Ladyman, who discussed the philosophy of science and the relationship of working scientists to it. One of the major points he drove home was just how specialized the philosophy of science can be: that the tremendous existing body of work on, for example, interpreting Quantum Mechanics requires years of research and thought which is distinct from learning to do calculations. Very few people have had time to learn both, and their work is important, but great scientific or great philosophical work is usually done by people who have specialized in only one or the other. In fact, we usually specialize a great deal more, into specific kinds of quantum mechanical interactions (e.g. LHC collisions) and specific ways of studying them (particular detectors and interactions).

Toward the end of the final episode, Self finds himself at Voltaire’s chateau near Ferney, France. Here, at last, is what he is looking for: a place where a polymath mused in beautiful surroundings on both philosophy and the natural world. Why have we lost that holistic approach to science? It turns out there are two very good reasons. First, we know an awful lot more than Voltaire did, which requires the tremendous specialization discussed above. But second, science and philosophy are no longer the monopoly of rich European men with leisure time. It’s easy to do a bit of everything when you have very few peers and no obligation to complete any specific task. Scientists now have jobs that give them specific roles, working together as part of a much wider task, in the case of CERN a literally global project. I might dabble in philosophy as an individual, but I recognize that my expertise is limited, and I really enjoy collaborating with my colleagues to cover together all the details we need to learn about the universe.

In Self’s world, physicists should be able to explain their work to writers, artists, and philosophers, and I agree: we should be able to explain it to everyone. But he — or at least, the character he plays in his own story — goes further, implying that scientific work whose goals and methods have not been explained well, or that cannot be recast in aesthetic and moral terms, is intrinsically suspect and potentially valueless. This is a false dichotomy: it’s perfectly possible, even likely, to have important research that is often explained poorly! Ultimately, Self Orbits CERN asks the right questions, but it is too busy musing about what the answers should be to pay attention to what they really are.

For all that, I recommend listening to the five 15-minute episodes. The music is lovely, the story engaging, and the description of the French countryside invigorating. The jokes were great, according to Miranda Sawyer (and you should probably trust her sense of humour rather than the woefully miscalibrated sense of humor that I brought from America). If you agree with me that Self has gone wrong in how he asks questions about science and which answers he expects, well, perhaps you will find some answers or new ideas for yourself.


It seems some disagreements are interminable: the Anabaptists versus the Calvinists, capitalism versus communism, the Hatfields versus the McCoys, or string theorists versus their detractors. It is the latter I will discuss here, although the former may be more interesting. This essay is motivated[1] by a comment in the December 16, 2014 issue of Nature by George Ellis and Joe Silk. The comment takes issue with attempts by some string theorists and cosmologists to redefine the scientific method by eliminating the need for experimental testing and relying on elegance or similar criteria instead. I have a lot of sympathy with Ellis and Silk’s point of view but believe that it is up to scientists to define what science is and that hoping for deliverance by outside people, like philosophers, is doomed to failure.

To understand what science is and what science is not, we need a well-defined model for how science behaves. Providing that well-defined model is the motivation behind each of my essays. The scientific method is quite simple: build models of how the universe works based on observation and simplicity. Then test them by comparing their predictions against new observations. Simplicity is needed since observations underdetermine the models (see, for example, Willard Quine’s (1908 – 2000) essay Two Dogmas of Empiricism). Note also that what we do is build models: the standard model of particle physics, the nuclear shell model, string theory, etc. Quine refers to the internals of the models as myths and fictions. Henri Poincaré (1854 – 1912) talks of conventions and Hans Vaihinger (1852 – 1933) of the philosophy of as if, otherwise known as fictionalism. Thus it is important to remember that our models, even the so-called theory of everything, are only models and not reality.
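The claim that observations underdetermine models can be made concrete with a toy example of my own (not from Quine): any finite set of data points is reproduced exactly by infinitely many polynomials, and simplicity is what picks out the straight line among them.

```python
def lagrange_interpolate(points):
    """Return the unique polynomial (as a function) passing exactly
    through every given (x, y) point."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# Five observations that happen to lie on the line y = 2x + 1.
data = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 9)]

def line(x):
    return 2 * x + 1  # the simple model

# A rival model: identical on the data, plus one wild extra parameter.
wiggly = lagrange_interpolate(data + [(2.5, 100.0)])

# Both models reproduce every observation exactly...
print(all(abs(line(x) - y) < 1e-9 for x, y in data))    # True
print(all(abs(wiggly(x) - y) < 1e-9 for x, y in data))  # True
# ...but they disagree wildly between the observations:
print(line(2.5), wiggly(2.5))  # 6.0 vs 100.0
```

The observations alone cannot choose between the two; only a further criterion, such as simplicity, or a new observation between the old ones, can.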

It is the feedback loop of observation, model building and testing against new observation that defines science and gives it its successes. Let me repeat: The feedback loop is essential. To see why, consider the example of astrology and why scientists reject it. Its practitioners consider it to be the very essence of elegance. Astrology uses careful measurements of current planetary locations and mathematics to predict their future locations, but it is based on an epistemology that places more reliance on the eloquence of ancient wisdom than on observation. Hence there is no attempt to test astrological predictions against observations. That would go against their fundamental principles of eloquence and the superiority of received knowledge to observation. Just as well, since astrological predictions routinely fail. Astrology’s failures provide a warning to those who wish to replace prediction and simplicity with other criteria. The testing of predictions against observation and simplicity are hard taskmasters, and it would be nice to escape their tyranny, but that path is fraught with danger, as astrology illustrates. The feedback loop from science has even been picked up by the business management community and has been built into the very structure of the management standards (see ISO Annex SL for example). It would be a shame if management became more scientific than physics.

But back to string theory. Gravity has always been a tough nut to crack. Isaac Newton (1643 – 1727) proposed the decidedly inelegant idea of instantaneous action at a distance, and it served well until 1905 and the development of the special theory of relativity. Newton’s theory of gravity and special relativity are inconsistent since the latter rules out instantaneous action at a distance. In 1916, Albert Einstein (1879 – 1955), with an honorable mention to David Hilbert (1862 – 1943), proposed the general theory of relativity to solve the problem. In 1919, the prediction of the general theory of relativity for the bending of light by the sun was confirmed by an observation by Arthur Eddington (1882 – 1944). Notice the progression: conflict between two models, proposed solution, confirmed prediction, and then acceptance.

Like special relativity and Newtonian gravity, general relativity and quantum mechanics are incompatible with one another. This has led to attempts to generate a combined theory. Currently string theory is the most popular candidate, but it seems to be stuck at the stage general relativity was in 1917 or maybe even 1915: a complicated (some would say elegant, others messy) mathematical theory but unconfirmed by experiment. Although progress is definitely being made, string theory may stay where it is for a long time. The problem is that the natural scale of quantum gravity is the Planck mass, and this scale is beyond what we can explore directly by experiment. However, there is one place that quantum gravity may have left observable traces and that is in its role in the early Universe. There are experimental hints that may indicate a signature in the cosmic microwave background radiation, but we must await further experimental results. In the meantime, we must accept that current theories of quantum gravity are doubly uncertain. Uncertain, in the first instance, because, like all scientific models, they may be rendered obsolete by a new understanding and uncertain, in the second instance, because they have not been experimentally verified through testable predictions.

Let’s now turn to the question of multiverses. This is an even worse dog’s breakfast than quantum gravity. The underlying problem is the fine tuning of the fundamental constants needed in order for life as we know it to exist. What is needed for life, as we do not know it, to exist is unknown. There are two popular ideas for why the Universe is fine-tuned. One is that the constants were fine-tuned by an intelligent designer to allow for life as we know it. This explanation has the problem that by itself it can explain anything but predict nothing. An alternative is that there are many possible universes, all existing, and we are simply in the one where we can exist. This explanation has the problem that by itself it can explain anything but predict nothing. It is ironic that to avoid an intelligent designer, a solution based on an equally dubious just so story is proposed. Since we are into just so stories, perhaps we can compromise by having the intelligent designer choosing one of the multiverses as the one true Universe. This leaves the question of who the one true intelligent designer is. As an old farm boy, I find the idea that Audhumbla, the cow of the Norse creation myth, is the intelligent designer to be the most elegant. Besides, the idea of elegance as a deciding criterion in science has a certain bovine aspect to it. Who decides what constitutes elegance? Everyone thinks their own creation is the most elegant. This is only possible in Lake Wobegon, where all the women are strong, all the men are good-looking, and all the children are above average (A PRAIRIE HOME COMPANION – Garrison Keillor (b. 1942)). Not being in Lake Wobegon, we need objective criteria for what constitutes elegance. Good luck with that one.

Some may think the discussion in the last paragraph is frivolous, and quite by design it is. This is to illustrate the point that once we allow the quest for knowledge to escape from the rigors of the scientific method’s feedback loop, all bets are off and we have no objective reason to rule out astrology or even the very elegant Audhumbla. However, the idea of an intelligent designer or multiverses can still be saved if they are an essential part of a model with a track record of successful predictions. For example, if that animal I see in my lane is Fenrir, the great gray wolf, and not just a passing coyote, then the odds swing in favor of Audhumbla as the intelligent designer and Ragnarok is not far off. More likely, evidence will eventually be found in the cosmic microwave background or elsewhere for some variant of quantum gravity. Until then, patience (on both sides) is a virtue.

Though the mills of science grind slowly;
Yet they grind exceeding small;
Though with patience they stand waiting,
With exactness grind they all.[2]

[1] I have already broken my new year’s resolution to post no more philosophy of science blogs but this is the last, I promise.

[2] With apologies to Henry Wadsworth Longfellow (1807 – 1882)


Not all philosophy is useless.

Friday, December 5th, 2014

In this, the epilogue to my philosophic musings, I locate my view of the scientific method within the landscape of various philosophical traditions and also tie it into my current interest of project management. As strange as it may seem, this triumvirate of the scientific method, philosophy and management meets in the philosophic tradition known as pragmatism and in the work of W. Edwards Deming (1900 – 1993), a scientist and management guru who was strongly influenced by the pragmatic philosopher C.I. Lewis (1883 – 1964), who in turn strongly influenced business practices. And I do mean strongly in both cases. The thesis of this essay is that Lewis, the pragmatic philosopher, has had influence in two directions: in business practice and in the philosophy of science. Surprisingly, my views on the scientific method are very much in this pragmatic tradition and not crackpot.

The pragmatic movement was started by Charles S. Peirce (1839 – 1914) and further developed by William James (1842 – 1910) and John Dewey (1859 – 1952). The basic idea of philosophic pragmatism is given by Peirce in his pragmatic maxim as: “To ascertain the meaning of an intellectual conception one should consider what practical consequences might result from the truth of that conception—and the sum of these consequences constitute the entire meaning of the conception.” Another aspect of the pragmatic approach to philosophic questions was that the scientific method was taken as given with no need for justification from the outside, i.e. the scientific method was used as the definition of knowledge.
How does this differ from the workaday approach to defining knowledge? Traditionally, going back at least to Plato (428/427 or 424/423 BCE – 348/347 BCE), knowledge has been defined as:
1) Knowledge – justified true belief
This leaves open the question of how belief is justified, and since no justification is ever 100% certain, we can never be sure the belief is true. That is a definite problem. No wonder the philosophic community has spent two and a half millennia in fruitless efforts to make sense of it.

A second definition of knowledge predates this and is associated with Protagoras (c. 490 B.C. – c. 420 B.C.) and the sophists:
2) Knowledge – what you can convince people is true
Essentially, the argument is that since we cannot know that a belief is true with 100% certainty, what is important is what we can convince people of. The same basic notion shows up in the work of modern philosophers of science: scientific belief is basically a social phenomenon, and what is important is what the community convinces itself is true. This was part of Thomas Kuhn’s (1922 – 1996) thesis.

While we cannot know what is true, we can know what is useful. Following the lead of scientists, the pragmatists effectively defined knowledge as:
3) Knowledge – information that helps you predict and modify the future
If we take predicting and modifying the future as the practical consequence of information, this definition of knowledge is consistent with the pragmatic maxim. The standard model of particle physics is not knowledge by the strict application of definition 1) since it is not completely true; however, it is knowledge by definition 3) since it helps us predict and modify the future. The scientific method is built on definition 3). The modify clause is included in the definition since the pragmatists insisted on that aspect of knowledge. For example, C.I. Lewis said that without the ability to act there is no knowledge.

The third definition of knowledge given above does not correspond to what many people think of as knowledge, so Dewey suggested using the term “warranted assertions” rather than knowledge: the validity of the standard model is a warranted assertion. Fortunately, this terminology never caught on. In contrast, James’s pragmatic idea of “truth’s cash value”, derided at the time, has caught on. In a recent book on risk management, “How to Measure Anything,” Douglas W. Hubbard spends a lot of space on what is essentially the cash value of information. In business, that is what is important. The pragmatists were, perhaps, just a bit ahead of their time. Hubbard, whether he knows it or not, is a pragmatist.
Dewey coined the term “instrumentalism” to describe the pragmatic approach. An idea or a belief is like a hand, an instrument for coping. A belief has no more metaphysical status than a fork. When your fork proves inadequate to the task of eating soup, it makes little sense to argue about whether there is something inherent in the nature of forks or something inherent in the nature of soup that accounts for the failure. You just reach for a spoon. However, most pragmatists did not consider themselves to be instrumentalists but rather used the pragmatic definition of knowledge to define what is meant by real.

Now I turn to C.I. Lewis. He is alternately regarded as the last of the classical pragmatists or the first of the neo-pragmatists. He was quite influential in his day as a professor at Harvard from 1920 to his retirement in 1953. In particular, his 1929 book “Mind and the World-Order” had a big influence on epistemology and, surprisingly, on ISO management standards. One can see a lot of the ideas developed by Kuhn already present in the work of C.I. Lewis, for example, the role of theory in interpreting observation. Or as Deming, influenced by Lewis, expressed it: “There is no knowledge without theory.” As a theorist, I like that. At the time, this was quite radical. The logical positivists took the opposite tack and tried to eliminate theory from their epistemology. Lewis and Kuhn argued this was impossible. The idea that theory is necessary for knowledge was not new to Lewis but is also present in the works of Henri Poincaré (1854 – 1912), who was duly referenced by Lewis.

Another person Lewis influenced was Willard V. O. Quine (1908 – 2000), although Quine and Lewis did not agree. Quine is perhaps best known outside the realm of pure philosophy for the Duhem-Quine thesis, namely that it is impossible to test a scientific hypothesis in isolation because an empirical test of the hypothesis requires one or more background assumptions. This was the death knell of any naïve interpretation of Sir Karl Popper’s (1902 – 1994) idea that science is based on falsification. But Quine’s main opponents were the logical positivists. Popper was just collateral damage. Quine published a landmark paper in 1951: “Two Dogmas of Empiricism”. I would regard this paper as the high point in the discussion of the scientific method by a philosopher, and it is reasonably readable (unlike Lewis’s “Mind and the World-Order”). Besides the Duhem-Quine thesis, the other radical idea is that observation underdetermines scientific models and that simplicity and conservatism are necessary to fill the gap. This idea also goes back to Poincaré and his idea of conventionalism: much of what is regarded as fact is only convention.

To a large extent my ideas match well with the ideas in “Two Dogmas of Empiricism.” Quine summarizes it nicely as: “The totality of our so-called knowledge or beliefs, from the most casual matters of geography and history to the profoundest laws of atomic physics or even of pure mathematics and logic, is a man-made fabric which impinges on experience only along the edges.” and “The edge of the system must be kept squared with experience; the rest, with all its elaborate myths or fictions, has as its objective the simplicity of laws.” Amen.

Unfortunately, after the two dogmas of empiricism were brought to light, the philosophy of science regressed. In a recent discussion of simplicity in science that I came across, there was no mention of Quine’s work, nor of his correct identification of the role of simplicity: to relieve the underdetermination of models by observation. Philosophers found no use for his ideas and have gone back to definition 1) of knowledge. Sad.

Where philosophers have dropped the ball, it was picked up by people in, of all places, management. Two people influenced by Lewis were Walter A. Shewhart (1891 – 1967) and W. Edwards Deming. It is said that Shewhart read Lewis’s book fourteen times and Deming read it nine times. Considering how difficult that book is, it probably required that many readings just to comprehend it. Shewhart is regarded as the father of statistical process control, a key aspect of quality control. He also invented the control chart, a key component of statistical process control. Shewhart’s 1939 book “Statistical Method from the Viewpoint of Quality Control” is a classic in the field, but it devotes a large part to showing how his ideas are consistent with Lewis’s epistemology. In this book, Shewhart introduced the Shewhart cycle, which was modified by Deming (and is sometimes called the Deming cycle). Under its current name, Plan-Do-Check-Act (the PDCA cycle), it forms the basis of the ISO management standards.
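The heart of Shewhart’s statistical process control is a small calculation: establish three-sigma control limits from a process behaving normally, then flag new observations that fall outside them. Here is a minimal sketch with made-up numbers (a simplification: Shewhart’s actual charts estimate sigma from subgroup ranges rather than a pooled standard deviation).

```python
def control_limits(samples):
    """Shewhart-style three-sigma control limits from baseline data.
    Returns (lower limit, center line, upper limit)."""
    n = len(samples)
    mean = sum(samples) / n
    sigma = (sum((x - mean) ** 2 for x in samples) / (n - 1)) ** 0.5
    return mean - 3 * sigma, mean, mean + 3 * sigma

# Phase I: establish limits while the process is known to be in control.
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9]
lo, center, hi = control_limits(baseline)

# Phase II: monitor new measurements against those limits.
new_points = [10.0, 13.5, 9.9]
flagged = [x for x in new_points if not (lo <= x <= hi)]
print(flagged)  # the 13.5 reading is a signal to investigate
```

The same feedback loop as in science: the model (the limits) is confronted with new observation, and a discrepancy triggers revision or investigation.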


The original Shewhart cycle as given in Shewhart’s book.

What is this cycle? Here it is as captured from Shewhart’s book. This is the first place where production is seen as part of a cycle, and in the included caption Shewhart explicitly relates it to the scientific method as given by Lewis. Deming added another step to the cycle, the act step, which strikes me as unnecessary; it can easily be incorporated in the specification or plan stage (as it is in Shewhart’s diagram). But Deming was influenced by Lewis, who regarded knowledge without the possibility of acting as impossible, hence the act step. This idea has become ingrained in ISO management standards as the slogan “continual improvement” (Clause 10 in the standards). To see the extent Deming was guided by Lewis’s ideas, just look at Deming’s 1993 book “The New Economics.” He summarizes his approach in what he calls a system of profound knowledge. This has four parts: knowledge of a system, knowledge of variation, theory of knowledge and knowledge of psychology. The one that seems out of place is the third; why include theory of knowledge? Deming believed that this was necessary for running a company, and he explicitly refers to Lewis’s 1929 book. Making the reading of Lewis’s book mandatory for business managers would certainly have the desirable effect of cutting down the number of managers. To be fair to Deming, he does suggest starting in about the middle of the book. We have two unbroken chains: 1) Peirce, Lewis, Shewhart, Deming, ISO management standards and 2) Peirce, Lewis, Quine, my philosophical musings. It reminds one of James Burke’s TV program “Connections”.

Popper may be the person many scientists think of to justify how they work, but Quine would probably be better, and Quine’s teacher, C.I. Lewis, through Deming, has provided the philosophic foundation for business management. Within the context of definition 3) for knowledge, both science and business have been very successful; your reading of this essay required both. In contradistinction, standard western philosophy based on definition 1) has largely failed; philosophers still do not know how to acquire knowledge. However, not all philosophy is useless: some of it is pragmatic.


This blog is all about particle physics and particle physicists. We can all agree, I suppose, on the notion of the particle physicist, right? There are even plenty of nice pictures up here! But do we really know what a particle is? This fundamental question tantalized me from the very beginning of my studies, and before addressing more involved topics I think it is worth spending some time on this concept. Through the years I have probably changed my opinion several times, according to the philosophy underlying the topic I was investigating. Moreover, there is probably not a single answer to this question.

  1. The Standard Model: from geometry to detectors

The human mind conceived the Standard Model of Particle Physics to give a shape on the blackboard to the basic ingredients of particle physics: it is a field theory with quantization rules, namely a quantum field theory, and its roots go deep down into differential geometry.
But we know that “particles” like the Higgs boson have been discovered through complex detectors, relying on sophisticated electronic systems, tons of Monte Carlo simulations and data analysis. Quite far away from geometry, isn’t it?
So the question is: how do we fill this gap between theory and experiment? What do theoreticians think about, and what do experimentalists see through their detectors? Furthermore, does a particle’s essence change from its creation to its detection?

  2. Essence and representation: the wavefunction

Let’s start with simple objects, like an electron. Can we imagine it as a tiny thing floating here and there? Mmm. Quantum mechanics already taught us that it is something more: it does not rotate around an atomic nucleus like the Earth around the Sun (see, e.g., Bohr’s model). The electron is more like a delocalized “presence” around the nucleus, quantified by its “wavefunction”, a mathematical function whose squared modulus gives the probability of finding the electron at a certain place and time.
Let’s think about it: I just wrote that the electron is not a localized entity but is spread in space and time through its wavefunction. Fine, but I still did not say what an electron is.

I have had long and intensive discussions about this question. In particular I remember one with my housemate (another theoretical physicist) that nearly ended badly, with us waving frying pans at each other. It’s still not clear to me whether we agreed or not, but we still live together, at least.

Back to the electron: we could agree on considering its essence to be its abstract definition, namely being one of the leptons in the Standard Model. But the impossibility of directly accessing it forces me to identify it with its most faithful representation, namely the wavefunction. I know its essence, but I cannot directly (i.e. with my senses) experience it. My human powers stop at the physical manifestation of its mathematical representation: I cannot go further.
René Magritte captured the difference between the representation of an object and the object itself in a famous painting, “The Treachery of Images”:


“Ceci n’est pas une pipe”, it says, namely “This is not a pipe”. He is right: the picture is only a representation. The pipe is defined as “a device for smoking, consisting of a tube of wood, clay, or other material with a small bowl at one end”, and we can directly experience it. So its representation is not the pipe itself.

As I explained, this is somehow different in the case of the electron or other particles, where experience stops at the representation. So, according to my “humanity”, the electron is its wavefunction. But, to be consistent with what I just claimed: can we directly experience its wavefunction? Yes, we can. For example we can see its trace in a cloud chamber, or in more elaborate detectors. Moreover, electricity and magnetism are (partly) manifestations of electron clouds in matter, and we experience those in everyday life.


You may wonder why I go through all these mental wanderings: just write down your formulas, calculate and be happy with (hopefully!) discoveries.

I do it because philosophy matters. And it is nice. And now that we are a bit more aware of the essence of the things we are investigating, we can move a step forward and start addressing Quantum Chromodynamics (QCD), from its basic foundations to the latest results released by the community. I hope I have sufficiently stimulated your curiosity to follow me through the next steps!

Again, I want to stress that this is my own perspective, and maybe someone else would answer these questions in a different way. What do you think?


Good Management is Science

Friday, October 10th, 2014

Management done properly satisfies Sir Karl Popper’s (1902 – 1994) demarcation criteria for science, i.e. using models that make falsifiable, or at least testable, predictions. That was brought home to me by a book[1] by Douglas Hubbard on risk management, in which he advocates observationally constrained (falsifiable or testable) models for risk analysis, evaluated through Monte Carlo calculations. Hmm, observationally constrained models and Monte Carlo calculations; sounds like a recipe for science.

Let us take a step back. The essence of science is modeling how the universe works and checking the assumptions of the model and its predictions against observations. The predictions must be testable. According to Hubbard, the essence of risk management is modeling processes and checking the assumptions of the model and its predictions against observations. The predictions must be testable. What we are seeing here is a common paradigm for knowledge in which modeling and testing against observation play a key role.
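To make that concrete, here is a toy Monte Carlo risk model in the spirit Hubbard advocates (all the ranges, probabilities and the budget below are invented for illustration): draw each uncertain cost from an assumed distribution and count how often the total exceeds the budget.

```python
import random

random.seed(1)  # reproducible illustration

# A toy risk model: two uncertain costs plus one discrete risk event.
# All ranges and probabilities here are invented for illustration.
def simulated_cost():
    labour = random.uniform(80, 120)
    hardware = random.uniform(30, 60)
    risk_event = 25 if random.random() < 0.2 else 0  # 20% chance of a setback
    return labour + hardware + risk_event

budget = 160
trials = 100_000
over = sum(simulated_cost() > budget for _ in range(trials))
print(f"Estimated P(cost > budget) = {over / trials:.3f}")
```

For these made-up inputs the estimate comes out around 0.28. The point is that this number is a testable prediction: it can later be checked against what actually happened, which is exactly the observationally constrained paradigm being described.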

The knowledge paradigm is the same in project management. A project plan, with its resource loaded schedules and other paraphernalia, is a model for how the project is expected to proceed. To monitor a project you check the plan (model) against actuals (a fancy euphemism for observations, where observations may or may not correspond to reality). Again, it reduces back to observationally constrained models and testable predictions.

The foundations of science and good management practices are tied even closer together. Consider the PDCA cycle for process management that is present, either implicitly or explicitly, in essentially all the ISO standards related to management. It originated with Walter Shewhart (1891 – 1967), an American physicist, engineer and statistician, and was popularized by Edwards Deming (1900 – 1993), an American engineer, statistician, professor, author, lecturer and management consultant. Engineers are into everything. The actual idea of the cycle is based on the ideas of Francis Bacon (1561 – 1626) but could equally well be based on the work of Roger Bacon[2] (1214 – 1294). Hence, it should probably be called the Double Bacon Cycle (no, that sounds too much like a breakfast food).

But what is this cycle? For science, it is: plan an experiment to test a model, do the experiment, check the model’s predictions against the observed results, and act to change the model in response to the new information from the check stage, or devise more precise tests if the predictions and observations agree. For process management, replace experiment with production process. As a result, you have a model for how the production process should work, and doing the process allows you to test the model. The check stage is where you see if the process performed as expected, and the act stage allows you to improve the process if the model and actuals do not agree. The key point is the check step. It is necessary if you are to improve the process; otherwise you do not know what is going wrong or, indeed, even if something is going wrong. It is only possible if the plan makes predictions that are falsifiable or at least testable. Popper would be pleased.

There is another interesting aspect of the ISO 9001 standard: it is based on the idea of processes. A process is defined as an activity that converts inputs into outputs. Well, that sounds rather vague, but the vagueness is an asset, kind of like degrees of freedom in an effective field theory. Define them as you like, but if you choose them incorrectly you will be sorry. The real advantage of effective field theory, and of the flexible definition of process, is that you can study a system at any scale you like. In effective field theory, you study processes that operate at the scale of the atom, the scale of the nucleus or the scale of the nucleon, and tie them together with a few parameters. Similarly with processes: you can study the whole organization as one process, or drill down and look at sub-processes at any scale you like; for CERN or TRIUMF that would be down to the last magnet. It would not be useful to go further and study accelerator operations at the nucleon scale. At a given scale, different processes are tied together by their inputs and outputs, and these are also used to tie together processes at different scales.

As a theoretical physicist who has gone over to the dark side and into administration, I find it amusing to see the techniques and approaches from science being borrowed for use in administration, even Monte Carlo calculations. The use of similar techniques in science and administration goes back to the same underlying idea: all true knowledge is obtained through observation and its use to build better testable models, whether in science or other walks of life.

[1] The Failure of Risk Management: Why It’s Broken and How to Fix It by Douglas W. Hubbard (Apr 27, 2009)

[2] Roger Bacon described a repeating cycle of observation, hypothesis, and experimentation.


Why pure research?

Thursday, October 2nd, 2014

With my first post on Quantum Diaries I will not address a technical topic; instead, I would like to talk about the act (or art) of “studying” itself. In particular, why do we care about fundamental research, pure knowledge without any practical purpose or immediate application?

In 1939, Abraham Flexner authored a contribution to Harper’s Magazine (issue 179) titled “The Usefulness of Useless Knowledge”. He opens the discussion with an interesting question: “Is it not a curious fact that in a world steeped in irrational hatreds which threaten civilization itself, men and women – old and young – detach themselves wholly or partly from the angry current of daily life to devote themselves to the cultivation of beauty, to the extension of knowledge […]?”

Nowadays this question is still with us, and the need for a satisfactory answer is probably even stronger.

From a pragmatic point of view, we can argue that many important applications and spin-offs of theoretical investigations into the deep structure of Nature did not arise immediately after the scientific discoveries. This is, for example, the case of QED and antimatter: the theories date back to the 1920s and are nowadays exploited in hospitals for imaging purposes (as in PET, positron emission tomography). The most important discoveries affecting our everyday life, from electricity to the energy bound in the atom, came from completely pure and theoretical studies: electricity and magnetism, summarized in Maxwell’s equations, and quantum mechanics are shining examples.

It may seem that it is just a matter of time: “Wait long enough, and something useful will eventually pop out of these abstract studies!” True. But that would not be the most important answer. To me it is: “Pure research is important because it generates knowledge and education”. It is our own contribution to the understanding of Nature, a short but important step in a marvelous challenge set up by the human mind.

Personally, I find that research into the yet unknown aspects of Nature responds to some partly conscious and partly unconscious desires. Intellectual achievements provide a genuine ‘spiritual’ satisfaction, peculiar to the art of studying. For the sake of truth, I must say that there are also a lot of dark sides: frustration, stress, graduate-depression effects, geographical and economic instability and so on. But leaving all these troubles aside for a while, I think I am pretty lucky to be doing this job.


Books, the source of my knowledge

During economically difficult times, it is also legitimate to ask “Why spend a lot of money on expensive experiments like the Large Hadron Collider?” or “Why fund abstract research in labs and universities instead of investing in more socially useful studies?”

We could answer by stressing again the fact that many of the best innovations came from the fuzziest studies. But in my mind the ultimate answer, once and for all, lies in the power of generating culture, and education through its diffusion. Everything occurs within our possibilities and limitations. A willingness to learn, a passion for teaching, blackboards, books and (super)computers: these are our tools.

Citing again Flexner’s paper: “The mere fact that spiritual and intellectual freedoms bring satisfaction to an individual soul bent upon its own purification and elevation is all the justification that they need. […] A poem, a symphony, a painting, a mathematical truth, a new scientific fact, all bear in themselves all the justification that universities, colleges and institutes of research need or require.”

Last but not least, it is remarkable to think about how many people from different parts of the world have met and collaborated while questing together after knowledge. This may seem a drop in the ocean, but research daily contributes to generating a culture of peace and cooperation among people with different cultural backgrounds. And that is surely one of the most important practical spin-offs.


Isaac Asimov (1920 – 1992) “expressed a certain gladness at living in a century in which we finally got the basis of the universe straight”. Albert Einstein (1879 – 1955) claimed: “The most incomprehensible thing about the world is that it is comprehensible”. Indeed there is general consensus in science that not only is the universe comprehensible but that it is mostly well described by our current models. However, Daniel Kahneman counters: “Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance”.

Well, that puts a rather different perspective on Asimov’s and Einstein’s claims. So who is this person raining on our parade? Kahneman is a psychologist who won the 2002 Nobel Prize in economics for his development of prospect theory. A century ago everyone quoted Sigmund Freud (1856 – 1939) to show how modern they were; today, Kahneman seems to have assumed that role.[1]

Kahneman’s Nobel Prize winning prospect theory, developed with Amos Tversky (1937 –1996), replaced expected utility theory. The latter assumed that people made economic choices based on the expected utility of the results, that is they would behave rationally. In contrast, Kahneman and company have shown that people are irrational in well-defined and predictable ways. For example, it is understood that the phrasing of a question can (irrationally) change how people answer, even if the meaning of the question is the same.

Kahneman’s book, Thinking, Fast and Slow, really should be required reading for everyone. It explains a lot of what goes on (or gives the illusion of comprehension?) and provides practical tips for thinking rationally. For example, when I was on a visit to China, the merchants would hand me a calculator to type in what I would pay for a given item. Their response to the number I typed in was always the same: You’re joking, right? Kahneman would explain that they were trying to remove the anchor set by the first number entered in the calculator. Anchoring is a common aspect of how we think.

Since, as Kahneman argues, we are inherently irrational, one has to wonder about the general validity of the philosophic approach to knowledge, an approach based largely on rational argument. Science overcomes our inherent irrationality by constraining our rational arguments with frequent, independently repeated observations. Consider project management: we tend to be irrationally overconfident in our ability to estimate resource requirements. Estimates of project resource requirements not constrained by real-world observations lead to projects that are over budget and delivered past their deadlines. Even Kahneman was not immune to this trap of being overly optimistic.

Kahneman’s cynicism has been echoed by others. For example, H.L. Mencken (1880 – 1956) said: “The most common of all follies is to believe passionately in the palpably not true. It is the chief occupation of mankind”. Are the cynics correct? Is our belief that the universe is comprehensible, and indeed mostly understood, a mirage based on our unlimited ability to ignore our ignorance? A brief look at history would tend to support that claim. Surely the Buddha, after having achieved enlightenment, would have expressed relief and contentment at living in a century in which we finally got the basis of the universe straight. Saint Paul, in his letters, echoes the same claim that the universe is finally understood. René Descartes, with the method laid out in the Discourse on the Method and Principles of Philosophy, would have made the same claim. And so it goes: almost everyone down through history believes that he or she comprehends how the universe works. I wonder if the cow in the barn has the same illusion.

Unfortunately, each has a different understanding of what it means to comprehend how the universe works, so it is not even possible to compare the relative validity of the different claims. The unconscious mind fits all it knows into a coherent framework that gives the illusion of comprehension in terms of what it considers important. In doing so, it assumes that what you see is all there is. Kahneman refers to this as WYSIATI (What You See Is All There Is).

To a large extent the understandability of the universe is a mirage based on WYSIATI – our ignorance of our ignorance. We understand as much as we are aware of and capable of understanding, blissfully ignoring the rest. We do not know how quantum gravity works, whether there is intelligent life elsewhere in the universe[2], or, for that matter, what the weather will be like next week. While our scientific models correctly describe much about the universe, they are, in the end, only models, and leave much beyond their scope, including the ultimate nature of reality.

To receive a notice of future posts follow me on Twitter: @musquod.

[1] Let’s hope time is kinder to Kahneman than it was to Freud.

[2] Given our response to global warming, one can debate if there is intelligent life on earth.


René Descartes (1596 – 1650) was an outstanding physicist, mathematician and philosopher. In physics, he laid the groundwork for Isaac Newton’s (1642 – 1727) laws of motion through pioneering work on the concept of inertia. In mathematics, he developed the foundations of analytic geometry, as illustrated by the term Cartesian[1] coordinates. However, it is in his role as a philosopher that he is best remembered. Rather ironic, as his breakthrough method was a failure.

Descartes’s goal in philosophy was to develop a sound basis for all knowledge from ideas so obvious they could not be doubted. His touchstone was that anything he perceived clearly and distinctly as being true was true. The archetypical example of this was the famous “I think, therefore I am.” Unfortunately, little else is as obvious as that famous quote, and even it can be––and has been––doubted.

Euclidean geometry provides the illusory ideal to which Descartes and other philosophers have aspired: you start with a few self-evident truths and derive a superstructure built on them. Unfortunately, even Euclidean geometry fails that test. The infamous parallel postulate has been questioned since ancient times as being a bit suspicious, and even the other Euclidean postulates have been questioned; extending a straight line depends on the space being continuous, unbounded and infinite.

So how are we to take Euclid’s postulates and axioms? Perhaps we should follow the idea of Sir Karl Popper (1902 – 1994) and consider them to be bold hypotheses. This casts a different light on Euclid and his work; perhaps he was the first outstanding scientist. If we take his basic assumptions as empirical[2] rather than sure and certain knowledge, all we lose is the illusion of certainty. Euclidean geometry then becomes an empirically testable model for the geometry of spacetime. The theorems derived from the basic assumptions are predictions that can be checked against observations, satisfying Popper’s demarcation criteria for science. Do the angles in a triangle add up to two right angles or not? If not, then one of the assumptions is false, probably the parallel postulate.
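The test is concrete enough to compute. As a toy illustration (mine, not Euclid’s or Popper’s), consider a triangle drawn on a sphere, where the parallel postulate fails: its angles need not sum to two right angles. The triangle whose vertices sit on the x, y and z axes of the unit sphere has three right angles.

```python
import math

def angle_at(p, q, r):
    """Angle at vertex p of the spherical triangle pqr on the unit sphere."""
    def tangent(a, b):
        # Unit tangent vector at a, pointing along the great circle toward b.
        d = sum(x * y for x, y in zip(a, b))
        t = [x - d * y for x, y in zip(b, a)]
        norm = math.sqrt(sum(x * x for x in t))
        return [x / norm for x in t]
    u, v = tangent(p, q), tangent(p, r)
    dot = sum(x * y for x, y in zip(u, v))
    return math.acos(max(-1.0, min(1.0, dot)))

A, B, C = (1, 0, 0), (0, 1, 0), (0, 0, 1)
total = sum(math.degrees(angle_at(p, q, r))
            for p, q, r in [(A, B, C), (B, C, A), (C, A, B)])
print(total)  # 270 degrees, not 180
```

An observer measuring a real triangle of light rays and finding such an excess would conclude, with Popper, that one of Euclid’s assumptions fails for physical space.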

Back to Descartes: he criticized Galileo Galilei (1564 – 1642) for having “built without having considered the first causes of nature; he has merely sought reasons for particular effects; and thus he has built without a foundation.” In the end, that lack of a foundation turned out to be less of a hindrance than Descartes’s faulty one. To a large extent, science’s lack of a foundation, such as Descartes wished to provide, has not proved a significant obstacle to its advance.

Like Euclid, Sir Isaac Newton had his basic assumptions – the three laws of motion and the law of universal gravity – but he did not believe that they were self-evident; he believed that he had inferred them by the process of scientific induction. Unfortunately, scientific induction was as flawed a foundation as the self-evident nature of the Euclidean postulates. Connecting the dots between a falling apple and the motion of the moon was an act of creative genius, a bold hypothesis, and not some algorithmic derivation from observation.

It is worth noting that, at the time, Newton’s explanation had a strong competitor in Descartes’s theory that planetary motion was due to vortices, large circulating bands of particles that keep the planets in place. Descartes’s theory had the advantage that it lacked the occult action at a distance that is fundamental to Newton’s law of universal gravitation. In spite of that, today Descartes’s vortices are as unknown as his claim that the pineal gland is the seat of the soul; so much for what he perceived clearly and distinctly as being true.

Galileo’s approach of solving problems one at a time, rather than trying to solve all problems at once, has paid big dividends. It has allowed science to advance one step at a time, while Descartes’s approach has faded away as failed attempt followed failed attempt. We still do not have a grand theory of everything built on an unshakable foundation, and probably never will. Rather, we have models of widespread utility. Even if they are built on a shaky foundation, surely that is enough.

Peter Higgs (b. 1929) follows in the tradition of Galileo. He has not, despite his Nobel Prize, succeeded where Descartes failed in producing a foundation for all knowledge; but through creativity, he has proposed a bold hypothesis whose implications have been empirically confirmed. Descartes would probably claim that he has merely sought reasons for a particular effect: mass. The answer to the ultimate question about life, the universe and everything remains unanswered, much to Descartes’s chagrin, but as scientists we are satisfied to solve one problem at a time and then move on to the next one.



[1] Cartesian from Descartes Latinized name Cartesius.

[2] As in the final analysis they are.


Since model building is the essence of science, this quote has a bit of a bite to it. It is from George E. P. Box (1919 – 2013), who was not only an eminent statistician but also an eminently quotable one. Another quote from him: “One important idea is that science is a means whereby learning is achieved, not by mere theoretical speculation on the one hand, nor by the undirected accumulation of practical facts on the other, but rather by a motivated iteration between theory and practice.” Thus he saw science as an iteration between observation and theory. And what is theory but the building of erroneous, or at least approximate, models?

To amplify that last comment: the main point of my philosophical musings is that science is the building of models for how the universe works; models constrained by observation and tested by their ability to make predictions for new observations, but models nonetheless. In this context, the above quote has significant implications for science. Models, even those of science, are by their very nature simplifications, and as such are not one hundred per cent accurate. Consider the case of a map. Creating a 1:1 map is not only impractical[2], but even if you had one it would be one hundred per cent useless; just try folding a 1:1 scale map of Vancouver. A model with all the complexity of the original does not help us understand the original. Indeed, the whole purpose of a model is to eliminate details that are not essential to the problem at hand.

By their very nature, numerical models are always approximate, and this is probably what Box had in mind with his statement. One neglects small effects like the gravitational influence of a mosquito. Even as one begins computing, one makes numerical approximations, replacing integrals with sums or vice versa, derivatives with finite differences, etc. However, one wants to control errors and keep them to a minimum. Statistical analysis techniques, such as Box developed, help estimate and control errors.

To a large extent it is self-evident that models are approximate; so what? Again to quote George Box: “Since all models are wrong the scientist cannot obtain a ‘correct’ one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.” What would he have thought of a model with twenty-plus parameters, like the standard model of particle physics? His point is a valid one. All measurements have experimental errors. If your fit is perfect, you are almost certainly fitting noise. Hence, adding more parameters to get a perfect fit is a fool’s errand. But even without experimental error, a large number of parameters frequently means something important has been missed. Has something been missed in the standard model of particle physics with its many parameters, or is the universe really that complicated?
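“Fitting noise” is easy to demonstrate with a toy example (the data below are invented: a straight-line trend plus small noise). A polynomial with as many parameters as data points reproduces the training data perfectly, yet a held-out point exposes it, while the two-parameter straight line predicts far better.

```python
# Toy overfitting demo with invented data: y is roughly 2x plus noise.
train = [(0, 0.1), (1, 2.3), (2, 3.8), (3, 6.2), (4, 7.9), (5, 9.7)]
held_out = (7, 14.1)  # drawn from the same underlying trend

def line_fit(pts):
    """Ordinary least-squares straight line y = a + b*x (two parameters)."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return lambda x: a + b * x

def interpolant(pts):
    """Lagrange polynomial: one parameter per data point, zero residual."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(pts):
            term = yi
            for j, (xj, _) in enumerate(pts):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

line, poly = line_fit(train), interpolant(train)
x_new, y_new = held_out
print("line error at x=7:", abs(line(x_new) - y_new))
print("poly error at x=7:", abs(poly(x_new) - y_new))
```

The interpolating polynomial has zero residual on the training points (it has fit the noise), but its prediction error at the held-out point is enormous compared with the simple line’s; that is Box’s point about overelaboration.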

There is an even more basic reason all models are wrong. This goes back at least as far as Immanuel Kant (1724 – 1804). He made the distinction between observation of an object and the object in itself. One never has direct experience of things, the so-called noumenal world; what one experiences is the phenomenal world as conveyed to us by our senses. What we see is not even what has been recorded by the eye.  The mind massages the raw observation into something it can understand; a useful but not necessarily accurate model of the world. Science then continues this process in a systematic manner to construct models to describe observations but not necessarily the underlying reality.

Despite being by definition at least partially wrong, models are frequently useful. The scale-model map is useful to a tourist trying to find their way around Vancouver, or to a general plotting strategy for his next battle. But if the maps are too far wrong, the tourist will get lost and fall into False Creek, and the general will go down in history as a failure. Similarly, the models for weather prediction are useful although they are certainly not one hundred per cent accurate; they do indicate when it is safe to plan a picnic or cut the hay, provided they are right more often than chance. And the standard model of particle physics, despite having many parameters and not including gravity, is a useful description of a wide range of observations. But to return to the main point: all models, even useful ones, are wrong because they are approximations, and not even approximations to reality but to our observations of that reality. Where does that leave us? Well, let us save the last word for George Box: “Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.”



[1] Hence the foolishness of talking about theoretical breakthroughs in science. All breakthroughs arise from pondering about observations and observations testing those ponderings.

[2] Not even Google could produce that.
