
Archive for August, 2011

– By Byron Jennings, Theorist and Project Coordinator

In the June 2011 compilation of the top 500 fastest computers in the world, 91% run some version of the GNU/Linux operating system. This compares with 1.2% running MS Windows. A student, Linus Torvalds, and a rag-tag bunch of developers across the internet initially developed the Linux kernel. It has since become a major industrial enterprise but still has developers spread across the internet. The development process for the kernel has been criticized, most notably by Microsoft, for its lack of a road map or central planning. Torvalds considers this a feature, not a bug, and stated the point very clearly: “And don’t EVER make the mistake that you can design something better than what you get from ruthless massively parallel trial-and-error with a feedback cycle. That’s giving your intelligence much too much credit.” At 91% to 1.2%, he is probably onto something (not to mention cell phone market share in mid-2011: Android/Linux at 36%, MS Windows Phone 7 at 1%).

OK, so what does that have to do with science, capitalism, or evolution? Quite a bit, actually. The thesis of this post is that all three succeed because of the superiority of “ruthless massively parallel trial-and-error with a feedback cycle” over central planning.

We go back to Michael Polanyi. On a trip to the Soviet Union in 1936 he was told that the distinction between pure and applied science was mistaken, and that in a socialist society all scientific research takes place in accordance with the needs of the latest Five Year Plan. In reaction, Polanyi showed that science behaves much like a free market in ideas, with the corollary that central planning is as destructive in science as in the economy. A typical quote from Polanyi: “Any attempt at guiding scientific research towards a purpose other than its own is an attempt to deflect it from the advancement of science. (…) You can kill or mutilate the advance of science, you cannot shape it. For it can advance only by essentially unpredictable steps, pursuing problems of its own, and the practical benefits of these advances will be incidental and hence doubly unpredictable.” The essential point is “unpredictable.” In the long term, science is too unpredictable to control in any useful manner. We do not know in advance which line of inquiry will lead to breakthroughs, or whether the breakthroughs will be for good or evil. If we knew, there would be no need for research. The same reason accounts for the failure of central planning in the economy: the problem is too complex and unpredictable. Central planning only works when the system under consideration is simple and predictable.

So what do we replace central planning with? Back to Torvalds’ ruthless massively parallel trial-and-error with a feedback cycle. The massively parallel process permits many different approaches to be explored simultaneously, and results are obtained in a timely manner. We do this in science by having different scientists work on different approaches to a given problem. The ruthlessness comes in rejecting or ignoring all the approaches that fail. Most of what is published in science is ignored and only a few papers have a big impact. I have seen the statement that, on average, a published paper is read only twice. This means most published papers are never read at all (perhaps not even by all the authors). There is no way to tell in advance which research will fall into the unread category. You try all approaches and see which ones work.
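
The same principle can be written down as a toy algorithm. The sketch below is only an illustration in plain Python, with an invented fitness function and arbitrary parameters: many candidates are tried in parallel, the failures are ruthlessly discarded, and the survivors seed the next round of trials.

```python
import random

def fitness(candidate):
    # Invented objective: closeness to a target the algorithm does not know
    # in advance. In science read "agrees with experiment"; in the market,
    # "turns a profit"; in evolution, "survives long enough to reproduce".
    target = 42.0
    return -abs(candidate - target)

def trial_and_error(population_size=1000, generations=50, keep_fraction=0.1):
    # Massively parallel: many independent trials per round.
    population = [random.uniform(-100.0, 100.0) for _ in range(population_size)]
    for _ in range(generations):
        # Ruthless feedback: keep only the best few, discard everything else.
        n_keep = max(1, int(population_size * keep_fraction))
        survivors = sorted(population, key=fitness, reverse=True)[:n_keep]
        # New trials are small random variations on the survivors.
        population = [s + random.gauss(0.0, 1.0)
                      for s in survivors
                      for _ in range(population_size // n_keep)]
    return max(population, key=fitness)

if __name__ == "__main__":
    # Converges near 42 with no central plan for how to get there.
    print(trial_and_error())
```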

It is similar in a capitalist society. Companies try many different approaches. The ones that work make their owners rich, while the ones that fail go bankrupt. The examples are legendary: IBM moved with the times and, for a while, was almost synonymous with computers (hence the slogan: no one ever got fired for buying IBM). Digital Equipment Corporation (DEC), once a computer heavyweight, faded into oblivion (CEO Ken Olsen: “There is no reason for any individual to have a computer in his home.”)

The ruthlessness in capitalism comes in by allowing companies to fail. Capitalism breaks down when companies become too big to fail, or with monopolies and oligopolies. Companies that are too big to fail, monopolies, and oligopolies might as well be run by the government, since they have lost the attribute (ruthless feedback) that makes the capitalist model work.

Evolution works on the same principle as the capitalist society. There is no central planning but a massively parallel system with a very ruthless feedback loop: adapt or die. It is ironic that the left attacks capitalism and supports evolution, while the right attacks evolution and supports capitalism, since both evolution and capitalism depend on one and the same principle: self-organization through ruthless feedback.

For simple systems, central planning does work. Science or economies can be directed but only in the short term. Eventually the Soviet Union collapsed. So will science – if we regulate it too closely.


DES first-light countdown, 6 months to go

Blanco telescope, on left, at sunset in Chile. Photo: Brenna Flaugher.

I have promised to provide updates on our progress towards first light of the Dark Energy Survey, or DES. First light is the first official look at the sky after the camera and its detection software are ready. If you recall, we were supposed to deliver the Dark Energy Camera, or DECam, imager this summer.

So, without further ado, I am pleased to announce: Here it is!

I have been collecting DES-related pictures and videos for a while, and the picture to the right by Fermilab photographer Reidar Hahn is by far my favorite shot of the imager (first published in Fermilab Today). It shows the focal plane completely populated with 74 shiny, blue CCDs, ready to catch some extragalactic photons.

Shipment arrangements and installation schedule are being worked out as I write. Our team has already set foot in Chile to assemble at the Blanco Telescope the various parts that we tested on our Fermilab telescope simulator in February. Installation will start soon.

I am joining them in Chile in November and can hardly wait. But this is where the wave-like abilities I mentioned in a previous post come into play. My wave function is quite stretched these days, as I perform a variety of tasks. Read on and see for yourself.

Blue-tinged charge-coupled devices (CCDs) in the Dark Energy Camera imager. Photo: Reidar Hahn

Galaxy clusters are my favorite thing in the universe, and in addition to my work on the DES cluster analysis group I am now building a new catalog based on the Sloan Digital Sky Survey, or SDSS, data. Our group at Fermilab is wrapping up all the parallel threads of work on that data set, and that means that I am doing a lot of writing these days, too. Papers are coming out soon and this is terrific news, which puts my mind at ease with respect to that nightmarish pressure for publications.

I’ve also been volunteered to help the calibration group by writing a code that was referred to as “George” until we could find a more appropriate name. But since people now ask me, “How is George doing?” all the time, I am afraid the name has stuck. Right now I am working on the first module, “George I”, which processes a series of monochromatic calibration images and creates a map of the system response curve at every point of the DECam focal plane. This new task is exciting because it connects the instrument I helped build with the science I want to do (good calibration is an essential requirement for cluster science), and it also gives me the opportunity to learn more about our data management system.
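
To give a rough idea of what such a module does, here is a schematic sketch in Python; the array shapes and wavelengths are invented for illustration, and this is not the actual DES data-management code. The idea is simply to stack monochromatic flat-field exposures and read off, pixel by pixel, the relative response as a function of wavelength.

```python
import numpy as np

def build_response_map(exposures):
    """Toy version of a 'George I'-style task: given one 2-D calibration image
    per wavelength, return a cube whose value at (y, x, i) is the relative
    system response at that focal-plane position for wavelength i."""
    cube = np.stack(exposures, axis=-1).astype(float)   # shape (ny, nx, n_wavelengths)
    cube /= cube.max(axis=-1, keepdims=True)            # normalise each pixel's curve to 1
    return cube

if __name__ == "__main__":
    # Invented data: three monochromatic exposures of a tiny 4x4 "focal plane".
    wavelengths_nm = [400.0, 600.0, 800.0]
    exposures = [np.random.poisson(1000 * w / 800.0, size=(4, 4)) for w in wavelengths_nm]
    response = build_response_map(exposures)
    print(response.shape)   # (4, 4, 3): one response curve per pixel
```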

The reason for my excitement is that a reliable, fast and easy-to-use database is key to success and productivity in our field. I use the SDSS SkyServer for all my SDSS analyses, and it is just great to be able to upload a text file with sky coordinates and visualize, with a single click, all the objects you are interested in. Every time I submit a query to the SDSS database, using the CasJobs web interface or my own little scripts, I wonder what the DES equivalents will look like. It feels a little like waiting for the next release of your favorite gadget. And as a heavy user and big fan, I, of course, have my own wish list of improvements.
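
For readers who have never seen it, a SkyServer or CasJobs query is just SQL against the survey's public tables. The snippet below is a minimal, hypothetical example (PhotoObj and fGetNearbyObjEq are part of the public SDSS schema, but the coordinates, radius and cuts are made up) of the kind of cone search one might submit before cross-matching with a private catalog.

```python
# A minimal, hypothetical example of the SQL one pastes into the SDSS
# CasJobs/SkyServer search form. PhotoObj and dbo.fGetNearbyObjEq come from
# the public SDSS schema; the coordinates, radius and magnitude cut are invented.
CONE_SEARCH = """
SELECT p.objID, p.ra, p.dec, p.modelMag_r
FROM PhotoObj AS p
JOIN dbo.fGetNearbyObjEq(185.0, 15.0, 2.0) AS n ON p.objID = n.objID
WHERE p.type = 3              -- keep galaxies only
  AND p.modelMag_r < 21.0     -- invented magnitude cut
"""

if __name__ == "__main__":
    # In practice the query is submitted through the CasJobs web interface or a
    # small script and the output saved as a table; here we simply print it.
    print(CONE_SEARCH)
```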

Blanco Telescope April 2011. Photo: Jose Francisco Salgado

My number one wish? Support for uploading my own little codes, in addition to data tables, and running them directly on the query outputs. This way I could save only the processed tables, use them later in combination with other data, make all my plots and download only the final results. That would make my work so much easier!

Well, but I’d better stop here and go back to work. George is doing well today, but there is still a long way to go. I will be back with more updates soon. Stay tuned.

–Marcelle Soares-Santos


– By Saige McVea, TRIUMF High School Fellowship Student

I was asked this question by Dave Ottewell, a veteran physicist who has worked with both the TITAN and DRAGON groups during his 37-year-long career at TRIUMF, Canada’s National Laboratory for Particle and Nuclear Physics. It was during my second week participating in the TRIUMF High School Fellowship, and I must admit that since then, I have been completely and utterly disillusioned.

The first few days of my six-week internship were spent in a haze; not only is the TRIUMF lab massive and labyrinthine to a newcomer with a poor sense of direction, but the individuals who work there (though they appear ordinary) speak a dialect of English rich in acronyms and scientific jargon. Quite simply, I was lost. However, I was also fortunate enough to be placed under the supervision of Jennifer Fallis, a postdoctoral research associate, and Chris Ruiz, the group leader with whom I have been working on the DRAGON experiment.

Once somewhat familiar with my new environment and the language being spoken, I was given a small project. I was to design a platform to which a pinhole camera, an LED light, and an alpha source could be mounted so that the deterioration of ultra-thin carbon foils could be observed within the MCP chamber. This seemed incredibly simple at first when compared to what I had previously been trying to understand, but in reality, it proved to be rather problematic.

Saige with her work for DRAGON

Acquiring the camera from a spy shop with Lars Martin, Jennifer, and Gabriel (a student participating in the Emerging Aboriginal Scholars Summer Camp) began the job on a comical note. However, every necessary step after that point was time consuming and rather frustrating. Parts needed to be located so that the camera could be tested in a vacuum chamber, LED light configurations needed to be explored so a quality image could be obtained, and the dimensions of the existing components of the alpha source platform needed to be verified. When I finally had my sketch of the platform completed, I decided to double-check that it would not protrude into the oncoming beam line, and was again exasperated. The current extendable arm could not sufficiently withdraw; therefore, the new platform would most likely interfere with the radioactive beam. This would be an easy fix if the extendable arm required to correct the issue (SBLM-275-6) did not cost $1,100 and take 35 days to be delivered, a duration exceeding the length of my stay.

The collapse of the camera installation, however, allowed me to take on other projects during my time at TRIUMF. I was taught how to do some very basic data analysis of the magnesium-24 run using a ROOT terminal and elementary C++ programming. By using the MCP time of flight to select BGO (bismuth germanate) detector data for the E0 spectrum, centroid positions and resonance energies could be determined. My data analysis was compared to that of Dave Hutcheon, who used the separator time of flight (the time between detection of a gamma ray and a heavy ion arriving at the DSSSD) instead of the MCP time of flight, and fortunately, our results were in agreement.
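
In case “using MCP time of flight to select BGO detector data” sounds abstract, here is a minimal sketch of the idea in plain Python with invented numbers (the real analysis was done with ROOT histograms and the actual DRAGON data, none of which appear here): events whose time of flight falls inside the window expected for true recoils are kept, and the centroid of the resulting gamma-energy spectrum is simply its mean.

```python
import numpy as np

def gated_centroid(tof, e_gamma, tof_window):
    """Keep events whose time of flight falls inside the chosen window,
    then return the centroid (mean) of their gamma-ray energies."""
    lo, hi = tof_window
    selected = e_gamma[(tof >= lo) & (tof <= hi)]
    return selected.mean() if selected.size else float("nan")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Invented events: flat background plus a cluster of "real" recoils
    # near 2.5 (arbitrary time units) with gammas near 11 (arbitrary energy units).
    tof = np.concatenate([rng.uniform(0.0, 10.0, 500), rng.normal(2.5, 0.1, 500)])
    e_gamma = np.concatenate([rng.uniform(1.0, 8.0, 500), rng.normal(11.0, 0.3, 500)])
    print(gated_centroid(tof, e_gamma, tof_window=(2.2, 2.8)))  # centroid of the gated spectrum
```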

Other slices of my working hours were spent attending student lectures and seminars. While at the ARIEL workshop, I made “tweets” concerning the speakers’ presentations. Although there was an abundance of things very much beyond me, I was forced to focus on the scraps of information that I did understand. I also learned during these seminars that scientists can get extremely passionate about their beliefs in theoretical physics. Whether the Higgs exists, dark energy is real, or supersymmetry is valid, I cannot say; but I am very glad that much still remains unknown. Most recently, I have been updating DRAGON’s astro website, and will perhaps continue to do so after my work term has ended.

So, returning to my complete and utter disillusionment: a career in physics is nothing like what I would have expected. It does not entail familiar procedures or carefully strategized experiments with flawless results obtained in pristine laboratories that yield clear and obvious conclusions. From what I have seen, a career in physics is about dedication, incessant learning, collaboration with peers, and the prevalent mentality that “if at first you don’t succeed, try, try again.”

I would like to give a huge thanks to the TRIUMF High School Fellowship Committee for giving me this wonderful opportunity, all members of the DRAGON team for patiently instructing me over these past six weeks, and the 2011 summer co-ops for being a fantastic group of people. I wish you all the best!

 


Fermilab issued this press release today.

Alex Romanenko, sitting on the edge of a large cryogenic vessel, holds one of the superconducting RF cavities made of niobium. Photo: Fermilab (Click on image for larger version)

Alex Romanenko, a materials scientist at Fermi National Accelerator Laboratory, will receive $2.5 million from the Department of Energy’s Office of Science to expand his innovative research to develop superconducting accelerator components. These components could be applied in fields such as medicine, energy and discovery science.

Romanenko was named a recipient of a DOE Early Career Research Program award for his research on the properties of superconducting radio-frequency cavities made of niobium metal. The prestigious award, which is given annually to the most promising researchers in the early stages of their careers, provides $2.5 million over five years to continue work in the specified area.

“Dr. Romanenko and his proposed research show great promise,” said Tim Hallman, associate director of the DOE’s Office of Science for Nuclear Physics. “We are pleased that he has been selected to receive an Early Career Research Program award to continue this work.”

Romanenko’s work could explain why some superconducting radio-frequency cavities are highly efficient at accelerating charged particles to high speeds while others are not, as well as prescribe new ways to make cavities even more powerful. His research links the performance of SRF cavities to the quality of the niobium metal used to make them. In particular, he investigates specific defects and impurities in niobium. Although scientists take painstaking measures to ensure that the niobium is completely pure and that the final SRF cavities are free from any contaminants, dust or debris, the cavities do not always perform the way that they should. Romanenko’s research is dedicated to finding out why that happens.

Romanenko began his research on SRF cavities as a graduate student at Cornell University, an institution known for its SRF research. He continued his award-winning work at Fermilab when he joined the laboratory in 2009 as a Peoples Fellow, a prestigious position given to scientists who have the potential to be leaders in their field. (More information at http://www.fnal.gov/pub/today/archive_2011/today11-03-02.html)

Through his research, Romanenko found that a new, previously unexplored, type of defect near the cavity surface may result in surface differences that are responsible for a cavity’s inferior performance. What he found was surprising: the defect sites often contained niobium-hydrogen compounds, which might form when the cavities are prepared for operation. Specifically, he was able to pinpoint the problematic area to the first 40 nanometers of a cavity’s surface, a thickness equivalent to 120 layers of niobium atoms.  
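
As a quick sanity check on that last number (using the textbook value of roughly 0.33 nm for niobium's lattice constant, which is not quoted in the press release), 40 nm does indeed correspond to roughly 120 atomic layers:

```python
# Back-of-the-envelope check: niobium (bcc) has a lattice constant of roughly
# 0.33 nm, so a 40 nm deep surface region spans about 40 / 0.33 ≈ 120 layers.
depth_nm = 40.0
lattice_constant_nm = 0.33   # approximate textbook value for Nb
print(round(depth_nm / lattice_constant_nm))   # -> 121
```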

“The technology of these cavities has developed so fast recently that it is ahead of the corresponding science,” Romanenko explained. “We know how to make them work to a certain level of performance, but do not necessarily understand the full physics behind why they do so. I hope to understand why cavities behave in certain ways first, improve on this and then apply what I learn to other materials.”

If Romanenko can isolate the specific nanostructural effects that cause problems in cavities, then Lance Cooley, Romanenko’s supervisor and head of the new Superconducting Materials Department in Fermilab’s Technical Division, is prepared to direct other scientists to develop ways to prevent or control them and transfer that knowledge to industry. This could someday make it possible to mass-produce nearly perfect niobium cavities as well as lay the groundwork for cavities made from other superconducting materials that can perform at higher temperatures and accelerating fields. Such high-performance cavities—strung together to create powerful, intense particle beams—would lead to accelerators that can be used in industry, in hospitals and at research institutions. These accelerators are needed, for example, to produce a range of radioisotopes for medical diagnostics and have the potential to treat nuclear waste, among other applications. (More information at http://www.acceleratorsamerica.org/applications/index.html)

Strung together like the pearls of a necklace and cooled to ultralow temperatures, SRF cavities can accelerate particles with high efficiency. Photo: Fermilab (Click on image for larger version)

“This award recognizes the high caliber of research that takes place at Fermilab,” Cooley said. “It is because of the laboratory’s existing world-class research program that Alex’s research is likely to succeed.”

The monetary award will cover part of Romanenko’s research efforts, fund a postdoctoral associate and a part-time technician, and pay for advanced analysis techniques used to examine surfaces over the next five years.

Fermilab is a national laboratory supported by the Office of Science of the U.S. Department of Energy, operated under contract by Fermi Research Alliance, LLC.

The DOE Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov


No Higgs is good Higgs!

Tuesday, August 16th, 2011

Much has been said about the Higgs boson, mostly how great it would be to find it. But what if we do not find it? Could that be useful? In fact, yes, that would be a great discovery too.

Finding the Higgs, or proving beyond any doubt that it does not exist, would be equally useful, as Rolf Heuer, CERN Director General, reminded the audience at the recent European Physical Society meeting. The first outcome would be immediately gratifying: job done! But excluding a Higgs boson, at least one of the kind predicted by the Standard Model, our current theoretical framework, would put theorists on the right track. What we need is not the Higgs boson per se but an understanding of how it all works.

The Higgs boson is the simplest solution to the Brout–Englert–Higgs mechanism, a mathematical trick named after the three physicists who developed it. This is what we need to provide mass to all elementary particles such as the electrons, the quarks, and all the heavy bosons we have seen here at CERN and elsewhere, namely, the W and Z bosons. Without this mechanism, the current equations we have to describe elementary particles only produce massless particles. And we know these particles all have a mass, as witnessed countless times in our detectors.

The Brout–Englert–Higgs mechanism also solves another fundamental problem called “unitarity violation”. In simple words, unitarity means that the sum of all probabilities is equal to one. Imagine having marbles of three different colors in a bag. Say we have 20% red ones and 50% yellow ones. I do not need to tell you the remainder, the green marbles, account for 30%. We can all guess that. But if unitarity was violated in this case, the green marbles would account for something different from 30%. The sum of all probabilities would not be 100%. You’d think I was losing my marbles…

This is exactly what theorists know will happen to the sum of all the different ways a particle can decay: it will start to differ from one once we look at higher energies. The Brout–Englert–Higgs mechanism stabilizes all that and restores some minimal sanity. Without it, we know the equations we use to describe the world of elementary particles will cease to work.

This is precisely why theorists are so confident we are bound to find something new with the Large Hadron Collider, or LHC. This new accelerator is powerful enough to bring us into the energy regime where we know the equations start failing. So new particles, linked to layers of the theory we have not yet been able to explore, are bound to show up. They are needed to stabilize the current theoretical framework we use to describe nature. We know something is missing; we simply don’t quite know what this new something might be.

Many models already exist that would preserve unitarity and explain the origin of mass. It need not be the Standard Model Higgs boson; that is just the simplest explanation. It could be something more complex, in the form of “Technicolor” or extra dimensions. There are many models out there; we simply need to be nudged in the right direction. What we discover with our detectors will reveal which model is the right one.

Finding the Higgs or not finding it will tell us which way to go.

— Pauline Gagnon

To be alerted of new postings, follow me on Twitter: @GagnonPauline


arXiv.org Anniversary

Tuesday, August 16th, 2011

An interesting article about the history of arXiv.org was brought to my attention today.  I always find it interesting to read about historical events in physics, because the stories are often fascinating, and yet rarely make it into the textbooks.  Here is the link to the paper on the arXiv itself, written by the creator himself.

http://arxiv.org/abs/1108.2700


No Higgs is good Higgs! (in French)

Tuesday, August 16th, 2011

Much has been said and written about the importance of the Higgs boson. But what if we do not find it? In fact, that would be just as great a discovery.

Finding the Higgs, or proving beyond any doubt that it does not exist, would be two equally important discoveries, as Rolf Heuer, CERN Director General, reminded the audience at the recent European Physical Society conference. Of course, finding it would give us immediate satisfaction. Excluding it, at least the one predicted by the Standard Model, the current theoretical framework, would put theorists on the one right track. What we need is not the Higgs boson in particular, but to understand how nature really works.

The Higgs boson is only the simplest and most elegant solution to the Brout–Englert–Higgs mechanism, the mathematical tool we need to explain how elementary particles (electrons, quarks, Z and W bosons, etc.) acquire a mass. Without this mechanism, our current equations produce only massless particles, while we know very well that they have one, as has been confirmed time and again in our detectors, at CERN and elsewhere.

But the Brout–Englert–Higgs mechanism also solves another equally fundamental problem called “unitarity violation”. Put simply, unitarity means that the sum of all probabilities is equal to one. Take the example of a bag filled with marbles of three colors. If I tell you that 20% of them are red and 50% are yellow, everyone will have guessed that the green ones account for the remaining 30%. But if unitarity were not preserved in this example, the green marbles would amount to something other than 30% and the sum would differ from 100%! Enough to make you lose your marbles…

Well, this is precisely what the theory predicts will happen to the sum of all the different ways a particle can decay. That sum will no longer equal one once we observe these decays at higher energies. The Brout–Englert–Higgs mechanism stabilizes all of this and puts everything back on solid rails.

This unitarity-violation problem has a major consequence. To solve it, we know we must add new layers to the current theories, which guarantees that we will soon find something new with the Large Hadron Collider, or LHC. Since this accelerator is the most powerful ever built, it gives us access to the energies corresponding to the regime where our equations start to fail. New particles are linked to these theoretical refinements and are therefore bound to show up sooner or later. These refinements are needed to stabilize the current theory that lets us describe the world around us. We know something is missing; all that remains is to determine what it is.

Several other models preserve unitarity and explain the origin of mass without invoking the Higgs boson. The Higgs is just the simplest solution. It could be something far more complex, such as “Technicolor” theory or the presence of new dimensions. It is up to us to determine which model corresponds to reality.

So whether we find the Higgs boson or rule it out, it will put us on the right track.

— Pauline Gagnon

To be notified when new posts appear, follow me on Twitter: @GagnonPauline


This story first appeared as a press release on Interactions.org, issued by Brookhaven National Laboratory, the Institute of High Energy Physics, and Lawrence Berkeley National Laboratory. For the full version and contact information, go here.

The Daya Bay Reactor Neutrino Experiment has begun its quest to answer some of the most puzzling questions about the elusive elementary particles known as neutrinos. The experiment’s first completed set of twin detectors is now recording interactions of antineutrinos (antipartners of neutrinos) as they travel away from the powerful reactors of the China Guangdong Nuclear Power Group in southern China.

Neutrinos are uncharged particles produced in nuclear reactions, such as in the sun, by cosmic rays, and in nuclear power plants. They come in three types or “flavors” — electron, muon, and tau neutrinos — that morph, or oscillate, from one form to another, interacting hardly at all as they travel through space and matter, including people, buildings, and planets like Earth.

The start-up of the Daya Bay experiment marks the first step in the international effort of the Daya Bay Collaboration to measure a crucial quantity related to the third type of oscillation, in which the electron-flavored neutrinos morph into the other two flavors of neutrinos.


Some days ago, Freija Descamps, one of the “winter-overs” taking care of IceCube during the months-long South Pole night, sent to us in the North this great picture of the aurora australis, and I wanted to share it with you.

Serving as a background to the aurora is the pale glow of the Milky Way in the region of Sagittarius, with Scorpius hanging on top of it. Behind the curtain of stars, gas, and dust of the Sagittarius region sits the center of our home galaxy, a candidate source of neutrinos and energetic cosmic rays.


The Relativity of Wrong

Friday, August 12th, 2011

– By Byron Jennings, Theorist and Project Coordinator

Isaac Asimov (1920–1992) was a prolific writer of science fiction and popular science books. Many people of my generation had their introduction to science through his writings. While he is not considered a philosopher of science, one of his articles should be required reading for anyone hoping to understand how science works. The article, with the same title as this post, first appeared in The Skeptical Inquirer (vol. 14, p. 35, 1989) and later in a book of the same name. The following is my take on his point.

To clarify the relativity-of-wrong concept, consider the value of π. A simple approximation is π = 3 (1 Kings 7:23). This is wrong, but by less than 5%. A better approximation is π = 3.14. The error here is about 0.05%. Strictly speaking, both values are wrong. However, the second value is less wrong than the first. As a graduate student (in the olden days, as my daughter would say) I used π = 3.141592653589793 in my computer programs. This is still wrong, but much less wrong than the previous approximations. There was no point using a more accurate value of π since the precision of the computer was 15 digits (single precision on a CDC computer). None of these values of π is absolutely correct. That would take an infinite number of digits, so all are wrong. However, the earlier values are more wrong than the later ones. All are useful in the appropriate context. Hence, the relativity of wrong.
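
The “less wrong” ordering of those approximations is easy to check numerically; the short Python snippet below simply prints the relative error of each of the three values quoted above (the last one agrees with math.pi to the full double precision available, so its reported error comes out as zero):

```python
import math

# Relative error of successively better approximations to pi: all are wrong,
# but each is less wrong than the one before.
for approx in (3.0, 3.14, 3.141592653589793):
    rel_error = abs(approx - math.pi) / math.pi
    print(f"pi ~ {approx:<20} relative error = {rel_error:.2e}")
```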

The same logic applies to models. Consider the flat-earth model. For a person who never travels farther than 100 km from his birthplace, the flat-earth model is quite accurate: the curvature of the earth is too small to be detected. When that person is a sailor, however, the question of the shape of the earth takes on more urgency. The flat-earth model suggests questions like: Where is the edge of the earth? What will happen if I get too close? For the world traveler, the flat-earth model is not sufficient. The spherical-earth model is more useful, has greater predictive power, and suggests a wider range of questions: Does the earth rotate? Does it move around the sun, or does the sun move around the earth? But the statement that the earth is exactly spherical is also wrong. Not as wrong as the statement that the earth is flat, but still wrong. However, being not exactly correct does not make it useless. A spherical globe allows a much better understanding of airplane routes than a flat map. But the earth is not a perfect sphere. It is flattened at the poles (a quadrupole deformation). Smaller still is its octupole (pear-shaped) deformation. The exact shape of the earth will never be measured, as that would require, like π, an infinite number of digits, and in an infinite number of parameters besides. It would also be useless. What is needed is a description sufficiently accurate for the purpose it is being used for. Science is the art of the appropriate approximation. While the flat-earth model is usually spoken of with derision, it is still widely used. Flat maps, in atlases or road maps, use the flat-earth model (except my road map, which is a crinkled-earth model) as an approximation to the more complicated shape.

Classical physics (Newton’s laws of motion and Maxwell’s equations of electromagnetism), although superseded by relativity and quantum mechanics, is still useful and still taught. The motion of the earth around the sun is still given by Newton’s laws, and classical optics still works. However, quantum mechanics has a much wider realm of reliability. It can describe the properties of the atom and the atomic nucleus, where classical mechanics fails completely.

Animals reproducing after their kind is the few-generation limit of evolution. Over the time scale of a few human generations we do not see new kinds arising; the offspring resemble their parents. Evolution keeps the successes of the previous model: cats do not give birth to dogs, nor monkeys to people, even in evolution. The continuity between animals-reproducing-after-their-kind and evolution is not sufficiently appreciated by the foes of evolution, and perhaps not by its proponents either.

There is a general trend: new models reduce to the previous model over a restricted range of observations. Ideally the new model would contain all the successes of the old model, but this is not always the case. Overall, though, the new model must have more predictive power, otherwise its adoption is a mistake. Thus we have the view of science producing a succession of models, each less wrong (none is 100% correct) than the one it replaces. We see progress: new models are constructed with greater and greater predictive power. The ultimate aim is to have a model of everything with a strictly limited number of assumptions, one that would describe or predict all possible observations. Quantum indeterminacy suggests that such a model does not exist. However, progress in science keeps moving us closer to this ultimate, probably unreachable, yet still enticing goal.
