Black swans and other deviations: like evolution, all scientific theories are a work in progress
Discussions about the nature of science and scientific theories are often confused by the outdated view that such theories are rendered false when anomalies arise. The notion of a scientific theory as a static object should be replaced with the more current view that it is part of a living research programme, which can broaden its scope into new areas.
For example, take the hypothesis that all swans are white, which seemed pretty good to Europeans until Dutch explorers found black swans in Australia in 1636. So what happens to our hypothesis? There are a number of options.
1) Redefine swan-ness to include whiteness. Then black swans aren’t really swans, and the hypothesis remains true by definition.
2) It’s been disproved. Discard it.
3) Compare different species of swan the world over, and see how well black swans fit in.
(1) is the least useful. Definitions can only tell us about how we are using words. They tell us nothing about the world that those words attempt to describe. (2) is based on the common-sense idea that hypotheses should be discarded when falsified by observation. This was the idea put forward by philosopher Karl Popper in the 1930s, to distinguish between science and pseudoscience.
Antifragility and Anomaly: Why Science Works
Scientific theories are antifragile; they thrive on anomalies.
Some things are fragile – they break. Some are robust – they can withstand harsh treatment. But the most interesting kind are antifragile, emerging strengthened and enriched from challenges. Whatever does not kill them makes them stronger. Science is as successful as it is, because science as a whole, and even individual scientific theories, are antifragile.
We owe the term “antifragile” to the financier and thinker Nassim Nicholas Taleb, author of Fooled by Randomness and The Black Swan. Taleb describes his latest book, Antifragile: Things That Gain from Disorder, as the intellectual underpinning of those earlier works, since it formalises his earlier reflections. Antifragility is the true opposite of fragility. Unlike mere robustness, it is the ability to actually profit from misadventure. A porcelain cup is fragile, and shatters if dropped. A plastic cup, being robust, will not be any the worse for such an experience, but it will not be any the better for it either. Contrast the human immune system. Being antifragile, it is improved by stresses. Having been challenged by an infection, it will be primed to respond more effectively to similar challenges in the future, because it has learned to recognise the infection as an invader. There are deep connections between randomness, uncertainty, novelty, information, and learning, and natural selection in an uncertain world favours antifragile systems because they learn from experience.
Good safety systems are antifragile. Accidents will happen, and of their nature cannot always be foreseen, but each accident can be analysed retrospectively and procedures adjusted to anticipate similar challenges in the future. Moreover, experience shows that experience is more persuasive than foresight, even when the mishap itself has actually been foreseen.
This may not be the very best moment to mention the fact, but air transport safety systems are antifragile. Air travel is far safer than it was a generation ago, because we have learned from past mistakes. The mistakes were part of the process, if only because brutal reality is more effective than prediction at promoting change. Thus locking off the cockpit door securely from the cabin had been discussed earlier, but only became standard practice after the 9/11 hijackings, and after the recent Malaysia Airlines case we can expect the obviously overdue checking of passports against the Interpol list of those stolen.
Number of deaths from airline accidents per year (red line is rolling 5-year average); note steady decline since the 1970s, despite greatly increased traffic. From http://aviation-safety.net/statistics/period/stats.php?cat=A1
Among the things that Taleb lists as fragile are scientific theories. Scientific theories are indeed vulnerable to disproof, since they must be tested against reality. The simplest way to describe this is to say that they must be falsifiable by experience, a criterion associated with the name of Karl Popper. In the popular imagination at least, however well established the theory may be from past experience, it could at any time be refuted in the future by a single observation that differs from what is theoretically predicted. If so, scientific theories would indeed be fragile, since they could not survive a single shock.
But that is not what really happens. Well-established theories have already explained a wide range of observations, and will not readily be destroyed by a single counterexample. On the contrary, they usually emerge all the stronger for accommodating it. If the theory already has a great deal going for it, we do not regard the counter-example as a refutation, but rather as an anomaly. It is a deviation from regular behaviour (Greek: an-, negation, homalos, even) but not necessarily a sufficient reason to deny that the regularity exists. Although the anomaly seems to be an imperfection, we may still be able to interpret it in a way that deepens and extends our understanding of the theory, and our knowledge of the world itself. When we can do this, the theory has not been damaged by being challenged; quite the reverse. It has emerged stronger, and our confidence in it is enhanced. New challenges cannot be foreseen, whatever scientists may have to pretend when writing their funding proposals, but for that very reason, in the process of responding to them, the theory generates new information. This is exactly the kind of behaviour that Taleb calls antifragile.
A few examples will illustrate the point. Scientists themselves have long recognised the importance of anomalies in discovery; as Isaac Asimov put it,
The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’, but ‘That’s funny …’
Take for instance Newton’s theory of planetary motion (I owe this example to Philip Kitcher’s book, Abusing Science). In this theory, to a first approximation, planets orbit the Sun in elliptical orbits, under the influence of the Sun’s gravitational attraction. But this is not quite what happens, because the planets also exert gravitational attraction on each other. A more exact description of their motion needs to take this additional effect into account, and the theory tells us how to do this, using the inverse square law for gravitational attraction. The orbit of Uranus, for example, is measurably perturbed by the gravitational influences of Jupiter and Saturn. But calculations on this basis did not lead to accurate predictions of its path. A direct conflict between theory and observation, but did this destroy Newtonian celestial mechanics? Did people throw up their hands in despair and abandon the attempt to predict the next lunar eclipse? Of course not. There was indeed an anomaly, but this was hardly sufficient reason to discard a theory that tied together the motions of the moon, the planets, and even the proverbial apple. Indeed, the theory itself told astronomers what to look for; another planet waiting to be discovered, whose position could itself be calculated from the “error” in the calculated orbit of Uranus. And there it was, a new planet, which we now call Neptune. This was not a refutation of the theory, but a further confirmation. The theory, in other words, emerged stronger from the challenge posed by the deviation from its initial predictions. It had displayed antifragility. The Newtonian description of the planets and their motions had survived, and had gained further information – the existence of a major new planet, no less – in the process. Taleb himself mentions this case, but dismisses it as untypical; ironically so, since the importance of the untypical is central to his own thinking.
The Sun and planets of the Solar System. Sizes are to scale, distances and illumination are not
Now contrast this with the problem posed by the orbit of the planet Mercury. Again, the orbit deviated from the Newtonian prediction. But this time, the search for a new planet to account for the discrepancies was unsuccessful. It was only after the development of Einstein’s general theory of relativity that it became possible to explain the planet’s motion.
So what follows from this latter case? Do we say that Newton’s theory was wrong? No. We say that it was incomplete. It provides an adequate description of celestial mechanics, provided speeds are not too high (compared with the speed of light) and gravitational fields are not too strong. When we say this, we have not subtracted from Newton’s theory. On the contrary, we have added to it, by describing the conditions under which we can expect it to break down, and by subsuming it in a larger, more general, theory. It has been enhanced, as a jewel is enhanced by its setting.
Actually, if we are looking for extreme accuracy, we need to take into account relativistic refinements to Newton even when discussing everyday objects. Otherwise, we could not have a global positioning system good enough to guide a tractor without steering it into a ditch.
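The size of that correction is easy to estimate. The sketch below works out the two relativistic effects on a GPS satellite clock from first principles; the orbital figures are round-number approximations, not exact mission values.

```python
# Rough estimate of the relativistic clock corrections for a GPS
# satellite. All orbital parameters are round approximations.
import math

GM = 3.986004e14        # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8        # speed of light, m/s
R_earth = 6.371e6       # mean Earth radius, m
r_orbit = 2.6561e7      # GPS orbital radius (~20,200 km altitude), m
seconds_per_day = 86400

# Special relativity: the satellite's orbital speed slows its clock.
v = math.sqrt(GM / r_orbit)                        # roughly 3.9 km/s
sr_shift = -(v**2 / (2 * c**2)) * seconds_per_day  # seconds lost per day

# General relativity: weaker gravity at altitude speeds the clock up.
gr_shift = (GM / R_earth - GM / r_orbit) / c**2 * seconds_per_day

net_shift = gr_shift + sr_shift    # net drift, seconds per day
position_error = c * net_shift     # ranging error if left uncorrected

print(f"Net clock drift: {net_shift * 1e6:.1f} microseconds per day")
print(f"Equivalent ranging error: {position_error / 1000:.1f} km per day")
```

The two effects pull in opposite directions, but the gravitational one wins: left uncorrected, the satellite clocks would drift by tens of microseconds, and hence the positional fix by kilometres, within a single day. Hence the ditch.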
My next example comes from chemistry. Put together Lavoisier’s theory of chemical elements with Dalton’s theory of atoms, and you would expect that all the atoms of a particular element, wherever they were found, would have exactly the same properties. In particular, the density of the gas nitrogen, which depends on the mass of the individual nitrogen atoms, should be exactly the same whether the nitrogen is extracted from the atmosphere, or is chemically prepared by the decomposition of a nitrogen-containing compound, such as ammonia.
The densities of some gases, such as nitrogen and oxygen, are tantalisingly close to being whole number multiples of the density of hydrogen, and it was suspected (correctly) that there was a fundamental reason for this. That is why the physicist Lord Rayleigh, in the early 1890s, decided to re-measure the density of nitrogen as accurately as possible. Yet, however much care he took, he found that the density of the gas that he prepared from air was always measurably greater than that of the gas prepared from ammonia. In predicting the densities to be the same, Rayleigh had clearly made a mistake of some kind, but, as Taleb points out, mistakes contain information, which is why they are valuable. The mistake in this case is the assumption that once you have removed oxygen, water vapour, and other minor components from air, nitrogen is the only thing you are left with. The chemist William Ramsay realised that “atmospheric nitrogen” must also contain something else, and that something else turned out to be very interesting indeed. It was the gas argon, which actually makes up 1% of our atmosphere, but had hitherto escaped detection because of its lack of chemical reactivity. And not only was argon a new element, but it was a representative of an entire group of new elements, the noble gases, whose inertness provides a clue to the very nature of chemical bonding.
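Rayleigh's discrepancy was small but stubborn, about half a percent, and a back-of-envelope calculation shows that an argon content of roughly 1% is exactly what it takes to produce it. The abundance figures below are rounded approximations.

```python
# Back-of-envelope check of Rayleigh's anomaly: "atmospheric nitrogen"
# is really a nitrogen/argon mixture, and argon's heavier atoms raise
# the measured density. Abundances are approximate round figures.
M_N2 = 28.014   # molar mass of N2, g/mol
M_Ar = 39.948   # molar mass of Ar, g/mol

# Dry air is roughly 78.1% N2 and 0.93% Ar by volume; once oxygen and
# the other components are removed, argon is about 1.2% of the residue.
x_Ar = 0.0093 / (0.781 + 0.0093)
M_mix = (1 - x_Ar) * M_N2 + x_Ar * M_Ar

excess = M_mix / M_N2 - 1
print(f"Argon fraction of 'atmospheric nitrogen': {x_Ar:.2%}")
print(f"Density excess over pure nitrogen: {excess:.2%}")
```

That half-a-percent excess is just what Rayleigh kept measuring, however carefully he purified his samples.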
A further anomaly was discovered in the early years of the 20th century. Different chemically pure samples of one particular element, lead, really did have different densities, depending on the source of the ore. Facts like this were involved in the discovery of isotopes, versions of the same element with different numbers of neutrons in the nucleus, and therefore different atomic mass. We now know that contrary to classical atomic theory, the different isotopes of an element have very slightly different chemical reactivities, and that by examining the isotopic composition of a mineral, with the high accuracy possible in modern mass spectrometers, we can draw inferences about its geological history.
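The arithmetic behind the lead anomaly is simple enough to sketch. In the toy example below the isotopic abundances are hypothetical, chosen only to show how two chemically identical samples can differ in mean atomic mass, and hence in density.

```python
# Illustrative sketch: two chemically pure lead samples with different
# isotopic make-up have different mean atomic masses. The abundances
# below are hypothetical, chosen only to show the size of the effect.
isotope_masses = {204: 203.973, 206: 205.974, 207: 206.976, 208: 207.977}

def mean_atomic_mass(abundances):
    # abundances: mapping isotope number -> fraction, summing to 1
    return sum(isotope_masses[i] * f for i, f in abundances.items())

common_lead = {204: 0.014, 206: 0.241, 207: 0.221, 208: 0.524}
# Lead extracted from a uranium-rich ore is enriched in radiogenic
# Pb-206, the end product of the uranium decay chain:
uranium_ore_lead = {204: 0.005, 206: 0.600, 207: 0.150, 208: 0.245}

print(f"Common lead:      {mean_atomic_mass(common_lead):.3f}")
print(f"Uranium-ore lead: {mean_atomic_mass(uranium_ore_lead):.3f}")
```

A difference in the third or fourth significant figure of the atomic mass is exactly the kind of discrepancy that careful density measurements revealed.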
Finally, an example from geology, and more specifically from the radiometric dating of rocks. I chose this example because the anomaly is discussed and explained in the original scientific literature, despite which creationists shamelessly use it as a reason for rejecting the very science that it extends and validates.
The principle of radioactive dating is simple. Some elements are radioactive. They decay at a known rate, and by comparing the amount of decay product in a mineral grain with the amount of parent material remaining, we can infer how long the process has been going on, and hence the time since the formation of that particular grain. This method has been in use for over a century. Since many rocks contain more than one radioactive isotope, it is often possible to obtain more than one date for the same sample, and the fact that such dates are generally in excellent agreement enhances our confidence in the technique. In its simplest form, the method requires that both parent and daughter have been immobile, but more refined arithmetical techniques using non-radiogenic isotopes as internal standards can correct for such movement, and have been in use since the 1940s.
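The arithmetic behind the method is straightforward: if every daughter atom was once a parent atom, then the daughter/parent ratio grows as exp(λt) − 1, and the age follows by taking a logarithm. A minimal sketch, using an approximate textbook value for the Rb-87 half-life:

```python
# Minimal sketch of the decay arithmetic behind radiometric dating:
# for a closed system, D/P = exp(lambda*t) - 1, so
# t = ln(1 + D/P) / lambda.
import math

def age_from_ratio(daughter_parent_ratio, half_life_years):
    decay_const = math.log(2) / half_life_years
    return math.log(1 + daughter_parent_ratio) / decay_const

# Rb-87 -> Sr-87, half-life roughly 4.88e10 years (approximate value).
rb_half_life = 4.88e10

# A measured radiogenic Sr-87 / Rb-87 ratio near 0.0157 corresponds
# to an age of about 1100 million years.
age = age_from_ratio(0.0157, rb_half_life)
print(f"Inferred age: {age / 1e6:.0f} million years")
```

Note how small the ratio is for a long-lived parent like rubidium; that is why such methods suit old rocks, where enough daughter has accumulated to measure precisely.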
Cardenas basalt, at bottom of Grand Canyon. Photo Don Searls via Wikipedia
Now consider the Cardenas basalt, near the base of the Grand Canyon. This has been carefully dated using two distinct methods, rubidium-strontium (Rb/Sr) and potassium-argon. Rb/Sr is an excellent method for older rocks, because the rubidium parent has a long half-life, and because both elements will be firmly bound in their mineral matrix. Potassium-argon is, in this latter regard, at the other extreme. Potassium occurs in rocks as a component of aluminosilicate minerals, which hold it firmly in place. Argon, on the other hand, is, as we have already seen, an unreactive gas. When rock is chemically reworked or melted, the argon is able to escape; indeed, argon released in this way is the source of the 1% of argon in our atmosphere.
Heat a rock sufficiently, and some of the argon will be able to escape between the grains, while its parent potassium, like most other components including rubidium and strontium, remains firmly in place. So if we now apply potassium-argon dating, we will get an underestimate of the true age, because we will have retained all of the parent but lost part of the product. By contrast, the Rb/Sr dating is unaffected, because both parent and daughter are immobile. This is exactly what was found in the case of the Cardenas basalt. Rb/Sr tells us that this basalt represents a lava flow some 1100 million years ago, and dating by various methods of the rocks above and below, and through which it has penetrated, confirms this. The potassium-argon dates are younger; how much younger depends on the exact chemical composition of the part of the rock sampled, and hence on its viscosity during later heating (see here, p. 255, for details). The Grand Canyon has exposed these ancient rocks, buried elsewhere beneath a mile of sediments, and their detailed examination continues to yield new information about the tectonic forces at work in the distant past.
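The effect of argon loss on the apparent age can be sketched numerically. The toy calculation below is deliberately simplified: it ignores the branching of potassium-40 decay (only a fraction of which yields argon) and uses a rounded half-life, so the numbers are illustrative only.

```python
# Sketch of why argon loss makes K-Ar ages too young while Rb-Sr is
# unaffected. Simplified: ignores the K-40 branching ratio and uses a
# rounded half-life, so the numbers are illustrative only.
import math

K40_HALF_LIFE = 1.25e9                  # years, approximate
lam = math.log(2) / K40_HALF_LIFE

def apparent_age(true_age, fraction_ar_lost):
    # Daughter/parent ratio a closed system would show at true_age...
    ratio = math.exp(lam * true_age) - 1
    # ...reduced by the argon that escaped during later heating.
    retained = (1 - fraction_ar_lost) * ratio
    return math.log(1 + retained) / lam

true_age = 1.1e9                        # ~1100 Myr, the Rb/Sr age
for lost in (0.0, 0.25, 0.5):
    print(f"{lost:.0%} Ar lost -> apparent age "
          f"{apparent_age(true_age, lost) / 1e6:.0f} Myr")
```

The more argon escaped, the younger the rock appears; the rubidium-strontium clock, whose parent and daughter both stayed put, keeps reading the true age.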
The elegant ellipses of planetary orbits are perturbed by their mutual interactions. The identical atoms of the early modern atomic theory turn out to be a mixture of different isotopes. The single date for the formation of a rock must at times be supplemented by other dates from its history. T. H. Huxley may have spoken of “The great tragedy of Science — the slaying of a beautiful hypothesis by an ugly fact,” but mature theories are in general neither as elegant nor as vulnerable as newly coined hypotheses. They will have undergone mutation and Darwinian evolution in the marketplace, and demonstrated their ability to survive, warts and all.
Such is the central theme of Taleb's meandering, often insightful, and frequently infuriating book; for the review that most closely matches my own opinion, see here. Ironically, I found it more convincing at the level of general understanding than at the level of specific application, in direct contrast to the author’s own view of how ideas shape up.
An earlier version of this post was published at: http://www.3quarksdaily.com/3quarksdaily/2014/03/antifragility-and-anomaly-why-science-works.html