Monthly Archives: April 2014

What’s up at the Centre for Unintelligent Design?

This picture shows a German postage stamp featuring Tyrannosaurus rex, a large theropod dinosaur that lived during the Late Cretaceous Period (around 66 million years ago).

T. rex had arms too short to carry food to its mouth. Intelligent design?

Is the AIDS virus intelligently designed? Casey Luskin, Program Officer of the Discovery Institute’s Center for Science and Culture, certainly seems to think so. Indeed, he could hardly think otherwise, since the virus is full of what he calls Complex Specified Information. But let him speak for himself:

It seems to me to be a virus which is finely-tuned for killing humans. You might not like its function, but that doesn’t mean it wasn’t designed. Same goes for guns, nuclear bombs, and genetically engineered viruses. All kill because they were intelligently designed to efficiently carry out that mission.


Scanning electron micrograph of HIV-1 (in green) budding from cultured lymphocyte. Multiple round bumps on cell surface represent sites of assembly and budding of virions.

This is part of his more general argument, reiterated in the UK by the Discovery Institute’s local franchise, Glasgow’s Centre for Intelligent Design. Casey’s formulation runs as follows:

We detect design by finding features in nature which contain the type of information which in our experience comes from intelligence. This is generally called complex and specified information (CSI). In our experience, CSI only comes from a goal-directed process like intelligent design. Thus, when we detect high levels of complex and specified information in nature, we can infer intelligent design.

A powerful syllogism, which can be summarised as follows:

All complex things are either (a) biological or (b) intelligently designed. Therefore all biological things are intelligently designed.

Notice that this is a purely scientific argument, free from religious or other metaphysical overtones. Casey’s Center, like the Discovery Institute that hosts it, is concerned only with science and not at all with religion, which is why it seeks “To replace materialistic explanations with the theistic understanding that nature and human beings are created by God.” I came across these masterpieces of reasoning when revisiting the Centre for Unintelligent Design, curated by my good friend Keith Gilmour, a Religious and Moral Education teacher here in Glasgow, and which houses, among other examples of lack of intelligence, Keith’s correspondence with several Intelligent Design advocates.

The star-nosed mole, unintelligently and thus self-referentially selected as the mascot of the Centre for Unintelligent Design. The description of this creature’s foraging reads like an April Fool’s joke, but isn’t.

The Centre itself is, self-referentially, unintelligently designed. In fact, the examples displayed there have little in common, except that they describe things one could easily imagine having been designed better. Some entries reflect completely unreasonable demands on Nature. Knees that wear out, for example; one could hardly expect them to last forever. Or teeth that rot, though they probably wouldn’t if we would only stick to a palaeolithic diet. Others reflect conflicts of interest between us and other organisms, such as those that cause potato blight (yes, Casey, the blight pathogen is chock-a-block with CSI), herpes, legionnaires’ disease, malaria, leprosy, tuberculosis, smallpox, cholera, tapeworm, athlete’s foot, bubonic plague and countless other highly successful infections. Here, as with the AIDS virus, I have to agree with Casey rather than Keith. The designer, if such there be, has shown no lack of creative intelligence. To take another item from Keith’s list, the intelligence required to design the liver fluke, which goes through seven separate phases and three different hosts, must be very creative indeed.

Assorted trematode parasites, from the 1911 Encyclopedia Britannica

Most interesting in the present context are those that make sense in terms of evolutionary history, but not otherwise. Of these the most comical (except, perhaps, to a giraffe) is the recurrent laryngeal nerve, which happened to go behind the sixth gill arch in our fishy ancestors. No great harm in that. Unfortunately, in mammals the sixth gill arch gives rise to the aorta, so that the combined vagus – laryngeal nerve branch has to loop down to the level of the heart in its path from cranium back up to voice box. A distance, in the case of the giraffe, of some 20 feet, corresponding to a transmission time for a motor nerve of up to a quarter of a second. Fortunate, perhaps, that giraffes are rarely called on to sing Mozart arias. The most ambiguous is the human vermiform appendix, whose main function was at one time thought to be the provision of gainful employment for surgeons. Related to the caecum of our more herbivorous ancestors, it has shrunk to the point where it could not narrow any further without becoming prone to dangerous blockages. However, it is now known to have a function, as a refuge in case of infection for the bowel bacteria on which we depend for digestion. Typical of how a so-called vestigial organ can acquire a secondary role. No plan, just messing around, and what works, works.
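As a rough check on that quarter-of-a-second figure, here is a back-of-the-envelope calculation. The nerve length comes from the post; the conduction velocity is an assumption (a value of about 25 m/s reproduces the quarter-second figure; real motor fibre velocities vary widely).

```python
# Back-of-the-envelope check on the giraffe's recurrent laryngeal nerve delay.
# Assumed values: roughly 20 ft of nerve, and a conduction velocity of
# about 25 m/s for a large motor fibre (an assumption, not a measurement).

FEET_TO_METRES = 0.3048

nerve_length_m = 20 * FEET_TO_METRES          # ~6.1 m of nerve
conduction_velocity = 25.0                    # m/s, assumed

delay = nerve_length_m / conduction_velocity  # one-way signal delay
print(f"One-way delay: {delay:.2f} s")        # ~0.24 s, about a quarter of a second
```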

Another structure that may have been on the way to becoming vestigial is the forelimbs of T. rex, too short even to lift food into its mouth. A very recent paper suggests why. Matching up the attachment points on T. rex’s skull to the corresponding muscles in a modern bird (you do admit, don’t you Casey, that birds are descended from dinosaurs?) gives a neck so flexible that the mouth could be used for grasping, making the hands unnecessary for this purpose. As numerous commentators have pointed out, this would have had the further advantage of protecting the tyrannosaur’s moral purity, by preventing it from masturbating.

There are more poignant examples on Keith’s list. I have a son and granddaughter strongly affected by the coeliac condition, in which malabsorption caused by sensitivity to wheat gluten can, unless the diet is modified accordingly, lead to seriously arrested physical and mental development. Such an outcome must have been commonplace during the 10,000 years or so between the time wheat became a major part of our diets and the 1940s, when the cause of the condition was understood.

Even more poignant, how animals, including us, give birth. Through the pelvic girdle, no problem, until brains evolve to the point that heads can only squeeze through with difficulty. An evolutionary process that, of course, happened to coincide with the development of bipedalism, making it impossible to expand the pelvic girdle indefinitely to cope with an enlarging skull. And so we end up with a typical evolutionary compromise; giving birth is an ordeal, but usually takes place without permanent damage. Some difficult births, however, lead to perinatal anoxia, or other forms of brain damage during delivery. To say nothing of death in childbirth, a major hazard to both mother and child until relatively recently. All of which could have been avoided by an intelligent designer simply placing the birth canal above, rather than below, the pubic bone. And anyone who quotes Genesis 3:16 as justification is a moral monster.

HIV image Photo Credit: C. Goldsmith, from the Centers for Disease Control and Prevention’s Public Health Image Library. T. rex stamp through Sciencekids

Antifragility and Anomaly: Why Science Works

Scientific theories are antifragile; they thrive on anomalies.

Some things are fragile – they break. Some are robust – they can withstand harsh treatment. But the most interesting kind are antifragile, emerging strengthened and enriched from challenges. Whatever does not kill them makes them stronger. Science is as successful as it is, because science as a whole, and even individual scientific theories, are antifragile.

We owe the term “antifragile” to the financier and thinker Nassim Nicholas Taleb, author of Fooled by Randomness and The Black Swan. Taleb describes his latest book, Antifragile: Things that Gain from Disorder, as the intellectual underpinning of those earlier works, since it formalises his earlier reflections. Antifragility is the true opposite of fragility. Unlike mere robustness, it is the ability to actually profit from misadventure. A porcelain cup is fragile, and shatters if dropped. A plastic cup, being robust, will not be any the worse for such an experience, but it will not be any the better for it either. Contrast the human immune system. Being antifragile, it is improved by stresses. Having been challenged by an infection, it will be primed to respond more effectively to similar challenges in the future, because it has learned to recognise the infection as an invader. There are deep connections between randomness, uncertainty, novelty, information, and learning, and natural selection in an uncertain world favours antifragile systems because they learn from experience [1].

Good safety systems are antifragile. Accidents will happen, and of their nature cannot always be foreseen, but each accident can be analysed retrospectively and procedures adjusted to anticipate similar challenges in the future. Moreover, experience shows that experience is more persuasive than foresight, even when the mishap itself has actually been foreseen.

This may not be the very best moment to mention the fact, but air transport safety systems are antifragile. Air travel is far safer than it was a generation ago, because we have learned from past mistakes. The mistakes were part of the process, if only because brutal reality is more effective than prediction at promoting change. Thus securely locking the cockpit door against access from the cabin had been discussed earlier, but only became standard practice after the 9/11 hijackings, and after the recent Malaysia Airlines case we can expect the obviously overdue checking of passports against Interpol’s database of stolen and lost travel documents.

Number of deaths from airline accidents per year (red line is rolling 5-year average); note steady decline since the 1970s, despite greatly increased traffic. From http://aviation-safety.net/statistics/period/stats.php?cat=A1

Among the things that Taleb lists as fragile are scientific theories. Scientific theories are indeed vulnerable to disproof, since they must be tested against reality. The simplest way to describe this is to say that they must be falsifiable by experience, a criterion associated with the name of Karl Popper. In the popular imagination at least, however well established the theory may be from past experience, it could at any time be refuted in the future by a single observation that differs from what is theoretically predicted. If so, scientific theories would indeed be fragile, since they could not survive a single shock.

But that is not what really happens. Well-established theories have already explained a wide range of observations, and will not readily be destroyed by a single counterexample. On the contrary, they usually emerge all the stronger for accommodating to it. If the theory already has a great deal going for it, we do not regard the counter-example as a refutation, but rather as an anomaly. It is a deviation from regular behaviour (Greek: an-, negation, homalos, even) but not necessarily a sufficient reason to deny that the regularity exists. Although the anomaly seems to be an imperfection, we may still be able to interpret it in a way that deepens and extends our understanding of the theory, and our knowledge of the world itself. When we can do this, the theory has not been damaged by being challenged; quite the reverse. It has emerged stronger, and our confidence in it is enhanced. New challenges cannot be foreseen, whatever scientists may have to pretend when writing their funding proposals, but for that very reason, in the process of responding to them, the theory generates new information. This is exactly the kind of behaviour that Taleb calls antifragile.

A few examples will illustrate the point. Scientists themselves have long recognised the importance of anomalies in discovery; as Isaac Asimov put it,

The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’, but ‘That’s funny …’

Take for instance Newton’s theory of planetary motion (I owe this example to Philip Kitcher’s book, Abusing Science). In this theory, to a first approximation, planets orbit the Sun in elliptical orbits, under the influence of the Sun’s gravitational attraction. But this is not quite what happens, because the planets also exert gravitational attraction on each other. A more exact description of their motion needs to take this additional effect into account, and the theory tells us how to do this, using the inverse square law for gravitational attraction. The orbit of Uranus, for example, is measurably perturbed by the gravitational influences of Jupiter and Saturn. But calculations on this basis did not lead to accurate predictions of its path. A direct conflict between theory and observation, but did this destroy Newtonian celestial mechanics? Did people throw up their hands in despair and abandon the attempt to predict the next lunar eclipse? Of course not. There was indeed an anomaly, but this was hardly sufficient reason to discard a theory that tied together the motions of the moon, the planets, and even the proverbial apple. Indeed, the theory itself told astronomers what to look for: another planet waiting to be discovered, whose position could itself be calculated from the “error” in the calculated orbit of Uranus. And there it was, a new planet, which we now call Neptune. This was not a refutation of the theory, but a further confirmation. The theory, in other words, emerged stronger from the challenge posed by the deviation from its initial predictions. It had displayed antifragility. The Newtonian description of the planets and their motions had survived, and had gained further information – the existence of a major new planet, no less – in the process. Taleb himself mentions this case, but dismisses it as untypical; ironically, since the importance of the untypical is central to his own thinking.
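To get a feel for the size of these perturbations, here is a rough sketch comparing the Sun’s inverse-square pull on Uranus with the pulls of Jupiter and Neptune at roughly their closest approaches. The masses and distances are standard round figures and the geometry is deliberately crude; only the orders of magnitude matter.

```python
# Order-of-magnitude comparison of gravitational accelerations on Uranus,
# using the inverse-square law a = G*M/r**2. Distances are crude
# closest-approach estimates; only the ratios are meant to be informative.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
AU = 1.496e11          # astronomical unit, m

M_SUN = 1.989e30       # kg
M_JUPITER = 1.898e27   # kg
M_NEPTUNE = 1.024e26   # kg

def accel(mass_kg, distance_m):
    """Gravitational acceleration produced by a point mass at a given distance."""
    return G * mass_kg / distance_m**2

a_sun = accel(M_SUN, 19.2 * AU)               # Sun, at Uranus's orbital radius
a_jup = accel(M_JUPITER, (19.2 - 5.2) * AU)   # Jupiter, near closest approach
a_nep = accel(M_NEPTUNE, (30.1 - 19.2) * AU)  # Neptune, near closest approach

print(f"Sun:     {a_sun:.2e} m/s^2")
print(f"Jupiter: {a_jup:.2e} m/s^2  (~{a_jup/a_sun:.0e} of the Sun's pull)")
print(f"Neptune: {a_nep:.2e} m/s^2  (~{a_nep/a_sun:.0e} of the Sun's pull)")
# The planetary perturbations come out at a few parts in a thousand or less
# of the Sun's attraction: small enough for ellipses to be a good first
# approximation, large enough to show up in careful positional measurements.
```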


The Sun and planets of the Solar System. Sizes are to scale, distances and illumination are not
(source: Wikipedia)

Now contrast this with the problem posed by the orbit of the planet Mercury. Again, the orbit deviated from the Newtonian prediction. But this time, the search for a new planet to account for the discrepancies was unsuccessful. It was only after the development of Einstein’s general theory of relativity that it became possible to explain the planet’s motion.
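The deviation in question is the slow precession of Mercury’s perihelion, of which about 43 seconds of arc per century could not be accounted for by the pulls of the other planets. As a sketch of how well general relativity closes the gap, the calculation below plugs standard orbital figures into the leading-order relativistic formula, Δφ = 6πGM/(c²a(1−e²)) per orbit; the constants are textbook values, not numbers taken from this post.

```python
import math

# Leading-order general-relativistic precession of Mercury's perihelion:
#   delta_phi = 6*pi*G*M_sun / (c^2 * a * (1 - e^2))  radians per orbit.
# Orbital elements and constants are standard textbook values.

GM_SUN = 1.327e20        # G * M_sun, m^3/s^2
C = 2.998e8              # speed of light, m/s
A = 5.79e10              # Mercury's semi-major axis, m
E = 0.2056               # Mercury's orbital eccentricity
PERIOD_DAYS = 87.97      # Mercury's orbital period

dphi_per_orbit = 6 * math.pi * GM_SUN / (C**2 * A * (1 - E**2))   # radians

orbits_per_century = 36525 / PERIOD_DAYS
arcsec_per_radian = 180 * 3600 / math.pi

precession = dphi_per_orbit * orbits_per_century * arcsec_per_radian
print(f"Relativistic precession: {precession:.1f} arcsec per century")  # ~43
```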

So what follows from this latter case? Do we say that Newton’s theory was wrong? No. We say that it was incomplete. It provides an adequate description of celestial mechanics, provided speeds are not too high (compared with the speed of light) and gravitational fields are not too strong. When we say this, we have not subtracted from Newton’s theory. On the contrary, we have added to it, by describing the conditions under which we can expect it to break down, and by subsuming it in a larger, more general, theory. It has been enhanced, as a jewel is enhanced by its setting.

Actually, if we are looking for extreme accuracy, we need to take into account relativistic refinements to Newton even when discussing everyday objects. Otherwise, we could not have a global positioning system good enough to guide a tractor without steering it into a ditch.
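As a worked example of what those refinements amount to: for a GPS satellite, special relativity slows the onboard clock (it is moving fast), while general relativity speeds it up (it sits higher in the Earth’s gravitational well). The sketch below uses the standard leading-order formulas with textbook values for the orbit; the familiar result is a net drift of roughly 38 microseconds a day which, left uncorrected, would translate into kilometres of position error.

```python
import math

# Leading-order relativistic clock corrections for a GPS satellite.
# Special relativity:  fractional rate change ~ -v^2 / (2 c^2)
# General relativity:  fractional rate change ~ +GM_E * (1/R_E - 1/r) / c^2
# Constants and orbital figures are standard textbook values.

GM_EARTH = 3.986e14      # m^3/s^2
R_EARTH = 6.371e6        # mean Earth radius, m
R_ORBIT = 2.656e7        # GPS orbital radius (~20,200 km altitude), m
C = 2.998e8              # speed of light, m/s
SECONDS_PER_DAY = 86400

v = math.sqrt(GM_EARTH / R_ORBIT)                     # circular orbital speed, ~3.9 km/s

special = -v**2 / (2 * C**2)                          # satellite clock runs slow
general = GM_EARTH * (1/R_EARTH - 1/R_ORBIT) / C**2   # satellite clock runs fast

net_per_day = (special + general) * SECONDS_PER_DAY   # seconds gained per day
print(f"Net clock drift: {net_per_day*1e6:.0f} microseconds/day")          # ~ +38
print(f"Range error if uncorrected: {net_per_day * C / 1000:.0f} km/day")  # ~ 11
```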

My next example comes from chemistry. Put together Lavoisier’s theory of chemical elements with Dalton’s theory of atoms, and you would expect that all the atoms of a particular element, wherever they were found, would have exactly the same properties. In particular, the density of the gas nitrogen, which depends on the mass of the individual nitrogen atoms, should be exactly the same whether the nitrogen is extracted from the atmosphere, or is chemically prepared by the decomposition of a nitrogen-containing compound, such as ammonia.

The densities of some gases, such as nitrogen and oxygen, are tantalisingly close to being whole number multiples of the density of hydrogen, and it was suspected (correctly) that there was a fundamental reason for this. That is why the physicist Lord Rayleigh, in the early 1890s, decided to re-measure the density of nitrogen as accurately as possible. Yet, however much care he took, he found that the density of the gas that he prepared from air was always measurably greater than that of the gas prepared from ammonia. In predicting the densities to be the same, Rayleigh had clearly made a mistake of some kind, but, as Taleb points out, mistakes contain information, which is why they are valuable. The mistake in this case is the assumption that once you have removed oxygen, water vapour, and other minor components from air, nitrogen is the only thing you are left with. The chemist William Ramsay realised that “atmospheric nitrogen” must also contain something else, and that something else turned out to be very interesting indeed. It was the gas argon, which actually makes up 1% of our atmosphere, but had hitherto escaped detection because of its lack of chemical reactivity. And not only was argon a new element, but it was a representative of an entire group of new elements, the noble gases, whose inertness provides a clue to the very nature of chemical bonding.
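To see how small a contamination Rayleigh was chasing, here is the calculation run forwards: treat “atmospheric nitrogen” as N₂ with a little argon mixed in, and ask how much denser than pure N₂ it should be. The 1.2% argon figure is roughly the fraction left once oxygen is removed from air; the density excess that comes out, about half a percent, is of the same order as the discrepancy Rayleigh actually measured.

```python
# Density of "atmospheric nitrogen" modelled as N2 with a small admixture
# of argon. At the same temperature and pressure, gas density is
# proportional to mean molar mass (ideal-gas behaviour assumed).

M_N2 = 28.014    # molar mass of N2, g/mol
M_AR = 39.948    # molar mass of Ar, g/mol

x_ar = 0.012     # argon mole fraction left once O2 etc. are removed (~1.2%)

m_pure = M_N2
m_mix = (1 - x_ar) * M_N2 + x_ar * M_AR

excess = (m_mix - m_pure) / m_pure
print(f"Mean molar mass of the mixture: {m_mix:.3f} g/mol")
print(f"Density excess over pure N2:    {excess*100:.2f} %")   # ~0.5 %
```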

A further anomaly was discovered in the early years of the 20th century. Different chemically pure samples of one particular element, lead, really did have different densities, depending on the source of the ore. Facts like this were involved in the discovery of isotopes, versions of the same element with different numbers of neutrons in the nucleus, and therefore different atomic masses. We now know that, contrary to classical atomic theory, the different isotopes of an element have very slightly different chemical reactivities, and that by examining the isotopic composition of a mineral, with the high accuracy possible in modern mass spectrometers, we can draw inferences about its geological history.
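The arithmetic behind those differing atomic weights is a simple weighted average. The sketch below computes the atomic weight of ordinary (“common”) lead from its usual isotopic abundances, and of a second composition enriched in ²⁰⁶Pb such as might come from a uranium-rich ore; that second set of abundances is purely illustrative, not data from any particular analysis.

```python
# Atomic weight of lead as a weighted average of its isotopes.
# Isotope masses and the common-lead abundances are standard; the
# "radiogenic" abundances are illustrative, not a real measurement.

ISOTOPE_MASSES = {204: 203.973, 206: 205.974, 207: 206.976, 208: 207.977}

COMMON_LEAD = {204: 0.014, 206: 0.241, 207: 0.221, 208: 0.524}
RADIOGENIC_LEAD = {204: 0.002, 206: 0.800, 207: 0.090, 208: 0.108}  # illustrative

def atomic_weight(abundances):
    """Weighted average of isotope masses by mole fraction."""
    return sum(ISOTOPE_MASSES[a] * frac for a, frac in abundances.items())

print(f"Common lead:     {atomic_weight(COMMON_LEAD):.2f}")      # ~207.2
print(f"Radiogenic lead: {atomic_weight(RADIOGENIC_LEAD):.2f}")  # noticeably lighter
```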

Finally, an example from geology, and more specifically from the radiometric dating of rocks. I chose this example because the anomaly is discussed and explained in the original scientific literature, despite which creationists shamelessly use it as a reason for rejecting the very science that it extends and validates.

The principle of radioactive dating is simple. Some elements are radioactive. They decay at a known rate, and by comparing the amount of decay product in a mineral grain with the amount of parent material remaining, we can infer how long the process has been going on, and hence the time since the formation of that particular grain. This method has been in use for over a century. Since many rocks contain more than one radioactive isotope, it is often possible to obtain more than one date for the same sample, and the fact that such dates are generally in excellent agreement enhances our confidence in the technique. In its simplest form, the method requires that both parent and daughter have been immobile, but more refined arithmetical techniques using non-radiogenic isotopes as internal standards can correct for such movement, and have been in use since the 1940s.
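In its simplest form the arithmetic is one line: if a parent isotope decays with decay constant λ = ln 2 / t½, and every daughter atom present came from that decay, the age is t = ln(1 + D/P)/λ. The sketch below applies this to the rubidium-strontium system; the half-life is the commonly quoted value, and the daughter-to-parent ratio is an illustrative number, not a measurement from any particular rock.

```python
import math

# Simple closed-system radiometric age:
#   t = ln(1 + D/P) / lambda,  with  lambda = ln(2) / half_life
# Assumes all of the daughter isotope was produced in situ by decay.
# (The refinements using non-radiogenic isotopes as internal standards
# correct for initial daughter and for movement; they are not shown here.)

def age_years(daughter_to_parent, half_life_years):
    decay_constant = math.log(2) / half_life_years
    return math.log(1 + daughter_to_parent) / decay_constant

RB87_HALF_LIFE = 4.88e10   # years, commonly quoted value for 87Rb -> 87Sr

# Illustrative ratio of radiogenic 87Sr to remaining 87Rb (not a measurement):
ratio = 0.016
print(f"Age: {age_years(ratio, RB87_HALF_LIFE)/1e9:.2f} billion years")  # ~1.1
```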

Cardenas basalt, at bottom of Grand Canyon. Photo Don Searls via Wikipedia

Now consider the Cardenas basalt, near the base of the Grand Canyon. This has been carefully dated using two distinct methods, rubidium-strontium (Rb/Sr) and potassium-argon. Rb/Sr is an excellent method for older rocks, because the rubidium parent has a long half life, and because both elements will be firmly bound in their mineral matrix. Potassium-argon is in this latter regard at the other extreme. Potassium occurs in rocks as a component of aluminosilicate minerals, which hold it firmly in place. Argon, on the other hand, is, as we have already seen, an unreactive gas. When rock is chemically reworked or melted, the argon is able to escape, and it is argon formed in this way that makes up 1% of our atmosphere.

Heat a rock sufficiently, and some of the argon will be able to escape between the grains, while its parent potassium, like most other components including rubidium and strontium, remains firmly in place. So if we now apply potassium-argon dating, we will get an underestimate of the true age, because we will have retained all of the parent but lost part of the product. By contrast, the Rb/Sr dating is unaffected, because both parent and daughter are immobile. This is exactly what was found in the case of the Cardenas basalt. Rb/Sr tells us that this basalt represents a lava flow from some 1100 million years ago, and dating by various methods of the rocks above and below, and through which it has penetrated, confirms this. The potassium-argon dates are younger; how much younger depends on the exact chemical composition of the part of the rock sampled, and hence on its viscosity during later heating (see here, p. 255, for details). The Grand Canyon has exposed these ancient rocks, buried elsewhere beneath a mile of sediments, and their detailed examination continues to yield new information about the tectonic forces at work in the distant past.
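The direction of the error follows straight from the age equation. Here is a deliberately simplified sketch (it treats potassium-argon like a single-branch decay, ignoring the branching-ratio bookkeeping that real K-Ar dating requires) showing how losing a fraction of the accumulated argon drags the apparent age below the true one, while the Rb/Sr age is untouched.

```python
import math

# How daughter loss lowers an apparent age. Deliberately simplified:
# the K-Ar system is treated like a single-branch decay, ignoring the
# branching-ratio corrections that real K-Ar dating uses.

def daughter_parent_ratio(true_age, half_life):
    """D/P ratio accumulated over true_age in a closed system."""
    lam = math.log(2) / half_life
    return math.exp(lam * true_age) - 1

def apparent_age(true_age, half_life, fraction_daughter_lost):
    """Age inferred after a fraction of the daughter has escaped."""
    lam = math.log(2) / half_life
    retained = (1 - fraction_daughter_lost) * daughter_parent_ratio(true_age, half_life)
    return math.log(1 + retained) / lam

K40_HALF_LIFE = 1.25e9   # years
TRUE_AGE = 1.1e9         # years, roughly the Rb/Sr age quoted above

for lost in (0.0, 0.25, 0.5, 0.75):
    t = apparent_age(TRUE_AGE, K40_HALF_LIFE, lost)
    print(f"Argon lost: {lost:4.0%}  ->  apparent age: {t/1e9:.2f} billion years")
```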

The elegant ellipses of planetary orbits are perturbed by their mutual interactions. The identical atoms of the early modern atomic theory turn out to be a mixture of different isotopes. The single date for the formation of a rock must at times be supplemented by other dates from its history. T. H. Huxley may have spoken of “The great tragedy of Science — the slaying of a beautiful hypothesis by an ugly fact,” but mature theories are in general neither as elegant nor as vulnerable as newly coined hypotheses. They will have undergone mutation and Darwinian evolution in the marketplace, and demonstrated their ability to survive, warts and all.

[1] Such is the central theme of this meandering, often insightful, and frequently infuriating book; for the review that most closely matches my own opinion, see here. Ironically, I found it more convincing at the level of general understanding than at the level of specific application, in direct contrast to the author’s own view of how ideas shape up.

An earlier version of this post was published at: http://www.3quarksdaily.com/3quarksdaily/2014/03/antifragility-and-anomaly-why-science-works.html

Rethinking the earliest mammals

In a recent Earth-Pages post, Steve Drury, of the Open University, reports on the latest developments in the ever-expanding tail of the giant Miocene Sciuridae of the Western Ghats of Karnataka. I cannot attempt to do justice to the surprising and revolutionary implications of these discoveries, not only for the squirrels themselves but for the primitive hamsters that appear to have been their prey. More, much more, can be expected from the laterite deposits now being unearthed, with major scatological and eschatological implications. My only concern is that these deposits may be insufficiently collateralised, and therefore liable to subsidence and enforced repossession before exploitation is complete. As for the hamsters themselves, the present author (me; not Steve Drury, who has not authorised and is not likely to authorise this account) suspects that although they may be distantly related to Felis domesticus cheshirensis, they do indeed belong to an early intelligently designed form of Cricetinae. If so, they would have been particularly nutritious because of their cheek-pouch contents, thus providing a balanced diet of carbohydrate and protein, in accord with current dietary guidelines, in a single meal.

Artist’s impression of T. sringeriensis (credit: network54.com). From Drury, op. cit.

Embargoed until 00:00:05, 01/04/2014 BST