Blog Archives

Atoms Old and New, 2: From Newton to Einstein

Part 1 of this series, “Atoms Old and New: Atoms in Antiquity” can be read here.

The transition to modern thinking

“It seems probable to me, that God in the beginning formed matter in solid, massy, hard, impenetrable, movable particles… even so very hard, as never to wear or break in pieces; no ordinary power being able to divide what God Himself made one in the first creation.” So wrote Sir Isaac Newton in his 1704 work, Opticks. Apart from the reference to God, there is nothing here that Democritus would have disagreed with. There is, however, very little that the present-day scientist would fully accept. In this and later posts, I discuss how atoms reemerged as fundamental particles, only to be exposed, in their turn, as less than fundamental.

The scientific revolution and the revival of corpuscular theory – 1543–1687

In 1543, on his death-bed, Nicolaus Copernicus received a copy of the first edition of his book, On the Revolutions of the Heavenly Bodies, in which he argued that the Sun, not the Earth, was the centre of what we now call the Solar System. In 1687, Isaac Newton published his Mathematical Principles of Natural Philosophy, commonly known as the “Principia”. With hindsight, we can identify the period between these events as a watershed in the way that educated people in the West thought about the world, and number the political revolutions in America and France, and the economic revolutions in agriculture and industry, among its consequences.

Before this scientific revolution, European thinking about nature still followed that of Aristotle. The Earth lay at the centre of the Universe. Objects on Earth moved according to their nature; light bodies, for instance, contained air or fire in their makeup, and these had a natural tendency to rise. Earth was corrupt and changeable, while the heavens were perfect and immutable, and the heavenly bodies rode around the centre on spheres within spheres because the sphere was the most perfect shape. By its end, Earth was one of several planets moving round the Sun in elliptical orbits, the movements of objects were the result of forces acting on them, the laws of Nature were the same in the heavens as they were on Earth, and all objects tended to move in straight lines unless some force deflected them from this path. The Universe ran, quite literally, like clockwork. This mechanical world-view was to last in its essentials until the early 20th century, and still remains, for better or worse, what many non-scientists think of as the “scientific” viewpoint.

Left: manuscript in which Galileo records his observations of the motion of the moons of Jupiter, dethroning Earth from its special position as centre of celestial motion. Below right: Galileo demonstrates the telescope to the Doge of Venice, fresco by Bertini.

In 1610, Galileo turned the newly-invented telescope on the heavens, discovered sunspots and moons round Jupiter, and realised that the belief in a perfect and unchanging1 celestial realm was no longer sustainable. Earlier, he had studied the motion of falling bodies. In work that he started in 1666, Newton showed how the laws of falling bodies on Earth, and the movement of heavenly bodies in a Copernican solar system, could be combined into a single theory. To use present-day language, the Moon is in free fall around the Earth, pulled towards it by the same force of gravity as a falling apple. This force gets weaker as we move away from Earth, according to the famous inverse square law, which says that if we double the distance, the force falls to a quarter of its value. Then with a certain amount of intellectual effort (involving, for example, the invention of calculus), Newton was able to work out, from the acceleration of falling bodies on Earth, and from the Earth-Moon distance, just how long it should take the Moon to go round the Earth, and came up with the right answer. He was also able to work out just how long it would take satellites at different distances to go through one complete orbit. Of course, at that time, Earth only had one satellite (the Moon), but six were known for the Sun (Mercury, Venus, Earth, Mars, Jupiter, Saturn), and his theory correctly predicted how the length of the year of these different planets would vary with their distance from the Sun (the answer is a 3/2 power law; a four-fold increase in distance gives an eight-fold increase in time). Celestial and terrestrial mechanics were united.
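As a minimal sketch (using modern textbook values for the orbital radii, in astronomical units, and the periods, in Earth years), the 3/2 power law is easy to check in a few lines of Python:

```python
# Kepler's third law: for planets orbiting the Sun, T = r**1.5
# when r is in astronomical units and T in Earth years.
planets = {
    # name: (orbital radius in AU, observed period in years)
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.86),
    "Saturn":  (9.537, 29.45),
}

for name, (r, t_observed) in planets.items():
    t_predicted = r ** 1.5   # the 3/2 power law
    print(f"{name:8s} observed {t_observed:6.2f} y, predicted {t_predicted:6.2f} y")
```

The predicted periods agree with the observed ones to within a fraction of a percent.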

It was around this time that a Dutchman, Anthony van Leeuwenhoek, began an extensive series of microscope studies, using single lens instruments of his own devising. Among the first to observe spermatozoa, he also described bacteria, yeast, the anatomy of the flea, and the stem structure of plants. He communicated his results to the Royal Society in London. Formally established around 1660, under the patronage of Charles II, this was and remains

Image from Arcana Naturae Detecta (1695), Leeuwenhoek’s collected letters to the Royal Society.

among the most prestigious of learned societies. Here his findings caught the attention of Robert Boyle (of Boyle’s Law for gases). Boyle tried to explain such properties of matter as heat, and the pressure of gases, in terms of the mechanics of small particles, or “corpuscles”, and hoped that the other aspects of matter could be explained in the same kind of way. This was, after all, simply an extension downwards of the mechanical system that Newton had so successfully extended upwards. It is instructive to consider how far this hope was fulfilled. Atoms and molecules are in some ways similar in their behaviour to small objects obeying the everyday laws of mechanics, but in others they are very different, and it is these differences that must be invoked if we are to understand the forces involved in chemical bonding.

Early modern theory – 1780–1840

Between 1780 and 1840, chemistry underwent a revolution that transformed it into the kind of science that we would recognise today. It is no accident that this was the same period as the beginning of the industrial revolution in Europe. Materials were being mined, and iron and steel produced and worked, on a larger scale than ever before. By the end of the period, mineral fertilisers were already in large-scale use to feed the growing population. Demand for machinery led to improvements in engineering, and this in turn made possible improvements in the precision of scientific instruments. Much of the new interest in chemistry grew out of mining, mineralogy, and metallurgy, while improvements in manufacture and glass-blowing led to the precision balance, and to new apparatus for handling gases.

Here I will summarise some of the most important discoveries, as seen from our present point of view, and using today’s language. This means running the risk of creating a misleading impression of smoothness and inevitability. Inevitability, perhaps yes; the world really is what it is, and once certain questions had been asked, it was inevitable that we would eventually find the right answers. Smoothness, no; the very concept of atoms, let alone bonding between atoms, remained controversial in some circles way into the 20th century. Outsiders sometimes criticise scientists for taking their theories too seriously, but more often they are reluctant to take them seriously enough.

Overall, mass is conserved; the mass of the products of a reaction is always the same as the mass of the reactants. This is because atoms are not created or destroyed in a chemical reaction.2 Single substances can be elements or compounds, and the enormous number of known compounds can be formed by assembling together the atoms of a much smaller number of different elements. We owe our distinction between elements and compounds to Lavoisier (“The banker who lost his head“). Boyle had come close a hundred years earlier, but was so taken with the transformations of matter that he rejected the notion that its fundamental constituents were immutable.3

The combustion of carbon (its reaction with oxygen) gives a gas, the same gas as is formed when limestone is heated. But there is no chemical process that gives carbon on its own, or oxygen on its own, by reaction between two other substances. So we regard carbon and oxygen as elements, whereas the gas formed by burning carbon (what we now call carbon dioxide) is a compound of these two elements. The production of this same gas, together with a solid residue, by the heating of limestone, shows that limestone is a compound containing carbon, oxygen, and some other element.4 To us, using today’s knowledge, limestone is calcium carbonate, and decomposes on heating to give carbon dioxide and lime (calcium oxide). In Lavoisier’s time, there was no way of breaking down calcium oxide into simpler substances, so he considered it to be an element.

A short philosophical digression (and every scientist has a working philosophy, whether they realise it or not): Lavoisier could make as much progress as he did because he had introduced an operational definition of an element, referring not to some inner essence but to observationally defined properties. And implicit in this was the principle of fallibilism; conclusions are always in principle revisable in the light of further observation, as the example of calcium oxide shows.

Air is a mixture, and burning means reacting with one of its components, which we call oxygen. Metals in general become heavier when they burn in air. This is because they are removing oxygen from the air, and the weight (more strictly speaking, the mass) of the compound formed is equal to that of the original metal plus the weight of oxygen. (Mass is an amount of matter; weight is the force of gravity acting on that matter. Atoms are weightless when moving freely in outer space, but not massless.)

Different elements combine with different amounts of oxygen; these relative amounts are a matter of experiment. In modern language, when some typical metals (magnesium, aluminium, titanium, none of which were known when Lavoisier was developing his system) react with oxygen, they form oxides with the formulas MgO, Al2O3, TiO2.

About one fifth of the air is oxygen, and if we burn anything in a restricted supply of air, the fire will go out when the oxygen has been used up. Nothing can burn in (or stay alive by breathing) the remaining air. Some materials, like wood and coal, appear to lose weight when they burn, but this is because they are in large measure converted to carbon dioxide and water vapour, which are gases, and we need to take the weight of these gases into account.

It was also shown during this period that the relative amounts of each element in a compound are fixed (Law of Definite Proportions). For instance, water always contains 8 grams of oxygen for each gram of hydrogen. Moreover, when the same elements form more than one different compound, there is always a simple relationship between the amounts in these different compounds (Law of Multiple Proportions). Thus hydrogen peroxide, also a compound of hydrogen and oxygen, contains 16 grams of oxygen for each gram of hydrogen. Similarly, the gas (carbon dioxide to us) formed by burning carbon in an ample supply of oxygen contains carbon and oxygen in the weight ratio 3:8, but when the supply of oxygen is restricted, another gas (carbon monoxide) is formed, in which the ratio is 3:4. Carbon monoxide is intermediate in composition between carbon and carbon dioxide, but it is not intermediate in its properties. For a start, it is very poisonous; it sticks to the oxygen-carrying molecules in the blood even more strongly than oxygen itself, thus putting them out of action. It is formed when any carbon-containing fuel, not just carbon itself, burns in an inadequate supply of air. That is why car exhaust fumes are poisonous, and why it is so important to make sure that gas-burning appliances are properly vented. It is also one of the components of cigarette smoke, which helps explain why cigarettes cause heart disease and reduce fitness.

Left: Dalton’s table of the elements, with relative weights, based on H = 1. The correct value for oxygen is 16. Dalton’s value is based on an assumed formula HO for water, together with experimental error; likewise for other elements.

All these facts can be explained if the elements are combined in molecules that are made out of atoms, the atoms of each element all have the same mass,5 and each compound has a constant composition in terms of its elements. For instance, each molecule of water contains two atoms of hydrogen and one of oxygen (hence the formula H2O); hydrogen peroxide is H2O2; carbon dioxide is CO2; carbon monoxide is CO; and the masses of atoms of hydrogen, oxygen, and carbon are in the ratio 1:16:12. Using these same ratios, we can also explain the relative amounts of the elements in more complicated molecules, such as those present in octane (a component of gasoline), C8H18, and sucrose (table sugar), C12H22O11. Why C8H18 and not C4H9, which would have the same atomic ratio? This can be inferred from the density of the vapour, using Avogadro’s hypothesis (see below).
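A minimal sketch in Python, assuming the relative atomic masses quoted above (H = 1, C = 12, O = 16), recovers all of these weight ratios from the formulas:

```python
# Relative atomic masses used in the text: H = 1, C = 12, O = 16.
masses = {"H": 1, "C": 12, "O": 16}

def total_mass(formula):
    """Relative molecular mass of a compound given as {element: atom count}."""
    return sum(masses[el] * n for el, n in formula.items())

water    = {"H": 2, "O": 1}   # H2O
peroxide = {"H": 2, "O": 2}   # H2O2
dioxide  = {"C": 1, "O": 2}   # CO2
monoxide = {"C": 1, "O": 1}   # CO

# Grams of oxygen per gram of hydrogen: 8 in water, 16 in hydrogen peroxide.
print(masses["O"] * water["O"] / (masses["H"] * water["H"]))        # 8.0
print(masses["O"] * peroxide["O"] / (masses["H"] * peroxide["H"]))  # 16.0

# Carbon : oxygen weight ratios, 12:32 = 3:8 for CO2 and 12:16 = 3:4 for CO.
print(masses["C"] * dioxide["C"], ":", masses["O"] * dioxide["O"])
print(masses["C"] * monoxide["C"], ":", masses["O"] * monoxide["O"])

print(total_mass({"C": 8, "H": 18}))   # 114, relative molecular mass of octane
```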

Thus, by the early 19th century, chemists were in the process of developing consistent sets of relative atomic weights (sometimes known as relative molar masses). However, there was more than one way of doing this. For instance, John Dalton, the first to explain chemical reactions in terms of atoms, thought that water was HO and that the relative weight of hydrogen to oxygen was one to eight. This uncertainty even led some of the most perceptive to question whether atoms were real objects, or merely book-keeping devices to describe the rules of chemical combination.

Evidence from the behaviour of gases (to around 1860)

A French chemist, Joseph Gay-Lussac, noticed that the volumes of combining gases, and of their gaseous products, were in simple ratios to each other. In 1811, the Italian Count Amedeo Avogadro explained this by a daring hypothesis: that under the same conditions of temperature and pressure, equal volumes of gases contain equal numbers of molecules. We now know this to be (very nearly) true, except at high pressures or low temperatures.

Avogadro’s Hypothesis, as we still call it, gives us a way of directly comparing the relative weights of different molecules, and of inferring the relative weights of different atoms. For example, if we compare the weights of a litre of oxygen and a litre of hydrogen at the same temperature and pressure, we find that the oxygen gas weighs sixteen times as much as the hydrogen. (This is not a difficult experiment. All we need to do is to pump the air out of a one litre bulb, weigh it empty, and then re-weigh it full of each of the gases of interest in turn.) But Avogadro tells us that they contain equal numbers of molecules. It follows that each molecule of oxygen weighs sixteen times as much as each molecule of hydrogen.

One litre of hydrogen will react with one litre of chlorine to give two litres of the gas we call hydrogen chloride. Thus, by Avogadro’s Hypothesis, one molecule of hydrogen will react with one molecule of chlorine to give two molecules of hydrogen chloride. So one molecule of hydrogen chloride contains half a molecule of hydrogen, and half a molecule of chlorine. It follows that the molecules of hydrogen and of chlorine are not fundamental entities, but are capable of being split in two. Making a distinction between atoms and molecules that is obvious to us now but caused great confusion at the time, each molecule of chlorine must contain (at least) two separate atoms.6 By similar reasoning, since 2 litres of hydrogen react with 1 litre of oxygen to give 2 litres of steam, water must have the familiar formula H2O, and not HO as Dalton had assumed for the sake of simplicity.

Avogadro’s hypothesis was put forward in 1811, but it was not until 1860 or later that his view was generally accepted. Why were chemists so slow to accept his ideas? Probably because they could not fit them into their theories of bonding. We now recognise two main kinds of bonding that hold compounds together – ionic bonding and covalent bonding. Ionic bonding takes place between atoms of very unlike elements, such as sodium and chlorine, and was at least partly understood by the early 19th century, helped by the excellent work of Davy and Faraday in studying the effect of electric currents on dissolved or molten salts. They showed that sodium chloride contained electrically charged particles, and inferred, correctly, that the bonding in sodium chloride involved transfer of electrical charge (we would now say transfer of electrons) from one atom to another. But, as we have seen, Avogadro’s hypothesis implies that many gases, hydrogen and chlorine for instance, each contain two atoms of the same kind per molecule, which raises the question of what holds them together. These are examples of what we now call covalent bonding or electron sharing, a phenomenon not properly understood until the advent of wave mechanics in the 1920s.

Physicists, meanwhile, were developing the kinetic theory of gases, which treats a gas as a collection of molecules flying about at random, bouncing off each other and off the walls of their container. This theory explains the pressure exerted by a gas against the walls of its container in terms of the impact of the gas molecules, and explains temperature as a measure of the disorganised kinetic energy (energy of motion) of the molecules. The theory then considers that this energy is spread out in the most probable (random) way among large numbers of small colliding molecules. It can be shown that molecules of different masses but at the same temperature will then end up on average with the same kinetic energy, and it is this energy that at a fundamental level defines the scale of temperature. This is a statistical theory, where abandoning the attempt to follow any one specific molecule allows us to make predictions about the total assemblage.
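A minimal numerical sketch, assuming the standard kinetic-theory result that each velocity component at temperature T is Gaussian with variance kT/m, shows the equal-energy outcome directly: hydrogen and oxygen molecules at the same temperature end up with the same average kinetic energy, despite a sixteen-fold difference in mass.

```python
import numpy as np

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # room temperature, K
N = 100_000                 # molecules to sample

def mean_kinetic_energy(m):
    """Average (1/2) m v^2 over N molecules with Maxwell-Boltzmann velocities."""
    v = np.random.normal(0.0, np.sqrt(k_B * T / m), size=(N, 3))
    return np.mean(0.5 * m * np.sum(v**2, axis=1))

m_H2 = 2 * 1.674e-27        # approximate mass of a hydrogen molecule, kg
m_O2 = 32 * 1.674e-27       # approximate mass of an oxygen molecule, kg

print(mean_kinetic_energy(m_H2))   # both close to (3/2) k_B T, about 6.2e-21 J
print(mean_kinetic_energy(m_O2))
```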

The kinetic theory explains the laws (Boyle’s law, Charles’ law) describing how pressure changes with volume and temperature. Avogadro’s hypothesis can also be shown to follow from this treatment. Many other physical properties of gases, such as viscosity (which is what causes air drag) and heat capacity (the amount of heat energy needed to increase temperature), are quantitatively explained by the kinetic theory, and by around 1850 the physicists at least were fully persuaded that molecules and, by implication, atoms, were real material objects.

Structural chemistry, 1870 on

Kinds of isomer. The nature of optical isomers was established by Pasteur. Simple rotamers, such as the pair shown bottom right in the diagram, readily interconvert at room temperature, giving an equilibrium mixture. The other kinds shown generally do not.

Chemists were on the whole harder to convince than the physicists, but were finally won over by the existence of isomers, chemical substances whose molecules contain the same number of atoms of each element, but are nonetheless different from each other, with different boiling points and chemical reactivities. This only made sense if the atoms were joined up to each other in different ways in these different substances. So atoms were real, as were molecules, and the bonding between the atoms in a molecule controlled its properties. This is what we still think today.

Einstein and Lucretius

The piece of evidence that finally convinced even the most skeptical scientists came from an unexpected direction: botany. In 1827, a Scottish botanist called Robert Brown had been looking at some grains of pollen suspended in water under the microscope, and noticed that they were bouncing around, although there was no obvious input of energy to make them do so. This effect, which is shown by any small enough particle, is still known as Brownian motion. Brown thought that the motion arose because the pollen grains were alive, but it was later discovered that dye particles moved around in the same way. The source of the motion remained a mystery until Albert Einstein explained it in 1905. (This was the same year that he developed the theory of Special Relativity, and explained the action of light on matter in terms of photons.) Any object floating in water is being hit from all sides by the water molecules. For a large object, the number of hits from different directions will average out, just as if you toss an honest coin a large number of times the ratio of heads to tails will be very close to one. But if you toss a coin a few times only, there is a reasonable chance that heads (or tails) will predominate, and if you have a small enough particle there is a reasonable chance that it will be hit predominantly from one side rather than the other. Pollen grains are small enough to show this effect. But this is only possible if the molecules are real objects whose numbers can fluctuate; if they were just a book-keeping device for a truly continuous Universe, the effects in different directions would always exactly cancel out. And if molecules are real, then so are atoms. It is just as Lucretius said, looking at dust in the air two thousand years earlier:

So think about the particles that can be seen moving to and fro in a sunbeam, for their disordered motions are a sign of underlying invisible movements of matter.
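The coin-tossing argument can be made quantitative in a few lines. In this minimal sketch, the average fractional imbalance between hits from the two sides falls off roughly as one over the square root of the number of hits, which is why a pollen grain jitters visibly while a pebble does not:

```python
import numpy as np

def mean_imbalance(n_hits, trials=2000):
    """Average |left hits - right hits| / total hits, over many random trials."""
    left = np.random.binomial(n_hits, 0.5, size=trials)
    return np.mean(np.abs(2 * left - n_hits)) / n_hits

for n in (10, 1000, 1_000_000):
    print(f"{n:>9} hits: average imbalance {mean_imbalance(n):.4f}")
# Roughly 0.25 for 10 hits, 0.025 for a thousand, 0.0008 for a million:
# tiny particles are visibly shoved about, large ones are not.
```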

1 In fact (see earlier post), the Arabs had already recognized the variability of the star Algol

2 We cheat. There are, of course, processes (radioactive decay, nuclear fusion) where the number of atoms of each kind is not conserved because one element is transformed into another. We simply decide to call these physical processes, so that our statement remains true by definition. Nonetheless, it is useful, because it is usually pretty obvious whether a process should be called “chemical” or “physical”, on other grounds, such as whether or not it involves the formation of new bonds between atoms.

3 The Architecture of Matter, S. Toulmin and J. Goodfield, Hutchinson, 1962

4 In present-day notation,

C + O2 = CO2 and CaCO3 = CaO + CO2

5 This is not quite true. Most elements are a mixture of atoms of slightly different mass but very similar properties. The relative atomic masses of the elements as they occur in nature are an average of the masses of these chemically identical isotopes

6 So we can write the reactions as H2 + Cl2 = 2HCl and 2H2 + O2 = 2H2O

An earlier version of some of this material appeared in my From Stars to Stalagmites, World Scientific. Leeuwenhoek material via Buffalo Library. Dalton’s table of elements and their symbols via Chemogenesis. Isomers image by Vladsinger via Wikipedia

This post originally appeared on 3 Quarks Daily.

Eight basic laws of physics, and one that isn’t

Reposted from 3 Quarks Daily:


Isaac Newton, 1689

Michael Gove (remember him?), when England’s Secretary of State for Education, told teachers

“What [students] need is a rooting in the basic scientific principles, Newton’s Laws of thermodynamics and Boyle’s law.”

Never have I seen so many major errors expressed in so few words. But the wise learn from everyone, [1] so let us see what we can learn here from Gove.

From the top: Newton’s laws. Gove most probably meant Newton’s Laws of Motion, but he may also have been thinking of Newton’s Law (note singular) of Gravity. It was by combining all four of these that Newton explained the hitherto mysterious phenomena of lunar and planetary motion, and related these to the motion of falling bodies on Earth; an intellectual achievement not equalled until Einstein’s General Theory of Relativity.


Michael Gove, 2013

In Newton’s physics, the laws of motion are three in number:

1) If no force is acting on it, a body will carry on moving at the same speed in a straight line.

2) If a force is acting on it, the body will undergo acceleration, according to the equation

Force = mass x acceleration

3) Action and reaction are equal and opposite

So what does all this mean? In particular, what do scientists mean by “acceleration”? Acceleration is rate of change of velocity. Velocity is not quite the same thing as speed; it is speed in a particular direction. So the First Law just says that if there’s no force, there’ll be no acceleration, no change in velocity, and the body will carry on moving in the same direction at the same speed. And, very importantly, if a body changes direction, that is a kind of acceleration, even if it keeps on going at the same speed. For example, if something is going round in circles, there must be a force (properly called centripetal force, though often confused with the outward-pointing centrifugal force) that keeps it accelerating inwards, and stops it from going straight off at a tangent.

Then what about the heavenly bodies, which travel in curves, pretty close to circles, although Kepler’s analysis of Tycho Brahe’s more accurate measurements had already shown by Newton’s time that the curves are actually ellipses? The moon, for example. The moon goes round the Earth, without flying off at a tangent. So the Earth must be exerting a force on the moon.


Solar system (schematic, not to scale), showing orbits of inner planets

And finally, the Third Law. If the Earth is tugging on the moon, then the moon is tugging equally hard on the Earth. We say that the moon goes round the Earth, but it is more accurate to say that Earth and moon both rotate around their common centre of gravity.

All of this describes the motion of single bodies. Thermodynamics, as we shall see, only comes into play when we have very large numbers of separate objects.

The other thing that Gove might have meant is Newton’s Inverse Square Law of gravity, which tells us just how fast gravity decreases with distance. If, for instance, we could move the Earth to twice its present distance from the Sun, the Sun’s gravitational pull on it would drop to a quarter of its present value.

Now here is the really beautiful bit. We can measure (Galileo already had measured) how fast falling bodies here on Earth accelerate under gravity. Knowing how far we are from the centre of the Earth, and how far away the moon is, we can work out from the Inverse Square Law how strong the Earth’s gravity is at that distance, and then, from Newton’s Second Law, how fast the moon ought to be accelerating towards the Earth. And when we do this calculation, we find that this exactly matches the amount of acceleration needed to hold the moon in its orbit going round the Earth once every lunar month. Any decent present-day physics student should be able to do this calculation in minutes. For Newton to do it for the first time involved some rather more impressive intellectual feats, such as clarifying the concepts of force, speed, velocity and acceleration, formulating the laws I’ve referred to, and inventing calculus.
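Here is that calculation as a minimal sketch, using modern values for the constants (Newton, of course, had to manage with rougher data):

```python
import math

g = 9.81        # acceleration of falling bodies at Earth's surface, m/s^2
R = 6.371e6     # radius of the Earth, m
r = 3.844e8     # mean Earth-Moon distance, m

# Inverse Square Law: Earth's gravity at the Moon's distance.
a_moon = g * (R / r) ** 2

# For a circular orbit, the inward acceleration is 4*pi^2*r / T^2,
# so the orbital period is:
T = 2 * math.pi * math.sqrt(r / a_moon)

print(f"predicted orbital period: {T / 86400:.1f} days")
# About 27.4 days, against an observed sidereal month of about 27.3 days.
```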

But what about the laws of thermodynamics? These weren’t discovered until the 19th century, the century of the steam engine. People usually talk about the three laws of thermodynamics, although there is actually another one called the Zeroth Law, because people only really noticed they had been assuming it long after they had formulated the others. (This very boring law says that if two things are in thermal equilibrium with a third thing, they must be in thermal equilibrium with each other. Otherwise, we could transform heat into work by making it go round in circles.)


The rotor of a turbine: a device for converting heat energy into electrical energy, in accord with the First Law. But the Second Law (see below) places a limit on how efficiently we can do this.

The First Law of Thermodynamics is, simply, the conservation of energy. That’s all kinds of energy added up together, including for example heat energy, light energy, electrical energy, and the “kinetic energy” that things have because they’re moving. [2] One very important example of the conservation of energy is what happens inside a heat engine, be it an old-fashioned steam engine, an internal combustion engine, or the turbine of a nuclear power station. Here, heat is converted into other forms of energy, such as mechanical or electrical. This is all far beyond anything Newton could have imagined. Newton wrote in terms of force, rather than energy, and he had been dead for over a century before people realized that the different forms of energy include heat.

There are many ways of expressing the Second Law, usually involving rather technical language, but the basic idea is always the same; things tend to get more spread out over time, and won’t get less spread out unless you do some work to make them. (One common formulation is that things tend to get more disordered over time, but I don’t like that one, because I’m not quite sure how you define the amount of disorder, whereas there are exact mathematical methods for describing how spread out things are.)


Dye becoming more, not less, spread out over time, in accord with the Second Law.

For example, let a drop of food dye fall into a glass full of water. Wait, and you will see the dye spread through the water. Keep on waiting, and you will never see it separating out again as a separate drop. You can force it to, if you can make a very fine filter that lets the water through while retaining the dye, but it always takes work to do this. To be precise, you would be working against osmotic pressure, something your kidneys are doing all the time as they concentrate your urine.[3]

This sounds a long way from steam engines, but it isn’t. Usable energy (electrical or kinetic, say) is much less spread out than heat energy, and so the Second Law limits how efficiently heat can ever be converted into more useful forms.

The Second Law also involves a radical, and very surprising, departure from Newton’s scheme of things. Newton’s world is timeless. Things happen over time, but you would see the same kinds of things if you ran the video backwards. We can use Newton’s physics to describe the motion of planets, but it could equally well describe these motions if they were all exactly reversed.

Now we have a paradox. Every single event taking place in the dye/water mixture can be described in terms of interactions between particles, and every such interaction can, as in Newton’s physics, be equally well described going forwards or backwards. To use the technical term, each individual interaction is reversible. But the overall process is irreversible; you can’t go back again. You cannot unscramble eggs. Why not?

In the end, it comes down to statistics. There are more ways of being spread out than there are of being restricted. There are more ways of moving dye molecules from high to low concentration regions than there are of moving them back again, simply because there are more dye molecules in the former than there are in the latter. There is an excellent video illustration of this effect, using sheep, by the Princeton-based educator Aatish Bhatia.
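A minimal simulation, in the same spirit as Bhatia’s sheep, makes the point: start with all the dye molecules in one half of the glass, let a randomly chosen molecule hop to the other half at each step, and the distribution drifts to 50:50 and stays there, simply because there are vastly more ways of being evenly spread than of being bunched up:

```python
import random

N = 10_000          # dye molecules
in_left = N         # all start in the left half of the glass

for step in range(200_000):
    # Pick a molecule at random and move it to the other half.
    if random.random() < in_left / N:
        in_left -= 1    # chosen from the left, hops right
    else:
        in_left += 1    # chosen from the right, hops left

print(in_left / N)      # ~0.5, and fluctuations around it stay tiny
```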

The Third Law is more complicated, and was not formulated until the early 20th century. It enables us to compare the spread-out-ness of heat energy in different chemical substances, and hence to predict which way chemical reactions tend to go. We can excuse Gove for not knowing about the Third Law, but the first two, as C. P. Snow pointed out a generation ago, should be part of the furniture of any educated mind.

So if you don’t immediately realize that Newton’s laws and the laws of thermodynamics belong to different stages of technology, the age of sail as opposed to the age of steam, and to different levels of scientific understanding, the individual and macroscopic as opposed to the statistical and submicroscopic, then you don’t know what you’re talking about. Neither the science, nor its social and economic context.

Right: a fluyt, typical ocean-going vessel of Newton’s time. Below left: the Great Western, first trans-Atlantic steamship, designed by Isambard Kingdom Brunel, on its maiden voyage.

That’s bad enough. But the kind of ignorance involved in describing Boyle’s Law as a “basic scientific principle” is even more damaging.

(Disclosure: I taught Boyle’s Law for over 40 years, and it gets three index entries in my book, From Stars to Stalagmites.)

Bottom line: Boyle’s Law is not basic. It is a secondary consequence of the Kinetic Theory of Gases, which is basic. The difference is enormous, and matters. Anyone who thinks that Boyle’s Law is a principle doesn’t know what a principle is. (So a leading Westminster politician doesn’t know what a principle is? That figures.)

Mathematically, the Law is simply stated, which may be why Mr Gove thinks it is basic: volume is inversely proportional to pressure, which gives you a nice simple equation, as in the graph on the right:

P x V = a constant

that even a Cabinet Minister can understand. But on its own, it is of no educational value whatsoever. It only acquires value if you put it in its context, but this appeal to context implies a perspective on education beyond his comprehension.

Now to what is basic; the fundamental processes that make gases behave as Boyle discovered. His Law states that if you double the pressure on a sample of gas, you will halve the volume. He thought this was because the molecules of gas repel each other, so it takes more pressure to push them closer together, and Newton even put this idea on a mathematical footing, showing that Boyle’s Law would follow if the particles repelled one another with a force inversely proportional to the distance between them. They were wrong.

The Law is now explained using the Kinetic Theory of Gases. This describes a gas as shown on the right: a whole lot of molecules, of such small volume compared to their container that we can think of them as points, each wandering around doing its own thing, and, from time to time, bouncing off the walls. It is the impact of these bounces that gives rise to pressure. If you push the same number of molecules (at the same temperature) into half the volume, each area of wall will get twice as many bounces per second, and so will experience twice the pressure. Pressure x volume remains constant; hence Boyle’s Law.
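As a minimal sketch of the arithmetic, take the standard kinetic-theory expression for pressure, P = N m <vx^2> / V, with round-number conditions (one mole of nitrogen-like molecules at 300 K); halving the volume doubles the computed pressure, leaving P x V unchanged:

```python
import numpy as np

k_B = 1.380649e-23
m, T = 4.65e-26, 300.0        # approximate N2 molecule mass (kg), temperature (K)
N = 6.022e23                  # one mole of molecules
vx = np.random.normal(0.0, np.sqrt(k_B * T / m), size=100_000)  # sampled vx values

for V in (1.0e-3, 0.5e-3):    # one litre, then half a litre, in m^3
    P = N * m * np.mean(vx**2) / V      # kinetic-theory pressure
    print(f"V = {V*1e3:.1f} L   P = {P/1e5:6.2f} bar   PV = {P*V:7.1f} J")
# PV comes out the same (about 2500 J) at both volumes: Boyle's Law.
```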

Actually, Boyle’s Law isn’t even true. Simple kinetic theory neglects the fact that gas molecules attract each other a little, making the pressure less than what the theory tells you it ought to be. And if we compress the gas into a very small volume, we can no longer ignore the volume taken up by the actual molecules themselves.
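The size of the failure can be estimated from the van der Waals equation, which adds exactly the two corrections just mentioned: an attraction term (a) and an excluded-volume term (b). A minimal sketch, using textbook van der Waals constants for carbon dioxide:

```python
R, T, n = 8.314, 300.0, 1.0     # gas constant (J/mol/K), temperature (K), moles
a, b = 0.364, 4.27e-5           # van der Waals constants for CO2, SI units

for V in (1.0e-3, 1.0e-4):      # one litre, then a tenth of a litre, in m^3
    P_ideal = n * R * T / V
    P_real = n * R * T / (V - n * b) - a * n**2 / V**2   # van der Waals
    print(f"V = {V:.0e} m^3: ideal PV = {P_ideal*V:4.0f} J, "
          f"van der Waals PV = {P_real*V:4.0f} J")
# At one litre PV is about 10% below the ideal value; at a tenth of a
# litre the "constant" has fallen by a factor of more than three.
```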

So what does teaching Boyle’s Law achieve? Firstly, a bit of elementary algebra that gives clear answers, and that can be used to bully students if, as so often happens, they meet it in science before they have been adequately prepared in their maths classes. This, I suspect, is the aspect that Gove finds particularly appealing. Secondly, some rather nice experiments involving balancing weights on top of sealed-off syringes. Thirdly, insight into how to use a mathematical model and, at a more advanced level, how to allow for the fact that real gases do not exactly meet its assumptions. Fourthly, a good example of how the practice of science depends on the technology of the society that produces it. In this case, seventeenth-century improvements in glassmaking made it possible to construct tubes of uniform cross-section, which are needed to compare volumes of gas accurately. Fifthly … but that’s enough to be going on with. Further elaboration would lead us on to introductory thermodynamics; ironic, given the interview that started this discussion. The one thing it does not achieve is the inculcation of a fundamental principle.

There are mistakes like thinking that Shakespeare, not Marlowe, wrote Edward II. There are mistakes like thinking that Shakespeare wrote War and Peace. And finally, there are mistakes like thinking that Shakespeare wrote War and Peace, that this is basic to our understanding of literature, and that English teachers need to make sure that their pupils know this. Then-Education Secretary Gove’s remarks about science teaching fall into this last category. Such ignorance of basic science (and education) at the highest levels of government is laughable. But it is not funny.

1] Ben Zoma, Mishnah, Chapters of the Fathers, 4a. “Chapters of the Fathers” may also be interpreted to mean “Fundamental Principles”.

2] It is often said that Einstein’s famous equation,

E = mc2

means that we can turn mass into energy. That puts it back to front. The equation is really telling us that energy itself has mass.

3] There are lots of situations (steam condensing to make water, living things growing, or indeed urine becoming more concentrated in the kidney) where a system becomes less spread out, but this change is always accompanied by something in the surroundings, usually heat energy, becoming more spread out to compensate.

Newton as painted by Godfrey Kneller, via Wikipedia. Gove image via Daily Telegraph, under headline “Michael Gove’s wife takes a swing at ageing Education Secretary”. Solar system image from NASA. Steam turbine blade Siemens via Wikipedia. Dye diffusing in water from Royal Society of Chemistry. Fluyt image from Pirate King website. Great Western on maiden voyage, 1838, by unknown artist, via Wikipedia. Boyle’s Law curve from Krishnavedala’s replot of Boyle’s own data, via Wikipedia. Kinetic theory image via Chinese University of Hong Kong

Science and the Supernatural (II); Why We Get It Wrong and Why It Matters


“I have no need of that hypothesis.” So, according to legend, said the great astronomer and mathematician Pierre-Simon, marquis de Laplace, when asked by Napoleon why he had not mentioned God in his book. If so, Laplace was not referring to the hypothesis that God exists, but to the much more interesting hypothesis that He intervenes in the material world. And Laplace’s point was not, fundamentally, philosophical or theological, but scientific.

The planets do not move round the Sun in circular orbits, but in elliptical pathways, moving fastest when closest. All this and more Newton had explained using his laws of motion, combined with his inverse square law for gravitational attraction. There is one small problem, however. The planets are attracted, not only to the Sun, but to each other, perturbing each other’s pathways away from a perfect ellipse. These perturbations are not trivial, and in fact it was the perturbation of the orbit of Uranus that would lead to the discovery of Neptune. Newton himself surmised that they could, eventually, render the entire system unstable so that God would need, from time to time, to intervene and correct it. Laplace devoted much of his career to developing the mathematical tools for estimating the size of the perturbations, and concluded that the Solar System was in fact stable. So Newton’s hypothesis of divine intervention was redundant, and it was this hypothesis that Laplace was supposedly referring to.

There is an irony here. Laplace’s calculation that the solar system is stable is true only in the short term (say a few tens or hundreds of millions of years). Over a long enough term, the situation is much more uncertain. As Henri Poincaré was to show a century later, a system of three or more gravitationally interacting bodies is potentially chaotic. Under certain circumstances, an initially minute difference in starting conditions can lead to an ever increasing divergence of outcomes, so that eventually planets can adopt highly elongated orbits, or even be thrown out of their solar systems altogether. Modern computer simulations show (see here and here) that the solar system is indeed chaotic, that Mercury is vulnerable to extreme change or even ejection from the Solar System, and that it is possible that in some 3.5 billion years Mercury’s instability could be transferred to the other inner planets, including Earth, leading to the possibility of collision.

Science, some say, rejects supernatural explanations on principle; this is called intrinsic methodological naturalism (IMN). In Part I, I argued, following the work of Boudry et al. (here, here, and here), that this strategy is misguided. Here I go into more detail, using this example, and other past and present controversies, to illustrate the point.

Stephen Hawking has commented on Laplace’s remark, in much the same spirit as I am suggesting regarding the God question, but attributes to him a much more absolute position:

I don’t think that Laplace was claiming that God didn’t exist. It is just that He doesn’t intervene, to break the laws of Science. That must be the position of every scientist. A scientific law is not a scientific law if it only holds when some supernatural being decides to let things run, and not intervene.

A similar point of view had been put forward by Richard Lewontin, in his uncomfortably perceptive review (available here) of Sagan’s The Demon-Haunted World; I consider this review required reading for those defending science, because of its all too rare recognition of creationism as a complex social problem:

Perhaps we ought to add to the menu of Saganic demonology, just after spoon-bending, ten-second seat-of-the-pants explanations of social realities.

I cannot do justice to Lewontin’s reasoning by brief truncated quotations from his complex argument. It is clear, however, that he uses two very different arguments in rapid succession:

Nearly every present-day scientist would agree with Carl Sagan that our explanations of material phenomena exclude any role for supernatural demons, witches, and spirits of every kind, including any of the various gods from Adonai to Zeus…. We also exclude from our explanations little green men from Mars riding in spaceships, although they are supposed to be quite as corporeal as you and I, because the evidence is overwhelming that Mars hasn’t got any…

We take the side of science … because we have a prior commitment, a commitment to materialism. It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. … To appeal to an omnipotent deity is to allow that at any moment the regularities of nature may be ruptured, that miracles may happen.

The first paragraph is one that I can accept and advocate in its entirety. We reject supernatural causes in the same way that we reject implausible material explanations, because the evidence tells us that they don’t exist. The second, intertwined with observations that I have had to omit for brevity regarding the tenuousness of the pretensions of science and what he calls the patent absurdity of some of its constructs, is of a very different kind. Science, he says, is committed in principle to material causes, and the reason for doing so is, again, to exclude divine intervention.

Leave aside for now the problem of defining “materialism”; at a time when our concept of the material includes dark energy, particle entanglement, and quantum fluctuations in nothingness of which our entire Universe may be but one example, this is much the same as the problem of defining “naturalism” that I mentioned in Part I. Leave aside also the deliberately provocative antireligious language, inconvenient though that be for coalition builders. After all, Lewontin has, and is entitled to, his own agenda here. Leave aside even the possibility that miracles need not disrupt the normal business of science, as long as they are sufficiently rare. Hawking has followed Lewontin into the trap that awaits all those who would legislate the metaphysical out of existence. They lay themselves open to the charge that they are, themselves, arbitrarily introducing yet another metaphysical rule.

So, alas, does the National Science Teachers Association, whose commitment to IMN is quoted with approval by the National Academy of Sciences (Teaching About Evolution and the Nature of Science, 1998 but still current, and freely available here, p. 124):

Science is a method of explaining the natural world. It assumes the universe operates according to regularities and that through systematic investigation we can understand these regularities. The methodology of science emphasizes the logical testing of alternate explanations of natural phenomena against empirical data. Because science is limited to explaining the natural world by means of natural processes, it cannot use supernatural causation in its explanations. Similarly, science is precluded from making statements about supernatural forces because these are outside its provenance. Science has increased our knowledge because of this insistence on the search for natural causes. [Emphasis added]

This is very bad. We slide from an innocent-seeming description of the domain of science as the “natural” world, through the uncontroversial idea of testing explanations against each other, to the non sequitur of the sentence I have highlighted. There is an illusion of logic, based on an assumed dichotomy between the natural and the supernatural, but this is mere wordplay. We are given no other reason for this leap, even though it could have been justified, as Hawking and Lewontin justify their own exclusion of the supernatural, by reference to the assumption of regularity. As we saw in Part I, the claim that “science is precluded from making statements about supernatural forces” is simply untrue. Time and again, science has refuted the appeal to the supernatural by providing alternatives – if this is not “making statements about supernatural forces”, what is?

Present-day science does indeed make statements highly relevant to the existence or otherwise of supernatural forces. To raise the stakes to their utmost, some consider the Universe to be fine tuned for life, and regard this as scientific evidence for a purposeful Creator.[1] Others regard this as yet another argument from ignorance, since it may well be that the Universe is not really all that special, or that there are as yet unknown constraints of some kind on the relevant physical constants, or that quantum fluctuations will generate such a superabundance of Universes that some, statistically, are bound to have the required properties. While it may be premature to test these suggestions, they are part of a clearly scientific agenda. The suggested causes would be “natural” by any standards, but if established would have the effect of making the appeal to a supernatural Creator unnecessary. Science would then have made a clear statement about the purported supernatural force responsible for fine tuning, exactly as it did about the purported supernatural force responsible for the stability of the Solar System, namely that there was, in Laplace’s words, no need for that hypothesis.

Two other examples spring to mind. First, the argument from Intelligent Design as applied to the mammalian eye. This fails, because the mammalian eye is in one crucial detail very poorly designed. The nerve endings, and the blood supply, run in front of, rather than behind, the photosensors, partly occluding them and giving rise to each eye’s blind spot. It does not have to be that way, since the octopus eye is built the right way round. At this point, the defender of design has two options. He can admit defeat, or at least accept that the Designer’s options are restricted by our evolutionary history. So in this case the argument from design is refuted or, at any rate, enfeebled. Or he can argue, as Behe does in Darwin’s Black Box, that the refutation fails because we do not know the Designer’s full intent. At this point, we lose interest because the argument from design has become so well immunized against observation, to borrow a term from Maarten Boudry’s PhD thesis Here be Dragons, that it has ceased to be science. In neither case have we referred to the supernatural nature of the argument as the reason for dismissing it.

Secondly, there is a version of theistic evolution in which the Creator intervenes at the level of quantum mechanical indeterminacy to set in course one mutation rather than another, using this to ensure the evolution of intelligent humans. I first heard this suggestion from Alvin Plantinga,[2] and if I understand Ken Miller’s Finding Darwin’s God correctly, I think that on this topic, for once, he and Plantinga would agree. Certainly there is nothing here that violates the laws of physics and chemistry, since the chance breakdown of one single radioactive atom at one moment rather than the next may well disrupt a growing chain of DNA, and a single mutation may well have far-reaching consequences.[3] Were such a mutation to have happened under the Creator’s guidance, that would be supernatural causation par excellence.

I would argue against this on the grounds that there is little or no evidence of a bias towards beneficial mutations, and that since intelligence has emerged independently in cephalopods, cetaceans, parrots, velociraptors (if cerebral capacity is anything to go by), and simians, the emergence of such little intelligence as we have requires no special explanation. Now you may regard my argument as mistaken, banal, or ill-informed, but I do not see how you can describe it as outside the domain of science.

Thus we do, as I just did, use scientific reasoning to discuss the claims of supernaturalists, so IMN is untrue. It was untrue in the 18th century when science explored solar system stability; it was untrue in the 19th when natural selection rendered Paley’s watchmaker redundant; it was untrue in the 20th when claims of extrasensory perception were scientifically examined and found wanting; and it is untrue in the present century as we prepare to grapple with such problems as the origin of our Universe and its appearance of being fine-tuned for the emergence of life. To propagate IMN is to propagate a falsehood.

Does this matter? Yes, very much indeed. There is a war on, between the supporters of science as we know it, and the creationists and endarkeners who wish to replace it with what the Discovery Institute’s Center for Science and Culture calls, in its notorious “wedge” document, “the theistic understanding that nature and human beings are created by God.”

The unwarranted and inaccurate grafting onto the methods of science of the arbitrary rule that it must not traffic in the supernatural exposes a flank to its enemies, which they have been quick to exploit. The central argument of Phillip Johnson’s Defeating Darwinism by Opening Minds, which predates the Wedge Document, is that mainstream science (including, crucially, the study of human origins) is illegitimate because it arbitrarily excludes explanations that lie outside the limits of naturalism. His disciple Alastair Noble, director of the Glasgow-based Centre for Intelligent Design, says in the Centre’s introductory video:

One of the key questions posed by the world around us is whether we are here by chance or by design.  There is a strident strain of science which insists that all the design in the world is apparent, not real, and that natural selection acting on random mutations is sufficient to explain it all.  That kind of science is derived from a view that the only explanations which are acceptable are those which depend purely on physical or materialist processes.  That is not a scientific finding that is derived from the evidence.  It is, in fact a philosophical position, and a biased one at that, which is brought to the actual evidence.  It excludes other types of explanation which the evidence may merit.

Here the claim that mainstream science excludes design-based explanations a priori is used to bolster the common creationist tactic of misrepresenting the outcomes of its investigations, including evolution, as inputs. Going further downmarket, we come to the creationist claims that evolution science is a religion like any other, or that evolution and creationism differ only in their starting assumptions, and as long as the scientific community itself presents the rejection of the supernatural as an input rather than an output, we have scant grounds for complaint against such vulgarizations.

Why do we persist in exposing ourselves in this way? Boudry and colleagues (2010) suggest several reasons. One of these we have already demolished: the claim that IMN is built into the definition of science. There is a large literature (see e.g. here) on how and indeed if science should be defined, and I have nothing to add to this, beyond reminding readers that “supernatural” is itself difficult to define, and repeating my earlier point that insisting on IMN would exclude much activity that we generally consider scientific. A second reason, mentioned by both Hawking and Lewontin in the essays I quoted, is that allowing the supernatural would undermine the assumption of regularity that underpins science. Miracles such as those claimed for Jesus would indeed undermine that assumption, but only in rare and very special cases; so rare and special that they can hardly constitute a serious threat to our business.[4]

Our faith in the regularity of nature derives from our having lived and evolved in a world where it holds good, not from some special rule about the nature of science. It is confirmed, over huge reaches of space and time, by observation. We can interpret the spectra of galaxies whose light has taken 12 billion years to reach us, and the suggestion (since subjected to highly critical scrutiny) that the constants of physics might have changed even in the fourth decimal place was enough to arouse the interest of The Economist.

From constancy to change. On current thinking, the early Universe underwent a period of rapid inflation, in which space expanded at such a rate that the distance between points initially close together grew at a rate faster than the speed of light. Thus during this expansionary stage the laws of nature were very different from what they are today. And the state of the Universe before this stage may be to us in principle unknowable.

All the conclusions of the last two paragraphs may be subject to revision. But this very fact reinforces my claim, that our faith in the constancy of nature is testable by science, and that science can (and currently does) tell us that the domain of this faith is wide, but not unlimited. Thus it is as much an outcome of our experience as a methodological input.

A third argument for IMN is that in its absence the possibility of invoking supernatural explanations may discourage the search for natural ones. This is a purely pragmatic argument, and I cannot imagine it having any real effect. Those who prefer supernatural explanations invoke them anyway. Millions of Americans believe humans to have arisen through a special supernatural act, but this is not for lack of a naturalistic explanation. Intelligent Design creationists argue that undirected evolution cannot possibly generate new information, or that protein sequences are too improbable to have arisen naturally. Young Earth creationists, a separate group (although in the UK the two groups strongly overlap), point to anomalies in radiometric dating, or to polonium halos in rocks that did not contain polonium’s ultimate parent, uranium, and claim that these somehow cause the naturalistic account of earth’s geological history to unravel. This they do because of their prior commitment to mystification. Debunking their nonsense is a proper matter for science, and the talkorigins website has a very useful page listing numerous such claims and their rebuttals,[5] although experience shows that mere refutation will not stop their proponents from repeating them. And there are important unsolved problems, such as the origin of life, which some claim as evidence for supernatural intervention, but I do not think that any scientist interested in the topic would be so easily fobbed off. In any case, defining their activities as unscientific would not make the supernaturalists disappear. On the contrary; they would (and do; see above) triumphantly hail such definitions as proof that we impose arbitrary limitations on our science.

There are more technical arguments, which boil down to the untestability of supernatural explanations. But we already have the rule that science deals with the (in principle) testable, so there is no need to invoke IMN. And finally there is the argument from legal expediency, which I maintain is both unnecessary and double-edged.

Unnecessary. Judge Jones famously ruled in Kitzmiller v. Dover Area School District that Intelligent Design (ID) is not science, but a form of religiously motivated creationism, thus barring it from publicly funded schools in the US. What is primary here is the ruling that it is religion; the finding that it is not science is secondary. The ID argument from the design of the eye is not science, because it is immunized against scientific examination. But the ID argument from the irreducible complexity of the bacterial flagellum is science. Just hopelessly wrong science,[6] as shown by the piles of scientific documents produced in court, and the persistence of ID’s advocates in this wrongheadedness was also accepted as evidence that ID’s agenda is religious. The distinction, if there is one, between bad science and not-science was immaterial.

Double-edged. There is a real cost to the ruling that ID is religion, and schools in the UK are paying that cost right now. While ID is officially shut out of the science lab, at least in state-funded schools in England (the situation in Scotland is less clear), it is infiltrating the Religious Education classroom, by way of such materials as The God Question, and RE teachers will be less able (and in some cases less willing) than their Science colleagues to dispose of its pretensions.

In short, IMN is untrue and carries a heavy rhetorical cost to science. But everything that can be accomplished by including IMN in our definition of science, and then appealing to that definition as a criterion, can be accomplished on its own merits by less circuitous means. So let’s cut out the middleman.

Instead, I would appeal once more to Laplace, who took as examples such purported phenomena as animal magnetism, dowsing, and solar and lunar influences on mood, and concluded:


We are so far from knowing all the agents of nature and their diverse modes of action that it would not be philosophical to deny phenomena solely because they are inexplicable in the actual state of our knowledge. But we ought to examine them with an attention all the more scrupulous as it appears more difficult to admit them, and it is here that the calculation of probabilities becomes indispensable, to decide to what point one must multiply observations or experiments, in order to obtain for the agents that they indicate a probability that outweighs the reasons we would otherwise have against admitting them.


Or, in the abbreviated form that has come down to us through Carl Sagan,

Extraordinary claims require extraordinary evidence.

That’s all we need.
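(A toy illustration of my own, not part of the original argument: Laplace’s prescription translates naturally into the odds form of Bayes’ theorem, in which posterior odds = prior odds × likelihood ratio. All the numbers below are invented purely for the example.)

```python
# Bayes' rule in odds form: a sketch of "extraordinary claims require
# extraordinary evidence". All numbers here are invented for illustration.

def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# An ordinary claim: prior odds 1 to 10. Modest evidence (a likelihood
# ratio of 20) makes it more likely true than not.
print(posterior_odds(0.1, 20))     # 2.0

# An extraordinary claim: prior odds 1 to 1,000,000. The same modest
# evidence leaves it overwhelmingly improbable...
print(posterior_odds(1e-6, 20))    # 2e-05
# ...and only extraordinary evidence tips the balance.
print(posterior_odds(1e-6, 1e7))   # 10.0
```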

I thank Maarten Boudry and Stephen Law for helpful discussions. Posthumous portrait of Laplace by Guérin, via here. An earlier version of this piece originally appeared in 3 Quarks Daily.

1] Capital letters for Creator and Designer because I do not wish to collude in the polite fiction that the Intelligent Design programme is anything other than an argument for the existence of God. Separate technical questions have been raised about the validity of the statistical argument from fine-tuning, but these do not affect my argument.

2] Personal communication, ca. 2006

3] Consider the mutation that made Queen Victoria, grandmother of the last Tsarevich, a carrier of haemophilia, and what difference this might have made to Russian history.

4] For some believers, the Mass might be a counter-example. But since the claimed miraculous transubstantiation changes no accidental (i.e. observable) properties, it is irrelevant to science.

5] Alternating mutation and selection can and demonstrably does generate new information; protein sequences have considerable flexibility and do not arise in a single step; polonium halos in uranium-free rocks can be traced to the diffusion of radon; dating anomalies are exceptional and indeed informative, since they can be traced to heating episodes and other post-depositional events; and so on.
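(An illustrative aside of my own, in the spirit of Dawkins’ “weasel” demonstration rather than anything in the original footnote: a minimal sketch of how alternating mutation and selection accumulates matches to a target far faster than single-step chance could.)

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "   # 26 letters plus space

def mutate(s: str, rate: float = 0.05) -> str:
    """Copy the string, changing each character with a small probability."""
    return "".join(random.choice(CHARS) if random.random() < rate else c for c in s)

def score(s: str) -> int:
    """Number of characters matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

current = "".join(random.choice(CHARS) for _ in TARGET)   # random start
generations = 0
while score(current) < len(TARGET):
    # Selection: keep the best of the parent and 100 mutated copies.
    current = max([current] + [mutate(current) for _ in range(100)], key=score)
    generations += 1
print(generations, current)
# Typically converges within a few hundred generations; blind single-step
# chance would need of the order of 27**28 attempts.
```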

6] Since there is an excellent scientific, indeed Darwinian, explanation in terms of exaptation. Although if this is excluded by moving the goalposts, a typical ID ploy, perhaps we have again moved into the domain of non-science.

Boyle’s Law is not a principle; so does the UK Education Secretary know what a principle is?

What [students] need is a rooting in the basic scientific principles, Newton’s laws of thermodynamics and Boyle’s law. [Education Secretary Michael Gove, reported here].

He has been justly mocked for confusing Newton’s laws with the laws of thermodynamics (e.g. here and here and, by me, here). But the kind of ignorance involved in describing Boyle’s Law as a “basic scientific principle” is far more damaging.

Disclosure: I taught Boyle’s Law for over 40 years, and it gets three index entries in my book, From Stars to Stalagmites.

Bottom line: Boyle’s Law is not basic. It is a secondary consequence of the kinetic theory of gases, which is basic. The difference is enormous, and matters. Anyone who thinks that Boyle’s Law is a principle doesn’t know what a principle is. (So Gove doesn’t know what a principle is? That figures.)

Reasoning: Boyle’s Law states that if you double the pressure on a sample of gas, you will halve its volume. Boyle thought this was because the molecules of gas repel each other, so that it takes more pressure to push them closer together, and Newton put this idea on a mathematical footing, proposing a repulsion between neighbouring molecules inversely proportional to the distance between them, rather as his inverse square law described gravitational attraction. They were wrong.

Mathematically, the Law is simply stated, which may be why Mr Gove thinks it is basic: volume is inversely proportional to pressure, which gives you a nice simple equation (P × V = constant) that even a Cabinet Minister can understand. But on its own, it is of no educational value whatsoever. It only acquires value if you put it in its context, but this involves a concept of education that seems to be beyond his understanding.
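(A worked example, with numbers of my own choosing: since P × V is fixed for a given sample at a given temperature, halving the volume doubles the pressure.)

```python
# Boyle's Law as arithmetic (illustrative numbers only).
P1, V1 = 100.0, 2.0    # initial pressure in kPa, volume in litres
k = P1 * V1            # the constant for this sample: 200 kPa·L
V2 = 1.0               # compress to half the volume...
P2 = k / V2            # ...and the pressure doubles
print(P2)              # 200.0 kPa
```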

Now to what is basic. Boyle’s Law is now explained using the kinetic theory of gases. This describes a gas as a whole lot of molecules, of such small volume compared to their container that we can think of them as points, each wandering around doing its own thing, and, from time to time, bouncing off the walls. It is the impact of these bounces that gives rise to pressure. If you push the same number of molecules (at the same temperature) into half the volume, each area of wall will get twice as many bounces per second, and so will experience twice the pressure. Pressure × volume remains constant.
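(The same point in kinetic-theory terms, in a sketch of my own: for an ideal gas of N point molecules at temperature T, the theory gives P = NkT/V, so at fixed N and T the product P × V cannot change.)

```python
# Ideal-gas pressure from kinetic theory: P = N*k*T/V.
k_B = 1.380649e-23      # Boltzmann constant, J/K

def ideal_pressure(N: float, T: float, V: float) -> float:
    """Pressure in pascals for N molecules at temperature T (K) in volume V (m^3)."""
    return N * k_B * T / V

N, T = 2.7e22, 300.0    # roughly a litre's worth of gas at room temperature
for V in (1e-3, 0.5e-3):            # one litre, then half a litre
    P = ideal_pressure(N, T, V)
    print(V, P, P * V)              # P doubles; P*V stays the same
```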

Actually, Boyle’s Law isn’t even true. Simple kinetic theory neglects the fact that gas molecules attract each other a little, making the pressure less than what the theory tells you it ought to be. And if we compress the gas into a very small volume, we can no longer ignore the volume taken up by the actual molecules themselves.
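(To put numbers on that deviation, a sketch of my own using the van der Waals equation, in which the constant a corrects for intermolecular attraction and b for the volume of the molecules themselves; the constants below are the standard ones for carbon dioxide.)

```python
# Van der Waals versus ideal behaviour for one mole of CO2.
R = 8.314                  # gas constant, J/(mol*K)
a, b = 0.3640, 4.267e-5    # van der Waals constants for CO2, SI units

def ideal(n: float, T: float, V: float) -> float:
    return n * R * T / V

def vdw(n: float, T: float, V: float) -> float:
    """(P + a*n^2/V^2)(V - n*b) = n*R*T, rearranged for P."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

n, T = 1.0, 300.0
for V in (0.025, 1e-3):    # ~atmospheric pressure, then one litre per mole
    print(V, ideal(n, T, V), vdw(n, T, V))
# Near atmospheric pressure the two agree to within about half a percent;
# squeezed into one litre, the attraction term and the molecules' own
# volume make the pressure about 10% lower than Boyle's Law predicts.
```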

So what does teaching Boyle’s Law achieve? Firstly, a bit of elementary algebra that gives clear answers, and that can be used to bully students if, as so often happens, they meet it in science before they have been adequately prepared in their maths classes. This, I suspect, is the aspect that Gove finds particularly appealing. Secondly, some rather nice experiments involving balancing weights on top of sealed-off syringes. Thirdly, insight into how to use a mathematical model and, at a more advanced level, how to allow for the fact that real gases do not exactly meet its assumptions. Fourthly, a good example of how the practice of science depends on the technology of the society that produces it. In this case, seventeenth-century improvements in glassmaking made it possible to construct tubes of uniform cross-section, which were needed to measure volumes of gas accurately. Fifthly … but that’s enough to be going on with. Further elaboration would lead us on to introductory thermodynamics, ironically enough, given the interview that started this discussion.

Educationally, context is everything, the key to understanding and to making that understanding worthwhile. A person who decries the study of context is unfit for involvement with education.

Even at Cabinet level.

UK Education Secretary says students need to know how Newton invented thermodynamics [!]

I would like to say that Michael Gove shows a knowledge of what counts as basic science that is some 300+ years out of date, but that would be too kind.

Gove said there had been previous attempts to make science relevant, by linking it to contemporary concerns such as climate change or food scares. But he said: “What [students] need is a rooting in the basic scientific principles, Newton’s laws of thermodynamics and Boyle’s law.” [Times interview, reported here]

As many readers will know, but the Education Secretary clearly doesn’t, Newton’s laws describe the motion of individual particles. Thermodynamics is intrinsically statistical, and was developed over a century after Newton’s death. Boyle’s Law is not a basic scientific principle, although it is a corollary of the basic principles followed by (ideal) gases. And here we have someone ignorant of these elementary facts, in a position of enormous power, telling the schools how to teach, and the examination boards how to examine.

And in this same interview, he says he wants schools to form chains and brands, like businesses. Satire falls silent.
