Friday, September 26, 2014

Science needn't hide its mistakes

An ex-Nature editor says that peer review is dead? I hope that isn’t what you’d be left thinking from my Comment in the Guardian (below). We need peer review. It is flawed in all kinds of ways – I think, with the “replication crisis” in the scientific literature, we’re starting to appreciate just how flawed – but it remains a valuable way of determining what is publishable. My point is that there are also other valid ways of presenting science these days, arXiv certainly being one of them. Some folks worry about how bad science might get an airing if peer review isn’t seen as an obligatory gatekeeper – but my God, have you ever looked at what gets published already? Peer review is mostly a good way of making mediocre science less error-prone, not of preventing the dissemination of grandiose dross. I’d prefer to think that people can be taught to understand that, if something hasn’t been peer-reviewed, it should be approached with a pinch of salt and with the knowledge that one needs to hear the assessments of other experts too. It’s possible that relaxing a blanket insistence on only announcing peer-reviewed work would make people less likely to get taken in, not more so, because they would come to realise that science is always contingent and liable to be wrong, whether it is peer-reviewed or not, and that peer review isn’t a guarantee of veracity. The filtering process in science is a many-staged one, which includes the post-publication assessment of peers and the longer-term sieve of history – it’s not something that happens, or ought to happen, all at once. With blogs, preprint servers, social media and so forth, this is much more true now than it was 20 years ago, and we need to recognize that.

As for the BICEP2 results themselves, it does seem that the team was rather hasty and sloppy in not waiting for the Planck data but apparently basing their assessment of the dust issue on preliminary findings presented at a conference. But this is no great sin. I’m pleased that people have been able to see that scientists like this are grappling with these huge and difficult questions, and that there are ways we can look for answers, and that sometimes we’ll get wrong ones. Our best protection against oversold and misleading claims is to admit that scientists can make blunders, because they are just people doing their best to figure out these difficult and amazing questions, not priests handing down answers written in stone. So anyway: here it is.


It was announced in headlines worldwide as one of the biggest scientific discoveries for decades, sure to garner Nobel prizes. But now it looks very likely that the alleged evidence of both gravitational waves and the ultra-fast expansion called inflation in the Big Bang has literally turned to dust. Last March a team using a telescope called BICEP2 at the South Pole claimed to have read the signatures of these two elusive phenomena in the twisting patterns of the cosmic microwave background radiation, the afterglow of the Big Bang. The latest results from an international consortium using a space telescope called Planck show that BICEP2’s data is very likely to have come not from the microwave background at all, but from warm dust scattered through our own galaxy.

Some will regard this as a huge embarrassment, not only for the BICEP2 team but for science itself. As evidence against the earlier claims has mounted over recent months, some researchers have already criticized the team for making a premature announcement to the press before their work had been properly peer-reviewed.

But there’s no shame here. On the contrary, this episode is good for science. This sequence of excitement followed by deflation, debate and controversy is perfectly normal – it’s just that in the past it would have happened out of the public gaze. Only when the dust had settled would a sober and sanitized version of events have been reported, if indeed there was anything left to report.

That has been the Standard Model of science ever since the media first acknowledged it. A hundred years ago, headlines in the New York Times had all the gravitas of a papal edict: “Men of Science Convene” and so forth. They were authoritative, decorous, and totally contrived. That image started to unravel after James Watson published The Double Helix, his scurrilous behind-the-scenes account of the pursuit of the structure of DNA. But even now, some scientists would prefer the mask to remain, insisting that results are only announced after they have passed “peer review”: that is, been checked by experts and published in a reputable journal.

There are many reasons why this will no longer wash. Those days of deference to patrician authority are over, probably for the better. We no longer take on trust what we are told by politicians and leaders, experts and authorities. There are hazards to such skepticism, but good motivations too. Few regret that the old “public understanding of science” model – spoon-feeding facts to the ignorant masses – has been replaced with attempts to engage and include the public.

But science itself has changed too. Information and communications technologies mean that, not only is it all but impossible to keep hot findings under wraps, but few even try. In physics in particular, researchers put their papers on publicly accessible preprint servers before formal publication so that they can be seen and discussed, while specialist bloggers give new claims an informal but often penetrating analysis. This enriches the scientific process, and means that problems can be spotted and debated that “peer reviewers” for journals might not notice. Peer review is highly imperfect anyway – a valuable check, but far from infallible and notoriously conservative.

It is because of these new models of dissemination that we were all able to enjoy the debate in 2011 about particles called neutrinos that were alleged to travel faster than light, in defiance of the theory of special relativity. Those findings were announced, disputed, and finally rejected, all without any papers being formally published. The arguments were heated but never bitter, and the public got a glimpse of science at its most vibrant: astonishing claims mixed with careful deliberation, leading ultimately to a clear consensus. How much more informative it was than the tidy fictions that published papers often become.

Of course, there will always be dangers in “publication by press conference”, especially if the findings relate to, say, human health. All the more reason for us to become more realistic, informed and grown-up in assessing science: to listen to what other experts say, to grasp the basic arguments, and not just to be seduced by headlines. Researchers who abuse the process will very quickly feel the heat.

Aren’t some premature announcements just perfidious attempts to grab priority, and thereby fame and prizes? Probably – and this exposes how distorted the reward systems of science can be. It’s time we stopped awarding special status to people who, having more resources or leverage with editors or just plain luck, are first past a post that everyone else is stampeding towards. Who cares? Rewards in science should be for sustained creative thinking, insight, experimental ingenuity, not for being in the right place at the right time. A bottle of bubbly will suffice for that.

What, then, of gravitational waves? If, as it seems, BICEP2 never saw them bouncing from the repercussions of the Big Bang, then we’re back to looking for them the hard way, by trying to detect the incredibly tiny distortions they should introduce in spacetime as they ripple past. Now the BICEP2 and Planck teams are pooling their data to see if anything can be salvaged. Good on them. Debate, discussion, deliberation: science happening just as it should.

Thursday, September 25, 2014

Whatever happened to the heroes?

Here for the record is my article published yesterday on the Guardian History of Science blog (The H Word). Seems you get a better class of commenter there than on Comment is Free, which is nice – some thoughtful responses. This piece is a kind of trailer for the paperback publication of Serving the Reich by Vintage.


Scientists’ historical opposition to ideological manipulation has mostly been feeble at best. The failings are not individual but institutional.

“Unhappy is the land that needs a hero”, Galileo tells his disillusioned former student Andrea in Bertolt Brecht’s Life of Galileo, after he has recanted his heliocentric theory of the cosmos. Andrea thought that Galileo would martyr himself, but faced with the rack and thumbscrews the astronomer didn’t hesitate to sign a recantation. “I was afraid of physical pain”, he admits.

Galileo’s reputation hasn’t suffered for that weakness. Heedless of Brecht’s admonition, science makes Galileo a hero and martyr persecuted by the cruel and ignorant Church. What’s more, it’s often implied that his fate might have been shared by anyone who, from the Middle Ages to the early Enlightenment, dared to advocate Copernicus’s theory that the Sun, not the Earth, lay at the centre of the heavens. It’s still widely believed that the Italian friar Giordano Bruno was burnt at the stake for holding that view, 33 years before Galileo recanted.

Historians of science oscillate between exasperation and resignation at the fact that nothing they say seems able to dislodge these convictions. They can point out that Copernicus’s book, published in 1543, elicited little more than mild disapproval from the Church for almost a century before Galileo’s trial. They can explain that Bruno’s cosmological ideas constituted a rather minor part of the heretical charges made against him. They can show that it was Galileo’s provocative style and personality – his readiness to lampoon the Pope, say – that landed him in trouble, and that he was wrong anyway in some of his astronomical theories and disputes with clerics (on tides and comets, for instance). They can reveal that the conventional narrative of science versus the Church was largely the polemical invention of John William Draper and Andrew Dickson White in the late nineteenth century. It makes no difference. In the “battle for reason”, science must have its heroic martyrs.

Is this perhaps because they are so hard to find? For over the course of history, science’s resistance to ideological intervention and manipulation has been rather feeble. One of my most disillusioning realizations while researching my book Serving the Reich was how little scientists in Germany did to oppose the Nazis.

They were of course in an extreme and hazardous situation, yet several German artists, writers (even journalists!), industrialists and, yes, religious leaders voiced criticisms that were nowhere to be found among scientists. The Austrian scientific editor Paul Rosbaud, who himself showed extraordinary bravery working as a spy for the Allies, noted how scientists at Göttingen University vowed to “rise like one man to protest” if the Nazis dared to dismiss their “non-Aryan” colleagues – and yet when it happened, they all seemed to forget this intention, and some even condemned those who resisted their dismissal.

If few scientists in Germany found the fortitude to show active resistance to the Nazis, that partly reflects how rare physical courage is – and who are we to judge them for that? But this isn’t really the issue. Those German scientists who had no sympathy for the National Socialists didn’t just stay silent to save their own skins and careers; they considered it their duty to do so. You could grumble in private, but as a professional academic one was expected to remain “apolitical”, a loyal and patriotic servant of the state. When Einstein denounced the Nazi laws publicly, he was vilified as a traitor to his country, an “atrocity-mongerer” who deserved to be expelled from scientific institutions.

This attitude explains much of the post-war silence of the German scientists. It’s not just that they lacked the honesty and self-awareness to confess, as the Dutch physicist Hendrik Casimir did, that they were held back by fear; most of them didn’t even feel there was a case to be answered. Their aim, they insisted, had been simply to “stay true to science”: an aspiration that became a shield against any recognition of broader civic responsibilities.

This is where danger still lies. Individual scientists are, in my experience, at least as principled, politically engaged and passionate as any other members of society. The passivity that historian Joseph Haberer deplored in 1969 – in which scientists merely offer their technical advice to the prevailing political system – seems instead to stem from science as an institution.

It isn’t just that science has in general lacked structures for mounting effective resistance to political and ideological interference. Until recently, many scientists still saw it as a virtue to avoid “political” positions. The Observer’s “Rational Heroes” column asks scientists why so few of them go into politics; physicist Steven Weinberg’s triumphant answer was that in science “you can sometimes be sure that what you say is true”. The implication is that science occupies a higher plane, unsullied by the compromised dissembling of politics.

This was Werner Heisenberg’s view too, and it enabled him to turn a blind eye to the depravities of the Nazis and to advance his career in Germany without exactly supporting them. “We should conscientiously fulfil the duties and tasks that life presents to us without asking much about the why or the wherefore”, he wrote.

At the top of many scientists’ political agenda are not political questions as such but demands for more funding. They should beware the example of the German physicists like Heisenberg who triumphantly proclaimed their cleverness at getting money out of the Nazis, whereas in fact Himmler and Goering were perfectly happy to fund tame academics. “Whether they support the regime or not”, a group of leading science historians has written recently, “most scientists, or perhaps better put [indeed!], scientific communities, will do what they have to in order to be able to do science.”

When inspiring opponents of political repression, such as Andrei Sakharov and Fang Lizhi, have arisen from the ranks of scientists, it has been their personal courage, not their beliefs in the role of science in society, that has sustained them, and they were afforded no official backing from the scientific bodies of their countries.

Because science works best when it is approached without prejudices (as far as that is humanly possible), it is tempting to equate this operational prerequisite with freedom of thought more generally. Yet not only does science have no monopoly on that, but it risks deluding itself if it elevates prickly, brilliant iconoclasts to the status of champions of free speech. History gives no support to that equation.

Wednesday, September 24, 2014

Sympathy for the devil

I have two half-Italian friends who have independently decided to flee that country, partly in despair at the state it’s in. The science magazine Sapere is trying to restore a little intellectual culture, and I'm glad to contribute a regular column on music cognition. Here is the latest installment.


Many people who dislike the atonal music of composers such as Arnold Schoenberg say that it’s because their works are full of harsh dissonances: notes that sound horrible together. Schoenberg argued that dissonance is just a matter of convention: there’s nothing intrinsically wrong with it, it’s just that we’re not used to it.

The truth is a bit of both. Some dissonance really is convention: in the Middle Ages, a major third chord (C and E, say) was considered dissonant, but by Mozart’s time it was perfectly harmonious. But there’s also a “sensory dissonance” that stems from the basic physics of sound. If two pure tones very close in acoustic frequency are played together, the sound waves interfere to create a rattle-like sensation called roughness, which is genuinely grating. This seems to imply that any notes should sound okay as long as they’re not close in pitch. But because instruments and voices produce overtones with a whole range of frequencies, you have to add up all the possible combinations to figure out how “rough” two notes will sound together. The nineteenth-century German scientist Hermann von Helmholtz was the first to do this, and modern calculations confirm his findings: perfect fifth chords (C-G) and octaves have very little sensory dissonance, but all other two-note combinations have much the same roughness except for the minor second (C-C#), which has a lot.
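Helmholtz’s bookkeeping is easy to reproduce today. The Python sketch below sums the roughness of every pair of overtones for two complex tones, using the later Plomp–Levelt “roughness curve” in the parameterization popularized by William Sethares rather than Helmholtz’s own formula; the harmonic amplitudes (falling off as 1/k) and the constants are illustrative assumptions, not measurements. It recovers the ranking described above: a perfect fifth comes out far smoother than a minor second.

```python
import math

def pair_roughness(f1, a1, f2, a2):
    # Plomp-Levelt dissonance of two pure tones (Sethares' empirical fit).
    # Roughness is zero for coinciding frequencies, peaks for small
    # separations, and decays once the tones are well separated.
    f_lo, f_hi = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.021 * f_lo + 19.0)   # critical-bandwidth scaling
    x = s * (f_hi - f_lo)
    return a1 * a2 * (math.exp(-3.5 * x) - math.exp(-5.75 * x))

def dyad_roughness(base, ratio, n_harmonics=6):
    # Sum the roughness over all pairs of partials of two complex tones,
    # assuming amplitudes that fall off as 1/k with harmonic number.
    partials = [(k * base, 1.0 / k) for k in range(1, n_harmonics + 1)]
    partials += [(k * base * ratio, 1.0 / k) for k in range(1, n_harmonics + 1)]
    total = 0.0
    for i in range(len(partials)):
        for j in range(i + 1, len(partials)):
            total += pair_roughness(*partials[i], *partials[j])
    return total

fifth = dyad_roughness(261.6, 3 / 2)          # C4-G4
minor_second = dyad_roughness(261.6, 16 / 15)  # C4-C#4
print(fifth < minor_second)  # the fifth should come out much smoother
```

For the fifth, many overtones of the two notes coincide exactly and contribute no roughness at all; for the minor second, nearly every overtone of one note sits just beside an overtone of the other, squarely in the “rough” zone.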

So maybe Schoenberg was right! As long as we don’t play notes that are directly adjacent on the keyboard, shouldn’t any chord sound fine? Not so fast. Some researchers claim that we have an innate preference for the chords that are conventionally labelled consonant – that we like a fourth (C-F), say, more than a tritone (C-F#, often called the ‘devil’s interval’ and used to represent the demonic). These claims come from studies of very young infants, whose preferences about sounds can be judged from their attention or agitation. The idea is that, if the children are young enough, their preferences haven’t been conditioned by hearing lots of consonant nursery rhymes.

But is that so? Babies can hear sounds in the womb, and they learn voraciously. So it’s extremely hard to know whether any preferences are truly innate even in newborns. One study claimed to find a slight preference for consonance in two-day-old babies of deaf parents, who wouldn’t have heard their parents sing in the womb. But the evidence either way is marginal at best.

In any case, culture seems to overwrite any innate tastes in harmony. The ganga folk songs of Croatia use harmonized minor seconds that are usually deemed intrinsically dissonant, while Indonesian gamelan music uses tunings that sound jarring to Western ears. In comparison, Schoenberg doesn’t seem to be asking so much.

Saturday, September 20, 2014

The real drug dealers

What sort of company, then, is GSK? I ask because they seem to be trying very hard to convince us that Ben Goldacre actually gave Big Pharma an easy ride. I can’t help feeling that GlaxoSmithKline want to see how close they can get to the boundaries of downright evil before we begin to really care. The head of GSK China, Mark Reilly, has just pleaded guilty to charges of bribery and been given a three-year prison sentence. GSK has ensured that he gets deported so that he can serve the sentence here, because, you know, you can screw the Chinese in their own country but you don’t want to suffer their justice.

What’s he done? According to the Guardian, “The bribery case involved allegations that GSK sales executives paid up to 3bn yuan to doctors to encourage them to use its drugs… GSK was alleged to have used a network of more than 700 middlemen and travel agencies to bribe doctors and lawyers with cash and even sexual favours.” GSK is now saddled with a fine of comparable magnitude – close to £300m, approaching the GDP of a small nation.

In other words, it’s the old stuff – this is much the same as what GSK was fined $3bn for in the US back in 2012. After I wrote about that case, I was fairly stunned at the response of the former GSK CEO Richard Sykes, who, when approached for a comment by journalists – remember that this stuff happened on his watch – said that he couldn’t comment until he’d read more about the case in the papers. In other words, he knew no more about it than you or I did. I wasn’t sure which was worse: that he expected us to believe this, or that it might be true.

But that, it seems, is the way GSK management views these scandals. For a start, there’s a whiff here of old-style orientalism: this is the way they do business in those Eastern countries, so we might as well join in. But the response of Andrew Witty, GSK’s current CEO, is just as astonishing. “Reaching a conclusion in the investigation of our Chinese business is important, but this has been a deeply disappointing matter for GSK”, he said. Uh, give me that again? “Reaching a conclusion is important” – that is, it’s “important” that this case has ended? I’m still struggling to find any objective meaning in these words. “Deeply disappointing” – meaning what, exactly? Disappointing that you got caught? Yes, I can see that. Disappointing that the ruling went against you? You mean, after Reilly admitted he’d done it? Disappointing that you are being run in such a sociopathic way? Missing a friend’s birthday party is disappointing. Pushing drugs on people by bribing doctors is many things, but disappointing isn’t the word that springs to mind.

Witty goes on: “We have and will continue to learn from this.” A shred of comfort here: I can stop worrying about whether my daughter is being taught English well enough to prepare her for a successful career. That aside: you will learn from this? What will you learn? That you shouldn’t bribe doctors? That you should hide malpractice better? That you seem to be rather bad at selecting your senior management? No, there is no lesson to be learnt here. There is just stuff to be deeply ashamed of – more ashamed, even, than is evidenced by taking a quarter of a million cut in your two million quid annual bonus.

Ah, but GSK has learnt. “The company said it had fundamentally changed the incentive programme for its sales force.” In other words, whereas before the incentive programme made it all too tempting to commit crimes, now it doesn’t. Oh, the lessons life teaches us.

Wednesday, September 03, 2014

Upside down and inside out

Tomorrow a new exhibition by Peter Randall-Page opens at Pangolin London, called Upside Down & Inside Out. Peter has a long-standing interest in natural processes responsible for the appearance of pattern and form, inspired by the ideas of D’Arcy Thompson. It has been my privilege to write an essay for the catalogue of this exhibition, which is freely available online. Here’s the piece anyway.


There are, in the crudest of terms, two approaches to understanding the world. Some seek to uncover general, universal principles behind the bewildering accumulation of particulars; others find more enlightenment in life’s variety than in the simplifying approximations demanded in a quest for unity. The former are Platonists, and in science they tend to be found in greater numbers among physicists. The latter are Aristotelians, and they are best represented in biology. The Platonists follow the tree to its trunk, the Aristotelians work in the other direction, towards branch and leaf.

The work of artist and sculptor Peter Randall-Page explores these opposing – or perhaps one should say complementary – tendencies. He sees them in terms of the musical notion of theme and variation: a single Platonic theme can give rise to countless Aristotelian variations. The theme alone risks being static, even monotonous; a little disorder, a dash of unpredictability, generates enriching diversity, but that random noise must be kept under control if the result is not to become incomprehensible chaos. It is perhaps precisely because this tension obtains in evolution, in music and language, in much of our experience of life and world, that its expression in art has the potential to elicit emotion and identification from abstract forms. This balance of order and chaos is one that we recognize instinctively.

This is why Peter’s works commonly come as a series: they are multiple expressions of a single underlying idea, and only when viewed together do they give us a sense both of the fundamental generating principle and its fecund creative potential. The diversity depends on chance, on happy accidents or unplanned contingencies that allow the generative laws to unfold across rock or paper in ways quite unforeseen and unforeseeable. Like Paul Klee, Peter takes lines for a walk – but they are never random walks, there are rules that they must respect. And as with Klee, this apparent constraint is ultimately liberating to the imagination: given the safety net of the basic principles, the artist’s mind is free to play.

It might seem odd to talk about creativity in what is essentially an algorithmic process, an unfolding of laws. But it is hard to think of a better or more appropriate term to describe the “endless forms most beautiful” that we find in nature, and not just in animate nature. We could hardly fail to marvel at the inventiveness of a mind that could conceive of the countless variations on a theme that we observe in snowflakes, and it seems unfair to deny nature that inventiveness merely because we can see no need to attribute to her a mind, just as Alan Turing insisted that we have no grounds for denying a machine “intelligence” if we cannot distinguish its responses from those of a human.

This emergence of variety from simplicity is an old notion. “Nature”, wrote Ralph Waldo Emerson, “is an endless combination and repetition of a very few laws. She hums the old well-known air through innumerable variations.” When Emerson attested that such “sublime laws play indifferently through atoms and galaxies”, it is surely the word “play” that speaks loudest: there is a gaiety and spontaneity here that seems far removed from the mechanical determinism of which physics is sometimes accused. For Charles Darwin, one can’t help feeling that the Aristotelian diversity of nature – in barnacles, earthworms and orchids – held at least as much attraction as the Platonic principle of natural selection.

But one of Peter’s most inspirational figures was skeptical of an all-embracing Darwinism as the weaver of nature’s threads. The Scottish zoologist D’Arcy Thompson felt that natural selection was all too readily advanced as the agency of every wrinkle and rhythm of organic nature. The biologists of his time tended to claim that all shape, form and regularity was the way it was because of adaptation. If biology has a more nuanced view today, Thompson must take some of the credit. He argued that it was often physical and mechanical principles that governed nature’s forms and patterns, not some infinitely malleable Darwinian force. Yet at root, Thompson’s picture – presented in his encyclopaedic 1917 book On Growth and Form – was not so different from Darwin’s insofar as it posited some quite general principles that could give rise to a vast gallery of variations. Thompson simply said that those principles need not be Darwinian or selective, but could apply both to the living and the inorganic worlds. In this view, it should be no coincidence that the branching shapes of river networks resemble those of blood vessels or lung passages, or that a potato resembles a pebble, or that the filigree skeletal shell of a radiolarian echoes the junctions of soap films in a foam. Thompson was a pioneer of the field loosely termed morphogenesis: the formation of shape. In particular, he established the idea that the appearance of pattern and regularity in nature may be a spontaneous affair, arising from the interplay of conflicting tendencies. No genes specify where a zebra’s stripes are to go: if anything is genetically encoded, it is merely the biochemical machinery for covering an arbitrary form with stripes.

The exoskeleton of a radiolarian

It is a fascination with these ideas that gives nearly all of Peter’s works their characteristic and compelling feature: you can’t quite decide whether the impetus for these complex but curiously geometric forms came from biology or from elsewhere, from cracks and crystals and splashes. That ambiguity fixes the imagination, inviting us to decode the riddle. This dance between geometry and organism is immediately apparent in the monumental sculpture Seed commissioned by the Eden Project in Cornwall: an egg-shaped block of granite 13 feet high and weighing 70 tonnes, the surface of which is covered in bumps that you quickly discern to be as apparently orderly as atoms packed together in a crystal. But are they? These bumps adapt their size to the curvature of the surface, and you soon notice that they progress around the ovoid in spirals, recalling the arrangements of leaflets on a pine-cone or florets on a sunflower head. Can living nature really be so geometric? Certainly it can, for both of those plant structures, like the compartments on a pineapple, obey mathematical laws that have puzzled botanists (including Darwin) for centuries. These plant patterns are called phyllotaxis, and the reason for them is still being debated. Some argue that they are ordered by the constraints on the buckling and wrinkling of new stem tissue, others that there is a biochemical process – not unlike that responsible for the zebra’s stripes and the leopard’s spots – that generates order among the successively sprouting buds.
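The spiral packing seen on Seed can be mimicked with a remarkably short recipe. The sketch below implements Vogel’s 1979 model of sunflower phyllotaxis – each successive floret rotated by the “golden angle” of roughly 137.5° and pushed outward as the square root of its index – which is one standard mathematical description of these patterns, though not, of course, how the sculpture itself was made.

```python
import math

GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))  # about 137.5 degrees in radians

def sunflower_points(n):
    # Vogel's model: floret k sits at angle k * golden angle and at a
    # radius proportional to sqrt(k), which keeps the packing density
    # roughly uniform and produces the familiar interlocking spirals.
    points = []
    for k in range(1, n + 1):
        theta = k * GOLDEN_ANGLE
        r = math.sqrt(k)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

pts = sunflower_points(500)
print(len(pts))
```

Plot those 500 points and the clockwise and anticlockwise spiral families emerge of their own accord – a small demonstration of how much apparent organic intricacy a single simple rule can generate.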

Seed, by Peter Randall-Page, at the Eden Project, Cornwall, and the inspiration provided by pine cones.

The bulbous, raspberry-like surface of Seed was carved out of the pristine rock. But in nature such structures are typically grown from the inside outwards, the cells and compartments budding and swelling under the expansive pressures of biological proliferation. “Everything is what it is”, D’Arcy Thompson wrote, “because it got that way” – a seemingly obvious statement, but one that brings the focus to how it got that way: to the process of growth that created it. With this in mind, the bronze casts that Peter has created for this exhibition are also made “from the inside”. They are cast from natural boulders shaped by erosion, but Peter has worked the inner surfaces of the moulds using a special tool to scoop out hemispherical impressions packed like the cells of a honeycomb, so that the shapes cast from them follow the basic contours of the boulders while acquiring these new frogspawn-like cellular patterns on their surface. By subtracting material from the mould, the cast object is itself “grown”, emerging transformed and hitherto unseen from its chrysalis.

A new work by Peter Randall-Page (on the right) being cast at the foundry.

The organic and unfolding character of Peter’s work is nowhere more evident than in his “drawings” of branching, tree-like networks: Blood Tree, Sap River and Source Seed. These are made by allowing ink or wet pigment to flow under gravity across the paper in a quasi-controlled manner, so that not only does the flow generate repeated bifurcations but the branches acquire perfect mirror symmetry by folding the absorbent paper, just like the bilateral symmetry of the human body. The results are ordered, but punctuated and decorated with unique accidents. The final images are inverted so that the rivulets seem to stream upwards in increasingly fine filaments, defying gravity: a process of division without end, arbitrarily truncated and all emanating from a single seed. The inversion suggests growth and vitality, a reaching towards the infinite, although of course in real plants we know that these branches are echoed downwards in the traceries of the roots. There is irony too in the fact that, while sap does indeed rise from trunk to tip, driven by the evaporation of water from the leaf, water in a river network flows the other way, being gathered into the tributaries and converging into the central channel. Nature indeed makes varied use of these branching networks – and often for the same reason, that they are particularly efficient at distributing fluid and dissipating the energy of flow. But we must be vigilant in making distinctions as well as analogies in how they are used.

Peter Randall-Page, Blood Tree and Sap River V.

Were real trees ever quite so regular, however? Some of these look more like genealogies, a mathematically precise doubling of branch density by bifurcation in each generation – until, perhaps, the individual branches blur into a continuum. We could almost be looking at a circuit diagram or technical chart – and yet the splodgy irregularities of the channels warn us that there is still something unpredictable here, as though these are computer networks grown from bacteria (as indeed some researchers are attempting to do). If there can be said to be beauty in the images, it depends on this uncertainty: as Ernst Gombrich put it, the aesthetic sense is awakened by “a struggle between two opponents of equal power, the formless chaos, on which we impose our ideas, and the all-too-formed monotony, which we brighten up by new accents”.

The vision of the world offered by Peter Randall-Page is therefore neither Platonic nor Aristotelian. We might better describe it as Neoplatonic: as asserting analogies and correspondences between apparently unrelated things. This tendency, which thrived in the Renaissance and can be discerned in the parallels that Leonardo da Vinci drew between the circulation of blood and of natural waters in rivers, later came to seem disreputable: like so much of the occult philosophy, it attempted to connect the unconnected, relying on mere visual puns and resemblances without regard to causative mechanisms (or perhaps, mistaking those analogies for a kind of mechanism itself). But thanks to the work of D’Arcy Thompson, and now modern scientific theories of complexity and pattern formation, a contemporary Neoplatonism has re-emerged as a valid way to understand the natural world. There are indeed real, quantifiable and verifiable reasons why zebra stripes look like the ripples of windblown sand, or why both the Giant’s Causeway and the tortoise shell are divided into polygonal networks. When we experience these objects and structures, we experience what art historian Martin Kemp has called “structural intuitions”, which are surely what the Neoplatonists were responding to. And these intuitions are what Peter’s work, with all its intricate balance of order and randomness, awakens in us.

To find out more: see Peter Randall-Page, “On theme and variation”, Interdisciplinary Science Reviews 38, 52-62 (2013) [here].

Saturday, August 30, 2014

When and why does biology go quantum?

Here is my latest Crucible column for Chemistry World. Do look out for Jim and Johnjoe’s book Life on the Edge, which very nicely rounds up where quantum biology stands right now – and Jim has just started filming a two-parter on this (for BBC4, I believe).


“Quantum biology” was always going to be a winning formula. What could be more irresistible than the idea that two of the most mysterious subjects in science – quantum physics and the existence of life – are connected? Indeed, you get the third big mystery – consciousness – thrown in for good measure, if you accept the highly controversial suggestion by Roger Penrose and Stuart Hameroff that the quantum behaviour of protein filaments called microtubules is responsible for the computational capability of the human mind [1].

Chemists might sigh that once again those two attention-grabbers, physics and biology, are appropriating what essentially belongs to chemistry. For the fact is that all of the facets of quantum biology that are so far reasonably established, or at least well grounded in experiment and theory, are chemical ones. Arguably the most mundane, but at the same time the least disputable, area in which quantum effects make their presence felt in a biological context is enzyme catalysis, where quantum tunnelling processes operate during reactions involving proton and electron transfer [2]. It also appears beyond dispute that photosynthesis involves transfer of energy from the excited chromophore to the reaction centre in an excitonic wavefunction that maintains a state of quantum coherence [3,4]. It still seems rather staggering to find in the warm, messy environment of the cell a quantum phenomenon that physicists and engineers are still struggling to harness under cryogenic conditions for quantum computing. The riskier reaches of quantum biology also address chemical problems: the mechanism of olfaction (proposed to happen by sensing of odorant vibrational spectra using electron tunnelling [5]) and of magnetic direction-sensing in birds (which might involve quantum entanglement of electron spins on free radicals [6]).

Yet it is no quirk of fate that these phenomena are sold as a union of physics and biology, bypassing chemistry. For as Jim Al-Khalili and Johnjoe McFadden explain in a forthcoming comprehensive overview of the field, Life on the Edge (Doubleday), the first quantum biologists were pioneers of quantum theory: Pascual Jordan, Niels Bohr and Erwin Schrödinger. Bohr was never shy of pushing his view of quantum theory – the Copenhagen interpretation – into fields beyond physics, and his 1932 lecture “Light and Life” seems to have been influential in persuading Max Delbrück to turn from physics to genetics, on which his work later won him a Nobel Prize.

But it is Schrödinger’s contribution that is probably best known, for the notes from his lectures at Trinity College Dublin that he collected into his little 1944 book What Is Life? remain remarkable for their prescience and influence. Most famously, Schrödinger here formulated the idea that life somehow opposes the entropic tendency towards dissolution – it feeds on negative entropy, as he put it – and he also argued that genetic information might be transmitted by an arrangement of atoms that he called an “aperiodic crystal” – a description of DNA, whose structure was decoded nine years later (partly by another former physicist, Francis Crick), that still looks entirely apt.

One of the most puzzling of biological facts for Schrödinger was that genetic mutations, which were fundamentally probabilistic quantum events on a single-atom scale, could become fixed into the genome and effect macroscopic changes of phenotype. By the same token, replication of genes (which was understood before Crick and Watson revealed the mechanism) happened with far greater fidelity than one should expect from the statistical nature of molecular interactions. Schrödinger reconciled these facts by arguing that it was the very discreteness of quantum events that gave them an accuracy and stability not amenable to classical continuum states.

But this doesn’t sound right today. For the fact is that Schrödinger was underestimating biology. Far from being at the mercy of replication errors incurred by thermal fluctuations, cells have proof-reading mechanisms to check for and correct these mistakes.

There is an equal danger that quantum biologists may overestimate biology. For it’s all too tempting, when a quantum effect such as tunneling is discovered in a biological process, to assume that evolution has put it there, or at least found a way to capitalize on it. Tunnelling is nigh inevitable in proton transfer; but if we want to argue that biology exploits quantum physics here, we need to ask if its occurrence is enhanced by adaptation. Nobel laureate biochemist Arieh Warshel has rejected that idea, calling it a “red herring” [7].

Similarly in photosynthesis, it’s not yet clear if quantum coherence is adaptive. It does seem to help the efficiency of energy transfer, but that might be a happy accident – Graham Fleming, one of the pioneers in this area, says that it may be simply “a byproduct of the dense packing of chromophores required to optimize solar absorption” [8].

These are the kind of questions that may determine what becomes of quantum biology. For its appeal lies largely with the implication that biology and quantum physics collaborate, rather than being mere fellow travellers. We have yet to see how far that is true.

1. R. Penrose, Shadows of the Mind (Oxford University Press, 1994).
2. A. Kohen & J. P. Klinman, Acc. Chem. Res. 31, 397 (1998).
3. G. S. Engel et al., Nature 446, 782 (2007).
4. H. Lee, Y.-C. Cheng & G. R. Fleming, Science 316, 1462 (2007).
5. L. Turin, Chem. Senses 21, 773 (1996).
6. E. M. Gauger, E. Rieper, J. J. L. Morton, S. C. Benjamin & V. Vedral, Phys. Rev. Lett. 106, 040503 (2011).
7. P. Ball, Nature 431, 396 (2004).
8. P. Ball, Nature 474, 272 (2011).

Thursday, August 07, 2014

Calvino's culturomics

Italo Calvino’s If On a Winter’s Night a Traveller is one of the finest and funniest meditations on writing that I’ve ever read. It also contains a glorious pre-emptive critique of what began as Zipf’s law and is now called culturomics: the statistical mining of vast bodies of text for word frequencies, trends and stylistic features. What is so nice about it (apart from the wit) is that Calvino seems to recognize that this approach is not without validity (and I agree that it has some), while at the same time commenting on the gulf that separates this clinical enumeration from the true craft of writing – and for that matter, of reading. I am going to quote the passage in full – I don’t know what copyright law might have to say about that, but I am trusting to the fact that anyone familiar with Calvino’s book would be deterred from trying to enforce ownership of the text by the baroque level of irony that would entail.


[From Vintage edition 1998, translated by William Weaver]

I asked Lotaria if she has already read some books of mine that I lent her. She said no, because here she doesn’t have a computer at her disposal.

She explained to me that a suitably programmed computer can read a novel in a few minutes and record the list of all the words contained in the text, in order of frequency. “That way I can have an already completed reading at hand,” Lotaria says, “with an incalculable saving of time. What is the reading of a text, in fact, except the recording of certain thematic recurrences, certain insistences of forms and meanings? An electronic reading supplies me with a list of the frequencies, which I have only to glance at to form an idea of the problems the book suggests to my critical study. Naturally, at the highest frequencies the list records countless articles, pronouns, particles, but I don’t pay them any attention. I head straight for the words richest in meaning; they can give me a fairly precise notion of the book.”

Lotaria brought me some novels electronically transcribed, in the form of words listed in the order of their frequency. “In a novel of fifty to a hundred thousand words,” she said to me, “I advise you to observe immediately the words that are repeated about twenty times. Look here. Words that appear nineteen times:
“blood, cartridge belt, commander, do, have, immediately, it, life, seen, sentry, shots, spider, teeth, together, your…”
“Words that appear eighteen times:
“boys, cap, come, dead, eat, enough, evening, French, go, handsome, new, passes, period, potatoes, those, until…”

“Don’t you already have a clear idea what it’s about?” Lotaria says. “There’s no question: it’s a war novel, all actions, brisk writing, with a certain underlying violence. The narration is entirely on the surface, I would say; but to make sure, it’s always a good idea to take a look at the list of words used only once, though no less important for that. Take this sequence, for example:
“underarm, underbrush, undercover, underdog, underfed, underfoot, undergo, undergraduate, underground, undergrowth, underhand, underprivileged, undershirt, underwear, underweight…”

“No, the book isn’t completely superficial, as it seemed. There must be something hidden; I can direct my research along these lines.”

Lotaria shows me another series of lists. “This is an entirely different novel. It’s immediately obvious. Look at the words that recur about fifty times:
“had, his, husband, little, Riccardo (51) answered, been, before, has, station, what (48) all, barely, bedroom, Mario, some, Times (47) morning, seemed, went, whom (46) should (45) hand, listen, until, were (43) Cecilia, Delaia, evening, girl, hands, six, who, years (42) almost, alone, could, man, returned, window (41) me, wanted (40) life (39)”

“What do you think of that? An intimatist narration, subtle feelings, understated, a humble setting, everyday life in the provinces … As a confirmation, we’ll take a sample of words used a single time:
“chilled, deceived, downward, engineer, enlargement, fattening, ingenious, injustice, jealous, kneeling, swallow, swallowed, swallowing…”

“So we already have an idea of the atmosphere, the moods, the social background… We can go on to a third book:
“according, account, body, especially, God, hair, money, times, went (29) evening, flour, food, rain, reason, somebody, stay, Vincenzo, wine (38) death, eggs, green, hers, legs, sweet, therefore (36) black, bosom, children, day, even, ha, head, machine, make, remained, stays, stuffs, white, would (35)”

“Here I would say we’re dealing with a full-blooded story, violent, everything concrete, a bit brusque, with a direct sensuality, no refinement, popular eroticism. But here again, let’s go on to the list of words with a frequency of one. Look, for example:
“ashamed, shame, shamed, shameful, shameless, shames, shaming, vegetables, verify, vermouth, virgins…”

“You see? A guilt complex, pure and simple! A valuable indication: the critical inquiry can start with that, establish some working hypothesis…What did I tell you? Isn’t this a quick, effective system?”

The idea that Lotaria reads my books in this way creates some problems for me. Now, every time I write a word, I see it spun around by the electronic brain, ranked according to its frequency, next to other words whose identity I cannot know, and so I wonder how many times I have used it, I feel the whole responsibility of writing weigh on those isolated syllables, I try to imagine what conclusions can be drawn from the fact that I have used this word once or fifty times. Maybe it would be better for me to erase it…But whatever other word I try to use seems unable to withstand the test…Perhaps instead of a book I could write lists of words, in alphabetical order, an avalanche of isolated words which expresses that truth I still do not know, and from which the computer, reversing its program, could construct the book, my book.
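Lotaria’s “electronic reading” is trivial to perform nowadays. Here is a minimal Python sketch of the idea – the sample sentence and the crude word-length filter (my stand-in for Lotaria’s dismissal of articles, pronouns and particles) are placeholders of my own, not anything from Calvino:

```python
import re
from collections import Counter

def frequency_reading(text, min_len=4):
    """Group a text's words by frequency, most frequent first.
    Words shorter than min_len are dropped -- a crude stand-in for
    Lotaria's ignoring of articles, pronouns and particles."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if len(w) >= min_len)
    by_freq = {}
    for word, n in counts.items():
        by_freq.setdefault(n, []).append(word)
    # Return {frequency: sorted list of words}, highest frequency first
    return {n: sorted(ws) for n, ws in sorted(by_freq.items(), reverse=True)}

sample = "the spider saw the sentry and the sentry saw blood"
print(frequency_reading(sample))
# {2: ['sentry'], 1: ['blood', 'spider']}
```

Real culturomics of course operates over millions of scanned books rather than one novel, but the principle is exactly Lotaria’s.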

On the side of the angels

Here’s my take on Dürer’s Melencolia I on its 500th anniversary, published in Nature this week.


Albrecht Dürer’s engraving Melencolia I, produced 500 years ago, seems an open invitation to the cryptologist. Packed with occult symbolism from alchemy, astrology, mathematics and medicine, it promises hidden messages and recondite meanings. What it really tells us, however, is that Dürer was a philosopher-artist of the same stamp as Leonardo da Vinci, immersed in the intellectual currents of his time. In the words of art historian John Gage, Melencolia I is “almost an anthology of alchemical ideas about the structure of matter and the role of time” [1].

Dürer’s brooding angel is surrounded by the instruments of the proto-scientist: a balance, an hourglass, measuring calipers, a crucible on a blazing fire. Here too is numerological symbolism in the “magic square” of the integers 1-16, the rows, columns and main diagonals of which all add up to 34: a common emblem of both folk and philosophical magic. Here is the astrological portent of a comet, streaming across a sky in which an improbable rainbow arches, a symbol of the colour-changing processes of the alchemical route to the philosopher’s stone. And here is the title itself: melancholy, associated in ancient medicine with black bile, the same colour as the material with which the alchemist’s Great Work to make gold was supposed to begin.
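The arithmetic is easy to check. Dürer’s layout of the square is well documented – its bottom row even embeds the engraving’s date, 1514, in the adjacent 15 and 14 – and a few lines of Python confirm that every row, column and main diagonal sums to 34:

```python
# Dürer's magic square from Melencolia I; the bottom row encodes the date 1514
DURER = [
    [16,  3,  2, 13],
    [ 5, 10, 11,  8],
    [ 9,  6,  7, 12],
    [ 4, 15, 14,  1],
]

def line_sums(sq):
    """Sums of all rows, all columns, and the two main diagonals."""
    n = len(sq)
    rows = [sum(row) for row in sq]
    cols = [sum(sq[i][j] for i in range(n)) for j in range(n)]
    diags = [sum(sq[i][i] for i in range(n)),
             sum(sq[i][n - 1 - i] for i in range(n))]
    return rows + cols + diags

print(set(line_sums(DURER)))
# {34}
```

Dürer’s square in fact has further symmetries beyond these ten sums (the four quadrants and the four centre cells also total 34), which is part of what made it such a potent magical emblem.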

But why the tools of the craftsman – the woodworking implements in the foreground, the polygonal block of stone awaiting the sculptor’s hammer and chisel? Why the tormented, introspective eyes of the androgynous angel?

Melencolia I is part of a trio of complex engravings on copper plate that Dürer made in 1513-14. Known as the Master Engravings, they are considered collectively to raise this new art to an unprecedented standard of technical skill and psychological depth. This cluttered, virtuosic image is widely thought to represent a portrait of Dürer’s own artistic spirit. Melancholy, often considered the least desirable of the four classical humours then believed to govern health and medicine, was traditionally associated with insanity. But during the Renaissance it was ‘reinvented’ as the humour of the artistic temperament, originating the link popularly asserted between madness and creative genius. The German physician and writer Cornelius Agrippa, whose influential Occult Philosophy (widely circulated in manuscript form from 1510) Dürer is almost certain to have read, claimed that “celestial spirits” were apt to possess the melancholy man and imbue him with the imagination required of an “excellent painter”. For it took imagination to be an image-maker – but also to be a magician.

The connection to Agrippa was first made by the art historian Erwin Panofsky, a doyen of symbolism in art, in 1943. He argued that what leaves Dürer’s art-angel so vexed is the artist’s constant sense of failure: an inability to fly, to exceed the bounds of the human imagination and create the truly wondrous. Her tools, in consequence, lie abandoned. Why astronomy, geometry, meteorology and chemistry should have any relation to the artistic temperament is not obvious today, but in the early sixteenth century the connection would have been taken for granted by anyone familiar with the Neoplatonic idea of correspondences in nature. This notion, which pervades Agrippa’s writing, held that all natural phenomena, including the predispositions of humankind, are joined into a web of hidden forces and symbols. Melancholy, for instance, is the humour governed by the planet Saturn, whence “saturnine.” That blend of ideas was still present in Robert Burton’s The Anatomy of Melancholy, published a century later, which called melancholics “dull, sad, sour, lumpish, ill-disposed, solitary, any way moved, or displeased.” A harsh description perhaps, but Burton reminds us that “from these melancholy dispositions no man living is free” – for melancholy is in the end “the character of Mortality.” But some are more prone than others: Agrippa reminded his readers of Aristotle’s opinion “that all men that were excellent in any Science, were for the most part melancholy.”

So there would have been nothing obscure about this picture for its intended audience of intellectual connoisseurs. It was precisely because Dürer mastered and exploited the new technologies of printmaking that he could distribute these works widely, and he indicated in his diaries that he sold many on his travels, as well as giving others as gifts to friends and humanist scholars such as Erasmus of Rotterdam. Unlike paintings, you needed only moderate wealth to afford a print. Ferdinand Columbus, son of Christopher, collected over 3,000, 390 of which were by Dürer and his workshop [2].

But even if the alchemical imagery of Melencolia I was part of the ‘occult parcel’ that this engraving presents, it would be wrong to imagine that alchemy was, to Dürer and his contemporaries, purely an esoteric art associated with gold-making. As Lawrence Principe has recently argued (The Secrets of Alchemy, University of Chicago Press, 2013), this precursor to chemistry was not just or even primarily about furtive and futile experimentation to make gold from base metals. It was also a practical craft, not least in providing artists with their pigments. For this reason, artists commonly knew something of its techniques; Dürer’s friend, the German artist Lucas Cranach the Elder, was a pharmacist on the side, which may explain why he was almost unique in Northern Europe in using the rare and poisonous yellow pigment orpiment, an arsenic sulphide. The extent of Dürer’s chemical knowledge is not known, but he was one of the first artists to use acids for etching metal, a technique developed only at the start of the sixteenth century. The process required specialist knowledge: it typically used nitric acid, made from saltpetre, alum and ferrous sulphate, mixed with dilute hydrochloric acid and potassium chlorate (“Dutch mordant”).

Humility should perhaps compel us to concur with art historian Keith Moxey that “the significance of Melencolia I is ultimately and necessarily beyond our capacity to define” [3] – we are too removed from it now for its themes to resonate. But what surely endures in this image is a reminder that for the Renaissance artist there was continuity between theories about the world, matter and human nature, the practical skills of the artisan, and the business of making art.

1. Gage, J. Colour and Culture, p.149. Thames & Hudson, London, 1993.
2. McDonald, M. in Albrecht Dürer and his Legacy, ed. G. Bartrum. British Museum, London, 2003.
3. Moxey, K. The Practice of Theory, p.93. Cornell University Press, Ithaca, 1994.