One of the highlights of my time as a scientist was having a long dinner with the great molecular biologist Sydney Brenner, a pivotal figure in the history of biology, who sadly passed away in 2019.
This, along with interviewing Garry Kasparov the same year, marked the start of my interest in how we might learn from prior scientific environments to create better ones today.
There is an excellent interview with him that is sadly no longer online, so I am reposting it here.
There is also a short clip [YouTube] which highlights Brenner’s views, shared by many others such as David Hubel [article], on how the structure of science has changed. He says:
‘Nowadays, most people who say they are in science aren’t really in science. They’re in something else. They’re in the management of science… And these people do believe that everything can be solved by the application of what the Americans call ‘process’… Their only challenge is ‘will I be awarded good points?’, ‘will I be promoted?’, ‘will I be able to survive in the economy of science?’’
‘Of the essentials of preserving life, nourishing the breath has no peer. When the breath is exhausted, the body dies, when the people are downtrodden, the nation collapses’ Zen Master Hakuin Ekaku, Letter to a sick monk, 18th century Japan
‘…To explicate the uses of the Brain, seems as difficult a task as to paint the Soul, of which it is commonly said, That it understands all things but itself;’ Thomas Willis, Preface to Cerebri Anatome, 1664
Prologue: A Semi-Monocular I
‘You never really understand a person until you consider things from their point of view…until you climb into [their] skin and walk around in it.’
Harper Lee, writing as Atticus Finch, in To Kill A Mockingbird
A Forbidden Experience
This is a book about the self. And since eyes are said to be the window to the soul, they make a good place for us to start. Eyes have also been a topic close to me since my birth. To explain this, I want to begin by telling you something about myself, about how I see the world, and thus something about the brain that wrote this book.
I was born blind in one eye. Because of this, there is an experience that I have never had. Not only have I never perceived this experience, I cannot even imagine it. It is an experience that you, and almost everybody you have ever met, take entirely for granted. Yet it is also profoundly beautiful, and this missing beauty in my own life captivates me. I know it is beautiful from the accounts of the few who lacked this experience, and then gained it later in life.
This experience is the ability to see depth. Depth perception of this kind occurs through binocular vision and is called stereopsis; it is a central part of how the normally sighted see and interact with the world. We commonly refer to it as three-dimensional vision.
Stereopsis comes from our forward-facing eyes, and therefore only some animals, usually predators, have it. It allows you to assess how far away an object is: how distant a balloon floating in the air is, or how far to reach for a glass.
How do we sense depth? There are multiple ways. Firstly, there are cues for depth that come from individual eyes: things that are closer to you move more when you move your head, and obviously they look larger than when they are far away. These are monocular cues. We can see this from the depth apparent in flat, two-dimensional works of art. Leonardo da Vinci in his notebooks made a detailed study of how such monocular cues can be used to create depth in art. But despite their usefulness, these cues lack a great deal of information compared to normal binocular vision, which I lack. This other, binocular kind is the one that is so elusively, beautifully different.
Binocular vision cannot come from either eye alone. That is to say, it only comes from the combined use of two eyes as one. It is therefore in part a creation of the mind.
The additional information used to create a sense of depth is the slight difference in viewing position between the two eyes, which leads to different images being formed and sent to the brain. To see this, try holding an object close to you, then close either eye in turn, and see how the image changes. How much it changes depends upon how far away the object is, and this information is used by the brain to estimate distance. For example, an apple held close to you looks far more different between the two eyes than an apple far away does. The brain uses this difference, known as binocular disparity, to create binocular depth.
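The geometry behind this estimate can be sketched in a few lines. In a simplified pinhole-camera model of two eyes, estimated distance is inversely proportional to the disparity, that is, how far an object’s image shifts between the two views. This is only an illustrative sketch: the numbers below are arbitrary camera-like assumptions, not physiological measurements.

```python
# A toy model of depth from binocular disparity.
# All values are illustrative assumptions, not physiological data.

def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Estimate distance to a point from the shift (disparity)
    between its positions in the left-eye and right-eye images."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: object effectively at infinity")
    return baseline_m * focal_px / disparity_px

eye_separation = 0.065   # metres, roughly the average human interocular distance
focal_length = 800.0     # pixels, an arbitrary assumed 'focal length' for the model

# A nearby apple shifts a lot between the eyes; a distant one barely moves.
near = depth_from_disparity(eye_separation, focal_length, disparity_px=104.0)
far = depth_from_disparity(eye_separation, focal_length, disparity_px=1.3)
print(round(near, 3))  # 0.5  (half a metre away)
print(round(far, 2))   # 40.0 (forty metres away)
```

The inverse relationship is the point: halving an object’s distance doubles the disparity between the eyes, which is why a close apple looks so different from eye to eye while a distant one scarcely shifts at all.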
Without this estimation of depth, the world has a flatness that is obviously hard for me to describe. My brain has only ever had relatively minimal depth cues as it uses only one eye, and so my worldview is correspondingly flat.
Interestingly, once stereopsis has been acquired, even closing one eye does not make it fully go away. Normal brains use their past binocular experience to ‘fill in’ and create an enriched sense of depth even when one eye is shut, a fact confirmed by studies showing that binocularists are better at judging depth with one eye than monocularists are. Even if you closed one eye, it would not give you the same flatness that I see. Susan Barry wrote a beautiful memoir of her own experience of gaining normal stereopsis later in life, an experience so unusual that she was profiled by Oliver Sacks in the New Yorker for it. She describes being ‘overwhelmed’, the experience being ‘remarkable’ and ‘dramatic’. It is a ‘distinct, subjective sensation’, quite unlike anything else. She writes of how even films on flat screens gained depth for her upon gaining stereopsis, and of how she felt beautifully immersed in a deep, enveloping sense of worldly depth that she had never experienced before. A particularly haunting part for me was reading of her joy at being surrounded by and immersed in a scene of falling snow in winter, being herself part of this deepened world for the first time. She had been freed from a life of living through a flat television screen, as I must do.
In her book, Barry quotes Frederick Brock (1899-1972), a pioneer in visual therapy, who wrote ‘…Before stereopsis is actually experienced by the patient, there is nothing one can do or say which will adequately explain to him the actual sensation experienced.’ Like a colour that has never been seen before, it is beyond the imagination of monocularists like myself to conjure these sensations. Despite having ‘normal’ brains, our imagination is sharply constrained by our lack of experience. As no explanation of sound could ever satisfy the deaf, no explanation of depth could satisfy me. It is the invisible thing lying forever in front of me.
Nor is this imperception merely a sensory deficiency. Rather, this lack of depth pervades action also. Living in this flat land leads to a certain clumsiness, and a lifetime of odd habits built up to compensate. At a dinner with a friend in 2018, I reached quickly for a wine glass, badly misjudging its distance. My hand crashed into the newly filled glass, splashing red wine all over his white shirt. My explanation, involving monocularity, stereoblindness and the resulting poor motor control, did not resolve the subsequent awkwardness.
I was aware of this deficiency from a young age, and I channeled this curiosity into a tendency to think about thinking, in order to try to understand myself. This in turn led to questions about the brain.
The birth of an I, and a blooming, buzzing confusion
Let us start with a ‘simple’ question: how do we begin to see? When thought begins in life is perhaps not a meaningful question. Most likely, to me at least, thought appears gradually in the womb: we are not desktop computers with simple ‘on’ switches. But what is clear is that the opening of the eyes for the first time marks a dramatic change. Babies do not hide this fact upon their emergence, which is rarely a quiet process. What is not so immediately clear from watching babies is the marked challenge that seeing and doing present to the newborn, though the flailing of newborn babies hints at this.
We imagine babies looking back at us clearly for the first time, but the reality is almost certainly quite different. Upon being born, we are met instead by what William James called ‘the blooming, buzzing confusion’. As we shall see, neuroscience has proven James correct. In the ‘great bath of birth’ a blaze of ever-changing lights storms into the fertile young mind. Everything is unexplained; the mind restlessly clutches for anything predictable, familiar. All sorts of regularities are identified and saved by the brain at this time, a process that we shall track later. Edges, shapes, textures, movements, all build into a crescendo of complexity that is caught by the brain. From this apparent chaos we eventually learn to recognize faces, appreciate beauty and read poetry. Here the brain is forming the foundation upon which more detailed understanding will later be laid, a layer of perceptual lessons upon earlier perceptual lessons. In studying the brain, we are in large part studying the machinery for making sense of this apparent chaos into which the newborn is so suddenly immersed. Brains thus exist to learn useful things about the world.
We will discuss at length the challenges of something so apparently effortless to us as seeing. For now, I want to elaborate on seeing with two eyes.
We do not come into the world able to take the images from our two eyes and make one single, hopefully unified worldview. But, at least for you, it happened. There is one self. A single, internal model of the world built from two eyes, two ears, taste, smell, touch, and even sensors that tell you where your joints are. From the cacophony of newborn experience comes our mental universe. This is not a passive process either: every experience comes to hint at, suggest, or even force action, which in turn changes sensation, and so the cycle continues. From all of this comes a whole, single world.
Well, at least for you. For me, things are more complex. When I, my self, developed in the womb, a small layer of connective tissue remained in the right eye, blocking the light from entering and reaching the retina. I was thus half-blind.
Being half blind did come with an advantage: for the first five weeks of my life, my baby self had no binocular problem, no need to combine the two images: I looked out upon the world with a single eye. All of those worldly confusions of vision were dealt with by one eye’s input to the brain. Things were relatively simple, and I, whatever this I was at that early age, could focus on other priorities in setting up my inner world.
Then, however, everything visual in my life changed. This aberrant connective tissue was removed at a very young age – five weeks – for reasons that we shall see. The second eye, lacking a lens due to this surgery, suddenly came into play, and the confusion blossomed into what I imagine was mental anarchy. A second image began flooding my brain – much blurrier, weaker, unable to focus at all as it lacked a lens – and this image needed to be dealt with.
What happens next in babies such as I was? The ideal outcome, of course, is to have two eyes see as one, i.e., to be normal. Here, though, there are problems. It is much easier for the brain to simply continue to use one eye, especially when that one eye has much clearer vision than the other. What happens then, if left untreated, is exactly this: the brain blocks the image from the second eye, and so people use one eye only. This is a common reason for having a squint. The brain never learns to use the weaker eye, and so that eye simply lives on as a kind of vestigial sensory artifact, an eye without a brain. The eye gets forgotten, if it was ever truly known.
To prevent this abandonment of the weaker eye, children like me go through a rather time-intensive and laborious process of ‘patching’ or, more technically, monocular occlusion therapy. In this process, a patch is placed over the ‘good eye’, as my mother used to call it, forcing the child to use the ‘bad eye’. This patch is often worn for the majority of each day, and I used to make frequent visits to Great Ormond Street Hospital in London, where they sought to track my progress. In childhood photographs, I can be seen wearing this patch, peering out at the world through the blurry, weaker eye. I wore it during almost all of my time at primary, or elementary, school. My mother went through parental hell trying to coax her child’s brain into seeing. I repeatedly tore off this cumbersome, sticky object, and had to be bribed with sweets to keep it on. It became a bargaining chip in many a negotiation. I yearned for the hour when I could rip the patch off and see a crystalline world again. Every day, for the first ten years of my life, I went through this. It did not endear me to the school bullies: pirate jokes abounded.
With my mother during monocular occlusion therapy. I am being forced to use my right eye to ‘teach’ my brain how to use it.
The purpose of this treatment is to make the brain learn to listen to the input from the ‘bad’ eye, and it can be effective. For many who go through this, the outcome is relatively normal binocular vision. Some people’s brains still reject the ‘bad’ eye, remaining monocular once the patch is off. They remain unable to use the bad eye at all. Regardless, the normal outcome is almost always one worldview, whether it be from just one eye, or from successfully using the two eyes together. A ‘single’, obvious I that sees and acts, a centre point of the self.
For me though, something rather different happened, a compromise between these two extremes. I ended up seeing everything twice.
A doubled life: ‘imperfect’ partners
I never accomplished the task of fusing the two eyes’ images. Doubtless my baby brain tried to make sense of the images together, but gave up the struggle. Instead, I have suffered from double vision all of my life. As I write, I see two pages, two sets of words, blending into each other. When I look at the sky, I see two suns. I see two moons, two of a lover’s face, two horizons, two of everything.
What is it then, to see twice? Which eye represents the experience of the self? Which image is reality? In truth, one image is more like a ghost, hovering over or under the reality. This ghost is almost always the bad eye: very blurred, legally blind, a little like looking through very frosted glass. I have been unable to wear contact lenses since an eye operation at age 25, but even when I could, I preferred not to. The blurriness helps me to keep those two images separate: one grounded, one floating. This is less confusing.
The relationship of the two images to one another is of particular interest. When I was a child, I almost always presented a ‘lazy eye’: my right eye would stubbornly not look straight, not look in alignment with the left eye. As I turned thirteen, this increasingly bothered me, because of how it looked to others.
I began to think long and hard about how to fix this problem of having an obviously lazy eye. I consulted doctors: there is a commonly used surgery that makes the eyes straight by subtly tightening or loosening muscles. However, this only works if the offset of the two eyes is constant. Mine was not. My right eye would drift in different directions, and so there was no fixed correction to make. Doctors said I was also too old to hope that my brain would learn to fuse these two dueling images from the two eyes (more on that duel later). The development of vision happens very young, they said, in a ‘critical period’ where the brain is especially malleable. Because of this, I would have to put up with being cross-eyed forever.
But I was stubborn. I persisted in trying to solve this problem without medical help. What was clear to me was that the fix, if it could be found, would not be found in my eyes, but in my brain. What I realized then was that the double vision was actually a secret blessing: it meant that if I concentrated I could tell whether or not my eyes were looking at the same point. This is harder than it sounds, since I lacked the mental routines to make this check effortlessly, but over time, through extensive trial and error, I learnt to estimate how far apart my eyes were pointing. I learnt to tell whether the eyes were drifting, and thus whether I looked cross-eyed.
I then had to learn to move each eye independently to bring them back into alignment. This does not come naturally to anyone, not even someone with a visual brain as strange as my own. I began searching for the right mental lever, the right thought, to allow me to control the eyes separately. Over time, I gradually found the way to do it. I explored all kinds of abstract mental places before I found the correct one, the one that attached to the ‘move right eye’ or the ‘move left eye’ commands. Over months, I gradually taught myself how to correct each of the relative displacements between the two eyes, including testing myself in front of mirrors. I used the errors, the difference between the expected and the actual outcomes, to slowly calibrate my brain’s ability to keep my eyes straight. As a teenager, I did consciously what you have done effortlessly since early childhood. The images never fused, but they were relatively aligned. Most of the time, unless I am tired or distracted, my eyes are aligned.
I had created in my mind a habit based upon the close coupling of action and sensation. Each little drift in the two images was met by the necessary corrective action. I had not learnt to use my eyes as one, but I had created an illusion of having done so. Even now, several times a minute I notice a drift of the images that is big enough to need a conscious correction, but it is doable.
Recompense for monocularity
I wrote earlier that I ‘suffer’ from double vision, but this is only partly true. It is true that it is most irritating trying to read with this constant, shifting pair of images. It is also difficult to become immersed in something while maintaining the constant corrective habits of keeping the eyes aligned. Following slides in a lecture is particularly difficult. I find it very hard to look at objects in detail, to fix my gaze upon an object for any length of time. The constant habit of correcting the drift is immensely distracting, like trying to think whilst balancing on one leg.
In essence, I lack what the yoga masters called Drishti, an essential part of yoga: a steady, calm, yet concentrated gaze that in turn settles the body and breath. I am in a constant juggling, shifting state of attention, unable to minutely examine precise detail. It makes eye contact especially odd, and I habitually avoid this discomfort.
But seeing the world differently in this way also has its advantages. I’ll begin with the beauty of the images I see. Because my ‘bad eye’ has no natural lens, the image it sees is heavily blurred. This means that points of light are seen as blazes of illumination, like car lights through a rainy window. That eye’s world resembles a living impressionist painting, with blurs of colours and poorly resolved shifts of shapes. I sometimes sit and simply look out of the right, weaker eye, seeing the world in this unfocussed way. It gives an odd reminder of how distorted normal perception is.
But the real beauty comes when using both eyes together, as best I can. Through this, viewing the world becomes a kind of internal poetry, an impressionist painting dancing above the crystal, lens-given ‘reality’ of the good eye. A dancer’s precise pirouettes are illuminated in a bright, motion-filled blur. Reflections of light each have their own shifting halo. Nights are especially beautiful: on top of the precise points of light, the sparkle of the weaker eye floats, broader and more diffuse, like a glow from within each object. Or perhaps this glow is underneath the clear image, not on top. It is hard, even impossible, to work out which is the foreground. This double vision is more irritating by day: the brightness of both images becomes more of a fight within my brain, whilst the calmer hues of night allow a complementary co-existence. The real world is bathed in the warm glow of the weaker eye.
But the benefits of seeing double extend beyond the aesthetic. How you see alters how you think. Studies have even shown that you can predict aspects of personality from the movement of the eyes. Changes in ways of seeing have been linked to great artistic ability: it has been speculated from self-portraits that Rembrandt, Picasso and even da Vinci had lazy eyes, and a corresponding flatness of worldview. This forced adaptation and compromise: the development of alternative ways of seeing and depicting the world.
But double vision did not help me become a brilliant painter. In fact, I’m terrible at drawing: trying to visualize things on the page is very difficult, as the two eyes constantly jostle, meaning there is no fixed point of focus to imagine from. This outweighs any artistic advantages I could gain from seeing a flat world.
Instead of leading me to art, it led me to thinking about thought. To see the two images competing, completing, complementing one another is to watch my own thought in real time. This is true for you also: the world you see is the inside of your head. When you open your eyes and gaze upon a sunset, you are actually watching your brain at work.
Your brain performs this illusion well, but in me it is far from perfect. When my attention shifts between the two eyes, the images gain and shrink in relative vividness, even relative realness. I can see attention at work. As a child I used to sit and play with the two images, often in a futile attempt to achieve my dream of using two eyes as one.
That dream has not yet come true, and I increasingly wonder whether it is such a dream at all. When I close my right eye, seeing with the good eye only, the world becomes still, static, even dull. It is like acquiring a newfound yet bland peace, like an ADHD sufferer suddenly finding calm and realizing they have nothing to do. Rather, my imperfect duet of the two eyes gives the world a living, breathing air, each glance birthing a new flurry of thoughts buzzing in this abstract, unfocussed theatre inside my head, the theatre that sees twice. Viewing can never be a quiet act for me. How much of my personality comes from this eccentric worldview I cannot tell, but I am inclined to be optimistic.
Whatever the answer, fate has given me at least one great gift. Through all of these moments, from long hours in hospitals taking tests, years wearing an eye patch to coax my brain into seeing, experiments in and frustrations with seeing double, a fascination with what I am and how I came to be was born, alongside the tiny speck of tissue that blocked the sight in my right eye. I became restlessly curious to understand this problem deep inside myself, a problem of my self and its partial doubling, and was drawn to reading and learning all that I could about brains. I went on to study neuroscience as an undergraduate, and then as a PhD student.
That is what this book was originally about: neuroscience, and the brain: the instrument that I thought makes the mind. But since then it has expanded, as my own life experience showed me my own embodiment. I will not give away the narrative of the book, but this is me in April 2017, in an intensive care bed following surgery on my aorta:
The author in an intensive care bed, April 22nd 2017.
This, and two other experiences around then, fundamentally changed my sense of self, my thoughts as a scientist, and ultimately this book, which has become as much a memoir of a scientist’s life as it is a book about the brain: my hopes, frustrations, thoughts, experiences, discoveries, friendships, training, successes, failures and more. The core parts of this book were written, at least in my head, during the build-up to and recovery from this operation. We shall come to this question of embodiment and the mind-body link.
“All we have to do is create opportunity for those who want to take risk. If we start funding this, there will be a long line of young people who are willing to participate, and will release a huge energy which has been so far suppressed. That’s why I’m trying to promote this message.”
Garry Kasparov is widely considered to be the strongest chess player of all time. The youngest world champion in history when only twenty-two, he lost just a single match against a human in his twenty-five-year career. Now retired, he is a leader in the Russian opposition movement and a contributing editor to the Wall Street Journal.
One of the first things you notice about Kasparov is his intensity: he walks rapidly, and when in conversation his whole body seems to focus, confronting the questions I pose. Life, then, mirrors chess, where Kasparov was renowned as much for his compelling chess style as his results. It is a style that he describes as “very dynamic, aggressive chess, dominant chess”, contrasting with the more “pure”, “long-term” approach of the current #1 ranked player Magnus Carlsen.
He speaks quickly, jumping between sentences. This energy is important. For him chess consisted in intense encounters that required mental but also physical preparation, with championship matches lasting months. “Exercise was a very important part of my overall preparation” he says, “to be in the perfect shape before the match you have to work out the combination of your body and your mind, so feeling strong and being in excellent shape physically always helped to generate more energy.”
His memory is extraordinary. Kasparov reputedly could remember every professional game of chess that he had ever played, so I printed out two chess positions, selected randomly from a huge online database of his games. As soon as he glimpsed them, he told me when and where the games were played and named his opponent. He even knew which round of the tournament the games were from, the subsequent moves, and the improvements that he should have made. It was a surprising start to an interview, yet Kasparov merely looked indifferent. “But these are my own games…” he said, his voice trailing off. “You could have made that a lot harder”, added his aide, laughing.
For Kasparov, analysing one’s mistakes is crucial to success. “When playing chess I learnt that every decision requires post-mortem analysis… There is no such thing as a perfect game.” Optimising his performance was a matter of finding a unique approach: you have to “build your own — which is only your own — decision making formula to maximise the effect of your strengths, and to minimise, obviously, the negative effect of your weaknesses.”
In early 2005, after being the number one ranked grandmaster for more than twenty years, he retired from chess to shift his energy toward restoring democracy in his home country, Russia. A constant critic of the regime, he was recently detained and beaten whilst at the Pussy Riot trial rallies.
Does Kasparov still hope to overthrow Putin? “I think that things are heating up, but this is not a linear process. It’s like a volcano, you have all the signs of an eruption, but you can’t say it’s going to happen tomorrow or the day after tomorrow.” The man who predicted the fall of communism does not have strong predictions for Russia’s future. “I believe that Mr Putin under no circumstances will survive his six-year term. In the next two/three years maximum we will see a major explosion in Russia. I’m not saying it will bring us positive results, but I think the status quo, the current status quo in Russia, is doomed and is about to expire.”
It was the global economic stagnation that drew Kasparov to Oxford: he visited the Oxford Martin School to meet with academics and students from Oxford University to continue to develop his view of the crisis, which he has formed along with PayPal innovators Peter Thiel and Max Levchin. From Kasparov there is no talk of restructuring debt, or of yearly growth targets. To him, the crisis results from the “virus of risk-averse society”, where innovation has stagnated and short-term thinking has triumphed.
In his event at the Oxford Martin School, Kasparov contrasted the mid-twentieth century and today, pointing to the rapid development of antibiotics, rocket technology, nuclear technology and more. Even the internet has its origins in the 1960s. And today? Our planes travel at the same speed they did in the 1950s. Our major recent technological developments, mobile technology and computers, are actually advances from the mid-twentieth century. Our satellites are launched in a similar manner to Sputnik. Growth comes not from technological advance but from the housing market. We are even running out of antibiotics.
What went wrong? He points to the emergence of a safe, ‘milestone driven’ approach to progress. ‘Nobody wants to take a risk, and it reflects very much the over-cautious nature of the publicly or privately funded science today’. He points to the present lack of big, blue-sky projects, such as the Apollo missions.
To Kasparov, this shift began in the “late sixties”, but was only visible much later. “We had such a huge pile of innovations allocated over decades, so that’s why you didn’t even feel it in the seventies or eighties. I think the first time where we actually could feel the heat was the early nineties, after the collapse of the Soviet Union. The existential threat for the free world has disappeared, and it helped to expose the public appetite for a safe, comfortable life.”
Kasparov sees Fukuyama’s End of History as symptomatic of this shift, the view that society has reached an endpoint. “So the world reached the end of history, so now we can afford, you know, to enjoy the life we inherited from our parents and grandparents.” He hits the table, emphasising the point. “No more sacrifices, the ideal of sacrifice has disappeared from the public, private and social agenda.
“Now it’s time to recognise that the notion that the next generation will have a better life than the previous one may not work, actually, it will not work.” So can we do anything? “Of course we can… At the end of the day it’s about public pressure… If the public wanted a Mars expedition, Americans would be landing on Mars in this decade.”
Kasparov admits there is “no immediate solution.” The answer lies in creating opportunities. “All we have to do is create opportunity for those who want to take risk. If we start funding this, there will be a long line of young people who are willing to participate, and will release a huge energy which has been so far suppressed. That’s why I’m trying to promote this message.”
This is an article published in the Telegraph in June 2018 by my brother and me, concerning UK national strategy in science and innovation. We called on the UK to ‘lead the future by creating it’. Below it is a comment from computing pioneer Alan Kay, whose phrase ‘create the future’ was an inspiration, endorsing the article as ‘good advice’.
Science holds the key, June 7th 2018, Telegraph, James W. Phillips & Matthew G. Phillips
The 2008 crisis should have led us to reshape how our economy works. But a decade on, what has really changed? The public knows that the same attitude that got us into the previous economic crisis will not bring us long-term prosperity, yet there is little vision from our leaders of what the future should look like. Our politicians are sleeping, yet have no dreams. To solve this, we must change emphasis from creating “growth” to creating the future: the former is an inevitable product of the latter.
Britain used to create the future, and we must return to this role by turning to scientists and engineers. Science defined the last century by creating new industries. It will define this century too: robotics, clean energy, artificial intelligence, cures for disease and other unexpected advances lie in wait. The country that gives birth to these industries will lead the world, and yet we seem incapable of action.
So how can we create new industries quickly? A clue lies in a small number of institutes that produced a strikingly large number of key advances. Bell Labs produced much of the technology underlying computing. The Palo Alto Research Centre did the same for the internet. There are simple rules of thumb about how great science arises, embodied in such institutes. They provided ambitious long-term funding to scientists, avoided unnecessary bureaucracy and chased high-risk, high-reward projects.
Today, scientists spend much of their time completing paperwork. A culture of endless accountability has arisen out of a fear of misspending a single pound. We’ve seen examples of routine purchases of LEDs that cost under £10 having to go through a nine-step bureaucratic review process.
Scientists on the cusp of great breakthroughs can be slowed by years mired in review boards and waiting on a decision from on high. Their discoveries are thus made, and capitalised on, elsewhere. We waste money, miss patents, lose cures and drive talented scientists away to high-paid jobs. You don’t cure cancer with paperwork. Rather than invigilate every single decision, we should do spot checks retrospectively, as is done with tax returns.
A similar risk aversion is present in the science funding process. Many scientists are forced to specify years in advance what they intend to do, and spend their time continually applying for very short, small grants. However, it is the unexpected, the failures and the accidental, which are the inevitable cost and source of fruit in the scientific pursuit. It takes time, it takes long-term thinking, it takes flexibility. Peter Higgs, the Nobel laureate who predicted the Higgs boson, says he wouldn’t stand a chance of being funded today for lack of a track record. This leads scientists collectively to pursue incremental, low-risk, low-payoff work.
The current funding system is also top-down, prescriptive and homogeneous, administered centrally from London. It is slow to respond to change and cut off from the real world.
We should return to funding university departments more directly, allowing more rapid, situation-aware decision-making of the kind present in start-ups, and create a diversity of funding systems. This is how the best research facilities in history operated, yet we do not learn their key lesson: that science cannot be managed by central edict, but flourishes through independent inquiry.
While Britain built much of modern science, today it neglects it, lagging behind other comparable nations in funding, and instead prioritising a financial industry prone to blowing up. Consider that we spent more money bailing out the banks in a single year than we have on science in the entirety of history.
We scarcely pause to consider the difference in return on investment. Rather than prop up old industries, we should invest in world-leading research institutes with a specific emphasis on high-risk, high-payoff research.
Those who say this is not government’s role fail the test of history. Much great science has come from government investment in times of crisis. Without Nasa, there would be no SpaceX. These government investments were used to provide a long-term, transformative vision on a scale that cannot be achieved through private investment alone – especially where there is a high risk of failure but high reward in success. The payoff of previous investments was enormous, so why not replicate the defence funding agencies that led to them with peacetime civilian equivalents?
In order to be the nation where new discoveries are made, we must take decisive steps to make the UK a magnet for talented young scientists.
However, a recent report on ensuring a successful UK research endeavour scarcely mentioned young scientists at all. An increased focus on this goal, alongside simple steps like long-term funding and guaranteed work visas for their spouses, would go a long way. In short, we should be to scientific innovation what we are to finance: a highly connected nerve centre for the global economy.
The political candidate who can leverage a pro-science platform to combine economic stimulus with economic pragmatism will transform the UK. We should lead the future by creating it.
Comment from Alan Kay:
Good advice! However, I’m afraid that currently in the US there is nothing like the fabled Bell Labs or ARPA-PARC funding, at least in computing where I’m most aware of what is and is not happening (I’m the “Alan Kay” of the famous quote).
It is possible that things were still better a few years ago in the US than in the UK (I live in London half the year and in Los Angeles the other half). But I have some reasons to doubt it. Since the new “president”, the US does not even have a science advisor, nor is there any sign of desire for one.
A visit to the classic Bell Labs of its heyday would reveal many things. One of the simplest was a sign posted randomly around: “Either do something very useful, or very beautiful”. Funders today won’t fund the second at all, and are afraid to fund at the risk level needed for the first.
It is difficult to sum up ARPA-PARC, but one interesting perspective on this kind of funding was that it was both long range and stratospherically visionary, and part of the vision was that good results included “better problems” (i.e. “problem finding” was highly valued and funded well) and good results included “good people” (i.e. long range funding should also create the next generations of researchers). In fact, virtually all of the researchers at Xerox PARC had their degrees funded by ARPA; they were “research results” who were able to get better research results.
Since the “D” was put on ARPA in the early 70s, it was then not able to do what it did in the 60s. NSF in the US never did this kind of funding. I spent quite a lot of time on some of the NSF Advisory Boards and it was pretty much impossible to bridge the gap between what was actually needed and the difficulties the Foundation has with congressional oversight (and some of the stipulations of their mission).
Bob Noyce (one of the founders of Intel) used to say “Wealth is created by Scientists, Engineers and Artists, everyone else just moves it around”.
Einstein said “We cannot solve important problems of the world using the same level of thinking we used to create them”.
A nice phrase by Vi Hart is “We must insure human wisdom exceeds human power”.
To make it to the 22nd century at all, and especially in better shape than we are now, we need to heed all three of these sayings, and support them as the civilization we are sometimes trying to become. It’s the only context in which “The best way to predict the future is to invent it” makes any useful sense.