Tag Archives: Singularity

A poor way to avoid the Singularity

A day after listening to Ray Kurzweil at Learning Without Frontiers 2012 I had the immense pleasure of listening to Martin Rees at the same conference. As Lord Puttnam said in his introductory remarks, Martin Rees is one of the most remarkable men, not just in the UK, but in the world. (You can hear some of what Lord Rees covered by checking out his TED talk.)

Martin Rees talking at the LWF conference, January 2012

Credit: LWF

The focus of Rees’ talk was on science teaching and science education (since the conference was about the future of learning), but he commented on Kurzweil’s talk of the previous day. Rees pointed out that one of the corollaries of an exponentially increasing level of technology is that individuals will soon have access to technologies that could destroy civilisation. If the world is a village, what happens if the village idiots get their hands on biological weapons that could wipe us all out? We might never get the chance to see whether Kurzweil’s Singularity will happen.

Again, I don’t see these Doomsday scenarios as being a satisfying solution to the Fermi paradox. But it’s a depressing thought that such scenarios might prevent one particular technological species, namely us, from making our presence felt in the universe.

The Singularity and Fermi

A couple of days ago I had the pleasure of listening to a talk by Ray Kurzweil at the Learning Without Frontiers 2012 conference. Kurzweil is a powerful, entertaining speaker. His talk ranged far beyond the narrow limits of his PowerPoint slides, and covered areas as diverse as his acquaintanceship with Noam Chomsky, his founding of the Singularity University, his numerous inventions and much else besides. But it was those PowerPoint slides that I found most interesting. Slide after slide showed evidence of the exponential increase in the power of information technology per unit currency. That increase has never paused over recent decades, and it shows no signs of abating any time soon. Moore’s Law in computing is just a special case of this exponential increase in the power of information technology. (This ‘Law’ is actually an observation first made in 1965 by Intel co-founder Gordon Moore. He noted that the number of components in integrated circuits had doubled every year from the invention of the integrated circuit in 1958 until 1965, and predicted that this trend would continue for years to come. Well, it was a pretty good prediction.)

Our lives will be transformed in the coming decades, in ways we can’t easily predict, because different areas of science have now become information technologies and have, therefore, hopped onto that exponentially accelerating escalator. Think of human genetics, for example. Next year the technology used to analyse the human genome will be twice as powerful as it is right now; a year later it will be four times more powerful; in three years’ time it will be eight times more powerful…
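The doubling arithmetic behind that claim is easy to sketch. This is a toy illustration only, not a real cost model: the annual doubling period is the assumption the paragraph makes, not a measured figure.

```python
# Toy illustration of annual doubling in genome-analysis power,
# relative to today (year 0). The doubling period is an assumption.
def relative_power(years, doubling_period_years=1.0):
    """Power relative to today after `years`, doubling once per period."""
    return 2 ** (years / doubling_period_years)

for year in range(4):
    print(f"Year {year}: {relative_power(year):.0f}x today's power")
# Year 0: 1x, Year 1: 2x, Year 2: 4x, Year 3: 8x
```

The same three-line function describes Moore’s Law too: only the doubling period changes.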

A page from the book of the human genome

A page from the book of the human genome. One day biotechnologists will be rewriting this book, with consequences that are hard to foresee.
(Credit: Rob Elliott)

The human brain isn’t very good at really ‘getting’ exponential increase. We have a gut understanding of linear increase, but not exponential increase. There’s probably a good reason for that: our distant ancestors lived in a world where they had to predict the future on a linear basis. (“If me and that lion continue on our paths then we’ll meet in 20 seconds – I’d better head that way instead.”) Sometimes, even trained scientists don’t ‘get’ that difference between linear and exponential increases. They understand it at an intellectual level, of course, but typically they will vastly underestimate where technology will be in the near future. That was one of the clear points Kurzweil made, and it’s hard to disagree. In a few years’ time, the computing power that resides in an object the size of an iPhone will reside in something the size of a blood cell. We don’t know precisely how that technological miniaturisation will take place, but we can be pretty sure that it will happen. And what will that mean for all of us? We can only guess.

This idea of ever-accelerating technological advancement led Kurzweil and others to introduce and popularise the concept of a technological Singularity: a point in the not too distant future when advances in computing occur so rapidly, and computation becomes so powerful, that unaugmented human brains will be unable to comprehend the nature of these technologically transcendent ‘beings’.

Perhaps such a Singularity will happen. Perhaps not. But suppose it does happen. I was chatting to a couple of people at the conference who argued that this was the explanation for the Fermi paradox: we don’t see alien beings because they’ve merged with their technology, hit the Singularity, and become transcendent beings. I covered this argument in Where is Everybody? and, personally, I don’t see how it addresses the paradox. The question “where is everybody” applies just as well to transcendent machine intelligence as it does to biological intelligence. There’s no sign of either.