The Singularity is Not Necessarily Near

For those of you not already familiar with him, let me start off by introducing the visionary futurist Ray Kurzweil. Over the years, he’s done some amazing things. He was one of the first people to predict that the internet was going to be a big deal (he made this claim in the ’80s, when there were fewer than 100 computers on the entire network). He was the first to develop a good, robust optical character recognition system (a way for computers to read printed text), as well as a voice synthesizer, and he was the first to combine the two technologies into a machine that could read books to blind people. More recently he has done further work in this area, getting computers to transcribe speech into text, translate it into other languages, and speak it aloud again (basically a crude Universal Translator a la Star Trek). He’s done work on music synthesis, and in the late ’90s he made arguably the best keyboard synthesizer in the world. In short, he has an impressive track record.

Consequently, I am very nervous about calling his more recent predictions a load of bollocks. However, this is the conclusion to which I (and quite a few other people) have come. He gave a good summary of his new vision at TED 2005. His argument is basically that most scientific, technological, and biological developments follow exponential curves when plotted over time. This isn’t just Moore’s Law, showing how computing power per dollar is rising exponentially. He’s talking about life expectancy (if you can live to 2040, life expectancy will increase faster than we age and we’ll all live forever), the rate at which we can sequence and understand genes, the precision of medical devices and techniques, the rate of innovation in wireless devices, miniaturization rates of computers and other machines, the amount of knowledge we have about the universe, even the rate of technological paradigm shifts. These last two trends are often called the Singularity: the claim is that the time between major technological paradigm shifts is decreasing exponentially, and that before 2040 the world will be revolutionized on an hourly basis and technology will be changing so rapidly that we won’t be able to keep up with all the amazing stuff happening around us. He also points out many other trends that appear to be growing/improving exponentially, and you can see more of them in the links I’ve provided so far.
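
Just to get a feel for the arithmetic behind these claims, here is a tiny back-of-the-envelope sketch of my own (the two-year doubling period is the usual statement of Moore’s Law, not a figure from Kurzweil’s talk):

```python
# Illustrative arithmetic, not Kurzweil's data: a quantity that doubles every
# two years (the usual statement of Moore's Law) grows by 2**(t/2) after t years.
for years in (10, 20, 40):
    print(f"{years} years -> {2 ** (years / 2):,.0f}x improvement")
# Output: 10 years -> 32x, 20 years -> 1,024x, 40 years -> 1,048,576x improvement
```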

The two claims I’d like to debunk here are this whole Singularity business and the claims about life expectancy. Here’s the graph that Kurzweil likes to pull out when he brings up the Singularity:

It takes 15 lists of major “this changes everything” events in history and plots the time difference between successive events as a function of how recently they occurred. The claim is that if you follow the line they trace out, the time between successive events goes to zero exponentially quickly, and new world-changing events will soon be arriving faster than anyone can comprehend.

But let’s take a closer look at this graph. First off, the x-axis is a logarithmic scale of time before the present. If you go arbitrarily far to the right, you will approach the present date, but you’ll never actually get there (you’ll be a second in the past, then a millisecond in the past, then a nanosecond, &c). In other words, this graph has been defined in such a way that it cannot depict the future, and if we follow the line plotted, we would instead predict that the Singularity has already occurred (ah, the dangers of a log-log plot!). This doesn’t seem right to me.
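
Here is a quick sketch of my own, with made-up event times, showing why an axis of log(years before the present) can never reach the present, let alone the future:

```python
import numpy as np

# Hypothetical event times (not Kurzweil's data), measured in years before the present.
years_before_present = np.array([1e10, 1e6, 1e3, 1.0, 1e-3, 1e-9])

# On an axis of log10(years before present), ever-more-recent events run off toward
# minus infinity; "now" (0 years ago) and every future date have no finite position.
print(np.log10(years_before_present))  # [10.  6.  3.  0. -3. -9.]
```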

The second problem I have with it is the time scale. The points farthest to the left occurred a little over 10^10 years ago, or about 15 billion years ago. You know what happened then? That’s right, the Big Bang. These are not graphs of technological innovation, they’re graphs of all sorts of progress, even before humans were around, even before the earth was formed. If you look at the original data used to compile that list, most of the events are things along the following lines:

  • the Big Bang
  • the Milky Way is formed
  • the first life on Earth
  • the first flowering plants

In my opinion, much of this data doesn’t really say anything about technological innovation. I’d say if we really wanted to look at this data for technology itself, we should start less than 100,000 years ago. I don’t have dates really pinned down, but it’s my understanding that around then, people started burying their dead (implying some sort of society) and the beginnings of language formed. This time period, along with all human civilization, is in the right half of the graph, which makes me think half this data isn’t really relevant. In this subset of the data, only 8 of the 15 lists have any entries at all. I assume I am missing the point here, but I honestly can’t figure out why this data is at all relevant to the technological revolution. Wouldn’t it be better to look only at the subset of time where humans had technology of some sort, and then to populate it with a whole bunch of other data points? The ones that jump to mind in no particular order are the invention of calculus and Newtonian mechanics, the creation of steel, antibiotics, the rise of countries and empires, the mastery of electricity, the invention of the computer (or the Babbage machine, or something), the telephone, and the domestication of animals. Of the examples I mention, electricity and the rise of empires are each on a single list, and none of the other topics appear anywhere. I think these are all developments that totally revolutionized life at the time, and I’m confused why none of them are considered in this discussion.

As a bit of an aside, I tracked down a couple of the papers that originally published the data plotted (these are the Modis 2003 and Modis 2002 data, respectively). Modis compiled his lists by looking at many other lists of important milestones and choosing the entries on which they all agreed. Do you know which lists he used? In order, they are Carl Sagan, AMNH, Encyclopedia Britannica, ERAPS at the U of A, Paul Boyer, &c. Kurzweil’s graph includes two lists of data that are just averages of the other lists. No wonder they all correlate so well! I couldn’t track down original sources for the rest of the data, but that second link contains the final rankings of the other sources, so you can see what they considered important.

Modis’ 2003 analysis tries to see whether an exponential or logistic curve would fit the data better, and does not come to a conclusion (if you’re not familiar with logistic curves, they’re S-shaped things that come up all the time when modeling population growth, and for the first half of the graph they appear exponential but then they flatten out). In other words, Modis looked at the same data as Kurzweil, and couldn’t tell if it was exponential and there will eventually be a Singularity or if it was logistic and there will never actually be such an event.
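
To see how hard it is to tell the two apart from early data alone, here is a small illustration of my own (arbitrary parameters, not Modis’ data) that fits a pure exponential to the first half of a logistic curve:

```python
import numpy as np

# Logistic curve L / (1 + exp(-k*(t - t0))) with arbitrary parameters; for t well
# below the midpoint t0 it is approximately L * exp(k*(t - t0)), i.e. exponential.
L, k, t0 = 100.0, 0.5, 50.0
t_early = np.linspace(0, 25, 50)                      # only the early half
logistic = L / (1 + np.exp(-k * (t_early - t0)))

# Fit a pure exponential a*exp(b*t) to the early data via a line fit in log space.
b, log_a = np.polyfit(t_early, np.log(logistic), 1)
exp_fit = np.exp(log_a + b * t_early)

# The worst relative disagreement is a tiny fraction of a percent, so early data
# alone cannot tell you whether the curve will keep exploding or flatten out.
print(np.max(np.abs(exp_fit - logistic) / logistic))
```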

So to sum up so far, the graph Kurzweil is always pointing to does not (and, by its very construction, cannot) say anything at all about the future, and indeed predicts that the Singularity has already occurred. The graph starts at the Big Bang, doesn’t involve humans until the second half, and doesn’t really involve technology until a third of the way from the right edge, which makes me think that most of the data is not relevant to a technological Singularity in the first place. Moreover, several of the datasets he uses are just averages of the other datasets he uses, which makes me question how independent the data really are. And on top of that, the creators of some of his datasets have analyzed the same data and come to different conclusions (including the conclusion that no conclusion can yet be reached).

Now, let’s examine Kurzweil’s claims about exponentially increasing life expectancy. As far as I can tell, he’s spot on in saying that life expectancy has, as a general rule, grown exponentially over time. He then concludes, however, that this growth will continue, and that we’ll all live forever once life expectancy starts growing faster than we age. To check this, let’s look at why life expectancy has grown so much over the past few millennia. Kurzweil claims that in ancient Egypt, life expectancy was around 25 years. I can only find sources that say it was 35 to 40 years, but let’s give him the benefit of the doubt and go with 25 years. This does not mean that nobody lived past 25! What it really means is that most adults lived to the ripe old age of 60-80, but child mortality rates were through the roof (around one third of children died before age 5). The same pattern held in ancient Greece and ancient Israel.

Even today, there is a pretty good correlation between infant mortality and life expectancy. Note that no matter what year you look at, the graph sort of looks like it is composed of two lines: a steep-ish line for infant mortality over 80 deaths per 1,000 babies, and another, less steep line for lower infant mortality rates. I suspect this is because once infant mortality gets small enough, it stops being the dominant factor in life expectancy and the “true” life expectancy takes over (i.e. people start dying of old-age complications), though I have no source to back up this claim besides my own intuition. I should warn you that the infant-mortality axis in that graph is logarithmic, so please check my argument and make sure you agree with my analysis, rather than accepting it blindly. Certainly correlation doesn’t imply causation, but the historical records from millennia ago often depict adults living to ripe old ages that aren’t reflected in the average life expectancy until the twentieth century. Child mortality has been decreasing exponentially, which has in turn caused life expectancy to appear to increase exponentially. However, these days child mortality is fairly low in most of the world, and we won’t increase life expectancy much more by decreasing child mortality further. Despite the improvements in avoiding premature death, for over two millennia “old” people have lived to be 60-80, with some unusual cases living into their 90s (such as Ramses II) and even over 100 (Eratosthenes). I admit that it wasn’t until the 20th century that anyone lived to be 120, but the age at which most people die of old age has stayed pretty consistent. Although I expect this age to increase somewhat in the future, I cannot anticipate any runaway exponential increase in the age at which people die from the medical complications of old age.
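
To make the point concrete, here is a toy calculation with illustrative numbers of my own (not historical data): hold the age at which adults die fixed and vary only child mortality, and the life expectancy at birth swings by about twenty years all on its own:

```python
# Illustrative numbers only: adult lifespan is held fixed at 70, and a fraction q
# of children die at an average age of 2. Life expectancy at birth still swings
# by about twenty years as child mortality falls.
adult_age, child_age = 70.0, 2.0
for q in (0.30, 0.10, 0.01):
    e0 = q * child_age + (1 - q) * adult_age
    print(f"child mortality {q:.0%}: life expectancy at birth ~ {e0:.1f} years")
# child mortality 30% -> ~49.6 years, 10% -> ~63.2 years, 1% -> ~69.3 years
```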

Don’t get me wrong, I’d love it if these things actually happened. I suspect I would really enjoy it if technology took off so fast I couldn’t keep up with it, and I’d certainly enjoy living many hundreds or even thousands of years to experience it all. And I wish Kurzweil, Marvin Minsky, Aubrey de Grey, and all the others the best of luck in making this a reality. I just don’t expect the Singularity or immortality to happen within the next generation, if ever. Then again, Kurzweil talks about how the naysayers in the Human Genome Project didn’t expect it to be completed on time, but it worked out because of the exponential growth of the technologies used. He talks about how most of this exponential growth is sustained by jumping from one technology to the next as the old one uses up its potential, and I can believe that this can continue (now that child mortality is low, I’m sure we will increase life expectancy by making adults live longer; once chemistry gave rise to life, things progressed faster than they had through physics alone; now technology is progressing faster than evolution; and once AIs can think better than humans, I’m sure they’ll improve technology faster than we currently do), but I don’t think it will continue exponentially forever. I’d love to be one of the naysayers proven wrong, but from where I’m standing right now I just can’t see these things happening to the huge extent that Kurzweil claims.

I guess the point of this rant is that although Ray Kurzweil’s data appears correct, I don’t think it supports the conclusions he draws, as pleasant as those conclusions are.

14 Comments

  1. I remember when he spoke at Mudd, everyone was making fun of him afterward as sort of a kook. And now he’s all over the place, and I was telling people that he may not be correct, but I didn’t have any convincing explanations for why I thought that. Your post is good.

  2. csn says:

    Yeah, a related area of Singularity talk is people from a primarily computing background being very optimistic about mapping out the brain and basically understanding consciousness. But if you have any appreciation of the enormous complexity of the brain and some of the seemingly intractable problems of consciousness, this is kind of laughable, especially given that we still know so little.

    • Alan says:

      Well, actually this isn’t nearly as intractable as you might think. I was at the American Association for Artificial Intelligence conference in 2005, and there were dozens of talks on modeling various parts of the brain (the visual cortex, the sensorimotor cortex, the neocortex, etc.). There were talks about systems that could construct logical arguments and reason about the world, and even programs that could learn the rules of arbitrary games and play them fairly well. Not to mention all the advances in unsupervised learning in the past few years, which let programs pick up on patterns and trends without being explicitly told about them. Natural language processing isn’t quite solved yet, but it’s good enough that you can tell what such systems are talking about (for instance, Google Translate and AltaVista Babelfish can usually get the point across, even if they’re not perfect). Just because it’s an enormously complex problem doesn’t mean lots of small bits and pieces haven’t already been solved. I wouldn’t be in the least surprised if, by 2030, we had AIs as smart as young children.

      • csn says:

        You’ve just proven my point :) This is exactly what I’m talking about–people from an AI background who think they can map the brain the same way they might an artificial neural network. I can see how you would be optimistic given all these tidbits, but if you have any kind of understanding of neurobiology, it just makes you laugh. I mean, first of all, the brain isn’t partitioned so neatly. Just because we’ve been able to map certain areas to what seem to be very general functions doesn’t really mean we understand it at all. It doesn’t tell us how those areas interrelate. The problem with the brain is that it is not really designed like anything humans have ever made. The parts all seem to interlock with one another in a way that is really far from being understood, and it constantly recreates itself. If anything, I personally think that the idea of complex systems arising out of simple rules is a good model, at least in the sense that it gives an idea of how very, very hard it is going to be to actually get a grip on what’s happening.

        However, I don’t think you can study neuroscience in a meaningful way without getting very deep into questions of philosophy of mind: What is consciousness? What is a self? Originally, I wanted to go into neuroscience, but gradually I came to the conclusion that the nature of the mind is such that even the most perfect understanding of the brain would not necessarily lead me to a full understanding of my mind.

          • csn says:

            Thanks for that. I’ve started perusing them. The nature of time is one of those things I contemplate a few times a week, and it still blows my mind–first, that we think of time by itself when it’s really part of space, and second–what sense does it make for anything to be “moving,” much less in a straight line!? It completely doesn’t make sense. Agh!

        • Alan says:

          I’m sorry this got so long. The short version is that yeah, we actually know a lot about the brain (in particular a lot of low-level details), we don’t need to know every single tiny detail to build a mind and get it to work, and we have already considered questions about philosophy of the mind.

          I’m confused as to what your point was. I had thought it was that we know very little about how the brain works; I replied that we actually know a fair amount and gave some high-level examples of what has already been done, and you said this just shows how little we know and that any claims of knowledge are laughable. Am I misunderstanding you? Certainly one can’t just put all the subsystems in a box and expect them to work together correctly, but I think it’s totally within the realm of possibility to connect them together correctly. To some extent, these connections are already understood (for instance, a good overview of how the five senses work and how they’re processed by higher faculties is in Sensation and Perception by Goldstein). Could you explain why you think this is significantly harder than the people who actually work on it think it is?

          (I’ve moved a very long tangent about our understanding of vision into the next comment, since LJ has a maximum size restriction on comments)

          Another thing to keep in mind is that it’s possible to understand how something works and still not know (or need to know) how it accomplishes particular things. I have a pretty good grasp of how machine learning works, but that doesn’t mean I can watch a classifier and point out when and where it learns particular things, or even where the knowledge is stored. But I can still build ML systems even without this understanding. As a more striking example, consider the work of Ray and Pargellis on artificial life in the early ’90s (see pages 116-118 of Life’s Other Secret for a quick overview; search for “computer” to see those pages): they didn’t in the least understand how their systems became self-replicating or how they developed parasitic traits or sexual reproduction, but they didn’t need to understand every minute detail to get the system to show interesting behavior anyway. As long as one understands enough to build these things and get them to work, one doesn’t need to understand exactly what’s going on at any given moment (and we’re starting to get pretty close to having enough knowledge to build one).
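
          (To make the machine-learning point concrete, here is a small sketch of my own, using a modern library rather than anything from the systems I mentioned: we can train a classifier and verify it has learned something, but nothing in it tells us where any particular piece of knowledge lives.)

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Train a small neural-network classifier on handwritten digits. We can verify
# that it has "learned" (high accuracy), yet nothing here tells us which weights
# encode which piece of knowledge -- exactly the situation described above.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))   # typically above 0.9

# The learned knowledge is smeared across thousands of weights:
print([w.shape for w in clf.coefs_])                  # [(64, 64), (64, 10)]
```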

          I think questions about philosophy of the mind have already been asked and studied (Searle’s Chinese Room and Chalmers’ zombies jump to mind, for example), and the conclusion at which people have arrived is that we’re really just glorified machines, that the soul does not actually exist, that consciousness is mostly (if not entirely!) a fascinating illusion, and that in principle we can totally build people and it’s just a matter of sitting down and figuring out the details. For more on this, see the links that inferno0069 pointed out.

          To sum up, I don’t know what gaps in knowledge you’re talking about, that you think this can’t possibly be solved in twenty years. Could you elaborate?

          And to point out one quick thing I should have done before: the world will be revolutionized when/if we create human-level AIs, but I doubt it will be much more impactful than, say, the Industrial Revolution or the invention of language. It’s just another paradigm shift to plot on Kurzweil’s flawed graph, and I doubt it will have the huge impact of the supposed Singularity.

          • csn says:

            So, Chris already linked to the IEEE issue, but specifically I would like you to read: http://www.spectrum.ieee.org/jun08/6278/3
            I think this sums up some of my views fairly well. Notice that, unlike a lot of the other people talking about the Singularity or big advances in human-level AI, these people have made studying the brain their life’s work, and they strike a rather different tone from the more hyperbolic claims of the computer scientists, with good reason.

            Once you’ve read that article–I think they express well how difficult it is to do what you’re talking about–note that, in my opinion, by excluding all those senses of what it is to be human, they’re talking about an extraordinarily low-level threshold of “consciousness”–and even this is a huge challenge!

            I think the way they go about it, discussing consciousness as a continuous gradient, makes sense. By this definition, even a lightbulb has some amount of consciousness (and why not?). Now, imagine what it will take to reach an objective understanding of high-level consciousness–all those other functions accounted for, the subjective experiences of being human–a very large amount of integrated information, indeed.

            Stepping out to an even more abstract level, what is most interesting to me is what happens once you get to this point. One of the classic examples is: you have a physiologist who has a perfect understanding of how the brain works–but she’s never seen the color red before. So when she walks into a red room for the first time, what happens? Well, it’s obvious to me that she gains new information. And this is the fascinating part to me, the irreconcilable domains of objective and subjective information/experience. The collection of perfect abstract knowledge is itself a collection of brain states, but the experience of what it is to be in those various states just is those various states. At a certain point, we cannot know what an experience is except by experience.

        • Alan says:

          If you mean that a high-level neuroanatomical overview doesn’t imply a low-level understanding, then I agree, but we also have low-level understandings of most of these systems. As an example, we know a fair amount about how the brain processes vision and recognizes objects. We understand the way rods and cones perceive light, the way lateral geniculate cells can take the output of rods in a bullseye pattern to detect features, how simple cortical cells can use rows of LG cells to detect lines at various slopes, how complex cortical cells can take nearby simple cortical cells over time to detect movement in various directions, how end-stopped cortical neurons can take multiple complex cortical cells to track particular corners as they move around, and how the pyramidal neurons and interneurons in the neocortex process the results of all these other cells and the responses of other parts of the neocortex in this amazing feedback tree (it has a hierarchical structure, so I hesitate to call it a feedback loop) to recognize shapes and poses in the world. Add in multimodal neurons that correlate these visual data with results from the other senses and your previous beliefs about your surroundings, and it’s not too difficult to create a vision-based representation of your surroundings. Each of these steps has already been done in silico and has already been used to create robust shape and object recognition systems. And just in case you think I’m bluffing, this is all stuff I learned several years ago (I’m not pulling it out of thin air for the sake of argument, though I did need to look up the names of some of the cells in my old textbooks). People who work on computer vision often study human vision and model their software after the way the brain does it.
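
          (Here is a rough toy version of the first couple of stages I just described, a sketch of my own with illustrative filter sizes rather than a physiological model:)

```python
import numpy as np
from scipy import ndimage

# Synthetic image containing a single vertical bar.
img = np.zeros((64, 64))
img[20:44, 31:33] = 1.0

# Stage 1: center-surround ("bullseye") responses, modeled as a difference of
# Gaussians, loosely analogous to lateral geniculate receptive fields.
center_surround = ndimage.gaussian_filter(img, 1) - ndimage.gaussian_filter(img, 3)

# Stage 2: "simple cell"-style oriented detectors, built by pooling the
# center-surround responses along a vertical or a horizontal line of positions.
vertical = ndimage.uniform_filter1d(center_surround, size=9, axis=0)
horizontal = ndimage.uniform_filter1d(center_surround, size=9, axis=1)

print("vertical detector peak:  ", vertical.max())    # responds strongly to the bar
print("horizontal detector peak:", horizontal.max())  # responds much less
```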

          I’ll grant that I personally don’t understand exactly how these surroundings, once calculated, are represented and manipulated in the brain, but I’m by no means an expert on this. I strongly suspect that someone who has actually studied the field does understand such things. As far as I can tell, the main problem is that each subsystem (vision, hearing, logic, intention, memory, pattern recognition, &c) is complicated (but not intractably so!), and putting them all together is complicated enough and requires enough work that no individual group has done it yet. But again, it’s not intractable; it’s just really hard!

  3. inferno0069 says:

    I hope you got (the urge to write) this from, or have elsewhere been pointed toward this: http://www.spectrum.ieee.org/singularity
    I haven’t read much yet, but what I have read has been pretty good.

    • Alan says:

      I hadn’t seen that before; thanks for showing it to me! I got the urge to write this after seeing that Kurzweil had given his talk at TED. I remembered how fishy it sounded the first time I heard it, so I decided to fact-check a little, which led to some interesting problems with his arguments, which led to more fact-checking, which eventually turned into this.

      But that seems like a pretty interesting page; I’ll have to look through it more. Thanks!

    • jcmdev0 says:

      “In recent decades, the enthusiasts have been encouraged by an enabling trend: the exponential improvement in computer hardware as described by Moore’s Law, according to which the number of transistors per integrated circuit doubles about every two years.” — http://www.spectrum.ieee.org/jun08/6306

      Too bad Moore’s Law is dead. (Intel says 2026 will probably be the end, but even before that, there are all of the heat and wire delay problems of faster clock speeds. I can’t dig up a link at the moment).

    • csn says:

      Oh, I was just about to link to that...
