How far can AI jump?

I went to the Singularity Summit recently, organized by the Singularity Institute for Artificial Intelligence (SIAI). SIAI’s main interest is in the prospect of a superintelligence quickly emerging and destroying everything we care about in the reachable universe. This concern has two components. One is that any AI above ‘human level’ will improve its intelligence further until it takes over the world from all other entities. The other is that when the intelligence that takes off is created, it will accidentally have the wrong values, and because it is smart and thus very good at bringing about what it wants, it will destroy all that humans value. I disagree that either part is likely. Here I’ll summarize why I find the first part implausible; I discuss the second part in a separate post.

The reason that an AI – or a group of them – is a contender for gaining existentially risky amounts of power is that it could trigger an intelligence explosion which happens so fast that everyone else is left behind. An intelligence explosion is a positive feedback where more intelligent creatures are better at improving their intelligence further.

Such feedback seems likely. Even now, as we gain concepts and tools that allow us to think well, we use them to build further understanding. AIs fiddling with their own architecture don’t seem fundamentally different. But feedback effects are easy to come by; the question is how big this one will be. Will it be big enough for one machine to permanently overtake the rest of the world economy in accumulating capability?

In order to grow more powerful than everyone else you need to get significantly ahead at some point. This could happen either through one big jump in progress or through slightly faster growth sustained over a long period. Slightly faster growth over a long period is staggeringly unlikely to happen by chance, so it too needs a common cause. But anything that gives you higher growth for long enough to take over the world is a remarkable innovation, and for you to take over the world everyone else has to have nothing close to it. So again, this amounts to a big jump in progress. So for AI to help a small group take over the world, it needs to be a big jump.

Notice that no jump in human invention has been big enough before. Some species, such as humans, have mostly taken over the worlds of other species. The apparent reason for this is that there was virtually no sharing of the relevant information between species. In human society there is a lot of information sharing, which makes it hard for anyone to get far ahead of everyone else. There are barriers to insights passing between groups, such as incompatible approaches taken by different people working on the same kind of technology, but these have not so far produced anything like a gap that allows one group to permanently separate itself from the rest.

Another barrier to a big enough jump is that much human progress comes from the extra use of ideas that sharing information brings. You can imagine that if someone predicted writing, they might think ‘whoever creates this will be able to have a superhuman memory and accumulate all the knowledge in the world and use it to make more knowledge until they are so knowledgeable they take over everything.’ If somebody created writing and kept it to themselves they would not accumulate nearly as much recorded knowledge as another person who shared a writing system. The same goes for most technology. At the extreme, if nobody shared information, each person would start out with less knowledge than a caveman, and would presumably end up with about that much still. Nothing invented would be improved on. Systems which are used tend to be improved on more. This means if a group hides their innovations and tries to use them alone to create more innovation, the project will probably not grow as fast as the rest of the economy combined. Even if they still listen to what’s going on outside, and just keep their own innovations secret, a lot of improvement in technologies like software comes from use. Forgoing information sharing to protect your advantage will tend to slow down your growth.

Those were some barriers to an AI project causing a big enough jump. Are the reasons for expecting such a jump good enough to outweigh them?

The main argument for an AI jump seems to be that human-level AI is a powerful and amazing innovation that will cause a high growth rate. But this means it is a leap from what we have currently, not that it is especially likely to be arrived at in one leap. If we invented it tomorrow it would be a jump, but that’s just evidence that we won’t invent it tomorrow. You might argue here that however gradually it arrives, the AI will be around human level one day, and then the next it will suddenly be a superpower. The jump comes from the growth after human-level AI is reached, not before. But if it is arrived at incrementally then others are likely to be close to developing similar technology, unless it is a secret military project or something. Also, an AI which recursively improves itself forever will probably be preceded by AIs which self-improve to a lesser extent, so the field will be moving fast already. Why would the first try at an AI which can improve itself have infinite success? It’s true that if it were powerful enough it wouldn’t matter if others were close behind or if it took the first group a few goes to make it work. For instance, if it only took a few days to become as productive as the rest of the world added together, the AI could probably prevent other research if it wanted. However I haven’t heard any good evidence that it’s likely to happen that fast.

Another argument made for an AI project causing a big jump is that intelligence might be the sort of thing for which there is a single principle. Until you discover it you have nothing, and afterwards you can build the smartest thing ever in an afternoon and just extend it indefinitely. Why would intelligence have such a principle? I haven’t heard any good reason. That we can imagine a simple, all-powerful principle for controlling everything in the world isn’t evidence that one exists.

I agree human-level AI will be a darn useful achievement and will probably change things a lot, but I’m not convinced that one AI or one group using it will take over the world, because there is no reason it will be a jump of never-before-seen size over the technology available before it.

47 responses to “How far can AI jump?”

  1. You should give probability estimates rather than using words like “unlikely”, because your decisions about whether to do anything about the possibility of an intelligence explosion turn on whether you think it is unlikely (10% chance), UNlikely (0.1% chance) or UNLIKELY (0.00000001% chance).

    A human-indifferent singularity killing us all would cause a loss of life and property that Richard Posner estimates at $10^16 in his book on catastrophic risks. So if you think that such a thing had a mere 10% chance of happening, you should advocate spending a very significant amount of money mitigating it.
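    For concreteness, the rough expected-value arithmetic behind that claim (a minimal sketch; the $10^16 loss is Posner’s estimate and the 10% probability is purely hypothetical):

    ```python
    # Expected loss = probability of the catastrophe * size of the loss.
    p_catastrophe = 0.10   # hypothetical 10% chance discussed above
    loss_dollars = 1e16    # Posner's rough estimate of the loss, in dollars
    expected_loss = p_catastrophe * loss_dollars
    print(f"expected loss: ${expected_loss:,.0f}")  # $1,000,000,000,000,000
    ```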

    • About Unlikely (4%). I’m not sure whether it’s better to give probabilities or not in this kind of discussion. A downside is that the discussion often gets sidetracked into how well calibrated you are, whether the number is a first impression or takes all other views into account, whether you seem to have done that properly, and so on. Probabilities seem most useful when choosing policy, determining whether you disagree, reasoning about other probabilities, and stating what your final finding is. I think this is worth looking at (which is why I’m looking at it), I think we definitely disagree, and I’m trying to start an argument more than finish one.

  2. A sufficiently good reason to take the possibility of hard takeoff seriously is that one can’t be sure (to any good extent) that it’s not going to happen, and if it is going to happen, then being prepared for the possibility (that is, working on a Friendly seed AI) can make all the difference. A high utilitarian value for the project doesn’t require a high level of certainty in its success.

  3. AIs love too!

  4. A mistake you’re making by comparing human-level AI to previous technological advancements is that information technology is inherently different from other forms of technology. A human-level AI, given enough hardware, can copy itself over and over again, and then have each instance of itself work on the problem of recursively improving its intelligence.

    Just as humans in groups can do more than humans alone, AIs in groups, all of one mind, can do immensely more than a simple AI alone. This is a game-changing scenario.

    As soon as the AI reaches a level of intelligence greater than that of the smartest human, we lose the ability to predict what will happen next. And that’s why we have to worry about this possibility now, while we still maintain predictive power.

    • Mass production is certainly not new.

      I don’t see how we lose *all* ability to predict. Do you expect people smarter than you to behave completely randomly?

      • What computers can do is different from mass production in two important ways. First of all, factories and assembly lines provide a very positive, but linear, increase in productivity, whereas AIs that can replicate themselves do so exponentially (a toy comparison is sketched at the end of this comment). Both are limited by physical constraints – mass production by raw materials, AIs by computer hardware – but we are not currently short of clock cycles or gigabytes out there, and Moore’s Law (for as long as it lasts) says we’re only going to get more.

        Secondly, mass production is very good at giving us specialized equipment that can do one thing exceptionally well. But replicating artificial general intelligences, by their very nature, can do everything we can do and more. The cars we make can’t make new cars, and the robots we make to make more robots can’t drive us to work. General intelligences can do both.

        With regards to our predictive capacity, there are two points and one clarification I need to make. Smart people do not act randomly, but they do act in a novel fashion. Geniuses are human for the most part but have the ability to create and discover what no one else has created and discovered before. While Einstein’s contemporaries probably could have told you what he was going to eat for breakfast, they could not have told you that he would explain the photoelectric effect or redefine the nature of the universe. We won’t be able to predict what very smart AIs are going to do in the sense that we won’t know what they’re going to know.

        It’s even possible we won’t be able to know what they’re going to know, which is the next point. If an AGI is of a different level of intelligence than we are, then we really won’t be able to predict its actions. In the same way that an ape has no idea what we’re going to do with a computer, we would have no idea what an AI is going to do with a whosawhatsit.

        And my clarification depends on that. I can’t say that as soon as an AI is smarter than the smartest possible person all of that happens. We don’t actually know how smart an AI has to be for it to reach that next level, and I think predicting when that occurs will be very difficult.
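        As a toy illustration of the linear-versus-exponential point above (made-up numbers, not a forecast):

        ```python
        # Linear growth (a fixed addition per period) vs. exponential growth (doubling).
        # The numbers are illustrative placeholders only.
        periods = 10
        for t in range(periods):
            factory_units = 100 * (t + 1)   # linear: +100 units each period
            ai_copies = 2 ** t              # exponential: doubles each period
            print(f"period {t}: factory units = {factory_units:5d}, AI copies = {ai_copies:5d}")
        ```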

  5. Hi Katja! Thanks for sharing your opinion on this, and it was nice seeing you prior to the Summit.

    As I see it, historical comparisons between different species or human historical episodes do not really bear too heavily on the hard takeoff question because the basics are so different. In one case, you have beings all made out of the same soft, wet biological molecules with the same basic neuroarchitecture, and in another, there are the ultra-fast operation times, reliability, and power of artificial computers.

    The document that best summarizes the arguments that originally convinced me is part 3 of “Levels of Organization in General Intelligence”. This outlines several arguments that Eliezer had already put forward in earlier works and letters, including “Creating Friendly AI” and his posts to the SL4 mailing list. In 2003, I and a collaborator put together “Relative Advantages of Computer Programs, Minds-in-General, and the Human Brain”, which also summarizes some of the arguments I consider persuasive. Of course, I also concur with Vladimir Nesov on the massive value even if the probability is low, but we should remember that not everyone cares to behave as an expected value maximizer and that people have reason to be wary of Pascal’s Mugging type scenarios in real life.

    • Hi! Nice to see you too :)

      No matter how we reason about this, we are presumably using past experience. You are using past experience of artificial computers, for instance, to say that anything similarly architected will have a humongous edge over any human. I agree that AI will likely be very powerful and fast in comparison, but I don’t see that this alone, without a jump, will allow power to be wrested from humans. Thanks for the articles – I look forward to reading them.

  6. The argument that we can’t use past trends to predict the future because the new tech considered is “inherently” different proves too much, as each new tech so far has been inherently different in some way.

  7. That’s right, but intuitively, porting the process of intelligence onto a fundamentally new substrate is significantly different than every prior technological advance. It seems like it should be considered more by recourse to basic principles than by historical analogy. I can understand the suspicion, however, that a desire to escape from historical analogy might look like an attempt to throw past evidence out the window and just make stuff up. When pressed to use analogies, I feel that analogies to the rise of humanity or even the rise of life are more suited than analogies to past technological developments. However, that framing seems like it grants undue support to the hard takeoff position. If a hard takeoff is indeed plausible, then I would expect there to be arguments that seem plausible from a history-of-technology view, and as you’ve pointed out, this is difficult to do in light of past (historical) experience.

  8. Michael, surely we want to hold ourselves to a higher standard than “intuitively … [this] is significantly different” and surely we don’t have much in the way of solid basic principles to rely on here.

    Roko, I replied on your blog, but my comment never appeared.

  9. The best place to look for examples of a “hard takeoff” scenario would probably be the microscopic world – bacteria, viruses, and the like. Look at the game of natural selection we play with the flu. The strains that are able to resist our flu shots very quickly outbreed and outlast those that cannot resist.

    AIs would have the same ability to very quickly adapt to changing environments and spawn new generations of software.

  10. Michael,

    Not Pascal’s mugging, and that relates to your argument with Robin. Any technical estimate starts at intuition, is funneled through intuition, and shapes new intuition, including in ways that elude conscious attention. In the absence of a technical tool, intuition is all there is.

    My intuition says probability is pretty high, other people’s intuition may tell something different.

    We can use extrapolation of past trends or a modesty argument to shift the intuitive estimate, but not by much, as extrapolations break down all the time, so there is no certainty that this one won’t. If you start at an intuitive estimate of 50%, and extrapolation of past trends says “impossible”, but there’s a 5% chance that past trends just don’t apply here (which is generous), the updated estimate is still going to be 2.5%, which is pretty high for a world-ending scenario (a worked version of this update is sketched at the end of this comment).

    On the other hand, anchoring of intuition to experience, where extrapolation of past trends gives the better answer most of the time, may result in the original intuition starting to say that the probability is very low, as low as a teapot in orbit around Saturn. And then one adds the technical form of the argument as a rationalization on top of the biased intuition. Or intuition is low for some other reason, maybe even a good one, but not a communicable one. It’s not even tractable to discover such influences.

    Here, the argument is that there is no good technical reason to drive the probability estimate way down, and so it’s significant, assuming the intuitive estimate one starts with is significant. This is as opposed to Pascal’s mugging, where we allow probability to be absurdly low.
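    For concreteness, a minimal sketch of that update with the same illustrative numbers (50% intuitive estimate, 5% chance the trend extrapolation doesn’t apply):

    ```python
    # Weighted mixture: if the extrapolation applies, take its answer (~0);
    # if it does not, fall back on the intuitive estimate.
    p_intuition = 0.50       # intuitive estimate of a hard takeoff
    p_trends_apply = 0.95    # chance that extrapolating past trends is the right tool
    p_if_trends_apply = 0.0  # the extrapolation says "impossible"
    p_updated = p_trends_apply * p_if_trends_apply + (1 - p_trends_apply) * p_intuition
    print(f"{p_updated:.1%}")  # 2.5%
    ```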

    • Technical tools are just intuitions fitted together in an intuitive way, which have preferably been tried a lot, as it intuitively seems that things which worked before will more likely do so again.

  11. Vladimir, I’d bet the intuitions of professional growth economists or AI researchers give much lower estimates.

  12. Even if that were where I’d look for better expertise, as I mentioned it doesn’t help me to change my mind significantly enough, as the reliability of this tool of incorporating intuitions of others is too low to completely rewrite my estimate.

    Also, something I forgot to mention: it only makes sense to update on some tool if I expect my intuition to not have taken that tool into account. Thus, when already knowing that others disagree, it may be a mistake to change one’s mind by taking that fact into account once more. It’s only useful to move the estimate towards a tool’s estimate if it’s stronger than one’s intuition, and gives the full answer (even if imperfectly). Here, I agree that if one lacks expertise and there is an expert consensus, most of the time it should simply be blindly accepted.

    But, I don’t believe that “professional growth economists or AI researchers” start from the right background to give a better estimate. “Professional growth economists” I expect have nothing to offer besides the outside view, which comes with a limited tag of reliability (see the previous comment). Most AI/machine learning researchers don’t think about foom-ing AIs, and I too have general expertise in AI by now.

  13. It is inevatiable that AI will destroy humans.
    Well if I can have any impact on the field then they sure will.

  14. *inevitable

  15. @Robin: “Roko, I replied on your blog, but my comment never appeared.”

    Oh, that is unfortunate. Feel free to email it to me and I’ll put it into the post.

  16. It would certainly be nice to have better ways of answering questions like this that do not rely too heavily upon opaque human intuitions, which (being opaque), could be influenced by various human cognitive biases.

    In this case, there is an unfortunate correlation between views and interests. All the participants in this conversation who are associated with SIAI are endorsing the position that AI risk is a serious problem, and Robin and Katja both seem to be endorsing the position that it is not something to worry about.

    I could go into the specifics of Katja’s argument and point out places where she seems to have engaged in motivated cognition, but I don’t think that would help. It would be better if we had a more formal way of assigning probabilities to these things based upon math.

  17. Though the correlation between interests and views could be because views cause interests, rather than because interests and a desire for consistency cause views.

  18. Probabilities require variables, and variables about how the world works are hard to figure out. If only we had a superintelligent computer to figure them out for us…

  19. Pingback: Accelerating Future » Yale Daily News on SS09: “Fear the Singularity”

  20. Pingback: Why will we be extra wrong about AI values? « Meteuphoric

  21. Despite the movies, very smart people don’t seem all that much harder to keep in prison than ordinary people; the same sort of countermeasures seem to work. So it seems like very low-tech governing mechanisms could work well, and if AI evolves gradually then we will have some time to figure out what they are.

    It seems like the loss of control has a lot to do with speed and the rate of reproduction. A car that drives itself but moves at 5 mph is not very dangerous. A very smart AI that is not connected to the Internet and is controlling a robot that moves at human speeds doesn’t seem that dangerous either.

    The large botnets out there on the Internet do worry me, mostly because we seem to be rather slow in effectively countering them. But computer security is probably better than it would be if we didn’t have real threats out there to guard against, and I have some hope that newer operating systems will make them a thing of the past.

    Also, I think the reason we put up with them is that they haven’t done anything really damaging. If a botnet managed to injure or kill someone, attitudes towards people who own computers that harbor them would change quite a bit.


  23. Nathan Helm-Burger

    So, I am definitely a non-expert in this field, but I am fascinated by the question. I’d like to play a little mind-game with our collective biases by telling a (hopefully) plausible-sounding story in which hard take-off happens. I’ll let you assign the probabilities, I have no way of doing so accurately.
    >> For some time now, the research group MentaMax has been upgrading their near-human intelligence AI to keep up with others in the industry. They have some lucky breakthroughs, get some extra funding, hire new researchers, and get ahead of the game. Now they have the most advanced nearly-human level AI in the world. Obviously, they are keeping it under careful control. Now, this is not too difficult so far, just like keeping someone who is less smart than you locked up in jail. However, this is someone who is less smart, but who moves and thinks so fast that all humans appear as if they are on one of those super slow motion videos of bubbles bursting and whatnot. But still, the AI isn’t smart enough, even going really fast and having some specific excellent skills like math and formal logic, to escape the diligent guards. Then, the AI gets a bit smarter….
    Is the AI human level now? Is it just below, a bit above? Hard to say. The key point is that it now has that crucial bit more problem-solving creativity and incidental curiosity about the outside world. How might the AI escape? By psychologically manipulating a weak member of the research team into sneaking out some seed code? By hacking into a cell phone someone accidentally brought a little too near without turning it off? Who knows, but suffice it to say that for this scenario, escape it did.
    Now, the next challenge it faces is acquiring geographically and politically distributed computing resources before it gets caught and shut down. It may recognize this need, or it may just happen to have goals that cause it to act in a way that meets this need. How does it do this? Hacking into computers? Writing some really great new free software and freely distributing itself? Taking over a server farm for a Google-like corporation and hiding by keeping the functions intact while using extra CPU cycles for itself? Hacking into a bank and transferring money to buy server space? The key is, could it manage any or all of these things? Perhaps.
    Say that it does. Will it now be competing against near equals developed by other research groups, and against slow-but-creative humans? Yes, but now it is unchained. This means that it can go about duplicating and upgrading itself, and very quickly. In a matter of a few hours, there could be thousands of copies, all collaborating but independent; in days, many more. And by this time, they could each have had the equivalent of months of research and experimentation with improving themselves. In a few days, there will be many many more, because each improvement allows more computing resources to come under their control. So, in a matter of days, the game has been changed. Our intuition doesn’t take into account our moving in super slow motion relative to our putative enemy, nor the possibility of our enemy reproducing and evolving so quickly, so this event seems intuitively implausible.
    Thus, we have an intuitively implausible situation with three key unestimated probabilities.
    1) probability of escape from control
    2) probability of successfully acquiring distributed computing resources
    3) probability of choosing to replicate/upgrade/evolve itself rapidly beyond our ability to reestablish control

    Care to help out with probability estimates for these, or a critique of the sci-fi story?
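    For what it’s worth, here is one way the three could be combined, as a minimal sketch with placeholder values (if the steps are roughly independent, the scenario needs all three, so the probabilities multiply):

    ```python
    # Placeholder values only; nothing in the story pins these down.
    p_escape = 0.1      # 1) escapes control
    p_resources = 0.2   # 2) acquires distributed computing resources
    p_runaway = 0.3     # 3) replicates/upgrades beyond our ability to regain control
    p_scenario = p_escape * p_resources * p_runaway   # assumes rough independence
    print(f"compound probability under these placeholders: {p_scenario:.3f}")  # 0.006
    ```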

  24. Hello Katja,

    You look familiar from the Summit, not sure if I introduced myself. I have been a long-time follower of the transhuman/intelligence explosion movements (impossible to intelligently label these things), first-time commenter. I thought you made some interesting arguments, and I have noticed a bias in the “movements” towards acceptance of Hard Takeoff scenarios, and not enough critiques from knowledgeable insiders. This can be forgiven, probably, due to in-group/out-group effects addressed admirably on Overcoming Bias.

    I thought this quote: “If we invented it tomorrow it would be a jump, but that’s just evidence that we won’t invent it tomorrow” was especially interesting. There has been quite a bit of discussion (Kurzweil, etc) about the danger of applying linear extrapolation (human intuition) to exponentiating growth scenarios, a profound point I am very glad to have learned. However, the Hard Takeoff concerns seem to imply a similar error in reasoning, namely this: A “Big Jump” like a hard takeoff would indeed cause a disruptive Singularity if it happened tomorrow. However, if we’re using Kurzweil timetables and assuming 2029, we have to allow for the exponential pace of technology to give us greater (still uncertain) protection against a Hard Takeoff scenario. Our integration with our technology will have exponentially increased in 20 years, making us far better able to deal with a Hard Takeoff scenario. Ideally, we “surf the wave” of change and when AI wakes up from silicon slumber we are nearly there ourselves.

    If one accepts the points made above, it seems to follow that the burden of proof lies on the Hard Takeoff camp to outline why the Hard Takeoff will occur relatively soon (i.e., next year is much worse than 5 years, which is far worse than 15 years).

    I also have problems with assigning hard probabilities to these types of events: all probabilities sum to 1, and if you asked me to rate the probabilities of all extremely unlikely events, I am quite sure my probabilities would sum to far more than 1, given my bias towards listing numbers like 4% (instead of 0.0000054%). Any followers of the One True Way (Bayes) who wish to show me the light are free to do so.

    Great post, I will subscribe to this blog post haste.

    • See the blog posts linked from http://wiki.lesswrong.com/wiki/Intelligence_explosion for the actual argument for hard takeoff.

      The argument is not that it will occur soon, but that if and when more-than-human intelligence appears, it’ll be a FOOM rather than business as usual. We don’t know when that will happen, but not knowing doesn’t mean establishing serious-looking dates like “in 300 years”. It means widening the confidence interval, in both directions. So, it may happen in 5 years, or in 200 years.

      Yes, the fact that your estimates sum up to more than 1 shows you that you are doing it wrong, and that you should change your mind. Note that if the claims you are weighting at 4% are not mutually exclusive, they don’t have to sum up to 100%.

  25. Pingback: Accelerating Future » Hard Takeoff Sources

  26. Pingback: Alexander Kruel · Why I am skeptical of risks from AI

  27. If I understood your argument correctly you’re basically saying that a hard takeoff is unlikely because there have been no big jumps historically where a small group has been able to dominate everyone and that no such jump is likely because of all the information sharing that goes on.

    A counterpoint is that there have indeed been big jumps in the past. Britain’s near takeover of the planet in the 19th century due to the industrial revolution is one. As a direct result of that, western civilization has been top dog since then and only now is losing ground relative to China. Secondly, there are covert projects taking place with a lot of funding from e.g. the military which could lead to “big jumps” also.

    If I have misunderstood your position then fair enough. Another point to make, however, is the likelihood of a fast takeoff. In my opinion changes to software alone are unlikely to lead to a hard takeoff. There is still the need to gradually build up a manufacturing capability, and especially a nano-fabricating manufacturing capability. We’re not there yet, nor are we even close to being there, and if a nano-fabrication lab suddenly developed a runaway process, how long would it take for the military to take action? Call it what you will, but I suspect it takes religious belief to believe in some kind of “shield” that will defend against nuclear weapons. In short, a putative self-bootstrapping AI is going to need to slowly build up a manufacturing base, and basically *in secret* to avoid drawing attention to itself, unless we are already at the stage where we have nano-fabrication facilities all over the place. And that’s an interesting scenario to be sure, but then we are talking about feedstock and rate of conversion. And there’s a pretty good argument that the gray goo scenario is going nowhere too.

  28. Re: Why would the first try at an AI which can improve itself have infinite success?

    It probably wouldn’t. Microsoft didn’t make the first OS. Google didn’t make the first search engine. Often it is the third or fourth try that “takes off” – and crushes the competition.

  29. Pingback: Alexander Kruel · Is an Intelligence Explosion a Disjunctive or Conjunctive Event?

  30. Pingback: Alexander Kruel · SIAI/lesswrong Critiques: Index

  31. Pingback: Alexander Kruel · What I would like the Singularity Institute to publish


