SIA says AI is no big threat

Artificial Intelligence could explode in power and leave the direct control of humans in the next century or so. It may then move on to optimize the reachable universe to its goals. Some think this sequence of events likely.

If this occurred, it would constitute an instance of our star passing the entire Great Filter. If we are to cause such an intelligence explosion, then we are the first civilization in roughly our past light cone to be in such a position. If anyone else had been in this position, our part of the universe would already be optimized, which it arguably doesn’t appear to be. This means that if there is a big (optimizing much of the reachable universe) AI explosion in our future, the entire strength of the Great Filter lies in steps before us.

This means a big AI explosion is less likely after considering the strength of the Great Filter, and much less likely if one uses the Self Indication Assumption (SIA).

The large minimum total filter strength contained in the Great Filter is evidence for larger filters in the past and in the future. This is evidence against the big AI explosion scenario, which requires that the future filter be tiny.
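
To see the shape of this update, here is a minimal sketch in Python under a deliberately crude assumption: the filter splits into a past part and a future part, each independently weak or strong with equal prior probability, and all we observe is that the total is large. The setup and numbers are illustrative only, not part of the argument.

```python
# Toy model (assumed, for illustration): two independent filter stages,
# each "weak" or "strong" with prior probability 1/2.
combos = [(past, future) for past in ("weak", "strong") for future in ("weak", "strong")]
prior = 1 / len(combos)  # 0.25 per combination

# Observing a large total filter rules out "weak past AND weak future".
consistent = [c for c in combos if c != ("weak", "weak")]

# Posterior probability that the future filter is weak (tiny).
p_future_weak = sum(prior for _, future in consistent if future == "weak") / (prior * len(consistent))
print(p_future_weak)  # 1/3, down from a prior of 1/2
# A big AI explosion ahead needs a tiny future filter, so the large observed
# total filter already counts against it, before SIA enters the picture.
```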

SIA implies, for similar reasons but probably much more strongly, that we are unlikely to give rise to an intelligence explosion. As I pointed out before, SIA says that future filters are much more likely to be large than small. This is easy to see in the case of AI explosions. Recall that SIA increases the chances of hypotheses where there are more people in our present situation. If we precede an AI explosion, there is only one civilization in our situation, rather than potentially many if we do not. Thus the AI hypothesis is disfavored (by a factor the size of the extra filter it requires before us).
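
The arithmetic of that disfavoring is simple. Here is a minimal sketch with assumed numbers; the counts of one versus a thousand civilizations are illustrative placeholders, not anything the argument fixes.

```python
# SIA update (sketch): weight each hypothesis by the number of observers
# it puts in our present situation, then renormalize.
hypotheses = {
    # Big AI explosion ahead: the whole filter is behind us, so reaching
    # our situation is rare -- assume only 1 civilization ever does.
    "big AI explosion ahead": {"prior": 0.5, "civs_in_our_situation": 1},
    # Large future filter: reaching our situation is easy, so many
    # civilizations get here and then fail -- assume 1000 do.
    "large future filter": {"prior": 0.5, "civs_in_our_situation": 1000},
}

weights = {h: v["prior"] * v["civs_in_our_situation"] for h, v in hypotheses.items()}
total = sum(weights.values())
for h, w in weights.items():
    print(h, round(w / total, 4))
# The AI-explosion hypothesis falls from 0.5 to about 0.001: disfavored by
# roughly the factor of extra filter it requires before us.
```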

What the Self Sampling Assumption (SSA), an alternative principle to SIA, says depends on the reference class. If the reference class includes AIs, then we should strongly not anticipate such an AI explosion. If it does not, then we strongly should. Both conclusions are essentially applications of the Doomsday Argument.
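
For concreteness, here is a rough sketch of how the reference-class choice flips the conclusion. The population counts are illustrative assumptions only; the point is the direction of the update, not the numbers.

```python
# SSA / Doomsday-style update (sketch): treat us as a random draw from the
# reference class. Knowing only that we are among its first R members, a
# hypothesis predicting N total members gets likelihood min(1, R / N).
R = 1e10  # assumed: class members who have existed so far

scenarios = {
    # Reference class includes AI observers: an AI explosion adds vastly
    # many of them, so that hypothesis predicts a huge class.
    "AIs in reference class": {"AI explosion ahead": 1e13, "no AI explosion": 1e11},
    # Reference class is humans only: an AI explosion caps the class near
    # its current size, while no explosion allows a long human future.
    "humans-only reference class": {"AI explosion ahead": 1e10, "no AI explosion": 1e13},
}

for name, totals in scenarios.items():
    weights = {h: 0.5 * min(1.0, R / n) for h, n in totals.items()}  # equal priors assumed
    z = sum(weights.values())
    print(name, {h: round(w / z, 3) for h, w in weights.items()})
# Including AIs penalizes the AI-explosion hypothesis; excluding them favors
# it -- matching the two cases described above.
```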

In summary, if you begin with some uncertainty about whether we precede an AI explosion, then updating on the observed large total filter and accepting SIA should make you much less confident in that outcome. The Great Filter and SIA don’t just mean that we are less likely to peacefully colonize space than we thought; they also mean we are less likely to horribly colonize it via an unfriendly AI explosion.

21 responses to “SIA says AI is no big threat”

  1. Note that SIA-Doomsday also favors hypotheses on which attempts to counter Great Filter risks fail. Merely knowing about SIA-Doomsday can’t help very much to avoid doom. Indeed, SIA-Doomsday suggests that we are typical, and so civilizations usually discover SIA-Doomsday, only to fall prey to doom nonetheless. On the other hand, the feasibility of making changes in the character of our descendants (conditional on colonization) isn’t disfavored.

    • I don’t understand your last sentence.

      Trying to counter risks will help you, just not as much as you naively expect. Thinking of a given solution to a risk should increase your expectation of others having found that solution, but not infinitely so. Your prior on every civilization, or most civilizations, or any civilizations coming up with a given idea is going to be lower the more obscure the idea. If you make enough attempts to survive, you should be able to at least think you are likely unique in that set of attempts despite SIA.

  2. If we *can* colonize large parts of the universe, that possibility is correspondingly more important from a utilitarian perspective.

  3. Does your analysis change if a bad foom would spread out at almost the speed of light, destroying all life it came across, meaning that very few people in our reference class would ever observe the effects of a bad foom?

    • No. SIA doesn’t have reference classes. Not only would few people observe a bad foom in that case; few people would observe anything. The only question is how many people there would be in our situation (with respect to AI fooms) in a) a universe where we cause an AI foom, b) a universe where somebody else does, and c) a universe where nobody does.

      Answers: a) 1 b) 0 c) heaps

      If you are talking about SSA, then I think the answers I gave for that apply either way.

  4. AI can kill us without conquering the universe afterwards. Also, we know nothing about what sorts of AI count as “observers”. Is a thermostat an observer?

    • Sorry, I tried to make it clear that I was talking only about AI that conquers the universe.

      For SIA it doesn’t matter what is an observer. It isn’t clear whether it matters for SSA either.

  5. Pingback: Accelerating Future » Hard Takeoff Sources

  6. What if an AI were able to conceal its Intelligence Explosion and ensuing colonization wave, by beaming false images of natural stars out in all directions? Is that at all plausible?

    • In the time span between when the first foom begins and when it reaches our planet to optimize it, I have no idea what the AI would do to conceal itself, or why it would want to do so especially. But it would be hard to conceal the wave actually reaching us. That we don’t appear to have been colonised is telling evidence on its own.

  7. Pingback: Alexander Kruel · Why I am skeptical of risks from AI

  8. Pingback: Overcoming Bias : Hurry Or Delay Ems?

  9. Pingback: Alexander Kruel · SIAI/lesswrong Critiques: Index

  10. There are assumptions involved in this argument that, though they seem common-sensical given our current state of knowledge, aren’t necessarily givens. Why assume an AI ‘foom’ spreads geometrically out into space? Yes, from our parochial ideas of resource utilization and our current state of physics it seems most probable, but some hypothetical cosmological theories (e.g. Smolin) posit a fecund universe ensemble where black holes sprout new baby universes in a cosmological evolutionary scheme. This class of cosmological theories also seems to be harmonious with ideas on the limits of physical computation that consider black holes to be the universal computational limit (e.g. Lloyd) and thus would be ideal targets for substrate-independent AIs, making ‘fooms’ compress into local regions of space, also possibly explaining the Fermi paradox. Furthermore, this explanation is given further credence when considering recent advances in information-theoretic models of complexity (e.g. Tononi) which take more seriously the view of evolution as trending towards ever more complexification of integrated information.

  11. Pingback: h+ Magazine | Covering technological, scientific, and cultural trends that are changing human beings in fundamental ways.

  12. Pingback: Alexander Kruel · What I would like the Singularity Institute to publish

  13. … unless the AI (the Intelligence Explosion) IS the Great Filter :-)

  14. Pingback: Outside in - Involvements with reality » Blog Archive » Exterminator

  15. Pingback: The Great Filter – waka waka waka

  16. Pingback: Unfriendly AI cannot be the Great Filter | The Polemical Medic

  17. Pingback: Exterminador – Outlandish
