Light cone eating AI explosions are not filters

Some existential risks can’t account for any of the Great Filter. Here are two categories of existential risks that are not filters:

Too big: any disaster that would destroy everyone in the observable universe at once, or destroy space itself, is ruled out. If others had been filtered by such a disaster in the past, we wouldn’t be here either. This excludes events such as simulation shutdown and the decay of a metastable vacuum state, if we are in one.

Not the end: humans could be destroyed without the causal path to space colonization being destroyed. Also, much of human value could be destroyed without humans being destroyed. For example, a superintelligent AI would presumably be better at colonizing the stars than humans are; the same goes for transcending uploads. Repressive totalitarian states and long-term erosion of value could destroy a lot of human value and still lead to interstellar colonization.

Since these risks are not filters, neither the knowledge that there is a large minimum total filter nor the use of SIA increases their likelihood. SSA still increases their likelihood, for the usual Doomsday Argument reasons. I think the rest of the risks listed in Nick Bostrom’s paper can be filters. According to SIA, averting these filter existential risks should be prioritized more highly relative to averting non-filter existential risks such as those in this post. So, for instance, AI is less of a concern relative to other existential risks than otherwise estimated. SSA’s implications are less clear: the destruction of everything in the future is a pretty favorable inclusion in a hypothesis under SSA with a broad reference class, but as always, everything depends on the reference class.
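The asymmetry here can be illustrated with a toy SIA update. All the numbers below are made up purely to show the mechanism, not drawn from any real estimate:

```python
# Toy illustration of why SIA boosts filter risks but not non-filter risks.
# Suppose the total Great Filter must cut candidate sites down by some large
# factor, and the only question is whether it falls before or after our stage.
priors = {"early filter": 0.5, "late filter": 0.5}

# Relative number of observers at our stage under each hypothesis (arbitrary
# units): an early filter means few civilizations ever reach our stage; a
# late filter means many reach it and then die.
observers = {"early filter": 1.0, "late filter": 1e9}

# SIA: weight each hypothesis by the number of observers like us it contains.
weights = {h: priors[h] * observers[h] for h in priors}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}

# A non-filter risk (e.g. a light cone eating AI that still colonizes) leaves
# the count of observers at our stage unchanged, so it gets no such boost.
for h, p in posterior.items():
    print(h, p)
```

With these illustrative numbers, SIA pushes nearly all the probability onto the late (future) filter; a risk that doesn’t change the number of observers at our stage receives no analogous update.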

10 responses to “Light cone eating AI explosions are not filters”

  1. That’s right if you assume away other applications of the SIA, but things change when you let those applications back in.

    For instance, SIA gives an immense update in favor of our being simulations that will soon be shut down so that similar simulations (of pre-colonization civilizations some billions of years into their universe) can be run again, since that would let populations of beings with our experiences be so tremendously more numerous (with logical uncertainties about the technological feasibility of, or motivations for, huge amounts of simulation getting resolved in favor of sims by SIA).

    But since a tendency for events that prevent colonization in the “basement” would reduce the quantity of simulations, SIA actually makes the probability of conventional Earth-civilization-extinction risks in the basement lower, while increasing the risk of simulation shutdown to very high levels.

    • Carl, does this imply that if the two of us are in a simulation, chances are that people very similar to us are being simulated a huge number of times because we are connected to some kind of special historical event, and that most of the people we might naively think exist are not even a real part of the simulation, but exist only as false memories or false historical records?

  2. “No, get out. Next!”

    I don’t place too much credence in SIA. Most of the people I know who have considered SIA in its full detail (the Presumptuous Philosopher with respect to logical uncertainty, etc.) seem to reject it. Most philosophers buy the thirder position on the narrow Sleeping Beauty case, for a variety of different reasons, but I know of few who accept the generalizations needed for SIA and the SIA-Doomsday and SIA-Simulation arguments.

    A larger and more representative survey of philosophers and cosmologists exposed to the key considerations could sway me a lot in the direction of its results, however.

  3. Pingback: Overcoming Bias : Beware Future Filters

  4. I agree with your post. I read your thesis and enjoyed it. I actually wrote a counter-argument on Accelerating Future, but I thought it was very good.

  5. An inchoate thought: I wonder if SIA, the reference class, and related problems can be shown to be ultimately undecidable via some Gödel-like proof. Maybe this has already been done?

  6. Maybe we should add a third dimension: our uncertainty over different versions of the DA. For example, I do not know which answer to Sleeping Beauty is true, so I should take the middle, i.e. (1/2 + 1/3)/2 = 5/12. This in fact makes me believe in the DA, because if it is only about two times weaker, that is within the error of estimation.
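The mixing arithmetic in this comment can be checked directly. A minimal sketch, assuming (as the comment does) equal weight on the halfer and thirder answers:

```python
from fractions import Fraction

# Equal credence on the halfer (1/2) and thirder (1/3) answers
# to Sleeping Beauty, mixed as the comment proposes.
halfer = Fraction(1, 2)
thirder = Fraction(1, 3)
mixed = (halfer + thirder) / 2

print(mixed)  # 5/12
```

Exact rational arithmetic confirms the 5/12 figure, which is indeed within roughly a factor of two of either pure answer.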

  7. Pingback: Overcoming Bias : Adam Ford & I on Great Filter

  8. Pingback: The Great Filter – Interview with Robin Hanson | Science, Technology & the Future

  9. Pingback: The Great Filter, a possible explanation for the Fermi Paradox – interview with Robin Hanson | Science, Technology & the Future

