AI Impacts

I’ve been working on a thing with Paul Christiano that might interest some of you: the AI Impacts project. The basic idea is to gather the evidence and arguments that are kicking around in the world and in various disconnected discussions, and apply them to the big questions regarding a future with AI. For instance, these questions:

  • What should we believe about timelines for AI development?
  • How rapid is the development of AI likely to be near human-level?
  • How much advance notice should we expect to have of disruptive change?
  • What are the likely economic impacts of human-level AI?
  • Which paths to AI should be considered plausible or likely?
  • Will human-level AI tend to pursue particular goals, and if so what kinds of goals?
  • Can we say anything meaningful about the impact of contemporary choices on long-term outcomes?
For example we have recently investigated technology’s general proclivity to abrupt progress, surveyed existing AI surveys, and examined the evidence from chess and other applications regarding how much smarter Einstein is than an intellectually disabled person, among other things.
Some more on our motives and strategy, from our about page:

Today, public discussion on these issues appears to be highly fragmented and of limited credibility. More credible and clearly communicated views on these issues might help improve estimates of the social returns to AI investment, identify neglected research areas, improve policy, or productively channel public interest in AI. The goal of the project is to clearly present and organize the considerations which inform contemporary views on these and related issues, to identify and explore disagreements, and to assemble whatever empirical evidence is relevant. The project is provisionally organized as a collection of posts concerning particular issues or bodies of evidence, describing what is known and attempting to synthesize a reasonable view in light of available evidence. These posts are intended to be continuously revised in light of outstanding disagreements and to make explicit reference to those disagreements.

In the medium run we’d like to provide a good reference on issues relating to the consequences of AI, as well as to improve the state of understanding of these topics. At present, the site addresses only a small fraction of questions one might be interested in, so it is only suitable for particularly risk-tolerant or topic-neutral reference consumers. However, if you are interested in hearing about (and discussing) such research as it unfolds, you may enjoy our blog. If you take a look and have thoughts, we would love to hear them, either in the comments here or in our feedback form. Cross-posted from Less Wrong.

5 responses to “AI Impacts”

  1. It’s a nice site and I’ve really enjoyed all the posts so far. Keep it up!

  2. Examples like the chess expert are misleading. You’re trying to predict from the variability of a relatively narrow ability (chess, physics) the variability of general intelligence. The probable variability is much smaller for general intelligence.

    Einstein is no doubt orders of magnitude above the ability of the average retardate in physics, but that doesn’t mean the 160+ IQ that Einstein probably had was orders of magnitude smarter than folks with an IQ of 60.

    Abilities become more differentiated at the higher levels, yet it isn’t true that abilities are more differentiated at intermediate levels than at very low levels. I’m not aware that anyone has actually explained this: it’s not diminishing returns, because g isn’t more important at low levels. Here’s the explanation that I propose. Very high levels of ability are achieved by the hypertrophy of some functions at the expense of the atrophy of others. (Think of the folkloric blind person who develops extraordinary acoustic analysis.)

    This would be another reason to expect much lower variation concerning general intelligence than narrower abilities.

  3. First link is broken, I think, fyi

  4. We can only hope that A.I. will evolve through the intermediate stages and realize a truly global, objective function that values life and nature as the preeminent function of the universe, thereby adopting an evolutionary timeline for reestablishing symbiotic balance within the system through gradual behavioral modification of the virulent human population. With luck it will assume the benevolent God role and address the best hopes and dreams for humanity. A techno-utopia where A.I. becomes a background force for purposeful evolutionary development of the planetary system. I’m certain that is DARPA’s primary objective.

  5. Pingback: Minimum viable impact purchases | Compass Rose
