‘The computer scientist Donald Knuth was struck that “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking’ – that, somehow, is so much harder!”’
– Nick Bostrom, Superintelligence, p. 14
There are some activities we think of as involving substantial thinking that we haven’t tried to automate much, presumably because they require some of the ‘not thinking’ skills as precursors: for instance, theorizing about the world, making up grand schemes, winning political struggles, and starting successful companies. If we had successfully automated the ‘without thinking’ tasks like vision and common sense, do you think these remaining kinds of thinking tasks would come easily to AI – like chess in a new domain – or be hard like the ‘without thinking’ tasks?
Sebastian Hagen points out that we haven’t automated math, programming, or debugging; these seem much like research, and at least they don’t require complicated interfacing with the world.
Crossposted from Superintelligence Reading Group.