Last Sunday the New York Times ran a front-page article, "Scientists Worry Machines May Outsmart Man," that read a bit like a bad science fiction story about the dangers of computers that get too smart. The article reported on the AAAI Presidential Panel on Long-Term AI Futures that took place at Asilomar in February with many reasonable AI folks, including a few I know well: EC regulars Michael Wellman and David Parkes, my old NEC boss David Waltz and TTI-Chicago chief David McAllester.
McAllester chatted with me about the upcoming "Singularity", the event where computers outthink humans. He wouldn't commit to a date for the singularity but said it could happen in the next couple of decades and will definitely happen eventually. Here are some of McAllester's views on the Singularity.
There will be two milestones.
- Operational Sentience: We can easily converse with computers.
- The AI Chain Reaction: A computer that bootstraps itself to a better self. Repeat.
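The chain-reaction idea can be caricatured in a few lines of code. This is my own toy sketch, not anything from the panel: "capability" is just a number and the improvement rule is made up. The point it illustrates is only the loop structure, where each generation builds a slightly better successor, so the gains compound.

```python
# Toy sketch (not from the post) of the "AI Chain Reaction":
# a system repeatedly applies its own improvement step to itself.

def improve(capability):
    # Hypothetical rule: each generation is 50% more capable than
    # the last, so improvements compound like a chain reaction.
    return capability * 1.5

def chain_reaction(capability, steps):
    history = [capability]
    for _ in range(steps):
        capability = improve(capability)
        history.append(capability)
    return history

print(chain_reaction(1.0, steps=5))
```

Of course, the whole open question is whether anything like `improve` can actually be built; the toy just shows why, if it could, the process would run away rather than plateau.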
We'll notice the first milestone in automated help systems that will genuinely be helpful. Later on computers will actually be fun to talk to. The point where computers can do anything humans can do will require the second milestone.
McAllester's arguments assume that humans are just fancy computers. Personally I believe we think in ways computers never can, in particular having a self-awareness and reasoning beyond the capability of machines, and that we'll never see a Singularity.
We are more than Turing machines.