Friday, July 31, 2009

The Singularity

Last Sunday the New York Times ran a front-page article, "Scientists Worry Machines May Outsmart Man," that read a bit like a bad science fiction story about the dangers of computers that get too smart. The article reported on the AAAI Presidential Panel on Long-Term AI Futures that took place at Asilomar in February with many reasonable AI folks, including a few I know well: EC regulars Michael Wellman and David Parkes, my old NEC boss David Waltz and TTI-Chicago chief David McAllester.
McAllester chatted with me about the upcoming "Singularity", the event where computers outthink humans. He wouldn't commit to a date for the Singularity but said it could happen in the next couple of decades and will definitely happen eventually. Here are some of McAllester's views on the Singularity.

There will be two milestones.
  1. Operational Sentience: We can easily converse with computers.
  2. The AI Chain Reaction: A computer that bootstraps itself to a better self. Repeat.
We'll notice the first milestone in automated help systems that will genuinely be helpful. Later on, computers will actually be fun to talk to. The point where computers can do anything humans can do will require the second milestone.

McAllester's arguments assume that humans are just fancy computers. Personally I believe we think in ways computers never can, in particular having a self-awareness and reasoning beyond the capability of machines, and that we'll never see a Singularity.

We are more than Turing machines.

25 comments:

  1. I recognize Andrew, Tom, Manuela and Sebastian. Nice to see some AI people featured on this blog!

    Lance, I think your views remind me of the 19th-century chemists who believed organic compounds could never be synthesized in a lab.

  2. I agree with you. But am I programmed to say that?

  3. @rjlipton I think that everything is a response to something, and in that sense I would say that we are "programmed".

  4. As a computer scientist I thought you'd know better... Well, good thing you're not a computational neuroscientist ;)

  5. If we're more than Turing Machines, then in what way are we more? Does this mean that you disbelieve the Church-Turing thesis?

  6. Like mikero I would like to hear you expound on your remarks. Do you believe that self-awareness and reasoning are incomputable?

  7. @Bill - although it may be a silly question, I think the first question should be: can we even give a formal definition of self-awareness and the kind of reasoning the brain performs? If we don't have such a formal definition, then I don't see how it's meaningful to ask about the computability (or not) of the above.

  8. For the sake of argument, let's say you truly are self-aware. How do you know for sure that other people are as well? The only evidence you have comes from empirical observations that can, in principle, be reproduced artificially.

    Perhaps you strongly believe other human beings are self-aware because they act (and look) like the only being you truly know to be self-aware -- yourself. I often make the same implicit assumptions about other human beings as well. This seems to suggest that we are influenced by a very strong prior.

  9. I attempted to use a diagonalization argument to disprove the basis of the singularity. The definition of the ultraintelligent machine given by I. J. Good is self-referencing, and therefore looked suitable for a diagonalization argument showing that an ultraintelligent machine, one more intelligent than humans, can't exist. Good defined an ultraintelligent machine as a machine capable of creating a machine more intelligent than itself. If that is possible then, by induction, you would have an ever-increasing sequence of intelligent machines (which could have a limit, though; as we know, an infinite increasing sequence of numbers can easily have a finite limit).

    So how far did I go? It turned out that you can prove the result both ways: an ultraintelligent machine is impossible, and you can also prove its existence (based on hypothetical scenarios of the kind used in physics).

    If we restrict ourselves to the realm of Turing machines, one can easily prove that ultraintelligent Turing machines do not exist. This is because Turing machines satisfy transitivity: if a machine A can create machine B, then whatever task machine B could do, machine A could in principle have done itself by simulating machine B. So machine B can't be more intelligent than machine A.

    So to make the situation interesting, one has to take this transitivity away. Indeed, humans can create machines which do things humans themselves can't, e.g., number crunching in "reasonable" time. This "reasonable" is a subjective attribute, so let us make it objective.

    Say humans can compute 1 arithmetic operation per second. Then there are tasks which humans can't do even in 100 years (about 3 x 10^9 seconds) but a computer can, e.g., counting from 1 to ten billion.

    Now computers are made from silicon, an element with atomic number 14. Let me write E[N] for an element with atomic number N. Mathematically it is consistent to assume, hypothetically, that one could create the element E[N] for every positive integer N. It is also consistent to assume that a computer made with element E[N+1] is a million times faster than a computer made with E[N]. Suppose creating the element E[N+2] requires a million times more arithmetic operations than creating E[N+1]. Then it is consistent to assume that a computer made with E[N+2] can be built in reasonable time by an existing computer made with E[N+1], but not by a computer made with E[N]. So now you have a sequence of computers with ever-increasing ability.
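
    To make the hypothetical chain concrete, here is a toy numerical sketch (my illustration, not the commenter's; the two growth factors come from the comment, the starting values are arbitrary):

    ```python
    # Toy model of the hypothetical E[N] chain above. All constants are the
    # comment's assumptions (plus arbitrary starting values), not physics.
    SPEEDUP = 1e6      # a computer made with E[N+1] is 1e6 times faster
    COST_GROWTH = 1e6  # creating E[N+2] costs 1e6 times more ops than E[N+1]

    speed = 1e9   # ops/sec of the current computer (arbitrary start)
    cost = 1e12   # ops needed to build the next machine (arbitrary start)

    for generation in range(5):
        build_next = cost / speed                  # time to build the successor
        skip_ahead = (cost * COST_GROWTH) / speed  # time to build two generations ahead
        print(f"gen {generation}: successor in {build_next:.0e} s, "
              f"generation after that in {skip_ahead:.0e} s")
        speed *= SPEEDUP
        cost *= COST_GROWTH

    # cost/speed stays constant (1e3 s here), so every machine can build its
    # successor in "reasonable" time, while skipping a generation takes 1e6
    # times longer -- the non-transitive ladder the comment describes.
    ```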

  10. "Operational Sentience: We can easily converse with computers"

    I know humans who can't pass this metric in e-mail.

  11. I'd love to hear why you believe we are more than Turing machines.

  12. "Personally I believe we think in ways computers never can, in particular having a self-awareness and reasoning beyond the capability of machines and we'll never see a Singularity."

    100 years from now this discriminatory attitude will be shared only by rednecks.

  13. Artificial human-like behavior? Definitely coming (IMHO).

    Artificial intelligence? Hmmmm ... perhaps doubtful of achievement ... on the grounds that the core behaviors that make us human require only limited rationality.

    Bootstrapping? Since humans aren't particularly good at bootstrapping human behavior, what reason do we have to expect that machines will do better at bootstrapping machine behavior?

    It is interesting that mathematicians---who don't professionally study human behavior---nonetheless tend to assume that human behavior is rational.

    Primatologists---who do professionally study human behavior---are not so sure ... writings like Frans de Waal's Chimpanzee Politics are highly recommended in this regard.

  14. To make the above point another way, we can imagine a world in which primate mathematicians created proofs by non-rational mechanisms (e.g., via hot, wet, noisy, neural-net heuristics).

    Of course, they would then check these proofs by logical reasoning ... but this checking process is more-or-less mechanical ... not involving reasoning processes of any very sophisticated level.

    (And isn't this the way that mathematics is done, in our world?)

    These primates might (mistakenly!) regard themselves as rational, even though their core cognitive processes were not rational in any significant sense of the word ...

    ... and this would explain why attempts by these primates to build artificial intelligences on foundations of logical reasoning would fail repeatedly.

  15. And to complete a triptych ... we are led by modern primatological research to envision two future worlds of AI ...

    One world in which the architecture of artificial intelligences is fundamentally rational ... a world in which intelligences (human or artificial) steadily evolve to become more-and-more alike.

    It goes without saying that in this world, all mathematical proofs are consistent.

    On the other hand, we can envision a "primatological" world in which all intelligences (human and artificial) are predominantly non-logical ... a world in which individual styles of intelligence steadily increase in diversity ... a planet of ten billion "hot, wet, noisy, heuristic" intelligences, no two of whom are alike.

    And needless to say, in this world too, all mathematical proofs are checked for consistency ... so that the integrity of the mathematical literature is maintained, even as individual styles of cognition become steadily more diverse.

    To me, the latter world seems much more fun ... much more diverse ... much more creative ... which is good, because (IMHO) it is the world that we already live in!

    The Singularity? It's here already! :)

  16. "Personally I believe we think in ways computers never can, in particular having a self-awareness and reasoning beyond the capability of machines"

    This is a familiar sentiment; Do you see any way to state it a little more formally?

  17. You say this is needed:
    Operational Sentience: We can easily converse with computers
    and at the end you say that we are more than Turing machines. Well, the milestone above is just the Turing test nonetheless. And we have nothing to do with computers, because they are never going to be sentient beings, so to me all this is nonsense.

    To clarify, here are some important differences between brains and computers:

    -Brains are analog and computers are digital.
    -Processing speed is not fixed in the brain, and the brain lacks a system clock.
    -No hardware/software distinction can be made with respect to the brain or mind.
    -Processing and memory are performed by the same components in the brain. In the computer they are not.
    -Brains have bodies.
    -Brains cannot run Windows.
    -Computers can be turned off.

  18. Mariana: emotions are the lighthouse that lets us reach port, not the fog that makes us run aground.

    We won't need more than Turing machines to simulate general-purpose human brains (at the neuromental level), but we will get, at best, (all) human neuromental capabilities.

    I would develop this thesis if only this weren't too informative.

  19. Gil Kalai asks: Do you see any way to state [a non-Turing hypothesis] a little more formally?

    In the words of Python's Sir Robin, "That's EASY!" :)

    Example of such a hypothesis: "The architecture of mammalian brains is such that the Turing-tape simulation of n neurons having physiological correlation time τ requires O(n^4 T/τ) Turing operations to simulate a duration T."

    The human brain has about 10^11 neurons, with a correlation time of about 10 msec---so if the above scaling could be proved, then Lance's hypothesis "We are more than Turing machines" would be definitively established.

    To frame this hypothesis more broadly, is it not conceivable that mammalian brain architectures that are not rule-based might have a moderate-exponent polynomial scaling with respect to Turing (or von Neumann) computational architectures?

    It is interesting to reflect that our brains have been under strong Darwinian selection pressure, for about 500 million years, to respond rapidly (not logically) to external challenges.

    In other words, perhaps biological Turing machines did evolve ... but these organisms became extinct long ago ... for reasons of computational inefficiency! :)

    There might not be a scholarly article in this ... but perhaps there might be a science fiction story.
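
    As a quick back-of-the-envelope check of the numbers in the hypothesis above (my arithmetic; n and τ come from the comment, and the n^4 exponent is the comment's hypothesis, not established fact):

    ```python
    # Back-of-the-envelope check of the hypothesized O(n^4 * T/tau) scaling.
    n = 1e11     # neurons in a human brain (rough figure from the comment)
    tau = 1e-2   # physiological correlation time in seconds (10 msec)

    ops_per_step = n ** 4                          # Turing ops per correlation time
    ops_per_simulated_second = ops_per_step / tau  # rate for real-time simulation

    print(f"ops per correlation step: {ops_per_step:.0e}")              # ~1e+44
    print(f"ops per simulated second: {ops_per_simulated_second:.0e}")  # ~1e+46
    ```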

  20. come on, John ... you can't be serious ...

  21. My remarks weren't entirely serious ... but they weren't entirely joking either.

    It seems to me that nowadays, in many branches of math, science, and engineering, there is considerable "turbulent mixing" at the boundary between problems that are (provably) hard as contrasted with problems that are (empirically and/or generically) easy.

    Dick Lipton has been posting on whether SAT is hard or easy ... in-between posts I've been working on whether simulating noisy and/or observed quantum systems is hard or easy (it's easy!) ... and now Lance is asking us whether human-type AI is so easy as to be coming soon, versus so hard as to be beyond the reach of Turing/von Neumann machine architectures.

    It's clear that there are plenty of good mathematical and scientific arguments on both sides of all of these hard-versus-easy questions.

    To the extent that there is any emerging consensus at all, IMHO it is that the boundary between "hard" and "easy" is so richly structured, that we are all very lucky to be alive (to paraphrase Feynman) in the generation that is first coming to understand these issues.

  22. I would be more comfortable with the discussion if there were any kind of consensus as to what intelligence is, and, more precisely, what it is that machines will be able to do, presumably better than humans.

    After all, people thought that playing checkers or doing arithmetic were examples of intelligent behavior, and clearly computers are already better than humans at these.

    Is "intelligence" a single trait? (If, a I believe, it is not, the question becomes "questions").

    Also, the arguments about biological vs digital computers are irrelevant. Most AI folks would be happy with AI based on a "wet" computer.

    The "vital spirit" theory of the 19th century was easily disproved by experiments (synthesis of urea, and Pasteur's experiment showing the existence of microorganisms).
    What are the experimental features of "successful AI"?

  23. Jan Van den Bussche (11:55 AM, August 03, 2009):

    Clearly, according to our best current scientific understanding, we are indeed machines. But I think that is beside the point. The real question is whether we will ever be able to understand ourselves well enough to program a computer to think like us. So I think it is more a matter of the limitations of humans than of machines. In my mind (!) some kind of Gödel incompleteness phenomenon might well hold for humans.

  24. Here is a specific (and not too unrealistic) description of a world in which cognition is a physical process that is not simulatable by Turing/von Neumann architectures.

    We assume that cognition takes place on spin-glasses of n dipole-coupled spins. We further assume that the spins are immersed in a thermal bath that concentrates quantum trajectories onto a tensor network manifold having dimension O(n^2). To simulate cognition by integrating (noisy) dynamical trajectories on this manifold, we have to raise symplectic one-form indices ... this is (naively) an O((n^2)^3) = O(n^6) process ... and perhaps O(n^4) if we use some tensor network tricks.

    Now we take n~10^11 ... and we're busted! To compute even a single time-step requires 10^44 to 10^66 operations.

    So in this imagined world of hot, wet, noisy spin-based cognition ... Turing/von Neumann architectures are of little use in simulating cognition.

    Just to be clear, I do not regard the above model as physiologically realistic in any sense whatsoever ... I mention it solely as a reminder that there may perhaps be "hot, wet, noisy" brain architectures that are infeasible to simulate by classical computation algorithms.
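
    For what it's worth, the arithmetic behind "busted" checks out under the comment's stated costs (my check; the O(n^6) and O(n^4) exponents belong to the comment's hypothetical spin-glass model, not to any claim about real physiology):

    ```python
    # Checking the claimed operation counts for one simulated time-step.
    n = 1e11  # spins, per the comment

    naive_ops = n ** 6    # raising one-form indices on an O(n^2)-dim manifold: O((n^2)^3)
    tricked_ops = n ** 4  # with the hypothesized tensor-network tricks

    print(f"naive cost per time-step:          {naive_ops:.0e} ops")    # ~1e+66
    print(f"tensor-network cost per time-step: {tricked_ops:.0e} ops")  # ~1e+44
    ```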

  25. I think that Jan Van den Bussche said something extremely interesting and pretty accurate.
