Thursday, January 05, 2012

Starting the Year with Turing

This week I was in Boston for the Joint Math Meeting, a combined meeting of the AMS, MAA and a couple of other three-letter math societies, with 7000 of my closest math buddies. This is the main American meeting of mathematicians; one part of it is interviews for math jobs, which seem few and far between.

The conference didn't seem large to me because I spent most of the meeting at the AMS-ASL Special Session on the Life and Legacy of Alan Turing. I got to see some exciting speakers I hadn't seen before, including Martin Davis, Andrew Hodges (author of the famous Turing biography, soon to be re-issued) and my great-grand advisor Marvin Minsky. Minsky talked mostly about the sorry state of AI over the past few decades, including how the Watson people were working on the wrong problem. I found myself in the strange position of defending AI before my talk the next day.

Craig Bauer, a math historian, talked about the early days of voice encryption during World War II: basically digitizing the audio and then applying a one-time pad. Turing developed a mechanism that used a hardware PRG, improving the quality of the audio and reducing the space needed from a large room to a small box, though it was never deployed in the field.
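The pad step itself is simple to sketch. Here is a minimal Python illustration of the XOR one-time pad on digitized samples; the actual wartime systems differed in the details (e.g. they combined quantized vocoder values rather than raw bytes), so this only illustrates the principle:

```python
import secrets

def one_time_pad(samples: bytes, pad: bytes) -> bytes:
    """XOR each digitized sample with a pad byte; the pad must never be reused."""
    assert len(pad) >= len(samples), "pad must be at least as long as the message"
    return bytes(s ^ k for s, k in zip(samples, pad))

voice = bytes([12, 200, 17, 99])       # stand-in for digitized audio
pad = secrets.token_bytes(len(voice))  # a truly random pad
cipher = one_time_pad(voice, pad)
# XOR is its own inverse, so decryption is the same operation:
assert one_time_pad(cipher, pad) == voice
```

Replacing the truly random pad with the output of a pseudorandom generator, as in Turing's device, trades perfect secrecy for a key small enough to fit in a box.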

Ted Slaman sent me this link, with Turing suggesting that PRGs can help in searching. I guess Turing did care about running time after all, but we still haven't found his lost letter on P v NP.

Interesting fact: Gödel and Turing both admired each other's work but there is no evidence that they ever met or had any direct communication of any kind.

A fun workshop, but I'm all Turing'd out and it is only the first week of the Alan Turing Year. Still, I'm looking forward in June to attending the ACM Turing Celebration in San Francisco and CiE in Cambridge.

Next week I'm off to Dagstuhl for Computability, Complexity and Randomness. The fun also continues in the Boston area with ITCS.


  1. What did Minsky think the *right* problem was?

  2. He wants computers to understand concepts not just provide "probabilities". One example is why we can pull an object with a string but we can't push an object with a string.

  3. And here I thought the right question was "which concept classes possess low-complexity polynomial threshold functions." I'm not sure what it means for a computer to understand a concept, but my best definition would involve probabilities.

  4. dip the string in water, freeze it, then push

    let's get humans all good at concepts first :)

  5. The problem of integrating concept-driven, rational faculties with data-driven, empirical faculties is very old and very deep. I have to agree that I haven't seen much progress on that score in mainstream AI over the last 30 years. Most research traditions tend to drift to one bank of the stream and ignore the other from that point on.

  6. I wonder how many computer scientists think of Watson as interesting AI research.

  7. We can imagine a Watson-style AI that is expert in playing chess: composed of a huge database of openings, a vast set of heuristic evaluation rules, a fast alpha-beta search engine, and an enormous lookup table of perfect end-game strategies. Of course there is no need to imagine such engines; they exist concretely and their code is in the public domain (Crafty, Fruit, Stockfish).

    Then we can imagine a Minsky-style AI that can play chess almost as well as the Watson-style AIs, and moreover can do what the Watson-style AIs cannot do: construct explanatory narratives about chess games. For these AIs, the narratives are the code. Of course there is no need to imagine such AIs; they exist concretely as chess grandmasters, and their narratives are largely in the public domain (Kasparov's My Great Predecessors).

    As is well known, machines aren't (yet) much good at constructing narratives-as-compact-codes (chess or otherwise), and perhaps this is because we humans haven't (yet) constructed good narratives about how we construct narratives (chess or otherwise).

    In this latter regard — the process of constructing compact narratives about the process of compact narrative construction — I commend to readers of the Fortnow/GASARCH weblog Amelie Rorty's celebrated (in some circles) and much-reprinted article Spinoza on the Pathos of Idolatrous Love and the Hilarity of True Love.

    To the extent that Rorty's narrative conceptions could be formalized and instantiated in machine code, considerable progress might be made toward post-Watson modes of self-programming AIs.

    In any event, the capability to read-and-understand essays like Rorty's amounts to a Turing test that no present-day machine (and perhaps not too many present-day humans) can pass with facility.

    It is natural to ponder this question: Had Turing himself been better able to pass the Turing Test of Rorty's essay, might this have helped Turing to find in daily life sufficient hilaritas as to be still with us?
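    The alpha-beta search engine mentioned above is easy to sketch. Here is a toy Python version over an abstract game tree; the `children`/`value` callbacks and the nested-list tree are illustrative stand-ins, not any real engine's interface:

    ```python
    def alphabeta(node, depth, alpha, beta, maximizing, children, value):
        """Minimax with alpha-beta pruning: skip branches no rational player reaches."""
        kids = children(node)
        if depth == 0 or not kids:
            return value(node)
        if maximizing:
            best = float("-inf")
            for child in kids:
                best = max(best, alphabeta(child, depth - 1, alpha, beta, False, children, value))
                alpha = max(alpha, best)
                if alpha >= beta:
                    break  # beta cutoff: the minimizer will avoid this line
            return best
        best = float("inf")
        for child in kids:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True, children, value))
            beta = min(beta, best)
            if alpha >= beta:
                break  # alpha cutoff: the maximizer will avoid this line
        return best

    # A two-ply toy tree: inner lists are positions, integers are leaf evaluations.
    tree = [[3, 5], [2, 9]]
    children = lambda n: n if isinstance(n, list) else []
    value = lambda n: n
    assert alphabeta(tree, 2, float("-inf"), float("inf"), True, children, value) == 3
    ```

    A real engine wraps this core in move ordering, transposition tables, and the opening and endgame databases described above; the pruning logic itself stays this small.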

  8. Minsky talked mostly about the sorry state of AI over the past few decades including how the Watson people were working on the wrong problem.

    I'd say that doing the opposite of what Minsky says is a sign that AI is on the right track.

  9. The great grandparents of AI — Turing, McCulloch, Pitts, Minsky, Papert, Ashby, MacKay, Wiener, Chomsky, Schützenberger, Arbib — were at least literate about the classical dimensions of the enterprise. I can imagine how bemusing it would be for them to observe the same old hammers being used on the same old screws.

  10. Bemusement? Some of those grandparents should feel embarrassment for setting AI out on a wild-goose chase of overpromising and underperforming research lines.

    Minsky's "advice" about how and how not to do AI research---which are unchanged since his early years---should have about the same weight as the statements of a XV century Chinese rocketeer lamenting the sorry state of today moon rockets because they lack the sparkles his come to expect from fireworks.

  11. and perhaps this is because we humans haven't (yet) constructed good narratives about how we construct narratives (chess or otherwise).

    What makes you believe that they exist? Fifty years ago this would have been a reasonable working hypothesis, but the lack of a single one in the ensuing half-century of AI research strongly suggests that there is none. If there were one, humans would learn chess by memorizing said narrative; instead we do massive data collection and (subconscious) classification.

    We seem to be a collection of simple, one-off heuristics that the brain can deploy massively in parallel, including semi-exhaustive searches of choice branches of the solution space, most of which happens subconsciously.

  12. Anonymous said... "If there was [a narrative foundation for human chess mastery], humans would learn chess by memorizing said narrative, instead we do massive data collection and (subconscious) classification."

    Granting that massive data collection is necessary to chess mastery, it is striking that chess grandmasters also are notably skilled at annotating games, and conceiving stories about the ebb and flow of tension in the game. Bill Thurston's celebrated article On proof and progress in mathematics surveys similar themes.

    Perhaps what distinguishes (today's) computer programs like IBM's Watson, or even the computational capabilities of human savants, from the capabilities of masters like Kasparov and Thurston, is that the latter have a facility for collecting data and refining narratives simultaneously.

  13. I was also at Minsky's talk. I thought it was embarrassing, mainly because it was billed as a talk on how Turing had influenced everything Minsky and his students had done, and it seemed to me that it didn't really touch on that (as well as being rather rambling and unfocused). I did subsequently speak to someone who thought that Watson was significant because of how it could process natural language -- which is another question entirely, of course, but interesting and significant in itself.