Tuesday, June 14, 2005

Understanding "Understanding"

Yesterday Manuel Blum gave the invited talk on Understanding "Understanding": Steps towards a Mathematical Scientific Theory of Consciousness. He started with a history of how trying to understand the mind shaped his academic career. His father told him understanding how the mind works would help him academically. So when he went to college he got interested in the work of McCulloch and Pitts that formulated neurons as automata. This led Blum to study recursion theory with Hartley Rogers and then work with his eventual thesis advisor Marvin Minsky studying the new area of artificial intelligence. In the end Blum wrote one of the first theses in computational complexity under Minsky, not to mention doing groundbreaking work in many areas, winning the Turing Award and being the advisor to my advisor (Michael Sipser).

Blum made a strong point that his theory of consciousness is just being developed and emphasized the word "towards" in the title. Roughly, his theory has an environment (representing the world at a certain time) modeled as a universal Turing machine that interacts with several entities (representing organisms or organizations), each modeled as a (possibly weak) computational device. An entity has CONSCSness (CONceptualizing Strategizing Control System) if it fulfills certain axioms.

  • The entity has a model of its environment and a model of itself.
  • The entity is motivated towards a goal. Blum modeled the goal as a difference between a pleasure and a pain function which looked to me like utility functions used by economists.
  • The entity provides a strategy to head towards the goal.
  • The entity has a simple serial interface with the environment.
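To make the axioms concrete, here is a toy sketch of an entity of this kind. This is my own illustration, not Blum's formalism: the class, the `forage`/`hide` world, and the particular pleasure and pain functions are all hypothetical, chosen only to show how a pleasure-minus-pain goal (the economist's utility) can drive a strategy through a serial interface.

```python
# Toy sketch of an entity satisfying (roughly) the four axioms above.
# All names and numbers are hypothetical illustrations, not Blum's formalism.

class Entity:
    def __init__(self, world_model):
        self.world_model = world_model  # axiom 1: a model of the environment
        self.self_model = {}            # axiom 1: a (here trivial) model of itself

    def pleasure(self, state):
        return state.get("food", 0)

    def pain(self, state):
        return state.get("danger", 0)

    def utility(self, state):
        # axiom 2: the goal is the difference of a pleasure and a pain function
        return self.pleasure(state) - self.pain(state)

    def strategy(self, state, actions):
        # axiom 3: pick the action whose predicted outcome maximizes utility
        return max(actions, key=lambda a: self.utility(self.world_model(state, a)))

    def step(self, observation, actions):
        # axiom 4: a simple serial interface -- one observation in, one action out
        return self.strategy(observation, actions)


# A trivial predictive world model: foraging finds food but risks some danger.
def world_model(state, action):
    if action == "forage":
        return {"food": 2, "danger": 1}
    return {"food": 0, "danger": 0}

squirrel = Entity(world_model)
print(squirrel.step({"food": 0, "danger": 0}, ["forage", "hide"]))  # forage
```

The point of the sketch is only that nothing in the axioms requires human-level machinery; a very weak computational device can satisfy all four.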
Blum briefly defined notions of self-awareness (being able to reason about oneself) and free will. For free will Blum used an example of playing chess: we have free will because we don't know what move we will make until we have had time to think about it, very similar to (though I believe independent of) McAllester's view.

Blum called on complexity theorists to take on the cause of consciousness. He pointed to an extensive bibliography on the general topic maintained by David Chalmers.

My take on the talk: Much of theoretical computer science did get its start from thinking about how the brain works, but as computers evolved so did our field, and since the '70s theory has focused on understanding efficient computation in its many forms. It's perfectly fine to model humans as efficient computers to understand their interactions in areas like cryptography and economics. But we should leave issues like consciousness, self-awareness and free will to the philosophers, since any "theorems" we may prove will have to depend on some highly controversial assumptions.


  1. Strategy toward goals sounds like it might be too restrictive. I think we can all agree (except those of certain religions) that a squirrel is a conscious being, endowed with sentience (which just means "to feel"; many confuse the word with intelligence, but that's sapience).

    I think the Cartesian view that humans are somehow unique in having consciousness is a dead end. I suppose Dennett thinks language plays a role in consciousness (does that imply squirrels aren't conscious?), which would be a non-religious way of reaching a similar conclusion.

  2. My first introduction to theory was via the Turing Test and Alan Turing's work. In fact I was originally convinced that learning was the key to intelligence and wanted to study that. Somehow I got distracted :)

    But it is true that the 'mystery' of computation can be communicated via theories of the mind.

  3. Of Manuel's various accomplishments, what impresses me most is how successful a list of advisees he's had.

  4. What do you think of John Conway's theorem about free will? His assumptions seem no more controversial than those used in quantum computing. He claims:

    "If there exist experimenters with (some) free will, then elementary particles also have (some) free will."

    ... although his definition of free will is rather suspicious. He ultimately treats free will as a property of the actions of a particle or an experimenter and it could be interpreted as randomness.

    There is a nice description of a talk he gave here:

  5. "... although his definition of free will is rather suspicious."

    What is even more suspicious is that he didn't even try to define free will for his "formal" proof. (If we believe the transcript.)

    "For free will Blum used an example of playing chess where we have free will because we don't know what move we will make until we have time to think about it, very similar (though I believe independent) of McAllester's view."

    Independent or not, I think this view is common wisdom in modern cognitive science. I think the problem of free will can be considered solved. (Regardless of whether we live in a completely deterministic universe or not.) On the other hand, the problem of consciousness is very much unsolved.

    "Blum called on complexity theorists to take on the cause of consciousness."

    I also think that the complexity theory point of view can give new insights about consciousness. Blum's advisor Minsky has an insightful book on consciousness (The Society of Mind). I believe Minsky's approach is a good starting point for a materialist treatment of the problem of consciousness, but many more new ideas will be needed.

    Daniel Varga

  6. I always thought Searle's argument was rather weak. If such a person-books-room object could handle Chinese like that, then I would say that such a person-books-room object *could* speak Chinese. I know of no other standard. To put it only on the books (i.e. the instructions) or only on the person (i.e. the hardware) is just equivocation.

  7. I've always found the anti-strong AI arguments rather weak. These weak arguments range from the basic "but I can unplug the computer" (what does that have to do with cognition and understanding?) to more elaborate but equally misguided "the room understands Chinese and that makes no sense".

    For one, electron tunneling makes no sense and yet is real. So the mere fact that something leads us to a jarring conclusion should have no impact on the validity of the conclusion.

    For another, I think philosophy brings little to bear on the subject. Imagine a philosopher in the pre-Industrial Revolution era trying to philosophize about whether mechanical machines would ever outperform human workers: "...and hence the machine, which is a development of man, cannot be more perfect than its creator, hence no machine will ever replace a worker."

    The ability of machines to outperform workers was a problem of physics and engineering, not philosophy. Ditto for strong AI. This is a question for neurologists, who must describe what a mind is, working together with computer scientists, who must create the programs that simulate the mind or prove that this is not possible (looks like a job for AI'ers and complexity types), and with the aid of computer engineers, who must put together the hardware that will allow this simulation to take place, if it is possible at all.

  8. If you are interested in my article mentioned by Daniel Varga, the full reference is

    I. Parberry, "Knowledge, Understanding, and Computational Complexity", in Optimality in Biological and Artificial Networks?, Chapter 8, pp. 125-144, (D.S. Levine, W.R. Elsberry, Eds.), Lawrence Erlbaum Associates, 1997.

    There's a tech report version of this paper posted at http://www.eng.unt.edu/~ian/pubs/knowledge.pdf

    I've always been a little disappointed at the reception that this paper received. The journals that Searle published in rejected it soundly and roundly. Perhaps time will tell...

    Ian Parberry