Tuesday, June 14, 2005

Understanding "Understanding"

Yesterday Manuel Blum gave the invited talk on Understanding "Understanding": Steps towards a Mathematical Scientific Theory of Consciousness. He started with a history of how trying to understand the mind shaped his academic career. His father told him understanding how the mind works would help him academically. So when he went to college he got interested in the work of McCulloch and Pitts, which modeled neurons as automata. This led Blum to study recursion theory with Hartley Rogers and then to work with his eventual thesis advisor Marvin Minsky in the new area of artificial intelligence. In the end Blum wrote one of the first theses in computational complexity under Minsky, not to mention doing groundbreaking work in many areas, winning the Turing Award and being the advisor to my advisor (Michael Sipser).

Blum made a strong point that his theory of consciousness is just being developed, emphasizing the word "towards" in the title. Roughly, his theory has an environment (representing the world at a certain time) modeled as a universal Turing machine that interacts with several entities (representing organisms or organizations), each modeled as a (possibly weak) computational device. An entity has CONSCSness (CONceptualizing Strategizing Control System) if it fulfills certain axioms; a toy sketch of these axioms in code follows the list.

  • The entity has a model of its environment and a model of itself.
  • The entity is motivated towards a goal. Blum modeled the goal as the difference between a pleasure function and a pain function, which looked to me like the utility functions used by economists.
  • The entity provides a strategy to head towards the goal.
  • The entity has a simple serial interface with the environment.
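
To make these axioms concrete, here is a toy Python sketch of my reading of the model. It is illustrative only, not Blum's formalism: the Entity class, the particular pleasure and pain functions, and the choice of action by predicted utility are all assumptions made for this example.

```python
# Toy sketch of the CONSCS axioms above -- a rough reading, not Blum's
# actual formalism. All names and functions here are illustrative.

class Entity:
    def __init__(self, env_model, self_model):
        # Axiom 1: the entity has a model of its environment and of itself.
        self.env_model = env_model    # maps (state, action) -> predicted next state
        self.self_model = self_model  # the entity's description of itself

    def pleasure(self, state):
        return state.get("food", 0)    # made-up stand-in for a pleasure function

    def pain(self, state):
        return state.get("danger", 0)  # made-up stand-in for a pain function

    def utility(self, state):
        # Axiom 2: the goal is pleasure minus pain, like an economist's utility.
        return self.pleasure(state) - self.pain(state)

    def strategy(self, state, actions):
        # Axiom 3: pick the action whose predicted outcome maximizes utility.
        return max(actions, key=lambda a: self.utility(self.env_model(state, a)))

    def interact(self, state, actions):
        # Axiom 4: a simple serial interface -- one action out per state in.
        return self.strategy(state, actions)

# Example use: a one-step world where eating gains food but risks danger.
env = lambda state, action: {"food": state["food"] + (2 if action == "eat" else 0),
                             "danger": 1 if action == "eat" else 0}
squirrel = Entity(env_model=env, self_model="a small forager")
print(squirrel.interact({"food": 0, "danger": 0}, ["eat", "hide"]))  # -> "eat"
```

Note that nothing in the sketch requires the entity to be computationally powerful; the axioms only ask for models, a goal and a strategy.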
Blum briefly defined notions of self-awareness (being able to reason about oneself) and free will. For free will Blum used an example from playing chess: we have free will because we don't know what move we will make until we have had time to think about it, very similar to (though I believe developed independently of) McAllester's view.

Blum called on complexity theorists to take on the cause of consciousness. He pointed to an extensive bibliography on the general topic maintained by David Chalmers.

My take on the talk: Much of theoretical computer science did get its start from thinking about how the brain works, but as computers evolved so has our field, and theory has since the 70's focused on understanding efficient computation in its many forms. It's perfectly fine to model humans as efficient computers to understand their interactions in areas like cryptography and economics. But we should leave issues like consciousness, self-awareness and free will to the philosophers, since any "theorems" we may prove will have to depend on some highly controversial assumptions.

10 comments:

  1. Strategy toward goals sounds like it might be too restrictive. I think we can all agree (except those of certain religions) that a squirrel is a conscious being, endowed with sentience (which just means "to feel"; many confuse the word with intelligence, but that's sapience).

    I think the Cartesian view that humans are somehow unique in having consciousness is a dead end. I suppose Dennett thinks language plays a role in consciousness (so does that imply squirrels aren't conscious?), which would be a non-religious way of reaching a similar conclusion.

  2. My first introduction to theory was via the Turing Test and Alan Turing's work. In fact I was originally convinced that learning was the key to intelligence and wanted to study that. Somehow I got distracted :)

    But it is true that the 'mystery' of computation can be communicated via theories of the mind.

  3. Of Manuel's various accomplishments, what impresses me most is how successful a list of advisees he's had.

  4. What do you think of John Conway's theorem about free will? His assumptions seem no more controversial than those used by quantum computing. He claims:

    "If there exist experimenters with (some) free will, then elementary particles also have (some) free will."

    ... although his definition of free will is rather suspicious. He ultimately treats free will as a property of the actions of a particle or an experimenter, and it could be interpreted as randomness.

    There is a nice description of a talk he gave here:
    http://www.cs.auckland.ac.nz/~jas/one/freewill-theorem.html

  5. "... although his definition of free will is rather suspicious."

    What is even more suspicious is that he didn't even try to define free will for his "formal" proof. (If we believe the transcript.)


    "For free will Blum used an example of playing chess where we have free will because we don't know what move we will make until we have time to think about it, very similar (though I believe independent) of McAllester's view."

    Independent or not, I think this view is common wisdom in modern cognitive science. I think the problem of free will can be considered solved. (Regardless of whether we live in a completely deterministic universe or not.) On the other hand, the problem of consciousness is very much unsolved.

    "Blum called on complexity theorists to take on the cause of consciousness."

    I also think that the complexity theory point of view can give new insights about consciousness. Blum's advisor Minsky has an insightful book on consciousness (The Society of Mind). I believe Minsky's approach is a good starting point for a materialist treatment of the problem of consciousness, but many more new ideas will be needed.


    Daniel Varga

  6. I checked your link to the bibliography by David Chalmers on the philosophy of mind. Under the heading 'Philosophy of AI' and subheading 'The Chinese Room' I looked in vain for an entry referring to Ian Parberry's excellent article 'Knowledge, Understanding and Computational Complexity', in which Parberry rebuts this famous argument of John Searle against AI. Note that Parberry does not argue in favor of the strong AI hypothesis, but simply argues that Searle's argument is faulty in view of the hardness of meeting real-time requirements on the demands for intelligent feedback. You can find the article yourself by googling (Parberry Searle). In my opinion this article is a gem in showing the connection between computational complexity and issues of consciousness, and as far as I can tell it is always overlooked.

    Jan Arne Telle

  7. I always thought Searle's argument was rather weak. If such a person-books-room object could handle Chinese like that, then I would say that such a person-books-room object *could* speak Chinese. I know of no other standard. To put it only on the books (i.e. the instructions) or only on the person (i.e. the hardware) is just equivocation.

  8. I've always found the anti-strong-AI arguments rather weak. These weak arguments range from the basic "but I can unplug the computer" (what does that have to do with cognition and understanding?) to the more elaborate but equally misguided "the room understands Chinese and that makes no sense".

    For one, electron tunneling makes no sense and yet is real. So the mere fact that something leads us to a jarring conclusion should have no impact on the validity of the conclusion.

    For another, I think philosophy brings little to bear on the subject. Imagine a philosopher in the pre-industrial-revolution era trying to philosophize about whether mechanical machines would ever outperform human workers: "...and hence the machine, which is a development of man, cannot be more perfect than its creator, hence no machine will ever replace a worker."

    The ability of machines to outperform workers was a problem of physics and engineering, not philosophy. Ditto for strong AI. This is a question for neurologists, who must describe what a mind is, working together with computer scientists, who must create the programs that simulate the mind or prove that this is not possible (looks like a job for AI'ers and complexity types), with the aid of computer engineers, who must put together the hardware that will allow this simulation to take place, if at all possible.

  9. UTM, hmmm... there's a brief argument against those arguments against strong AI that use the halting problem as a limitation of computers but not humans.

    Blum starts with a UTM. This is crucial :-) I think it's useful not to think of TMs as having some fixed finite control; some programs in the real world expand, and are in a real sense unbounded: they interface with other programs, they dynamically link in new code, etc. If one accepts this fact (that programs are unbounded), and further accepts that a program is identified with the limit of this unbounded sequence, one can no longer diagonalize over programs in the usual way, so the traditional "undoable" arguments may not apply (depending on what one means, of course; we'll stick with the halting problem, with one axis labelled with inputs and the other with TMs, and admit that picking a higher cardinality is also possible).

    Now UTMs can simulate arbitrary programs, so they are fine, and they still have finite, fixed-size control (phew!). But if some "interesting" programs are unbounded, then these "interesting" programs are all the same (they are all UTMs). They're not really programs but rather more like frameworks, so to speak. So you either get unbounded programs, or all the programs wind up being the same. If they're all the same, then how do you distinguish between them? By the tape, perhaps. So everything, in a sense, counts. Of course claiming to extend TM computability is nothing new; many models have claimed to extend TM computing by smuggling in some infinite resource. It's not a smuggle in this case, it's an argument for naturalness based on the real world. It's also important to note that there's nothing "wrong" with the standard arguments; this is just a meta argument.
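
    For reference, the "usual way" of diagonalizing looks like the following minimal Python sketch (my own sketch, not from the talk; the decider `halts` is hypothetical, and that it cannot exist is the whole point):

    ```python
    # Textbook diagonalization over fixed, finite programs -- a sketch.
    # `halts` is the hypothetical total decider; no such function can exist.

    def halts(program_source: str, input_data: str) -> bool:
        """Hypothetical: True iff the program halts on the given input."""
        raise NotImplementedError("no such total, correct decider exists")

    def diagonal(program_source: str) -> None:
        # Do the opposite of whatever `halts` predicts about running
        # the program on its own source.
        if halts(program_source, program_source):
            while True:
                pass  # predicted to halt, so loop forever
        # predicted to loop, so halt immediately

    # Feeding `diagonal` its own source gives the contradiction: it halts
    # iff `halts` says it doesn't. Note the argument needs each row of the
    # table to be one fixed, finite program -- exactly the assumption the
    # "unbounded programs" above are meant to escape.
    ```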

    However, at this point I would say that any argument against strong AI based on, for example, the non-solvability of the halting problem MAY be ill-founded. One might think to simply choose a larger infinity; in this case that would be equivalent to the power set of the reals. It's a good point, but it can be addressed (with further limitations). Formulating a suitable foundation for this sort of thing is an interesting problem, and it doesn't seem to be addressed by generalizing TMs to read real inputs. Fortunately the literature seems to have been bubbling with this sort of thing. I think the view of a computing system as an environment is a good approach. And it has the bonus of providing a good basis for studies of economics and other fields which may or may not be wholly computational (some, like this post, largely philosophical ;-). Within CS the deluge of multi-agent systems conferences and the like seems to back this up.

    So much for meta-philosophical arguments and UTMs.

    Now, this pain-over-pleasure principle he alludes to: based on my recent empirical experience, his expectations seem to completely fail when it comes to my life ;-) Well, not REALLY. I must be minimizing when I should have been maximizing, or perhaps my models of reality are mixed up (darn). Maybe I was dozing in third grade. Flashlight batteries? ... Anyone? Hopefully WE will advance beyond this pleasure/pain thing.

  10. If you are interested in my article mentioned by Jan Arne Telle, the full reference is

    I. Parberry, "Knowledge, Understanding, and Computational Complexity", in Optimality in Biological and Artificial Networks?, Chapter 8, pp. 125-144, (D.S. Levine, W.R. Elsberry, Eds.), Lawrence Erlbaum Associates, 1997.

    There's a tech report version of this paper posted at http://www.eng.unt.edu/~ian/pubs/knowledge.pdf

    I've always been a little disappointed at the reception that this paper received. The journals that Searle published in rejected it soundly and roundly. Perhaps time will tell...

    Ian Parberry
