Thursday, January 20, 2005

Does a Chess Program have Free Will?

A non-CS Chicago Alum asked me a question about free will and computation. I passed the question to David McAllester, an AI professor at TTI, and he gave the following interesting reply.
The idea that I could be simulated on a computer seems at odds with my subjective experience of free will and my intuition that my future actions are not yet determined — I am free to choose them. But consider a computer program that plays chess. In actual chess playing programs the program "considers" individual moves and "works out" the consequences of each move. This is a rather high level description of the calculation that is done, but it is fair to say that the program "considers options" and "evaluates consequences". When I say, as a human being, that I have to choose between two options, and that I have not decided yet, this seems no different to me from the situation of a chess playing computer before it has finished its calculation.

The computer's move is determined — it is a deterministic process — and yet it still has "options". To say "the computer could move pawn to king four" is true provided that we interpret "could do x" as "it is a legal option for the computer to do x". To say that I am free is simply to say that I have options (and I should consider them and look before I leap). But having options, in the sense of the legal moves of chess, is compatible with selecting an option using a deterministic computation. A chess playing program shows that a determined system can have free will, i.e., can have options. So free will (having options) is compatible with determinism, and there is no conflict.
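McAllester's picture of a program that "considers options" and "evaluates consequences" can be made concrete with a toy sketch (my own illustration, not from the post; the position and scores are hypothetical stand-ins for a real engine's move generator and evaluation function):

```python
def legal_moves(position):
    """The options: in real chess these would be the legal moves."""
    return list(position)

def evaluate(position, move):
    """Work out the consequence of one option (deterministic)."""
    return position[move]

def choose(position):
    """Consider every option, evaluate each, pick the best."""
    return max(legal_moves(position), key=lambda m: evaluate(position, m))

# The same position always yields the same move (determinism),
# yet until choose() returns, all three moves are live "options".
opening = {"e4": 0.4, "d4": 0.3, "Nf3": 0.2}
print(choose(opening))  # e4
```

The point of the sketch is exactly the quote's point: `choose` is a pure function of the position, so its output is fully determined, yet it is still true that the program "could" play any move in `legal_moves(position)`.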


  1. I think the question needs to be better defined. It is like asking: can a submarine swim?

  2. I've heard this "no conflict" argument over and over, and it always sounds like sophistry to me. I don't understand what the term "free will" could possibly mean, if it didn't include (as one aspect) the inability of other agents to predict a probability distribution over your future actions. Yet this condition is failed by the chess-playing program, since we could always simulate it using another program. In my view, while we might someday have an intellectually satisfying explanation for free will, we don't currently have one, any more than we have an explanation for why NP-complete problems are hard, or people before Darwin had an explanation for adaptation in nature. In all three cases, progress isn't helped by pretending there's no problem where there clearly is one.

  3. I'd have to agree with Scott's comment. McAllester's definition of free will seems to imply that a thrown rock has 'free will'. The rock has the 'option' of going one direction or another. Due to the angle and velocity at which it was originally thrown, the rock 'chose' to go in a particular trajectory. The chess program similarly has the 'option' of making one choice or another, but due to its program 'chooses' a particular one. I tend to favor a definition of free will that does not include determined behavior.

    - Homin

  4. What then about a chess-playing program that chooses its most highly-valued move but chooses randomly when more than one such move exists? This has both choice and unpredictability. Is that sufficient for free will?
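    The hypothetical program in this comment is easy to sketch (my own illustration; the move names and scores are invented): evaluation stays deterministic, and randomness enters only to break exact ties among best moves.

```python
import random

def choose_with_tiebreak(scores, rng):
    """Pick a highest-scoring move; break exact ties randomly."""
    best = max(scores.values())
    candidates = [m for m, s in scores.items() if s == best]
    return rng.choice(candidates)  # the only nondeterministic step

scores = {"e4": 1.0, "d4": 1.0, "a3": 0.2}
move = choose_with_tiebreak(scores, random.Random())
print(move)  # either e4 or d4, never a3
```

Note that seeding the generator (e.g. `random.Random(0)`) makes even this program fully reproducible, which is part of why the next comment doubts that randomness adds anything.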

  5. In my opinion, no. I don't see how the issues raised by randomized algorithms are much different from those raised by deterministic algorithms. After all, random processes are *not* unpredictable; they're the very essence of predictability if you know the distribution. Ask any casino boss.

  6. No, MacNeil has it right. This is a problem that can be easily resolved, because the question itself is mistaken: "free will" is a broken concept. What I'm reading in the comments above is that free will must be used to make choices, but cannot itself be predictable. It seems to me that (a) if you make useful decisions, there is, by definition, a "predictable" algorithm involved, since it has to relate actions to your knowledge and sensory inputs; and (b) if you have any kind of identity or personality (i.e., there is a way to describe you as a person), then by definition this also constrains your possible decisions. You can't make true randomness a basis for a theory of free will.

    At an even deeper level, it seems we want to separate decision making systems (nervous systems and perhaps, if we believe in strong AI, properly-programmed computers) from the rest of Nature because we are living things which value our own power, survival, and reproduction. From the standpoint of functionality, a brain and an AI are different from a rock, but as far as metaphysics goes, neither physics nor natural selection seems to need that distinction to proceed. My take is that therefore, consciousness (what it's like to BE that decision/control system known as the human mind) is an inherent aspect of physical change. And in fact, it can't be an aspect. It IS physical change. If it were separable from it, there would be no need for it at all. This position also handily eliminates dualism. Any "ghosts in the machine" instantly become the pattern of physical change (computation) in the machine.

    Now, I've just conflated consciousness and free will, mainly because I believe what we're really talking about is the subjective sense we have when we make decisions ourselves. It could be that the value-laden sense we have is merely the reward and avoidance systems of the brain kicking in as we evaluate how best to solve a problem (whether it's intellectual or just the household chore variety) with the maximum usefulness and minimum expenditure of effort. Maybe we could call that free will and get away with it, but any definition that thinks determinism makes free will disappear is pointless. Like it or not, we are part of Nature.

    On the flip side you're probably going to object to the side effect of giving consciousness to rocks in motion, creeks, and other parts of the inanimate world. Please note I'm not attributing intelligence or much organized awareness of any kind to any of these things. Nor do they have to be single, whole minds either. There just has to be some kind of experience, however weird, that is the physical change in these things. It just seems to me that any attempt to deny the minimal, fractured consciousness of the "dead" portions of nature or assign it to biological or "sentient" nervous systems only must be inherently teleological (i.e. attaches meaning to things that inherently lack goals, like physics or natural selection.) Either that, or such attempts will break apart under the weight of observation, because they must rely on functionality and purpose to give consciousness to some things but not others (if I make a mistake on a test or in writing some code, is that a moment in which I am not conscious? How about if I experience a hallucination? Or if I daydream about some non-existent object like a unicorn?)

  7. I think it's more meaningful to talk about "the experience/sensation of having free will". A human has this sensation because she (or any other human, or any other agent she has ever met) is unable to systematically predict the future state of her mind or body. You would quickly lose your sense of free will if you met someone who always tells you what you will do in ten minutes, even after you purposely try to disprove her.

    But deterministic computers running nontrivial programs also have this unpredictability property. The computer can't predict its future configurations. Other agents also can't predict its future configurations unless they simulate the run of its program. If (because of, e.g., time-complexity constraints) the simulation runs no faster than the computer itself, it has no predictive power, and this deterministic computer has free will in every meaningful sense.

  8. "Inability of other agents to predict a probability distribution over future actions" seems ill-defined to me. Being able to predict depends on what information you have about the agent in question. Probability is a function of the predictor's belief about the agent, not the agent herself.
    A qubit (or even a classical bit) whose state I do not know is unpredictable to me; a chess program whose parameters I do not know is unpredictable in the same way.
    Moreover the notion of probability when applied to a unique non-repeatable event has issues, to say the least. On the other hand, statistically humans are quite predictable as well.

    (On a lighter note, is this a sign of the computer's "free will":
    Internal Server Error
    The server encountered an internal error or misconfiguration and was unable to complete your request.)

  9. IMHO we don't know brain function well enough to answer this in detail. Have Penrose's theories of quantum-level involvement been scientifically (dis)credited?

    As far as I know, chess programs choose their opening moves "randomly" but play deterministically once they are out of "book" lines. Their choices *may* depend on the time settings for the game. I own Fritz8, a leading commercial PC program, but cannot find anything about this in its documentation.

    McAllester and I had an interesting exchange in the mid-1980s. He asked me how many moves I (an International Master, the rank below Grandmaster) typically consider in a position. I answered "usually 2 or 3", and he couldn't believe so low an answer. But chess machines are now so fast that they often alpha-beta prune out most options in the first few milliseconds. Maybe my brain takes some instants to "apprehend" every legal move in a position, and in that sense I "consider" them---though I'm not conscious of this. (I happen to believe that we do much useful thought in parallel subconsciously, which is part of the reason I have students read through all of an exam before I give the signal to write.) So the distinction between my own mental experience and what I observe in chess programs has lessened.
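    The alpha-beta pruning mentioned above can be sketched on a toy game tree (my own minimal illustration, not the code of any real engine; a leaf is a number, an internal node a list of children):

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning on a toy game tree."""
    if isinstance(node, (int, float)):
        return node  # leaf: its value is known directly
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: remaining options need no consideration
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:
            break  # prune
    return value

# Root to move (max). After seeing the 2 in the second branch,
# the 9 is never examined: that branch can't beat the first one's 3.
tree = [[3, 5], [2, 9]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # 3
```

This is the sense in which an engine "prunes out most options": many legal moves are provably irrelevant to the final choice and are abandoned after a glance, which is at least loosely analogous to a master's "usually 2 or 3" candidate moves.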

  10. I agree that the non-deterministic definition of free will is problematic, but it is the correct one (as that is how the word is used). Whether or not free will exists is a valid question, though if one were to follow Howell's reasoning, it'd be hard to justify any moral judgments.

    Howell's discussion of consciousness reminds me of David Chalmers's work. While Howell seems to have a similar solution to Chalmers's (expanding our notion of consciousness to include rocks), he only addresses the functional aspects of consciousness while ignoring the experiential. If you're interested in other viewpoints, Chalmers includes a pretty fair summary of opposing viewpoints here.

  11. Whoops. Forgot to sign that.

    - Homin

  12. I tried to post this yesterday, but blogger comments seem to be lagged badly.

    I'm saying that free will doesn't exist because when one examines the definition, the result is self-contradictory. This is not merely "problematic". I'm guessing that the source of the objection is that moral philosophy usually requires independent agents using reason as a starting point. Ideas are not like buildings: if the foundation is broken but the superstructure is sound, you can replace the foundation without demolishing the entire structure. If I can't treat human beings as special solely on the basis of their consciousness or "free will", I admit, this becomes more difficult. I can justify it perhaps on the basis that the rest of Nature is already brutally indifferent (living and non-living), while I at least can reduce the suffering of the animals I consume. And humans are already a highly cooperative species (in the self-interested market economy sense) due to our having originated as small bands of hunter-gatherers which got a great deal of productivity out of cooperative efforts. We'd probably like to see this justified on higher moral ground, but if Enlightenment philosophers got away with working upwards from self-interest, I don't know why it should be allowed to refute a naturalistic explanation of consciousness.

    I didn't exactly discuss qualia because I felt it would be off topic in a discussion of free will. However, I don't see where you get the idea that I ignored the experiential aspects of consciousness.

    "My take is that therefore, consciousness (what it's like to BE that decision/control system known as the human mind) is an inherent aspect of physical change. And in fact, it can't be an aspect. It IS physical change."

    It was only implicit, but I define consciousness entirely experientially. I believe (and yes, I agree with Chalmers) that sentience can be explained scientifically in functional terms, but consciousness/qualia cannot. ALL physical change (is any change _not_ physical?) is experience of some kind. A collection of closely causally connected change can form a single experience. The pattern of that change is what determines its inner "flavor" (for lack of a better word) and whether that physical activity is one's inner voice, perception of color and shape, or sense of touch. I believe that we might be able to understand and compare qualia by using formalism in the same way we work with formal number structures in mathematics. I differ from Chalmers in that he seems to consider that the failure of the fact of consciousness to supervene on the physical requires a form of dualism. My instinct is to respond that the dualism only consists of our need to discuss it separately from the thing itself.... but really a more sophisticated explanation of why we should expect this given my hypothesis is needed. Either that, or I really need to reexamine induction and logical supervenience.

  13. There's an excellent discussion of the notion of free will, using chess-playing programs (and also Conway's Game of Life) to illustrate some concepts and arguments, in Daniel Dennett's book "Freedom Evolves" (2003). I recommend that book to anyone interested in these topics.