Friday, May 07, 2004

Games

A reader asked about the complexity of games like Go and Chess. David Eppstein has a nice site giving short descriptions of and references for a number of specific games.

Let us put such games in a general framework. We have a board, and each player in turn makes one of a list of legal moves that depend on the current placement of pieces on the board. We focus on deterministic games of complete information, as opposed to games like backgammon or poker.
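To make that concrete, here is a minimal sketch of such a framework in code. The interface (class and method names are my own invention, not from any standard library) just captures the two ingredients: the legal moves from a position and the outcome once the game is over.

```python
# A hypothetical interface for a deterministic two-player game of
# complete information. Players are encoded as 0 and 1.
class Game:
    def legal_moves(self, position, to_move):
        """Return the list of positions reachable by one legal move of `to_move`."""
        raise NotImplementedError

    def outcome(self, position, to_move):
        """Return +1 if `to_move` has won, -1 if `to_move` has lost,
        0 for a draw, or None if the game is still in progress."""
        raise NotImplementedError
```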

Since games like Go and Chess are played on a fixed board, one could just enumerate all of the possible board positions and perform perfect play in a constant amount of time. So we need to look at generalized versions of Go and Chess where the size of the board and the set of rules can vary.

Let's place this in a general setting. We have a polynomial-time algorithm that, given a board and the current player, can tell whether the game has ended and with what outcome, or else can give a list of legal moves for that player. Chandra, Kozen and Stockmeyer have a seminal paper on these alternating games: If we restrict the game to a polynomial number of moves, such games characterize PSPACE (problems solvable with polynomial memory and unlimited time). Games with arbitrarily long play on polynomial-size boards characterize EXP (exponential time).
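As a rough illustration of the PSPACE upper bound for polynomial-length games (my own sketch, using the hypothetical Game interface above): a depth-first search of the game tree decides whether a given player can force a win, and it only needs to remember the current line of play, i.e. polynomially many positions at once, even though the full game tree is exponentially large.

```python
def forced_win(game, position, to_move, target, moves_left):
    """Can player `target` force a win from `position`, with `to_move`
    about to play and at most `moves_left` moves remaining?"""
    result = game.outcome(position, to_move)
    if result is not None:
        if result == 0:
            return False                      # a draw is not a win for anyone
        return (result > 0) == (to_move == target)
    if moves_left == 0:
        return False                          # move limit reached: treat as a draw
    next_player = 1 - to_move
    successors = game.legal_moves(position, to_move)
    if to_move == target:
        # Existential choice: the target needs just one move that keeps a win in hand.
        return any(forced_win(game, p, next_player, target, moves_left - 1)
                   for p in successors)
    # Universal choice: the opponent will pick whatever move is worst for the target.
    return all(forced_win(game, p, next_player, target, moves_left - 1)
               for p in successors)
```

Since the recursion depth is at most `moves_left` and each stack frame stores a single position, the space used is polynomial whenever the game length and board size are; the alternation of `any` and `all` is exactly the existential/universal alternation that Chandra, Kozen and Stockmeyer formalize.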

So we have results like: given an opening position in generalized Go, it is EXP-complete to determine whether a player has a forced win. But even if the official Go rules allow it, I find it hard to believe that players would play the game for an exponential number of moves. So it makes sense to add an artificial stopping rule that causes the game to end after a reasonable amount of time, and such games are usually PSPACE-complete.

The PSPACE-completeness results hold for many very simple games. This mirrors the fact that complexity does not arise from complicated actions, but rather from the interactions of many simple actions.
