Friday, January 28, 2011

Game Description: Adjacent Hex

While at the CGT (Combinatorial Game Theory) workshop earlier this month, I watched Ryan Hayward give an excellent talk about Hex and "Rex" (Misere Hex). Ryan is an excellent Hex player and even better Hex theorist (and also a good curling skipper). I was determined to play some Hex with him, but needed a little extra help since I had played so little. Thus, I suggested we play a variant: Adjacent Hex.

The game is exactly like standard Hex, except it adds the play restriction from Atropos: after your opponent paints a hexagon their color, you must play adjacent to their last move. If all the adjacent hexagons are already colored, you may choose to play in any uncolored hexagon on the board. (We've been referring to this as a "jump".)
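As a sketch, the move rule is easy to state in code. Here is a minimal Python version, assuming axial coordinates for the hexagons; the coordinate system and names are illustrative, not from any actual implementation:

```python
# Axial-coordinate neighbor offsets for a hex grid (an assumed convention).
NEIGHBOR_OFFSETS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def legal_moves(uncolored, last_move):
    """Adjacent Hex rule: you must play in an uncolored hexagon adjacent
    to the opponent's last move; if no such hexagon exists (or there is
    no last move yet), you get a "jump" and may play anywhere uncolored."""
    if last_move is not None:
        q, r = last_move
        adjacent = {(q + dq, r + dr) for dq, dr in NEIGHBOR_OFFSETS} & uncolored
        if adjacent:
            return adjacent
    return set(uncolored)
```

For example, if only three hexagons remain uncolored and none of them touches the opponent's last move, the function returns all three: that player gets the jump.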

Ryan and I played one game with these rules. At one point, I thought I was doomed, but Ryan pointed out a really good move for me. I can't recall whether that move led me to win, but either way it was clear his Hex skills were helping out. We proceeded to play two other games using different rules for jumping, but both agreed the original game was best.

I convinced Alan Guo to play a few games with me. We went back and forth, winning and losing, but building up some good intuitive strategies for the game.

Since returning to Wittenberg, I am undefeated. (My aide, Ernie, would like to add: "So far.") I've played against a number of students, and even beat the combined talents of my aide and teaching assistant, Patrick, earlier this week. Obed, the director of our math resource center, has vowed to get a set to practice with his own students.

This game plays very nicely: knowledge of Hex will help out, though knowing that playing in one hexagon will win you the game doesn't always help if your opponent never plays adjacent to that spot. In most of the games I've played, players are usually fighting to be the one working on their own path, which is sometimes harder than it sounds. Usually one player is currently working on a path, while the other player is just trying to derail that path using the adjacency rule. If their derailing is successful (this can be quite tricky) then that player starts building a path fragment and the other player is now trying to thwart that plan.

Sometimes play moves into an area that is closed off where the color of the hexagons does not matter. Now both players are competing to force their opponent to paint the last hexagon in the region, ensuring that they will be the one to "get the jump" and play wherever they want. A few times I've been in great situations where I decided not to connect two of my important paths because doing so would give my opponent a jump. This is a tough choice!

One nice property of this game is (I think) that it is "automatically" PSPACE-complete. If I understand Stefan Reisch's original "Hex is PSPACE-complete" proof, the gadgets in the reduction don't have any adjacent uncolored hexagons [citation needed], thus ensuring that each move gives the opponent a jump and the adjacency rule doesn't come into play. (I wonder how helpful it would be to produce a translation of Reisch's paper.)

In my office, I have a 14 x 14 hex board, just ready for adjacent-minded opponents!

Tuesday, January 25, 2011

Computational Complexity of Games

At BIRS this year, the game everyone was interested in was Go.

I guess it's hard to compete with that! Aside from Go, one of the big features was NoGo. We had the first ever NoGo world tournament, both for humans and computers. After those tournaments were over, Fan Xie, the reigning human champion, battled Bob Hearn's champion NoGo-playing program. When presented with a new game like NoGo, as a computer scientist I'm always interested in the computational complexity: how hard is it to determine which player has a winning move? How long would it take a computer program to figure this out?

For some games, we have a bad (or good, depending on how you see it) answer: a lot. For Chess, for example, an algorithm that can evaluate the "winnability" of every game requires an exponential number of steps in the worst case. When I say "exponential", I mean "exponential in the amount of information needed to describe the game board". Thus, if I look at all Chess boards that can be described using 1,000,000 bits of information, some of those require around

a^1,000,000

steps to figure out who is going to win, where a is some constant greater than 1. This is not a feasible amount of time, even for computers. As the boards get bigger, the number of steps increases by a multiplicative factor of a. We say that these problems are in the class EXPTIME because they take an exponential amount of time. (Figuring out who wins a game of Go is also one of these problems.) To be more general, we also include problems which can be solved in less time. Thus, the "complexity class" EXPTIME is the set of all problems which can be solved in an exponential amount of time (or less).
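To get a concrete feel for how quickly an exponential step count outpaces a polynomial one, here is a toy comparison (the sizes n are just illustrative, not real encodings of game boards):

```python
# Compare a polynomial step count (n**3) with an exponential one (2**n).
# Even a modest exponential base dwarfs any fixed-degree polynomial
# once n is large enough.
for n in (10, 20, 30, 40):
    print(f"n={n}: n^3 = {n**3:,}  vs  2^n = {2**n:,}")
```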

In general, if you want to figure out who wins a game by just trying all the options, this method usually takes an exponential amount of time because there are an exponential number of scenarios to reach the end of the game which must be tested.
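That "just try all the options" method can be sketched for an abstract game under the normal play convention (a player with no moves loses); the function and the toy subtraction game below are illustrative, not any particular implementation:

```python
def can_win(position, moves):
    """Exhaustive game-tree search: the player to move can win iff some
    move leads to a position from which the opponent cannot win.
    In general this explores exponentially many lines of play."""
    return any(not can_win(p, moves) for p in moves(position))

# Toy example: a single pile where a move removes 1 or 2 tokens.
# Under normal play, piles of size 0, 3, 6, ... are losses for the mover.
def pile_moves(n):
    return [n - k for k in (1, 2) if n >= k]

print(can_win(4, pile_moves))  # True: take 1 token, leaving a pile of 3
```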

However, for some games, the player with a winning strategy can be figured out really quickly! In the game Nim, for example, there is a fast procedure ("trick") to figure out whether the next player can win. This can be done in a polynomial amount of time, meaning there is a polynomial function that describes the number of steps needed for a computer to determine who has a winning strategy, again in terms of the size of the information needed to describe the game state. Although polynomials can have large exponents, for any one given polynomial, that exponent remains constant. We say that problems that can be answered in a polynomial amount of time are in the complexity class P. Since these are both sets, and anything that can be solved in polynomial time can certainly also be solved in exponential time, we know that P is a subset of EXPTIME; by the time hierarchy theorem, the two are not equal.
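The Nim "trick" is the classic nim-sum rule (due to Bouton): under normal play, the next player can win exactly when the XOR of the pile sizes is nonzero. A quick sketch:

```python
from functools import reduce
from operator import xor

def nim_next_player_wins(piles):
    """Normal-play Nim: the next player has a winning strategy
    iff the XOR (nim-sum) of the pile sizes is nonzero."""
    return reduce(xor, piles, 0) != 0

print(nim_next_player_wins([3, 4, 5]))  # True: nim-sum 3^4^5 = 2
print(nim_next_player_wins([1, 2, 3]))  # False: nim-sum 1^2^3 = 0
```

Note the contrast with exhaustive search: this runs in time linear in the number of bits describing the piles.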

For some games, we don't know how long the fastest algorithm to determine winnability takes. Sometimes we describe these in terms of resources other than time. The complexity class PSPACE is the set of problems which can be solved with a polynomial amount of "space", meaning a polynomial amount of room to keep a record of important information during the algorithm. The space can be reused; erasing is allowed! Imagine that instead of restricting the amount of time you have to take a test, you are instead restricted by the amount of scrap paper you are given to do your calculations. It has been shown that P is a subset of PSPACE, which is in turn a subset of EXPTIME. It is not known whether either P or EXPTIME is equal to PSPACE (obviously both cannot be, since P and EXPTIME differ).

We can, however, say that there are some games that REQUIRE a polynomial amount of space to solve, and are thus among the hardest problems in PSPACE. These are known as PSPACE-complete. We show that a game (or any sort of computational problem) is PSPACE-complete by showing two things:
  • The game is in PSPACE
  • Any other problem in PSPACE can be efficiently rewritten as an equivalent instance of this game. (This property is known as hardness; proving it means the game is PSPACE-hard.)
The notion of completeness works for any complexity class. Thus, for some class XXXXX, proving that a problem is in XXXXX and is XXXXX-hard means that the problem is XXXXX-complete. Chess, for example, is known to require an exponential amount of time because it has been shown to be EXPTIME-complete.

To show the second part (hardness) for a game, Y, means starting with any other PSPACE-complete game, X, and showing that any instance of X can be rewritten as an instance of Y where the next player wins Y exactly when the next player would win X. This is known as "finding a reduction" or "reducing X to Y". This is where the fun is! Check out this game table to see some known complexity results for games.

Why bother to show that a game is PSPACE-complete? Although we don't know whether P = PSPACE, it is widely believed that the two are not equal. Solutions to problems are generally considered "efficient" if there is a polynomial-time algorithm, so if that belief is correct, PSPACE-complete problems cannot be solved "efficiently".

This post has gotten long, but I realized I should explain a bit more about computational complexity before blundering forward into some complexity results from BIRS. Next week I will talk more about that, and especially how it relates to NoGo.

I'm sure I left lots out! Please help me out in the comments!

Friday, January 21, 2011

Updated Game Table

I made some updates to this game table with suggestions from others (and many things I learned at BIRS).

I am considering removing the last column ("Can I win this turn?") mostly because it's only interesting for a few combinatorial games. Anyone have a strong feeling about this?

Tuesday, January 18, 2011

BIRS Workshop, 2011

Last week I was at the Banff International Research Station for a workshop on Combinatorial Game Theory. It was excellent! I got to meet many CGT bigwigs, play a lot of great games, present some things and even prove a few things.

Here were some highlights:
  • Presenting Atropos
  • Meeting 35 new friends
  • Playing Cookie Cutter with creator Paul Ottaway
  • Listening to current NoGo World Champion, Fan Xie, explain why he was losing to Bob Hearn's NoGo program while playing it.
  • Proving something on Tuesday and seeing the result mentioned in a talk on Wednesday morning!
  • Learning to Curl
  • Witnessing Zhujiu Jiang take on eight mathematicians at once in Go. (He only lost one of the matches.)
  • Playing Games... and losing most of them.
I learned a lot, and once I sort out all my notes I'll have a number of good blog topics from the meeting.

In other news, welcome to the Spring semester! I will return to the schedule of posting Tuesdays and Fridays. As normal, let me know if there's anything interesting you'd like me to cover! :)