## Friday, October 30, 2009

### Misere Hardness

I'm very pleased with the way my software engineering class is going. I've accidentally asked the students to code up some difficult games at times, but they've done a good job of figuring out exactly how to fix these issues. For the most part, using a game suite as the continuing project has worked extremely well!

Early on, I asked them to include both nim and misere nim. As with all misere games, misere nim has the same rules as nim, except that the person who makes the last move loses. Thane Plambeck used to maintain Advances in Losing, a blog solely about misere games. My first combinatorial game, Atropos, was misere: the first player to create a 3-colored simplex (triangle) loses. The misere notion is very intuitive for many games.
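Incidentally, the winning test for both versions fits in just a few lines of code. Here's a quick sketch (assuming the piles come in as a list of nonnegative integers), using Bouton's classic analysis plus the one misere twist:

```python
from functools import reduce
from operator import xor

def first_player_wins(piles, misere=False):
    """Does the player to move win nim (or misere nim) on these piles?"""
    if misere and all(p <= 1 for p in piles):
        # Misere endgame: once every pile has at most one stone, you
        # want to leave an odd number of single piles behind, so you
        # win iff the count of nonempty piles is even.
        return sum(piles) % 2 == 0
    # Bouton's theorem: otherwise normal and misere nim agree -- the
    # player to move wins iff the nim-sum (xor of pile sizes) is nonzero.
    return reduce(xor, piles, 0) != 0
```

Note how little changes: the two games only differ once every pile is down to a single stone.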

As we continue with the ongoing class project, I have continued to add difficult parts, but at some point I stopped asking them to add more games to the suite. Oops! Currently, I've given them another tricky task: have users make moves through a series of mouse clicks (instead of entering choices in text boxes). I figured it was time to throw in some new games along with that. At the same time, I hope they realize they can apply "misereness" to any game, not just nim.

Solution? They've already programmed simple versions of Kayles and Hex, so why not include misere Kayles and misere Hex? Just thinking about that got me immediately interested in misere Hex. How does that game work? Whichever player first completes a bridge between their two sides loses? Weird! Who wins misere Hex? What is the complexity of this game?

In 1999, Jeffrey Lagarias and Daniel Sleator tackled that first question. They find the immediate intuition---that the first player never has a winning strategy---incorrect, and show instead that the winner depends on the parity of the board size. They do this by showing that the losing player can stretch out the game until the final hexagon has been colored. Thus, on boards with an even number of hexagons, the first player can win! The proof they give applies to games starting from the initial (empty) configuration.

Does this answer the complexity question? Not in any obvious way. In order for the complexity to be settled, an algorithm would have to work for any game state. One can construct a board where the game cannot be dragged out to fill in all the cells. (Give red two nearly-complete paths with only one hex missing, and let the rest of the board be filled in. If it's red's turn, what can they do?) It could still be that the proof generalizes well into an algorithm.

Thinking from the complexity angle, as this blog (too?) often does, is there anything we can say about the relationship between the complexity of a game and its misere counterpart? Again, I am interested in considering Atropos, which is already misere. In the non-misere version, the first player to create a 3-colored triangle wins. In this impartial game, each player may color one circle any of the three available colors. The circles are laid out similarly to Hex---so that each has six neighbors---inside a triangle with borders colored to satisfy the conditions of Sperner's Lemma. This means that any play along the borders can win: there are always two differently colored neighbors there, so simply paint that circle with the remaining color.

Thus, the strategy from the initial position is easy. Choose one of those bordering circles and paint it the third color. Game over. In mid-game positions, however, a player must choose to paint a circle that neighbors the last-painted circle (if an open circle exists). Thus, the game might not be quite as simple from any given situation; one of the border circles may not be adjacent to the last play. (But why wouldn't the first player have just won the game? Why would you ever get to this situation?)
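That border observation barely needs code, but here it is anyway. A toy sketch (the color names are my own, not from any paper): given the two differing colors next to an open border circle, the winning paint is whatever's left over:

```python
COLORS = {"red", "green", "blue"}

def winning_paint(c1, c2):
    # An open circle whose two mutually adjacent neighbors already
    # carry two different colors can be painted the remaining third
    # color, completing a 3-colored triangle -- a win in (non-misere)
    # Atropos.
    assert c1 != c2 and {c1, c2} <= COLORS
    return (COLORS - {c1, c2}).pop()
```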

In general, are there any results about misere games and complexity? It's been a while since we phrased a question, so let's do it. (I've been marking the relevant posts with question# labels.)

Question (2): Given a game G and its misere version, M, must the complexity of determining the winner of at least one of these games be in P?

I think this is the best way to phrase the "What is the relationship between the complexity of G and M?" question. Consider nim and misere nim: both are easy. With Hex and Atropos, each has a PSPACE-complete version, but the opposite looks like it might be easy. Is there a game I'm overlooking which is hard both normally and misere...ishly? (someone help me out with French!)

Of course, I'm still curious about misere Hex.

Question (3): What is the complexity of determining whether the current player in misere Hex has a winning strategy?

For those interested, user Zickzack at boardgamegeek.com has a good description and list of misere games.

## Wednesday, October 28, 2009

### Economic Game Theory vs CGT

When I started grad school in computer science, I quickly learned it was dangerous to admit that to people. "Oh, that's great! You can help me fix my computer!" Though I'm no Linux guru, it quickly became my safe haven: "Which OS are you using? Oh, I haven't run Windows in years... I don't know anything about that." At least that wasn't a lie; I just have no experience cleaning up viruses, etc. Still, I learned it was often easier to tell people I was studying math instead and save the whole explanation. Again, this wasn't too far from the truth. Theoretical Computer Science doesn't look much like computer science to most people.

Nowadays I'm faced with an alternate dilemma related to explaining my field. Talking with other computer scientists, I will often simplify things by saying I am studying "game theory". Sometimes there are no follow-up questions, but other times people will assume I mean economic game theory or derivations thereof. I still haven't mastered handling this situation, but here's how it seems to go.

Me: "Oh, I'm more focused in combinatorial game theory, which isn't quite the same."

Them: "Combinatorial?"

Me: "Yeah, like board games."

Them: "What is there to study there?"

Me (getting all excited): "Well, we try to say things about who can win a game."

Them: "So you could figure out who's going to win a game of Monopoly?"

Me (somewhat less excited): "Well, we try to work with simpler games, like Hex or Chess."

Them: "Chess is simpler than Monopoly?"

Me: "Well, Chess doesn't have any randomness in it..."

Them: "I think Monopoly is much easier to play than Chess."

Me: "..."

I would much rather replace this with the following conversation:

Them: "Combinatorial?"

Me: "It's similar to the economic side, except we study games with no hidden information or randomness and try to determine who can win."

Them: "Oh. How do you do that?"

Me: "We investigate these things by looking at simple board games. Sometimes what looks simple can wind up being very complex."

Them: "Wow, board games can do that? That's pretty cool!"

Me (beaming): "Yeah, totally!"

I know some of these similarities and differences, but how can I phrase them well? More importantly, how can I state this in a clean way that will not legitimately spark a "whatever!" from someone with a knowledge of economic game theory? A dangerous task indeed!

## Monday, October 26, 2009

### Ranking Football Teams... Again

A few weeks ago, I posted about tournament ranking algorithms. Since then, I wrote a little algorithm (the simplest I could think of) and have run it on this year's results after each week's games are over. I'm pretty proud of what's come out of it, but I realize there are still a few issues that may need to get smoothed out.

As I'm mentioning to my software engineering students, I'd really love to have the code be very reusable so it could be used as a ranker for other sorts of tournaments. With this in mind, I've tried to use all the OO-design principles I'm handing out in class. So far, this has been working very well!

I'm definitely at a point where I want to start talking about the algorithm, and listening to complaints about the rankings, etc. The latter is easy (I have Iowa up top) but the former is a bit more scary. If I had a paper, I would simply upload it to arXiv so that it was clearly dated and submitted by me. What do I do if I just have a code base?

One goal for this algorithm is for it to actually match up with rankings from previous years. If I can closely match to those, then perhaps I can not only say something about the strength of the algorithm, but also about how people determine which teams are better.

I mentioned above that this is the simplest algorithm I could think of... so how is it then that I am modifying small parts of it? Part of the flexibility of this system is that you can plug in different functions that reflect the value of a win or loss, or of the difference in score. Thus, using different functions, I get different results (both of the ones I have created so far put Iowa up top; please feel free to complain).
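I haven't posted the algorithm itself yet, but the pluggable-function idea is easy to sketch. Everything below is illustrative (the names and the two example value functions are mine, not the actual code base):

```python
def rank_teams(results, win_value):
    # results: iterable of (winner, loser, point_margin) tuples.
    # win_value: pluggable function from the point margin to the
    # credit the winner earns; the loser is debited the same amount.
    scores = {}
    for winner, loser, margin in results:
        credit = win_value(margin)
        scores[winner] = scores.get(winner, 0.0) + credit
        scores[loser] = scores.get(loser, 0.0) - credit
    # Teams ordered from best score to worst.
    return sorted(scores, key=scores.get, reverse=True)

flat = lambda margin: 1.0                         # every win counts the same
margin_aware = lambda margin: 1.0 + margin / 10.0  # blowouts count more
```

Swapping `flat` for `margin_aware` is exactly the kind of small modification that changes the output ranking without touching the core loop.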

There are a lot of these ranking algorithms (tons here) but how can I determine whether any are the same as mine without looking at the source of each of them? I think I'll probably just put something up on arXiv...

## Friday, October 23, 2009

### Conferences for Combinatorial Games

One thing I'd like to provide here is a list of good options to submit combinatorial game papers to. Unfortunately, I'm just unfamiliar with these options.

Two years ago, I managed to get a paper about Atropos into the Workshop on Internet and Network Economics (WINE) 2007, though I think I was just lucky enough to be covering a lot of current keywords. Nevertheless, it was a great workshop and I saw a lot of cool results as well as met a bunch of excellent people. The audience was very interested in Atropos, and people were immediately remarking on different aspects of the game ("What happens if you change the coloring of the background?"). At the end, someone came up to me and mentioned that this presentation was going to be one of the most memorable from the workshop!

I was ecstatic, but I also realized that I was probably very lucky; it would be unreasonable to expect such luck again. Later, my adviser discovered the Fun With Algorithms conference, which seems to be a great place to submit future combinatorial games papers. I also know that many people submit CG papers to Richard Nowakowski's Games of No Chance compilations.

Unfortunately, that is about the limit of my familiarity with combinatorial games papers. I know that results have appeared in many different conferences/journals/etc in the past. These days, however, what are the suggested avenues for publishing CGT results?

## Wednesday, October 21, 2009

### Games and Surreal Numbers

At Colby College, where I did my undergrad, a visiting math professor named Leon Harkleroad gave a talk on Surreal Numbers. Being a good math major, I went to the talk, even though my interests were sharply drifting towards computer science. In high school I had been very interested in the notion of infinity, and I was quickly drawn into the material as the discussion of omega and other non-real numbers began.

In surreal-land, numbers are built from two sets of previously built numbers. The sets, say L and R, are then written in the form {L | R}, with the requirement that no element of L is greater than or equal to any element of R. (This is the aspect that separates surreal numbers from games, wherein L and R can overlap.) The "value" of that surreal number is then the simplest number lying strictly between the elements of L and the elements of R.

Thus, you can define an infiniteish number, ω, as: {1, 2, 3, 4, ... | }. The amazing thing about surreal numbers is that you can perform basic arithmetic on ω: 2ω, ω + 1 and ω - 1 are all attainable in surreal number land. Leon gave a great talk and was very accessible to the undergraduate level, though I didn't give it too much thought for a long while.
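The comparison that makes all of this work falls straight out of the {L | R} construction. Here's a bare-bones recursive sketch (my own representation: a surreal number as a pair of tuples of earlier-built numbers):

```python
def leq(x, y):
    # x = (left_options, right_options).  By the standard definition,
    # x <= y iff no left option of x is >= y, and no right option of
    # y is <= x.  The recursion bottoms out at empty option sets.
    x_left, _ = x
    _, y_right = y
    return (not any(leq(y, xl) for xl in x_left)
            and not any(leq(yr, x) for yr in y_right))

ZERO = ((), ())             # { | }     = 0
ONE  = ((ZERO,), ())        # { 0 | }   = 1
HALF = ((ZERO,), (ONE,))    # { 0 | 1 } = 1/2
```

From `leq` alone you get equality (compare both ways) and strict order, and the same recursive pattern extends to negation and addition.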

A year after Leon's talk, however, I was lucky enough to hear John Conway give the keynote at a small-college computer science conference in Rhode Island where I presented my first paper (in poster form). He played some dots-and-boxes against a member of the audience on a small board, promising that his opponent would soon figure out how to beat him. Unfortunately, "soon" took quite a long time and his allotted time quickly began to run out; we students were quite anxious for our poster session and for the attention to be lavished on us. After the game was solved, he swiftly started drawing some basic Hackenbush figures on a new slide. Time was of the essence, but he showed how in this game, players could derive numeric values from the game positions. The figures were nothing more than line-graphs sticking out of the ground, but the intuition behind their evaluations was clear.

Then he drew another line, and used a vertical ellipsis to show that it continued infinitely skyward. Even as he was saying, "We refer to this value..." a light in my head went off: "... as ω". The connection to surreal numbers was there.

Unfortunately, time was up. John was forced to end on that note and may have missed getting to connect that to computer science. We hurried out of the room and to the poster hall.

## Monday, October 19, 2009

### Fall Break

I forgot to mention last week, but Wittenberg has Fall Break this week; there will be no post today, though I will post on Wednesday!

## Friday, October 16, 2009

### Complexity of Games with Randomness

It is known that Hex is a PSPACE-hard game. (See this page; the result is the "Hex ist PSPACE-vollstaendig" paper by Stefan Reisch.) Once we add a bit of randomness, however, the strategies suddenly become far easier! As I think I've mentioned before, if you change the game so that the current player is chosen by the flip of a coin, the winning strategy becomes simple.

I'm also interested in trying to analyze some games by removing their sources of randomness. Instead of relying on die rolls, for example, give each player the set of potential die rolls and let them choose one at a time, removing each chosen "roll" from their set of available options. Once all options are used up, either the player has no more moves or they get a fresh set of fake rolls. I don't know of any analytical results like this, but I would love to see them!
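The mechanic I have in mind might look like this (a toy sketch; the refill-when-empty rule is just one of several possible choices):

```python
class DiceHand:
    """A derandomized die: the player holds every face and spends
    them one at a time, refilling once the hand is empty."""

    def __init__(self, faces=(1, 2, 3, 4, 5, 6)):
        self.faces = tuple(faces)
        self.remaining = list(faces)

    def use(self, choice):
        # The player *chooses* a value still in hand instead of rolling.
        self.remaining.remove(choice)   # ValueError if already spent
        if not self.remaining:
            self.remaining = list(self.faces)
        return choice
```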

On the other hand, I also have some interest in answering complexity questions of games that include randomness. For example: What is the probability that the current player has a winning strategy? Alternatively, rephrasing as a decision question, does the current player have a strategy that will allow them to win with probability > .5?

I don't know of any results like this. Do they exist?

## Wednesday, October 14, 2009

### Programming Hex

Oops!

Two weeks ago I assigned my software engineering students a new project: they were to add Hex to their collection of combinatorial games. To this point they had only implemented impartial games; this was to throw a bit of a wrench into the works.

Unfortunately, I picked a poor partisan game. Hex itself is an excellent game, but I forgot that it was a bit difficult to determine whether the game is over. The obvious approach to determine this comes down to graph connectivity, something beyond the scope of the project. Oops!

Luckily, I quickly recalled the proof of the Hex Theorem: when all hexes are colored, there will be exactly one winner of the game. The late David Gale does an excellent job of describing the theorem in this paper, showing the link between Hex and Brouwer's Fixed-Point theorem. I explained to the students that they should follow the boundaries from one of the corners of the board and see which corner it ends up at. For example, if they choose a corner where the blue side is on the left and red on the right, they can hug either the blue or red boundary (this might be the same if most of the board is colored).

First note that this path created by the boundary cannot dead-end; it will exit through one of the other board corners. Note next that it cannot exit through the corner exactly opposite; the colors there are on the opposite sides! Thus, the path will veer either right or left (from the point of view of the entering corner). If, as we said, blue is on the left and red on the right, and while hugging the blue boundary we exit to the left, it means blue has not yet won the game. If, on the other hand, the path exits to the right, then there are blue hexes connecting the two sides of the board: blue wins! The opposite is true for red.
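For the record, the connectivity check I'd decided was out of scope is itself fairly short. Here's a sketch assuming the board is stored as an n x n array in rhombus coordinates (the six neighbor offsets below encode that layout, which is an assumption about the representation):

```python
def hex_blue_wins(board):
    # board[r][c] is "blue", "red", or None on an n x n rhombus;
    # blue's goal is to connect the left edge (c == 0) to the right
    # edge (c == n - 1).  Depth-first search over blue cells only.
    n = len(board)
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, 1), (1, -1)]
    stack = [(r, 0) for r in range(n) if board[r][0] == "blue"]
    seen = set(stack)
    while stack:
        r, c = stack.pop()
        if c == n - 1:
            return True        # reached the far edge: blue has won
        for dr, dc in offsets:
            nr, nc = r + dr, c + dc
            if (0 <= nr < n and 0 <= nc < n and (nr, nc) not in seen
                    and board[nr][nc] == "blue"):
                seen.add((nr, nc))
                stack.append((nr, nc))
    return False
```

It's not long, but explaining the search and the hex adjacency is still a bigger detour than following a boundary path.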

My students coded up their project and it works very nicely! Still, next time I think I'll assign Clobber instead!

## Monday, October 12, 2009

### Tournaments

I find myself playing in a lot of Magic tournaments, sometimes big events with hundreds of people. The longest I took part in lasted ten rounds, each taking a bit over an hour. For that whole time, I didn't leave the tournament room.

There are a number of ways to run such an event, and likely the easiest is to just hold a single-elimination tournament. In order to determine a champion, only ceil(log_2(n)) rounds are needed with n players. The problem is that this is very unsatisfying for many players, especially those like myself, who aren't necessarily all that great and are just hoping to play a bunch of games. Tournament organizers want to attract as many players as possible, and thus they use "Swiss Seating", a method of pairing opponents that have similar records. This works well and, as I recall, their format usually runs two or three rounds more than that. After that, the top eight players enter a single-elimination bracket to determine the overall winner.
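Both the round count and the core pairing idea are easy to sketch. This toy version ignores byes, rematch avoidance, and tiebreakers, all of which real Swiss systems handle:

```python
import math

def single_elim_rounds(n):
    # Each round halves the field, so n players need ceil(log2(n)) rounds.
    return math.ceil(math.log2(n))

def swiss_pairings(standings):
    # standings: list of (player, wins) pairs.  Sort by record and
    # pair adjacent entries so opponents have similar results so far.
    ordered = sorted(standings, key=lambda pw: pw[1], reverse=True)
    return [(ordered[i][0], ordered[i + 1][0])
            for i in range(0, len(ordered) - 1, 2)]
```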

This method is nice for a few reasons: it works while having a very small number of contests between players but still creates a sturdy ranking. Swiss seating is used for a number of combinatorial game tournaments (often with their own variations) but it can't be implemented everywhere.

American college football is plagued with controversy about which team is best in a given season. I won't get into all the details about these rankings, but there are a few major restrictions on the sport. The first is that the level of intensity and preparation for each game means that teams can play at most once per week. This means that teams play at most 13 games during the regular season, and there is (probably) no time for an end-of-season tournament aside from the current one-round system (the top two teams play each other).

Well, you might think, there are only about 120 total college football teams in the top division (overall, there are more like 650). 13 games should be more than enough for a Swiss tournament that does a good job of picking out the top two teams. Unfortunately, Swiss seating requires that teams don't know in advance who they're going to be playing. In college football, the schedule is usually prepared over a year in advance!

In order to overcome this, rankings are done with human input. This, of course, leads to the above-mentioned controversy: determining who the two top-ranked teams are leads to no end of discussion. Worse still, many of the voters used for input are people tied to teams themselves; each of the coaches gets to vote!

Are there any interesting tournament solutions that handle "bad seating" without introducing a human element? It seems feasible, except that perhaps more than just wins and losses would be needed in the data (amount a team won by, for example).

## Friday, October 9, 2009

### Metaposting

Apparently I am still learning how to blog...

In an effort to clean things up here, in lieu of posting about CGT today, I'm just going to try to fix things up a bit in the settings, etc. For starters, I fixed the setting that prevented most people from leaving comments! Oops! Now anonymous comments are allowed. (How did I not notice that?)

You may have noticed recently that I added labels to a few posts. I'd like to go back through all the posts and try to add labels to help sort things out. That's another project for today/this weekend.

Right now I have a list of good CGT websites, but I would also like to have a list of links to rules for the different games mentioned here. I might also like to have links to different gamesters, but that might overlap a bit with the current list of links. Perhaps that wouldn't be a bad thing...

I do want this to be an actual useful resource for CGT info. Let me know what I can do to help you out!

"Gamesters" is a word I read once used to describe combinatorial gamers. I don't recall where I first saw that word. Does anyone know who coined the term?

## Wednesday, October 7, 2009

### Too much competition?

One of the reasons I'm drawn to studying game theory is that I am naturally competitive. I am never good at sitting down to play a game and not actively trying to win. I don't always strategize with the same ferocity, but rarely do I throw a game on purpose---even if doing so would be a social victory for me. It's easy to be too competitive!

I still hope to teach a combinatorial games course, and a big part of that would be playing different games during class time. Despite the fact that I would not keep track of wins and losses, I am afraid that some students may believe they are losing too often. Whether or not they are a "sore loser," this can still be very discouraging: "How can I do well in this course if I can't even play the games well?"

Of course, you can still gain a really strong understanding of the game while losing. It's not true that being a good player is equivalent to being a good game theorist. This is likely not something that is immediately clear, though. So far I have no actual experience teaching this sort of course, so I don't know how to drive this point home.

Does anyone have any good suggestions? How can I convince students it is okay to play poorly?

People in general are competitive, and this lends well to taking a good interest in games. Unfortunately, that could play against the study if the competitive aspect becomes too great.

## Monday, October 5, 2009

### Algorithms and Game Rules

Most of my posts have had the underlying theme of writing algorithms to play games well, but coming up with the rules for games is also a good exercise in algorithms.

In fourth and seventh grades we were asked to create board games for school. A big difference between the two assignments was that in seventh grade we were asked to actually write out the rules. Although this was for a social studies class, our teacher made it plain that we needed to be very clear with these rules; don't assume the players automatically know how to do everything. She also stressed making it clear that the rules should repeat: after one person's turn, the game should move on to the next player.

Combinatorial games are beautiful in part for the simplicity of the unifying design. To describe a new game, it is enough to describe what the left and right children of any state are. Moving to the next person's turn and when the game ends are all built into the overriding definition. For generic board games, however, enough is going on during one turn that algorithmically describing the steps is more worthwhile.

- Roll the dice.
- Move your piece clockwise around the board that many spaces.
- If you land on unowned property, you may buy it.
- Otherwise, if you land on owned property, you must pay that player the appropriate amount.
- Otherwise, if you land on...
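Those steps translate almost line-for-line into code, which is part of why rule-writing is such good algorithmic practice. A deliberately simplified sketch (flat prices standing in for real rents, and crediting the owner omitted for brevity; all names here are mine):

```python
def take_turn(player, roll, board, owners):
    # One simplified Monopoly-style turn, following the steps above:
    # move clockwise by the roll, then buy or pay depending on who
    # owns the space you land on.
    player["position"] = (player["position"] + roll) % len(board)
    pos = player["position"]
    price = board[pos]                 # price doubles as rent here
    owner = owners.get(pos)
    if owner is None and player["cash"] >= price:
        player["cash"] -= price        # buy the unowned property
        owners[pos] = player["name"]
    elif owner not in (None, player["name"]):
        player["cash"] -= price        # pay rent to the owner
```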

If I ever get to teach a freshman seminar on board games, I will definitely integrate creating our own board games. Creating rules and a prototype has a lot to do with software design, and that process would be very useful!

Thank you, Mrs. Pasco. Your social studies course gave me the best computer science lesson of that school year!

## Friday, October 2, 2009

### Another Rules Change

Sometimes I wonder how much players should expect board game designers to get the rules right the first time. It seems to me that there are lots of games being published these days and it might be difficult to detect all the errors prior to publication. Still, I am often surprised at what is either a small amount of playtesting or a bad translation of playtesting to actual rules.

Having said this, I know I am already guilty of not getting to do enough playtesting when it comes to designing new combinatorial games. It's hard to see how a game will play out unless you can find opponents, which I assume is more difficult in an academic setting than at an actual game manufacturer. Still, it does seem reasonable that small pieces may be missing from a rule (for example, a stalemate rule). It's nice when the solution to these is easy enough to see and implement.

I'm having a more difficult problem resolving a conundrum with Ninja versus Ninja. I've mentioned this game before, but after posting on the topic of confusing/incorrect rules this week, it again jumped into my head.

To quickly explain the uncertainty: this page describes the different possible moves in one turn after a throw of the two four-sided dice. The goal is for your ninja to infiltrate the opposing dojo and either end your move on one of their ninja (thus eliminating it from the game) or return for a number of points equal to the distance your ninja was able to infiltrate. Ninja only have three turns to leave their dojo and return, so you do not often hang out in enemy territory. Instead, ninja will often use the reverse move to go deep and then turn around.

In order to prevent the easy capture of ninja, movement is restricted to either going in a straight line or an L-shaped move, and jumping of other ninja (even your own) is not allowed. The reverse allows your ninja to move in a straight line, then turn back around at one point, though this is only allowed in the opponent's dojo. The rules then suggest that a good strategy is to move your ninja directly next to an opponent's ninja, since then "there is no way for that Ninja to eliminate your piece (the two dice can only roll the numbers 2–8)".

This isn't true, however! If they roll an odd total (and their ninja is in your dojo, deeper in than your piece), they can move further in and then reverse into your ninja. If they roll a 2 and a 3 on the dice, they can move forward two spaces, then reverse three, landing on the ninja they were initially next to.
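It's quick to verify this exhaustively. Here's a small check over all rolls of two four-sided dice, treating a forward-then-reverse move as k steps in and (total - k) back, and asking which rolls can land exactly one square behind the start:

```python
from itertools import product

def adjacent_capture_rolls():
    # Which (d1, d2) rolls let a ninja land one square behind where it
    # started (net -1), i.e. on a piece sitting directly behind it?
    hits = []
    for d1, d2 in product(range(1, 5), repeat=2):
        total = d1 + d2
        # Reversing after k of the total steps gives a net
        # displacement of k - (total - k) = 2k - total.
        if any(2 * k - total == -1 for k in range(1, total)):
            hits.append((d1, d2))
    return hits
```

Every roll with an odd total shows up, confirming the claim above.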

That seems mostly legal. It plays pretty well and can be a bit hard to make happen (see all the requirements above). Sadly, it means that the strategy of moving your piece in position to attempt to block them getting back to the dojo doesn't really work.

Even more unclear and potentially damaging is the combination of an L-move and a reversal. In this scenario, a ninja is moved a few squares sideways (not deeper into either dojo), then towards the opponent's dojo, then reverses and heads backwards. Everything seems to work fine during the reverse---so long as you don't reverse past the space you turned at. For instance, a ninja who has 6 spaces to move could move 1 square to the left, then forwards 2 squares, then backwards 3. In this way, the ninja is essentially able to land on (and kill) a piece one square left and one square backwards from its starting position, even though it didn't roll a 2. This combination is often too strong.

My fiancee and I have changed this to not allow the reversing-with-an-L ninja to use more spaces than occur in the L-shape. Thus, the ninja cannot move further backwards than forwards. This is a pretty tame change, though it plays a lot better than allowing the rules as they appear to be stated.

Other potential rules alterations I would consider:

1) Don't allow ninja to move further backwards than forwards on any reversal. This would actually make true the claim in the "strategies" part of the rules that a neighboring ninja can't be eliminated.

2) Allow ninja to move further backwards than forwards on reversals, but only if the move does not end them on another ninja. This allows more liberal reversals, but still restricts the ninja from using it to capture other ninja.

3) Don't allow reversals at all. (I want to try playing this way.)

Perhaps there are not many Ninja Versus Ninja players out there at this point, but if anyone has played this game and had a similar problem, please let me know!

On the other hand, which other games (seem to) need a change to the rules to make them (more) playable?