Thursday, March 24, 2016

Game Description: Manalath

Thanks to an old email from Svenja, I just started playing an excellent game today: Manalath.

Manalath is played on a hex of hexes (though I've just been playing on a rectangular(ish) hex grid since that's all I have today).   
  • Each player has their own color, but can play either color piece.
  • You're not allowed to create connected components of one color of size 6 or bigger.  
  • If you (or your opponent) create a connected component in your color of 5 pieces, you win immediately.  
  • However, if at the end of your turn, a connected component of your color of 4 pieces exists, you lose (unless you've already won).
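The group-size rules above are easy to pin down in code.  Here's a minimal sketch (the board representation, axial coordinates, and function names are my own illustrative choices, not from any official implementation): a flood fill measures each connected group, and the end-of-turn check applies the win-before-loss ordering from the rules.

```python
from collections import deque

# Axial-coordinate neighbors on a hex grid (illustrative representation).
HEX_DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def group_size(board, start):
    """Size of the same-colored connected group containing `start`.
    `board` maps (q, r) axial coordinates to 'B' or 'W'."""
    color = board[start]
    seen = {start}
    queue = deque([start])
    while queue:
        q, r = queue.popleft()
        for dq, dr in HEX_DIRS:
            cell = (q + dq, r + dr)
            if cell in board and cell not in seen and board[cell] == color:
                seen.add(cell)
                queue.append(cell)
    return len(seen)

def end_of_turn_result(board, mover_color):
    """'win' if a group of 5 in the mover's color exists,
    otherwise 'lose' if a group of 4 exists, otherwise None.
    The win check comes first, matching 'unless you've already won'."""
    sizes = {group_size(board, cell)
             for cell, c in board.items() if c == mover_color}
    if 5 in sizes:
        return 'win'
    if 4 in sizes:
        return 'lose'
    return None
```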
The game is super fun.  As Juan Beltrán said after playing a bunch, "This game has really high fun per minute."

This was apparently designed by a fan of Yavalath, not by one of Cameron Browne's programs. :)  After having played Yavalath a bunch the past two years, I find Manalath even more fun.  There are all sorts of unexpected winning moves and also unexpected escapes. 

It definitely belongs on the table.

Upcoming CGT Events

There are a bunch of academic CGT events coming up!


June 24-27 (Fri-Mon) is the CMS Summer meeting at the University of Alberta, Edmonton.  There's going to be a special session on CGT in honor of Richard Guy's 100th birthday!  It sounds like Richard will be there!  Sweet!  It's not yet clear which day will have the special session.

Aug 10-13 (Wed-Sat): Games-at-Dal(housie) 2016 in Halifax, NS.  Last year's was really productive for me; I really hope to make it to this one.

Oct 6-9 (Thurs-Sun): INTEGERS 2016 at University of West Georgia.  Integers is back on the menu! :)


Jan 25-27 (probably): CGTC2 in Lisbon.

June or July (maybe): CGT and/or Games on Graphs in Lyon, France.

July 24-28: 2nd Mathematical Congress of the Americas in Montreal.  There is the chance of having a CGT special session here.

Monday, March 21, 2016

Two different Anti-Clobbers

In a post many years ago about Martian Chess, I mentioned trying to play a Clobber variant where each clobbering move destroys your own piece instead of the opponent's.  In the comments, Michael Albert replied that he had already studied this game a bit and was referring to it as Anti-Clobber. 

I liked that name and ran with it, and mentioned it in some talks, etc.  Then, a few months ago, I found out that there's already a different game with this name and it's implemented in CGSuite.  Clearly that's enough reason to change the name of the game I've been calling Anti-Clobber.

The actual Anti-Clobber is really like playing Clobber in reverse: each turn, instead of clobbering an adjacent opposing piece, you move a piece into an adjacent unoccupied spot and place a new opponent piece in the space you just left.  It's definitely non-trivial, interesting, and fun.  There are some sneaky moves you can make.
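The move rule described above fits in a few lines of code.  This is just an illustrative sketch; the board representation, square-grid adjacency, and the 'L'/'R' piece labels are my own assumptions, not from CGSuite:

```python
def anti_clobber_move(board, src, dst, mover):
    """Apply one Anti-Clobber move: slide the mover's piece from `src`
    to the empty adjacent cell `dst`, then place a NEW opponent piece
    in the vacated square.  `board` maps (row, col) to 'L' or 'R';
    empty squares are simply absent from the dict."""
    assert board.get(src) == mover, "must move your own piece"
    assert dst not in board, "destination must be unoccupied"
    assert abs(src[0] - dst[0]) + abs(src[1] - dst[1]) == 1, "must be adjacent"
    new_board = dict(board)
    new_board[dst] = mover
    new_board[src] = 'L' if mover == 'R' else 'R'  # opponent piece appears
    return new_board
```

Note how this is exactly Clobber run backwards: undoing a clobbering move would likewise slide a piece back and restore an opponent piece in the square it came from.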

Good news:  This is a great game!

Bad news: It is definitely more Anti-Clobber than the game I was considering before.  It also fits some of the other names I was considering better:
  • Reverse Clobber
  • Unclobber
  • Backwards Clobber
So, I need to change the name to something different.  Here's what I've brainstormed so far:
  • Self-Clobber
  • Suicide-Clobber  (too morbid?)
Got any other good ideas?  Got any feelings towards these?  Let me know via email or in the comments.

Tuesday, March 15, 2016

Final Result: AlphaGo wins 4-1 vs Lee Sedol

AlphaGo won the final game vs Lee Sedol last night.  After Lee won the fourth game, I was really hoping he would also claim the fifth, showing that he'd figured out how to beat the new AI, even after an initial string of losses.

One of AlphaGo's early moves was seen by many as a mistake.  The AI rallied, however, to win the game.

My biggest question now is: how long until a human player can defeat AlphaGo?  By the time a human can, however, there will almost certainly be another machine able to defeat that player.  Perhaps human players will be able to see flaws somehow inherent to the new Go algorithms.  Probably by that time, a new algorithm will be spun off of the MCTS + Deep Learning combo that fixes those flaws.

I know a bit about Arimaa, a game specifically designed to be more difficult for computers to play.  I hoped that perhaps this would be a game where humans still outpaced our AI opponents.  Unfortunately, the computers seem to have overtaken the humans last year.

We had a good run.  Looks like the computers will be in control soon.

Sunday, March 13, 2016

Lee Sedol wins Game 4! (1-3 vs AlphaGo)

Lee Sedol took home the victory late last night/early this morning.  He started the game by copying his opening moves from Game 2, expecting AlphaGo to follow suit, which it did.  Lee didn't follow his old moves for very long, however.  From reading some commentary on GoGameGuru, it sounds like he went for a tough all-or-nothing strategy instead of letting the field get divided up into lots of little battles that AlphaGo could weigh and evaluate well. 

Around 78 moves in, he made a move that was highly praised by onlookers, and 10 moves later, AlphaGo started making a bunch of moves that commentators thought were very bad.  Is it because there's an inherent weakness in algorithms that utilize Monte Carlo Tree Search?  Or is it because AlphaGo couldn't properly see the strength in Lee's move at 78?  If that move only works if a few other exact plays get made, then perhaps AlphaGo never tried those paths and didn't see what was coming. 

I am really excited to see that Lee, representing Team Humans, managed to recognize and exploit a weakness in AlphaGo.  Although he might not have another winning strategy to employ in the next game (as he'll be playing as Black instead of White), I still hope he can manage to push out a win.

Saturday, March 12, 2016

AlphaGo 3-0 (vs Lee Sedol)

AlphaGo sealed the match against Lee Sedol with a third straight win.  Wow.  YouTube link to the game

Is this the end of humans competing with computers in Go?  Not exactly.  I just read a great reddit post about Chess after Garry Kasparov vs. Deep Blue in 1997.  Humans certainly didn't give up, and only six years later Kasparov drew against the more powerful Deep Junior.

I expect that humans will get more used to playing computer opponents and perhaps tip the scales back.  As more games with strong computer players are played, the new strategies can be studied and humans can adapt to them.

It'll be interesting to see how that plays out.  The next two games between AlphaGo and Lee might be indicative of how quickly humans can adapt to the new Go gamescape.

Friday, March 11, 2016

SmartGo article: AlphaGo Don't Care

Bob Hearn shared this wonderful article by Anders Kierulf: AlphaGo don't care

I loved this piece, mostly because it really hits on an important aspect of the theoretical study of combinatorial games.  There is a real "psychology" aspect to a lot of the strategies.  I am not familiar with the Go terminology of the different move/structure types in the article (e.g. "tenuki", "extend at the bottom", and "peep").  These are human ways to describe different strategies and patterns. 

As the article reiterates: "AlphaGo don't care."  It doesn't lose universal focus to get stuck in local duels.  Each turn, it reevaluates the gameboard as a whole.  It doesn't care about the order of the previous plays, data that doesn't change the outcome of the game.

The same is true in CGT.  The options from a position depend only on the information of that position, not the history of plays nor the psychological battle between the two players.  The history is irrelevant to the value of a position (though that value may be very hard to calculate exactly).

From the article:
Lee Sedol threatens the territory at the top with 166? AlphaGo don’t care, it just secures points in the center instead. Points are points, it doesn’t matter where on the board they are.
It doesn't matter that Lee Sedol last moved near the top, AlphaGo just goes wherever it thinks it can amass the most points, not where it thinks the other player is going to focus their efforts.  The actual temperature of the regions is more important than which pieces were the most recent plays.

The third game is tonight!

Thursday, March 10, 2016

AlphaGo 2-0 (vs Lee Sedol)

AlphaGo defeated Lee Sedol again last night.  BBC has a nice story about the game.  It looks like AlphaGo's moves in this game were a bit less shocking to the 9-dan champion Lee.  My second-hand perspective on this is a bit dire; I'll admit that I'm rooting for humanity here.  If Lee has a hard time reviewing this game and figuring out where he might have made some grand mistakes, the likelihood of improvement in the next two days (before the next game) is low.  Unless he can spot a weakness in AlphaGo's play, the Holy Grail of Go might fall.

YouTube has a video of the second game.

Good luck to Lee in the next game!

Wednesday, March 9, 2016

AlphaGo 1-0 (AlphaGo vs Lee Sedol)

Last night, the computer program AlphaGo defeated professional 9-dan Go player Lee Sedol in the first game of a five-game match.  This is the first time a computer player has defeated a 9-dan player without a handicap.  The second game occurs tonight.  YouTube has the video of the first game and Wikipedia has a good article for the entire 5-game challenge.

This match was highly anticipated after AlphaGo defeated 2-dan Fan Hui back in October.  That was the first time a computer defeated a professional Go player on a full-sized (19 x 19) board.  It was at this point that it became clear that AlphaGo had a chance against even the strongest human opponent.  The hype was set for the challenge with Lee Sedol.  The New York Times published a good article in the aftermath of the victory over Fan Hui.  They brought in some experts (including my colleague Bob Hearn) to explain better how AlphaGo works.

AlphaGo uses a combination of Monte Carlo Tree Search (MCTS) algorithms and newer Deep Learning techniques to win.  The MCTS algorithms have been around for about a decade or so, and brought about the first victories by computer players on 9x9 boards.  The basic idea is that the algorithm randomly plays a bunch of complete games to their end, using the win/loss result of each playout to decide which line to explore next.  By playing a few thousand games and intelligently choosing which game to try next, the search can help narrow down which move to make.
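That playout-and-refine loop can be sketched concretely.  The following is a generic UCT-style MCTS, not AlphaGo's actual code, and to keep it self-contained it plays a stand-in game of my choosing: single-pile Nim, where players alternately take 1 or 2 stones and whoever takes the last stone wins.

```python
import math
import random

def moves(pile):
    """Legal moves in single-pile Nim: take 1 or 2 stones."""
    return [m for m in (1, 2) if m <= pile]

class Node:
    def __init__(self, pile, parent=None):
        self.pile, self.parent = pile, parent
        self.children = {}           # move -> child Node
        self.wins = self.visits = 0  # wins for the player who moved INTO this node

def rollout(pile):
    """Random playout; returns 1 if the player to move at `pile` wins."""
    my_turn = True
    while pile > 0:
        pile -= random.choice(moves(pile))
        my_turn = not my_turn
    return 0 if my_turn else 1  # whoever took the last stone won

def mcts_best_move(pile, iterations=3000):
    root = Node(pile)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 while the node is fully expanded.
        while node.pile > 0 and len(node.children) == len(moves(node.pile)):
            node = max(node.children.values(), key=lambda c:
                       c.wins / c.visits
                       + math.sqrt(2 * math.log(node.visits) / c.visits))
        # 2. Expansion: add one untried child move.
        if node.pile > 0:
            move = random.choice([m for m in moves(node.pile)
                                  if m not in node.children])
            node.children[move] = Node(node.pile - move, parent=node)
            node = node.children[move]
        # 3. Simulation: random playout from the new position.
        result = rollout(node.pile)
        # 4. Backpropagation: flip the result's perspective at each level.
        while node is not None:
            node.visits += 1
            node.wins += 1 - result
            result = 1 - result
            node = node.parent
    # Recommend the most-visited move from the root.
    return max(root.children, key=lambda m: root.children[m].visits)
```

From a pile of 4, the winning move is to take 1 (leaving a losing pile of 3), and a few thousand iterations are plenty for the search to concentrate its visits there.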

Deep Learning comes into play in two ways in AlphaGo.  First, it can approximate the win/loss value of each game early enough that the entire thing doesn't need to be played out.  (This may sound like a bad idea, but the game played by a pure MCTS algorithm is an approximation anyways, as the later moves are chosen completely at random.)  Second, it helps choose which of the games to simulate.  I don't know enough about deep learning (yet) to better describe the details of how it solves each of these problems.

This match is very reminiscent of Garry Kasparov's loss against Deep Blue in 1997.  That was the first time the reigning human world Chess champion lost to a computer.  At that time, Go was still unreachable by computer players, a Holy Grail that would require stronger computers and more sophisticated algorithms to attain.  After AlphaGo's win yesterday, perhaps the Grail will be captured soon.

As Bob Hearn points out in the NYT article:
Go was the last bastion of human superiority at what’s historically been viewed as quintessentially intellectual. This is it. We’re out of games now. This is seen by some as a harbinger of the approaching singularity.
I'm very anxious to see what happens over the course of the next week!

Update: post about the second game.

Saturday, March 5, 2016

2016 Portuguese CG Tournaments

Yesterday, Ludus ran the annual Math Games tournament for elementary, middle, and high school students.

Here's the English version of the site with lots of pictures:  (I'm only familiar with two of the games they played.)