A few weeks ago, I posted about tournament ranking algorithms. Since then, I wrote a little algorithm (the simplest I could think of) and have run it on this year's results after each week's games are over. I'm pretty proud of what's come out of it, but I realize there are still a few issues that may need to get smoothed out.
As I mention to my software engineering students, I'd really love for the code to be reusable enough to serve as a ranker for other sorts of tournaments. With this in mind, I've tried to use all the OO-design principles I'm handing out in class. So far, this has been working very well!
I'm definitely at a point where I want to start talking about the algorithm, and listening to complaints about the rankings, etc. The latter is easy (I have Iowa up top) but the former is a bit more scary. If I had a paper, I would simply upload it to arXiv so that it was clearly dated and submitted by me. What do I do if I just have a code base?
One goal for this algorithm is for it to match up with rankings from previous years. If I can closely match those, then perhaps I can not only say something about the strength of the algorithm, but also about how people determine which teams are better.
I mentioned above that this is the simplest algorithm I could think of... so how is it then that I am modifying small parts of it? Part of the flexibility of this system is that you can plug in different functions that reflect the value of a win or loss, or of the difference in score. Thus, using different functions, I get different results (both of the ones I have created so far put Iowa up top; please feel free to complain).
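To make the "plug in different functions" idea concrete, here is a minimal sketch of what such a design could look like. This is my own illustration, not the actual code base: the class and function names (`Ranker`, `margin_value`, `flat_value`) and the scoring scheme are all assumptions, chosen only to show how swapping the value function changes the rankings.

```python
# Hypothetical sketch of a ranker with a pluggable win-value function.
# Names and scoring logic are invented for illustration, not taken from
# the author's actual code base.

from typing import Callable, Dict, List, Tuple

# A game result: (winner, loser, winner_score, loser_score)
Game = Tuple[str, str, int, int]

def margin_value(winner_score: int, loser_score: int) -> float:
    # One possible value function: credit grows with the score margin.
    return 1.0 + (winner_score - loser_score) / 10.0

def flat_value(winner_score: int, loser_score: int) -> float:
    # Another: every win counts the same, regardless of margin.
    return 1.0

class Ranker:
    def __init__(self, value: Callable[[int, int], float]):
        self.value = value  # plug in any win-value function

    def rank(self, games: List[Game]) -> List[Tuple[str, float]]:
        # Credit the winner and debit the loser by the plugged-in value,
        # then sort teams by total credit, highest first.
        totals: Dict[str, float] = {}
        for winner, loser, ws, ls in games:
            v = self.value(ws, ls)
            totals[winner] = totals.get(winner, 0.0) + v
            totals[loser] = totals.get(loser, 0.0) - v
        return sorted(totals.items(), key=lambda kv: -kv[1])

# Toy schedule (made-up scores) to show both value functions in action.
games = [("Iowa", "Purdue", 24, 7),
         ("Iowa", "Penn State", 23, 20),
         ("Purdue", "Penn State", 14, 10)]

print(Ranker(margin_value).rank(games))
print(Ranker(flat_value).rank(games))
```

The point of the design is that `Ranker` never needs to change: only the value function does, which is what makes it easy to experiment with (and argue about) different notions of how much a win is worth.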
There are a lot of these ranking algorithms (tons here) but how can I determine whether any are the same as mine without looking at the source of each of them? I think I'll probably just put something up on arXiv...