Ivars Peterson's MathTrek
September 6, 2004
One problem is that there are 117 teams in Division I-A of the National Collegiate Athletic Association (NCAA), but they play only 10 to 13 games each. No team plays every other team. Moreover, some teams play games against much weaker (or stronger) opponents than others do. Not all schedules are created equal. And, in the end, there are no playoffs among the top teams to determine a champion.
For a long time, the crowning of the national college football champion has been solely in the hands of human judges. The Associated Press (AP) set of rankings represents the views of a select group of sportswriters and commentators. The USA Today/ESPN poll represents the opinions of college football coaches. Together, the two polls determine the champion. Sometimes, the polls don't agree. And sometimes the choice doesn't make sense, instead reflecting human biases and petty politicking.
The year 1998 brought a change: the use of a complicated mathematical formula to determine which two teams play for the national championship in a climactic, end-of-season bowl game. This formula produces the Bowl Championship Series (BCS) standings, and the coaches poll automatically anoints the winner of the final bowl game matching up the number 1 and number 2 teams in the standings as the national champion.
Last year, the BCS formula collided with human expectations, and, to many people, the wrong two teams played for the championship.
The original BCS formula was a witches' brew of the two national polls, several computer rankings, and various adjustments for strength of schedule and team records.
In this system, a team's standing was derived from the sum of four numbers.
The first number is the mean ranking earned by a team in the AP sportswriters poll and the USA Today/ESPN poll.
The second number is an average of computer rankings, drawn from seven or eight sources selected to offer different points of view in ranking teams.
The third number takes into account each team's schedule strength. The average winning percentage of each team's opponents is multiplied by 2/3 and added to 1/3 times the winning percentage of its opponents' opponents. All the teams are then ranked, with number 1 going to the team with the most difficult schedule. This rank is then divided by 25 to give the third number in the BCS formula.
The fourth number in the BCS sum is the total number of losses by each team.
Once these four numbers are added together, a final quantity for "quality wins" is subtracted to account for victories against top teams. This "reward" ranges from 1 for beating the number 1 team to 0.1 for beating a number 10 team.
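Putting the four numbers and the quality-wins adjustment together, the original BCS sum can be sketched in a few lines of Python. This is an illustrative reading of the description above, not the official computation, and all the team data below are invented.

```python
def schedule_strength(opp_win_pct, opp_opp_win_pct):
    # Schedule strength before ranking: 2/3 times opponents' winning percentage
    # plus 1/3 times opponents' opponents' winning percentage.
    return 2 / 3 * opp_win_pct + 1 / 3 * opp_opp_win_pct

def bcs_score(ap_rank, coaches_rank, computer_ranks, schedule_rank, losses, quality_wins):
    """Original BCS sum; lower scores are better. quality_wins lists the
    ranks of defeated top-10 opponents."""
    poll_avg = (ap_rank + coaches_rank) / 2                   # number 1: mean poll ranking
    computer_avg = sum(computer_ranks) / len(computer_ranks)  # number 2: computer average
    schedule_component = schedule_rank / 25                   # number 3: schedule rank / 25
    base = poll_avg + computer_avg + schedule_component + losses  # number 4: losses
    # Quality-win reward: 1.0 for beating the No. 1 team down to 0.1 for No. 10.
    reward = sum((11 - r) / 10 for r in quality_wins if 1 <= r <= 10)
    return base - reward

# A hypothetical team: ranked 2nd and 3rd in the two polls, averaging 2.5 in the
# computers, owning the 4th-toughest schedule, with one loss and a win over No. 5.
print(bcs_score(2, 3, [2, 3, 2, 3], 4, 1, [5]))
```

Because every component is simply added in, a tweak to any one weighting (say, dividing the schedule rank by 20 instead of 25) shifts the final sums, which is exactly the sensitivity the researchers point to next.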
"It's not difficult to imagine that small changes in any of the above weightings have the potential to alter the BCS Standings dramatically," Thomas Callaghan, Peter J. Mucha, and Mason A. Porter of the Georgia Institute of Technology remark in an article in the September Notices of the American Mathematical Society.
To demonstrate how weighting different factors can influence the rankings, the Georgia Tech mathematicians developed a simple ranking method based on random walkers following paths on a skewed network.
In this probabilistic model, each member of a collection of independent random walkers casts a single vote for the team it ranks as the best. Each walker occasionally considers changing its vote by examining the outcome of a single game selected randomly from those played by its favorite team, typically but not always recasting its vote for the winner of that game.
The behavior of the random walkers "is defined so simplistically that it is reasonable to think of them as a collection of trained monkeys," the researchers say. The key element is an implementation of a "my team beat yours" mentality.
Taken together, these random walkers produce a ranking of the top teams. Depending on the probability chosen for vote switching, the scheme can resemble a ranking in which team and opponent records matter much more than outcomes of particular games or one that depends strongly on which teams won and lost against which other teams.
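The walkers' vote-switching rule is simple enough to simulate directly. The sketch below is a minimal Monte Carlo version of the idea, not the authors' code; the mini-season of game results and the parameter values are invented for illustration.

```python
import random
from collections import Counter

# Invented mini-season: each pair is (winner, loser).
games = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("B", "D"), ("A", "D")]

def rank_teams(games, n_walkers=200, n_steps=20000, p=0.75, seed=1):
    """Each walker votes for one team; p > 1/2 is the chance a walker sides
    with the winner when it reconsiders a random game played by its team."""
    rng = random.Random(seed)
    teams = sorted({t for g in games for t in g})
    games_of = {t: [g for g in games if t in g] for t in teams}
    votes = [rng.choice(teams) for _ in range(n_walkers)]
    for _ in range(n_steps):
        i = rng.randrange(n_walkers)                    # one walker reconsiders...
        winner, loser = rng.choice(games_of[votes[i]])  # ...one of its team's games
        votes[i] = winner if rng.random() < p else loser
    return Counter(votes).most_common()                 # teams ordered by vote count

print(rank_teams(games))  # undefeated team A should attract the most votes
```

Varying p tunes the ranking between the two extremes described above, from one dominated by overall records toward one that hinges on exactly who beat whom.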
"Although this random walker ranking system is grossly simplistic, we have found that this algorithm does a remarkably good job of ranking college football teams," Callaghan and his coworkers conclude, "or at least arguably as good as other available systems."
In the absence of sufficient detail to reproduce the official BCS computer rankings, they add, this simple random walker ranking scheme can be useful for analyzing the effects of possible changes in the BCS formula.
The mathematicians contend that the problem with the original BCS formula was that it included separate factors for schedule strength and quality wins when those factors were in fact already accounted for in the polls and computer rankings. "Adding these factors again after the polls and computer rankings are determined disastrously double-counts these effects," the researchers say.
Last spring, Callaghan and his coworkers submitted advance copies of their article to BCS decision makers. Interestingly, on July 15, BCS officials unveiled what they described as a "simpler and more precise" ratings system. Schedule strength, losses, and quality wins went out the window, leaving just the human polls and computer rankings. However, there's no evidence that these changes were directly prompted by the mathematicians' input.
But, even with the new formula, there are tweaks that could potentially produce curious results. There are three components: the two polls and the computer rankings, all equally weighted. So, the contribution of the human polls goes from one-quarter to two-thirds of the total. It's perhaps not surprising that human expectations are then more likely to be met.
Moreover, the BCS no longer uses the actual ranking in each poll. Instead, the relevant number is the percentage of possible points a team receives from the voters. As for the computer rankings, only four of six ratings systems contribute to the BCS total because each team's highest and lowest scores are dropped.
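The two revised components can be sketched as follows. The point totals and computer scores here are made-up numbers, since the article doesn't specify the actual scales.

```python
def poll_component(points_received, max_points):
    # New formula: the fraction of possible poll points, not the raw ranking.
    return points_received / max_points

def computer_component(six_ratings):
    # Drop each team's highest and lowest computer scores; average the middle four.
    trimmed = sorted(six_ratings)[1:-1]
    return sum(trimmed) / len(trimmed)

# Hypothetical team: six computer scores on an invented 25-point scale,
# and 1500 of a possible 1625 poll points.
print(computer_component([25, 24, 25, 22, 25, 23]))  # mean of the middle four
print(poll_component(1500, 1625))                    # share of possible poll points
```

Trimming the extremes means a single outlier computer ranking can no longer swing a team's total, presumably part of what officials meant by "simpler and more precise."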
Who knows what these changes will spawn? Maybe the monkeys can help us out.
Copyright © 2004 by Ivars Peterson
2004. Of mathematics and football. American Mathematical Society press release. Aug. 11. Available at http://www.ams.org/new-in-math/press/notices-mucha.html.
2003. Simulated simians pick best football teams as well as pros. Georgia Tech news release. Nov. 18. Available at http://www.gatech.edu/news/item.php?id=213.
Callaghan, T., P.J. Mucha, and M.A. Porter. Preprint. Random walker ranking for NCAA Division I-A football. Available at http://www.math.gatech.edu/~mucha/BCS/bcsmanuscript.pdf.
______. 2004. The Bowl Championship Series: A mathematical review. Notices of the American Mathematical Society 51(September):887-893. Available at http://www.ams.org/notices/200408/fea-mucha.pdf.
Peterson, I. 1998. Who's really no. 1? MAA Online (Dec. 14).
Information about rankings of U.S. college football teams can be found via links at http://homepages.cae.wisc.edu/~dwilson/rsfc/rate/ (David L. Wilson, University of Wisconsin).
A collection of Ivars Peterson's early MathTrek articles, updated and illustrated, is now available as the Mathematical Association of America (MAA) book Mathematical Treks: From Surreal Numbers to Magic Circles. See http://www.maa.org/pubs/books/mtr.html.