How The BCS Works & How The SEC Teams Were Measured In The Current Standings

The first Bowl Championship Series standings are out. SEC fans can rejoice, at least for this week, as Alabama and LSU are ranked No. 1 and No. 2. This is unlikely to hold until the end of the season, primarily because the two schools play one another on November 5. There is also no consensus between the various parts of the BCS as to who the top two teams really are yet.

There are three components to the BCS standings: two polls, conducted by Harris Interactive and USA Today, and a composite of six computer rankings. Actually, the phrase “computer ranking” is a bit misleading; these are simply proprietary mathematical formulae developed by individuals. Each of the three components is weighted equally and averaged to determine the final BCS score and rank.
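
For readers who want to see the arithmetic, here is a rough sketch in Python of how the three equally weighted components combine into a single BCS score. The component values below are made up for illustration; how each one is actually calculated is explained in the sections that follow.

    # Rough sketch: the final BCS score is the simple average of the three
    # component percentages (Harris poll, USA Today poll, computer composite).
    # The numbers passed in below are hypothetical, not real BCS figures.
    def bcs_score(harris_pct, coaches_pct, computer_pct):
        return (harris_pct + coaches_pct + computer_pct) / 3

    print(round(bcs_score(0.9856, 0.9800, 0.9300), 4))  # prints 0.9652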

Although the Associated Press media poll is the most frequently cited (by the media) college football poll, it no longer plays a role in the BCS standings. The AP discontinued its participation in the BCS after the 2004 season, and the BCS commissioned Harris Interactive Inc., a market research company, to produce a replacement poll.

The Harris poll surveys 115 panelists — former coaches, players, administrators, and current and former media members — who are randomly selected from nominees submitted by the Division I FBS membership. Each of the 11 conferences nominates 30 individuals, from which Harris randomly selects 10 each (for a total of 110). The independent schools — Notre Dame, Army, Navy and Brigham Young — submit nominees for the remaining five spots.

The USA Today poll is a poll of 59 current college football coaches. No conference has more than six coaches on the panel. The SEC’s representatives for this year are Gene Chizik, Les Miles, Mark Richt, Nick Saban, Steve Spurrier and James Franklin.

The Harris and USA Today polls each count towards one-third of the final BCS score. This is based on the total points earned in each poll, not the ranking. Each voter awards 25 points for a first-place vote, 24 for second, and so on down to 1 point for 25th, so a team that swept every first-place vote would earn 2,875 points from the Harris poll’s 115 voters and 1,475 points from the USA Today poll’s 59 coaches. In last year’s final BCS standings, for example, No. 1 Auburn had 2,809 points in the Harris poll, or 98.56% of the 2,850 points available that week (not every panelist cast a ballot). Thus, a score of .9856 counted one-third towards Auburn’s final BCS ranking.
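
Spelled out, each poll component is simply points earned divided by points available. A quick sketch using the Auburn figures above (and assuming, as the 98.56% figure implies, that 2,850 Harris points were available in that final poll):

    # Poll component = points earned / maximum points available that week.
    # Figures taken from the Auburn example above; the 2,850 maximum is
    # what the published 98.56% figure implies was available.
    harris_points = 2809
    harris_max = 2850
    print(round(harris_points / harris_max, 4))  # prints 0.9856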

The third component is the computer/mathematical rankings. The BCS uses six different rankings, although only four scores are used for each school. The highest and lowest scores are omitted. The remaining four are weighted equally and averaged to produce the final third of the BCS score.

The mathematical rankings are all proprietary formulae, and there is little publicly available information on how most of them work. All are designed to calculate and account for differences in strength of schedule. Beyond that it’s basically a free-for-all.

Two of the six mathematical rankings have been used by the BCS since its inception in 1998: those produced by Jeff Sagarin for USA Today and by the team of Jeff Anderson & Chris Hester. Sagarin is an MIT-trained mathematician. Anderson & Hester were college roommates at Washington who started publishing their rankings with the Seattle Times in 1994.

The remaining mathematical rankings are published by Richard Billingsley, Wes Colley, Kenneth Massey and Peter Wolfe. Billingsley has been independently ranking college football teams since the 1970s. Colley is a senior research scientist at the Center for Modeling, Simulation, and Analysis at the University of Alabama-Huntsville. Massey is an assistant professor of mathematics at Carson-Newman University in Tennessee. Wolfe is a medical doctor who specializes in infectious diseases.

Of the group, Billingsley and Colley have been the most open about how their rankings work. Colley published a 23-page explanation that, unfortunately, requires a strong grasp of algebra and calculus. Billingsley has given numerous interviews where he’s attempted to explain things in more layperson-friendly terms:

My system is probably more different from the other computer systems. The other five guys are looking at it from a purely mathematical standpoint — don’t get me wrong, I applaud their systems and I have tremendous respect for what they do. But my system is not purely mathematically based. My rankings are based on rules that are put in place from a fan’s perspective, things I think that are important to rank college football teams. My rankings are closely related to human voters, an improved AP poll, if you will. It reacts to games more like a human voter but does it without biases like the name of team, the conference they play in, etc. It’s mainly concerned with wins, losses, strength of schedule (SOS) and head-to-head results. The core of my system is not something you see in most computers. It’s not necessarily better — in purely mathematical terms, it’s not as good — but the public relates very well to the system.

Billingsley, Sagarin and Massey all use the previous season’s standings as a starting point for the current year’s rankings. The other three start with a “blank slate.” Some, like Anderson & Hester, don’t release initial rankings until several weeks into the regular season. None of the rankings, per BCS orders, account for margin of victory, although Sagarin continues to produce a separate, non-BCS ranking that does account for it.

Not all six mathematical/computer rankings count towards the final BCS score. The BCS assigns points based on each ranking (25 points for 1st, 24 for 2nd, and so on down to 1 point for 25th), then drops the best and worst ranking for each team. The remaining four scores are summed and divided by 100 to produce the percentage that is averaged with the two polls.
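
Here is a rough sketch of that computer calculation in Python. The six rankings in the example are hypothetical, and I’m assuming (as the BCS point scale implies) that a ranking outside the top 25 earns zero points:

    # Computer component: convert each of the six rankings to points
    # (25 for 1st, 24 for 2nd, ... 1 for 25th, 0 outside the top 25),
    # drop the single best and single worst score, then divide the
    # remaining four-score sum by 100.
    def computer_component(rankings):
        points = sorted(max(26 - r, 0) for r in rankings)
        best_four = points[1:-1]   # drop the lowest and highest point totals
        return sum(best_four) / 100.0

    # Hypothetical team ranked 4th, 5th, 6th, 7th, 8th and 12th by the six computers:
    print(computer_component([4, 5, 6, 7, 8, 12]))  # prints 0.78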

While we may never know exactly how the six rankings work, we can examine their impact on the standings. In 2010, there was little disagreement over the No. 1 and No. 2 teams. Auburn (13-0) finished the regular season at No. 1 in the Harris poll and all six computer rankings. Oregon (12-0) was No. 1 in the USA Today poll and No. 2 in every other ranking. The only significant disagreement came over the No. 4 team. This was important because, under BCS rules, any team that finishes in the top four is guaranteed an at-large berth. Wisconsin (11-1) finished No. 4 in both polls. The computer rankings told a different story. Only Billingsley agreed that Wisconsin was 4th, and his was the discarded high ranking. Colley and Sagarin both ranked Wisconsin 12th, and only one of those could be discarded as the low score. Overall, Wisconsin averaged 8th in the computers. This allowed Stanford, which was ranked 5th in both polls and in the computer average, to slip ahead by the slightest of margins and claim a BCS berth. (Wisconsin had already qualified for the Rose Bowl as Big Ten champion.)

The previous season, 2009, the BCS faced four undefeated teams in the top four. Alabama was the consensus No. 1. Both polls ranked Texas as No. 2. The computer average, however, had Cincinnati as No. 2. Only Colley and Billingsley had Texas ranked 2nd. Massey and Sagarin actually ranked 12-1 Florida, who had lost to Alabama, as No. 2 ahead of four other undefeated teams.

This year’s initial BCS standings show substantial disagreement between the computers and the polls, which isn’t surprising given that there are still a large number of undefeated teams. No. 1 LSU finished first only in the Harris poll. The Tigers are second in the USA Today poll and third in the composite computer rankings. Among the computers, only Billingsley ranks LSU at No. 1, while Massey has them at No. 5. The current computer leader is overall No. 4 Oklahoma State, which is first according to Anderson & Hester, Colley, Massey and Wolfe. No. 2 Alabama finished no worse than third in any of the computer rankings and is currently Sagarin’s No. 1.

Arkansas, South Carolina and Auburn also made the initial BCS top 25. No. 9 Arkansas benefited from the computers, which ranked the Razorbacks 8th, two spots ahead of both polls; Arkansas’ worst computer ranking was Massey’s 17th. No. 20 Auburn was also ranked higher by the computers (14th) than by the polls (21st and 23rd). By contrast, No. 14 South Carolina fared about the same with the computers (13th overall) as with the pollsters (13th and 12th). Auburn also produced one of the wider margins among the computers: Sagarin ranks the Tigers 10th while Colley has them 25th. Remember, though, that the highest and lowest scores are omitted from the BCS calculation.

 
