Can someone explain why the BCS computers have us ranked #12? With our schedule, does one loss make that big a deal according to the computers? Does anyone know or understand the equation?
We currently range between 4th and 20th in the computer rankings. But simply put, the teams we've beaten haven't turned out to be very good, or at least there's no way to confirm yet that they're any good. Every team we've beaten has at least one other loss, and the team we lost to has two other losses. Our wins have come against teams with the following records:
1-6
3-3
1-5
4-2
4-3
3-3
total: 16-22
Compare that with Stanford, the only 1-loss team with an average computer ranking higher than Alabama's. Their loss came to 1-loss #4 Oregon. And the records of the teams that they've beaten:
3-3
3-3
2-5
4-3
5-2
total: 17-16
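If you want to sanity-check those tallies, the arithmetic is trivial to script. The records are hard-coded straight from the two lists above, so they're only as current as this post:

```python
# Combined records of each team's beaten opponents, copied from the
# lists above (only as current as this post).

bama_opponents = [(1, 6), (3, 3), (1, 5), (4, 2), (4, 3), (3, 3)]
stanford_opponents = [(3, 3), (3, 3), (2, 5), (4, 3), (5, 2)]

def combined_record(opponents):
    """Sum the wins and losses across every beaten opponent."""
    wins = sum(w for w, _ in opponents)
    losses = sum(l for _, l in opponents)
    return wins, losses

print(combined_record(bama_opponents))      # (16, 22) -> .421 win pct
print(combined_record(stanford_opponents))  # (17, 16) -> .515 win pct
```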
Considering that all the other teams currently ranked ahead of us in the computers are undefeated, that sounds pretty reasonable to me.
popechild pretty much nailed it as to why Bama is 12th in the computer rankings, though Stanford's loss actually came to undefeated (not 1-loss) Oregon, who is #1 in all three human polls, #8 by the computers, and #2 in the BCS.

Though each computer formula works differently according to its creator's intent, in general the computers only have us 12th because, this early in the season, no team has a large enough sample of games for the models to accurately factor in strength of schedule and, more importantly, performance vs. common opponents. Consequently, simple W-L record is weighted proportionately more heavily, and a 6-1 team doesn't rank very highly next to ten 6-0 teams. As everybody builds a larger resume, the various computer algorithms will get a better picture of the relative value of beating one team vs. another (or the penalty that should be assessed for losing to one team vs. another). As the season moves along, the cream will rise.
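To put a toy number on that: here's a crude illustration (mine, emphatically not one of the actual BCS computers) of how the weight placed on raw W-L record decides whether a 6-0 team with a soft schedule outranks a 6-1 team with a real one. All records here are invented.

```python
# Toy rating: a weighted blend of a team's own win pct and its opponents'
# combined win pct. NOT any actual BCS computer -- just an illustration
# of how heavily weighting raw W-L record changes the ordering.

def win_pct(wins, losses):
    return wins / (wins + losses)

def toy_rating(record, opp_records, wl_weight):
    own = win_pct(*record)
    opp_w = sum(w for w, _ in opp_records)
    opp_l = sum(l for _, l in opp_records)
    return wl_weight * own + (1 - wl_weight) * win_pct(opp_w, opp_l)

# 6-1 against decent opponents vs. 6-0 against cupcakes (made-up records).
one_loss = ((6, 1), [(4, 2), (4, 3), (3, 3), (3, 3), (5, 2), (4, 2), (5, 1)])
unbeaten = ((6, 0), [(1, 5), (2, 4), (1, 5), (2, 4), (1, 5), (2, 4)])

for wl_weight in (0.75, 0.50):
    print(wl_weight,
          round(toy_rating(*one_loss, wl_weight), 3),
          round(toy_rating(*unbeaten, wl_weight), 3))
# At 0.75 the unbeaten team edges ahead; at 0.50, once schedule strength
# counts for more, the 6-1 team passes it.
```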
About the BCS in general... Without getting too technical, here's the abbreviated history of the BCS formula and the changes made over the years in how it has treated the computer polls:
1998: Formula devised averaging the AP and Coaches polls (Human average) and averaging three computer rankings (Computer average). Human and computer averages were combined with a strength of schedule factor, with a penalty applied for each loss. According to this formula, human judgement accounted for 1/2 of the formula, while computer number crunching accounted for the other 1/2 of the ranking. Each human poll accounted for 1/4 of the ranking and each computer poll accounted for 1/6 of the ranking.
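For what it's worth, that 1998 structure is simple enough to sketch. The description above doesn't give the exact strength-of-schedule divisor or loss penalty, so the constants below are placeholders, but the shape is there (lower total = better):

```python
# Sketch of the 1998 structure: poll average + computer average + an SOS
# factor + a per-loss penalty, with the lowest total ranked #1. The
# sos_divisor and loss_penalty values are placeholders, not the real
# BCS constants.

def bcs_1998_style(poll_ranks, computer_ranks, sos_rank, losses,
                   sos_divisor=25.0, loss_penalty=1.0):
    human_avg = sum(poll_ranks) / len(poll_ranks)             # AP + Coaches
    computer_avg = sum(computer_ranks) / len(computer_ranks)  # 3 computers
    return (human_avg + computer_avg
            + sos_rank / sos_divisor + losses * loss_penalty)

# e.g. a team ranked 2nd and 3rd by the humans, 1st/4th/5th by the
# computers, with the 10th-toughest schedule and one loss:
print(bcs_1998_style([2, 3], [1, 4, 5], sos_rank=10, losses=1))  # ~7.23
```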
1999: The BCS committee correctly recognized that there was a wide range of rankings produced by the various computer models. They therefore decided to include 5 more models (bringing the total to 8), drop the lowest ranking, and average the remaining 7. According to this formula, human judgement accounted for 1/2 of the formula, while computer number crunching accounted for the other 1/2 of the ranking. Each human poll accounted for 1/4 of the ranking and each computer poll accounted for 1/14 of the ranking.
2001: Concerned that teams were running up the score in order to improve their computer rankings, the BCS dropped computer models that relied heavily on margin of victory (MoV) as a ranking parameter. 2 computers (NYT and Dunkel) were replaced with 2 others that did not use MoV. 8 total computer formulas were still used, now dropping both the highest and the lowest and averaging the remaining 6 (that drop-the-extremes step is sketched in the snippet after this history). A quality win component was added to reward victories against top 15 teams. According to this formula, human judgement accounted for 1/2 of the formula, while computer number crunching accounted for the other 1/2 of the ranking. Each human poll accounted for 1/4 of the ranking and each computer poll accounted for 1/12 of the ranking.
2002: Still concerned with inflated scores against cupcake opponents, the committee outlawed MoV usage altogether. Two more formulas were dropped, one creator (Sagarin) changed his formula so that it didn't include MoV, the New York Times returned with a non-MoV formula, and a new non-MoV poll was added (total still 8). Highest/lowest dropped, remaining 6 still averaged. The quality win reward was modified to only reward wins over top 10 teams. According to this formula, human judgement accounted for 1/2 of the formula, while computer number crunching accounted for the other 1/2 of the ranking. Each human poll accounted for 1/4 of the ranking and each computer poll accounted for 1/12 of the ranking.
2004: Following the controversy over the 2003 season, when the AP had USC at #1 but USC finished #3 in the BCS and LSU and Oklahoma played for the title on the strength of their computer rankings, the BCS altered the formula to weight humans more heavily than computers. Now, instead of averaging the AP and Coaches polls and then averaging that figure with the computer average, the two human polls were counted independently. The 8 computers were pared down to 6, with the highest and lowest dropped and the remaining 4 averaged. Quality wins were dropped. According to this formula, human judgement accounted for 2/3 of the formula, while computer number crunching accounted for the other 1/3 of the ranking. Each human poll accounted for 1/3 of the ranking and each computer poll accounted for 1/12 of the ranking.
2005-present: The AP (media), still throwing a tantrum over being ignored, withdrew from the BCS. It was replaced by the Harris Interactive poll. Otherwise, the formula is unchanged.
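Here's a rough sketch of the 2004-present arithmetic as described above: two human polls counted independently, with the best and worst of the six computer rankings thrown out and the remaining four averaged. This mirrors the weighting, not the BCS's actual points normalization, and all the rankings in the example are made up:

```python
# Sketch of the 2004-present structure: each human poll is 1/3 of the
# score, and the trimmed computer average is the final 1/3 -- so each of
# the 6 computers ends up worth (1/3) / 4 = 1/12. Lower = better.
# Mirrors the weighting described above, not the BCS's actual points
# normalization; all rankings below are made up.

def trimmed_average(rankings):
    """Drop the single highest and single lowest ranking, average the
    rest -- the same drop-the-extremes step used since 2001."""
    trimmed = sorted(rankings)[1:-1]
    return sum(trimmed) / len(trimmed)

def bcs_2004_style(harris, coaches, computers):
    return (harris + coaches + trimmed_average(computers)) / 3

print(bcs_2004_style(harris=2, coaches=3, computers=[1, 2, 4, 4, 6, 12]))
# -> 3.0  (computer component: average of [2, 4, 4, 6] = 4.0)
```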
I provide that history to point out that nearly every year from 2001 until 2005, the BCS altered its formula and imposed changes on the participating computer algorithms in ways designed either to minimize the computers' influence or to force their solutions into agreement with the human factor. In my opinion, the changes made through 1999 - increasing the computer sample size so as not to rely too heavily on any one computer solution - were appropriate. But since 2001, they've eliminated margin of victory (so that a 1-point win is worth the same as a 30-point win) and all but negated the weighting of the computer influence in deference to human opinion. The computers are now treated as little more than a modifier of human opinion.
Now certainly, there are bad computer models - even more so since the BCS started tinkering with and mandating what participating models can and can't factor into their rankings (MoV, etc.). But if the point was to include objectivity in the formula in order to offset human opinion, then minimizing that objectivity and trying to force it into agreement with that human opinion has been counterproductive.