How about a 12 plus 2 College Football Playoff?

How is Sagarin one of the worst models?

Virtually every time anyone makes an appeal to a computer on this board, it's Sagarin.

And btw - I'm not even disagreeing with you; I need to know more about where you're coming from to see what you mean. I just threw out a well-known name, and his rankings tend to be far more reasonable than the Twittie Committee's have been.

And as a reminder: it was the fact that we had six computers in 2003 that left Oklahoma at #1 and gave us the wrong national title game. As long as people are willing to accept THAT PARTICULAR PROBLEM (which with 8 teams shouldn't be as much of a problem, except that the #1 seed is supposed to have an advantage), it's OK. Oklahoma should have fallen in 2003, but they wouldn't have fallen below #8 even in the strictest computer model.

When comparing computer models, other than the “sight test”, which I’ll touch on later, one of the few objective ways to compare them for empirical accuracy is to count how many times each model’s ranking violates actual outcomes on the field. Example: having Alabama ranked above Florida State would be considered a violation, since FSU beat Bama head to head. Of course, this particular violation is one that any decent model should be guilty of, since Bama is obviously the better team. So all models will have some violations. But in general, since better teams usually win head-to-head matchups against worse teams, the better computer models will minimize violations of head-to-head outcomes.
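To make the metric concrete, here’s a minimal sketch of a violation counter in Python. The function name and the data shapes (a ranking as a best-to-worst list of team names, games as (winner, loser) pairs) are my assumptions for illustration, not anything from Massey’s or Sagarin’s actual code; the top_k parameter mirrors the idea of counting violations only within a top-25 slice.

```python
# A minimal sketch of the head-to-head violation metric described above.
# Assumptions (mine, not from the post): a ranking is a list of team names
# ordered best-to-worst, and games is a list of (winner, loser) tuples.

def violation_count(ranking, games, top_k=None):
    """Count games in which the loser is ranked above the winner.

    If top_k is given, a game only counts when both teams sit inside
    the top_k slice of the ranking (top_k=25 mirrors a "top 25
    violations" column).
    """
    pos = {team: i for i, team in enumerate(ranking)}  # lower index = ranked higher
    violations = 0
    for winner, loser in games:
        if winner not in pos or loser not in pos:
            continue  # e.g. an FCS opponent absent from the ranking
        if top_k is not None and (pos[winner] >= top_k or pos[loser] >= top_k):
            continue
        if pos[loser] < pos[winner]:  # loser ranked above winner = violation
            violations += 1
    return violations
```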

With that in mind, we can look at the list of models and their violation counts maintained by Kenneth Massey at https://masseyratings.com/ranks?s=cf&top=-1 . Of the 86 models listed, Sagarin ranks 75th in fewest violations among the top 25 FBS teams, and 57th when considering ALL FBS teams. That’s pretty horrible.
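As a toy illustration of how you’d line models up the way Massey’s comparison page does, here’s the counter from above applied to two made-up rankings over made-up results (all data here is hypothetical, chosen only to echo the FSU-over-Bama example):

```python
# Hypothetical data for illustration only; not actual season results.
games = [("Florida State", "Alabama"), ("Ohio State", "Indiana")]

model_a = ["Ohio State", "Indiana", "Alabama", "Florida State"]
model_b = ["Ohio State", "Indiana", "Florida State", "Alabama"]

# Order the models by fewest violations, as Massey's page does.
models = {"A": model_a, "B": model_b}
for name, ranking in sorted(models.items(),
                            key=lambda kv: violation_count(kv[1], games)):
    print(name, violation_count(ranking, games))
# -> B 0  (FSU above Bama, agreeing with the head-to-head result)
#    A 1  (Bama above FSU is one violation)
```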

Now to the sight test… here’s Sagarin’s CURRENT top 25:

1. Ohio State
2. Indiana
3. Notre Dame
4. Oregon
5. Texas Tech
6. Georgia
7. Miami
8. Texas A&M
9. Alabama
10. Mississippi
11. Vanderbilt
12. Texas
13. Oklahoma
14. Penn State
15. Utah
16. Iowa
17. USC
18. Washington
19. Missouri
20. SMU
21. Tennessee
22. Michigan
23. LSU
24. BYU
25. Clemson

You tell me. Considering where he’s ranking teams like Notre Dame, Miami, Penn State, and Clemson (you could argue for others as well), do Sagarin’s rankings pass the sight test? To me, they don’t. I’m not a big fan of the “sight test” though, which is why I’ll hang my hat on the ranking-violation metric. By that metric, there are roughly 10-20 rankings that stand out above the other 70. But even most of the bad models outperform Sagarin.

He gets all the mentions because he was one of the first in the game, and certainly the most widely known among the pioneers of computer rankings, having signed a deal with USA Today, which published his ratings every week starting around 1985.
 
Dang! I need your betting slips!
 