
How to Rate College Football Teams

Strength of Schedule

Most of what you need to know about strength of schedule can be found in teams' relevant records. What ranked and nearly-ranked teams did they defeat, and who did they lose to? When sizing up a team's strength of schedule, however, simply looking at the winning percentage of its opponents (the NCAA Records Book's statistical measure of schedule "toughness") just isn't very valuable. Even adding in the winning percentage of its opponents' opponents is misleading. This is primarily because a typical 8-4 MAC team, for example, is far, far different from a typical 8-4 SEC team.
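To see the problem in concrete terms, here is a minimal sketch (in Python, with made-up records) of that kind of winning-percentage calculation. The records and the function are purely illustrative-- nothing here comes from the NCAA Records Book itself.

```python
# Minimal sketch of an NCAA-style schedule "toughness" number:
# opponents' winning percentage. Records below are made up for illustration.

def win_pct(wins, losses):
    return wins / (wins + losses)

# Two hypothetical opponents with identical 8-4 records
mac_opponent = win_pct(8, 4)   # a typical 8-4 MAC team
sec_opponent = win_pct(8, 4)   # a typical 8-4 SEC team

# Both contribute exactly the same value to a schedule-strength average,
# even though the teams behind those records are nowhere near equal.
print(round(mac_opponent, 3), round(sec_opponent, 3))  # 0.667 0.667
```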

Strength of Schedule Is Relative

For any teams you are considering for the top 25, the relative difference in power of the weak teams on their schedules is nearly meaningless. What matters most is the strongest opponents they played-- the rated teams (top 25), the nearly rated teams, and the decent teams (roughly the top 60 of the FBS)-- in that order. That is because those opponents are the real threats. A 6-6 Sun Belt team may be far better than a 1-11 Sun Belt team, for example, but it is very rare that either will be a threat to a top 25 team.

To illustrate this further, let's say that you are considering two teams for #25, Team A and Team B, and both teams defeated six weak opponents (bottom 60 of the FBS). Team A's weak opponents were all about #70-#90 (whether using your own rankings, Sagarin's, or some other system). Team B's weak opponents were all about #100-#120. That is a big difference numbers-wise. So even if Team A's six strong opponents averaged #30-#50, and Team B's six strong opponents averaged #10-#30, overall Team A's schedule would come out stronger in its average-- even though Team B played 6 rated or nearly rated teams and Team A played none! In reality, for top 25 teams, Team B's schedule is far, far tougher. It is not even close.
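To put rough numbers on this, here is a quick sketch (in Python) using assumed opponent ranks at the midpoints of the ranges above. The specific rank values and the "top 30 threat" cutoff are my illustrations, not fixed thresholds.

```python
# Hypothetical opponent ranks drawn from the example above.
# Lower rank number = better team.
team_a_schedule = [80] * 6 + [40] * 6   # weak opponents ~#70-#90, strong ~#30-#50
team_b_schedule = [110] * 6 + [20] * 6  # weak opponents ~#100-#120, strong ~#10-#30

avg_a = sum(team_a_schedule) / len(team_a_schedule)  # 60.0
avg_b = sum(team_b_schedule) / len(team_b_schedule)  # 65.0

# By the naive average, Team A's schedule looks tougher (60 vs. 65)...
print(avg_a, avg_b)

# ...but count the opponents that could actually threaten a top 25 team
# (say, roughly the top 30):
threats_a = sum(1 for rank in team_a_schedule if rank <= 30)  # 0
threats_b = sum(1 for rank in team_b_schedule if rank <= 30)  # 6
print(threats_a, threats_b)
```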

But note this: if Team A and Team B are both about #90, then Team A has the far, far tougher schedule, because all 12 teams on its schedule are rated higher than it is, whereas Team B is still ranked higher than half of its schedule.

The strength of a team's schedule must therefore be evaluated relative to its power level. And since we are talking about top 25 teams here, what matters is opponents in the top half of the FBS. A patsy is a patsy, whether rated #80 or #110. It would be ridiculous to consider that difference equal to the difference between playing a #10 and a #40 team. Yet that is exactly what most measures of strength of schedule do (including computer rankings). For a top 25 team, the only time a bottom-half team should impact its rating is when the bottom-half team upsets or nearly upsets the top 25 team. In other words, a patsy can have a negative impact on a team's rating, but not a positive one. Beating a patsy holds no value.
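If you wanted to encode that rule of thumb, it might look something like the sketch below. The top-half cutoff at #60, the one-score "near upset" margin, and the point values are all assumptions made up for illustration-- this is not an actual rating formula, just a way to see the asymmetry: a patsy can only hurt you.

```python
# Sketch of a "relevant" schedule adjustment for a top 25 candidate.
# Assumptions (mine, for illustration): the FBS top half is ranks 1-60,
# and a "near upset" is a win by a single score (8 points or fewer).

def schedule_value(games):
    """games: list of (opponent_rank, point_margin) pairs from the rated
    team's point of view; positive margin = win, negative = loss."""
    value = 0
    for opp_rank, margin in games:
        if opp_rank <= 60:
            # Top-half opponents are the real tests: wins help, losses hurt,
            # and better opponents count for more either way.
            value += (61 - opp_rank) * (1 if margin > 0 else -1)
        else:
            # A patsy can only hurt you: a loss or a near-upset is a ding,
            # while a comfortable win is worth nothing.
            if margin <= 8:
                value -= 20
    return value

# A comfortable win over #110 adds nothing; squeaking by them hurts.
print(schedule_value([(110, 35)]))           # 0
print(schedule_value([(110, 3)]))            # -20
print(schedule_value([(12, 7), (105, -3)]))  # 49 - 20 = 29
```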

Overvaluing Strength of Schedule

AP poll voters tend to do a poor job of accounting for strength of schedule. Usually they vastly undervalue it. Sometimes, however, they overstress it. Let's start with an example of the latter. Ohio State's 2009 team ended up 11-2, ranked #5. Texas Christian was 12-1, #6. Granted, this may not have had anything to do with accounting for strength of schedule; it may have just been bias for a "name" school from a major conference, and it probably was. But let's give the poll voters a break and pretend that it was a strength of schedule issue that put Ohio State ahead.

Ohio State did play a stronger schedule-- they played 5 ranked opponents (average rank 13), and TCU played 4 (average rank 14.5). But is that a big enough difference to forgive the extra loss Ohio State took? Definitely not. In simple number terms, Ohio State had 100% more losses than TCU, but played only 25% more rated teams. But you shouldn't be rating teams by simply looking at their straight record and then at some number that represents their strength of schedule. You need to look at their relevant record. And in the case of Ohio State and TCU in 2009, you don't need to look much beyond who the teams lost to.

TCU lost only to #4 Boise State, who was ranked higher than Ohio State anyway. And it was a close game. Ohio State, on the other hand, lost to the #22 team and to an unrated losing team! That's all you really need to know right there. The fact that Ohio State defeated 4 rated opponents and TCU "only" defeated 3 doesn't come anywhere near making up for it. I understand that Ohio State played much better in the second half of the season, but so did TCU. After mid-October, the closest anyone got to TCU in the regular season was 27 points-- and that included 2 rated opponents!

Undervaluing Strength of Schedule

As I said, the AP poll usually undervalues strength of schedule, largely because the voters tend to overvalue straight records. As an example, let's stick with 2009 and look at Central Michigan, who finished 12-2 and #23, and Oklahoma, who finished 8-5 and just outside the AP top 25 at #26. Again, I think non-performance factors probably had more to do with this than logic. Central Michigan was the token MAC team in the top 25, and more importantly, it was their first-ever finish in the top 25, which is a terrific story-- and most AP voters are writers, after all. Also, that 12-2 straight record is very impressive on its face, whereas Oklahoma's 8-5 is decidedly less sparkly (though Clemson got in with 5 losses). But does CMU really deserve it over Oklahoma?

Central Michigan's losses came to 8-5 Arizona and 8-5 Boston College (and neither was a close game). Arizona was #35 in the AP poll's "also receiving votes" section, and Boston College received no votes at all. All 5 of Oklahoma's losses, on the other hand, came to top 25 teams, and in fact, all 5 of those opponents were ranked higher than Central Michigan! And 4 of the 5 were close games. Furthermore, CMU did not defeat a single opponent who received a vote on any ballot, whereas OU defeated 2 opponents in the "also receiving votes" section. The AP poll itself rates Oklahoma higher than all 14 of CMU's opponents. Had OU played CMU's schedule, they would have been favored in every game, possibly finishing 14-0. That's how much difference strength of schedule makes. There isn't a single logical factor to point to in CMU's favor here.

So, while I hate to be a grinch, the fact is that Oklahoma is far more deserving of being rated than CMU is. And Oklahoma is just one example. There's also Arizona, who defeated CMU 19-6 and whose relevant record is also easily better. Nice story or not, if CMU wants to be a top 25 team, they should earn it. In 2009, they did not.

Other Criteria

I'll end with a rating problem that cannot be solved by just looking at record and strength of schedule. Team A is 12-0, having beaten only one rated team (#12). Team B is 11-1, having defeated 3 rated teams (#12, #13, and #15) and lost to another (#14). Who should be rated higher? Team B definitely played a tougher slate, but does that excuse the loss? As I said, with only the information given, there is no certain answer, and this kind of situation will come up a lot when you are rating teams.

When it does come up-- comparable teams, one with a better record and a weaker schedule, and another with a worse record but a stronger schedule-- you often have to separate them by using other criteria, such as performance.

Next: Other Criteria: Performance, Improvement, and Common Opponents

Sections that follow "Other Criteria":
How to Rank the Top 25 Teams
How Not to Rank Teams
