The AP Poll as a National Championship Selector
The AP poll began publishing its weekly top 20 in 1936. It was not the first ranking system or MNC selector based on a group's majority opinion, but it was the first such organization listed in the NCAA Records Book.
Until the BCS came along, the Associated Press's college football poll was the gold standard for mythical national championships (MNC). Even after the advent of the BCS, people accept the AP poll's #1 team as national champion just as much as a BCS champion, which is why Southern Cal is considered to be co-MNC of 2003 with Louisiana State. The major appeal of the AP poll is the fact that it is a majority opinion. Democracy in action.
The AP poll is generally a good MNC selector because it is the aggregate opinion of many people, but historically speaking, the poll has had a lot of weaknesses as a national championship selector, and some of its choices have been so poor that I do not recognize them as national champions at all.
Pre-Bowl "Champions"
The biggest problem with the AP poll's historical "champions" is that from 1936-1964 and in 1966-1967, the AP poll ended before the bowl games, and often before every team had even finished the regular season. During those years, #1 teams that lost their bowl games are not national champions at all, at least not in my eyes. Defenders of these "champions" like to say that that's just the way the "system" was back then, but this was not a choice made by the NCAA, but by the Associated Press, which had no official connection to college football. Other organizations counted bowl games all along when naming their champions, and the NCAA itself counted all those bowl games in teams' and coaches' all-time records in its records book. The bowls were never mere "exhibition" games, despite what some like to say.
National championships don't exist at all, of course, except in the eye of the
beholder. And if you want to count regular season #1 teams that lost
their bowl games as "national champions," be my guest. But such
"championships" mean nothing more to me than a post-October
"championship." And if you're going to count them in the past, you
ought to count them today as well, because nothing has substantively
changed except that the AP poll simply chose to do it one way before,
and another way now. And that makes their "national championships" very
inconsistent from one era to the next.
The following teams finished #1 in the AP poll 1936-1964, then lost their bowl games:
1950: Oklahoma
1951: Tennessee
1953: Maryland
1960: Minnesota
1964: Alabama
These aren't "national champions" of anything other than the regular season. Click on the year to see who ended up #1 in my fixed AP
poll for a given season. But #1 teams who lose their bowl games are
just the most glaring problem with ending a ranking system before the
bowls. If the AP poll had ended after the bowls all along, some of its
#1 teams would have been passed up in the final poll despite not taking
a loss.
The most famous example is 1947.
9-0 Notre Dame finished #1 in the AP poll, but then 10-0 Michigan
stomped on 7-2-1 Southern Cal 49-0 in the Rose Bowl. The AP conducted a
post-bowl poll, and sportswriters voted for Michigan, but the AP
declared that this poll "didn't count." Eleven years prior, in the first AP poll in 1936, 7-1 Minnesota finished #1,
but 8-1-1 Pittsburgh would have very likely passed them up in a
post-bowl poll thanks to a 21-0 Rose Bowl win over 7-2-1 Washington
(whom Minnesota had beaten just 14-7).
These problems are not as egregious as the #1 teams that lost their bowl
games, because Notre Dame '47 and Minnesota '36 are legitimate national
championship picks regardless, but if you go only by the AP poll's #1
team for your national championships, Michigan '47 and Pittsburgh '36
are left uncrowned. But Michigan and Pitt both claim national
championships for those seasons, so obviously they are not sticking
with the AP poll as the final say on the matter.
And those cases point to a more basic problem with the AP poll as an MNC selector.
The Highlander Approach: There Can Be Only One
The very nature of the AP poll's methodology makes it a highly limited MNC selector. If the AP poll were a good enough selector, we wouldn't need the coaches' poll to be able to recognize LSU as sharing the national championship in 2003, Nebraska in 1997, Washington in 1991, Georgia Tech in 1990, etc. The point is, sometimes it is not possible to select just one team as THE mythical national champion, nor is it the right thing to do. Yet the AP poll is like the movie Highlander: there can be only one. Yes, theoretically 2 teams can tie for #1 atop the AP poll, but such a result is extremely unlikely.
The problem here is that the AP poll's real purpose is to rank the teams
from 1 to 25 (20 and 10 in earlier polls). Crowning the #1 team as a
"national champion" is just a clunky byproduct of that purpose, and not
the best way to choose a national champion. The best way to select MNCs
would be to have a vote, separate from ranking the teams, in which
co-champions were offered up as valid options. For example, the ballot
for voters in 1991 might have looked like this:
Washington
Miami-Florida
Washington and Miami-Florida
If 10 writers go with Washington, 15 with Miami, and 55 with Washington and
Miami, then the winner is both in a co-championship, without the need
for another selection organization such as the coaches' poll. But the
way the AP poll is set up, if 79 writers choose 2 teams in a tie for
#1, and just 1 writer chooses one of those teams alone at #1, that team
is the AP poll's lone national championship selection, and gets the
Bear Bryant Trophy. A suboptimal system, to be sure.
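The proposed ballot above amounts to a simple plurality count in which a co-championship is itself a valid option. A few lines of Python sketch how the hypothetical 1991 tally would work (the vote counts are the made-up numbers from the example, not real ballots):

```python
# Tally the hypothetical 1991 MNC ballot, where writers may vote
# for a single team or for a co-championship as one option.
from collections import Counter

ballots = (
    ["Washington"] * 10
    + ["Miami-Florida"] * 15
    + ["Washington and Miami-Florida"] * 55
)

tally = Counter(ballots)
winner, votes = tally.most_common(1)[0]
print(f"{winner}: {votes} of {len(ballots)} votes")
# -> Washington and Miami-Florida: 55 of 80 votes
```

Under this scheme the co-championship option wins outright, with no need for a second selecting organization to split the title.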
Regional Bias
Today, the AP
poll apportions its voters fairly by region, but that was not always
the case. Through most of the 1950s, any AP writer could vote, and a
preponderance of the voters came from the Great Lakes and Eastern
regions. Not surprisingly, Southern teams were often disregarded during
this time period. 9-1 Ohio State won the vote over 11-1 Georgia in
1942, 8-0-1 Notre Dame and 9-0-1 Army over 11-0 Georgia in 1946, and
9-0 Michigan State over 12-0 Georgia Tech in 1952. Perhaps those
choices were fair, but if the vast majority of voters had come from the
Southeast and Texas during those years, do you think the outcomes would
have been the same?
Regional bias has always been statistically obvious in the voting patterns of football writers, whether voting for teams in the AP poll or for the Heisman Trophy. Most of the voters are writers who cover one particular team and/or conference. And that brings up a related problem...
Limited Viewpoints
Because most of the AP poll's voters cover one team, they are most familiar with that team and its opponents. They may see other games, especially those played on days other than Saturday, but most of their time on gameday is spent with their team, and after that game they have deadlines to meet for their Sunday papers. They don't have time to fairly judge teams outside of their limited purview, even if they are objective enough not to be swayed by bias for their team/region.
Limited time is itself another problem...
Limited Time and Lastgamitis
Football writers' busiest workday is Saturday, so they don't have enough time to craft a proper and logical top 25. Because of that, they take rating shortcuts, like simply moving teams that lost down from their previous week's top 25. And because they turn in their ballots so soon after the last game is played on Saturday, writers are terribly afflicted by lastgamitis -- judging teams far too much on their last game played. If they had more time, they could step back and get more perspective.
This is much less of a problem in the final poll today, since the
bowl season drags on so long, and writers have plenty of time to work
on their ratings and reflect on the season as a whole. But in the past,
when the AP poll ended after Thanksgiving weekend, there was no time to
get any perspective, and lastgamitis ruled supreme.
Shifting Criteria
Another problem with the AP poll is that voters' values and criteria shift like fashion over the years. Math formula ratings, of course, do not have this problem: their criteria are always exactly the same from one era to the next. The only criterion the AP itself has ever given to voters is a reminder to pay attention to head-to-head results, and that has only been in recent years. As such, different voters can have very different criteria. And the zeitgeist-driven values of the majority render one era or season different from another.
The AP poll has been a strong MNC selector ever since it started
ending after the bowls, but even in that time it has still voted 2
teams #1 that did not deserve to even share a national championship,
and I believe that today's voters would not have made those same
choices.
1978
In 1978, the AP poll voted 11-1 Alabama #1 over 12-1 Southern Cal, who
beat the Tide 24-14 in Birmingham. Was this because Alabama played a
tougher schedule? Far from it. Southern Cal played a vastly tougher
schedule, and in fact they played one of the toughest schedules ever
faced by an MNC contender. I go into more detail on the issue in my article on fixing the 1978 AP poll,
but the point here is that voters today put much more emphasis on
head-to-head results than they apparently did then, and so Alabama
benefited from different criteria in 1978. The coaches' poll, by the
way, got it right in 1978, tabbing USC #1.
1984
The worst choice for #1 in the history of the AP poll was Brigham Young in 1984,
the only "national champion" who did not even play a rated opponent.
Voters have had plenty of chances to make the same kind of choice (and
much better ones) before and since that time, but no other "little big
team" has ever been voted #1, or even gotten into the BCS title game,
despite a raft of them performing much better than BYU '84 while still
playing actual rated opponents. For example, in 1975
Arizona State, then a WAC team, went 12-0 while beating 2 rated
opponents, one in the top 10, but they wound up #2 to 11-1 Oklahoma.
And we have seen a number of unbeaten WAC and MWC teams in the 21st
century who were better than BYU '84, but again, none have even gotten
into the BCS title game.
BYU '84 appears to have benefited from
a wave of sentiment to finally recognize a minor conference team. Once
that was done, writers must have felt that they never had to do it
again, and they haven't even come close since.
The AP Poll vs. Other Selectors
Until 1968, when it started counting bowls, the AP poll was worse than human MNC selectors listed in the NCAA Records Book that did count bowl games, such as the National Championship Foundation, the College Football Researchers Association, and the Football Writers Association of America (which gives the Grantland Rice Award to its champions). But since 1968, the AP poll has been much better than the NCF and CFRA, both of which have made some inexplicably bad MNC selections in that time.
The coaches' poll
didn't start counting bowls until 1974, but since that time it has been
the only human selector listed in the NCAA Records Book that has been
better than the AP poll. The coaches made the same bad choice of BYU in
1984 that the AP poll did, but they correctly voted Southern Cal #1 in
1978, unlike the AP poll. In addition, the coaches made the better
choice for #1 in 1991, 1997, and 2003
(details in the linked articles). Those differences aren't too big a
deal, because in each of those seasons both teams should be considered
co-MNC, but the coaches were correct about which team should be ranked
#1. Of course, in 2003, the coaches had no choice: they had to vote for the BCS champion.
The AP poll and the coaches' poll are without doubt the most widely accepted MNC selectors. Since they started counting bowls, those polls may not have always been correct, but they've been the best, certainly better than any math/computer system.