I've already covered most of this in the preceding sections. Don't overvalue straight won-lost records, or punish teams for playing tough schedules. Don't overvalue performance, the most recent game played, or common opponents. Don't use the simplistic method of carrying over your previous week's list and merely dropping the teams that lost. Each week's games provide a whole lot of fresh data that can change the relative positions of every team in your top 25.
People like to say that a team that wins shouldn't drop in the ratings, but that is not true. For example, suppose that in week one, a team that you think is going to be bad (let's say Stanford) defeats a team that you expected to be a top 5 team (let's say Michigan). Then in week two, Stanford defeats another team you expect to be top 5 (we'll call them Notre Dame). At this point, you may have moved Stanford from being unranked to being your #3 team! Now let's say that Stanford wins their next 2 games against a couple of patsies, and in that time Michigan drops to 0-4 and Notre Dame to 1-3, both now unranked. Now Stanford is 4-0, but against 4 bad teams, and so you would be fully justified in dropping them, maybe all the way to #15 or lower (especially if they barely beat Michigan and Notre Dame). Even though they continued to win.
That's the thing: each week, it isn't just Stanford playing games. All the teams they played are playing games too, and what happens to those teams can and should impact Stanford's rating.
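The Stanford scenario can be sketched numerically. Below is a toy résumé score of my own devising (the teams and results here are hypothetical, and this is not a formula this guide prescribes): each win is credited at the opponent's current winning percentage, and the whole score is recomputed from all results every week. Because the value of a win moves with the opponent's record, a team can keep winning while its score falls.

```python
# Toy resume score, recomputed from scratch each week: a win is worth the
# opponent's current win fraction, so later losses by past opponents
# retroactively shrink the value of having beaten them.
# (Hypothetical teams and results -- an illustration, not the guide's method.)

def records(results):
    """results: list of (winner, loser) pairs -> {team: [wins, losses]}."""
    rec = {}
    for w, l in results:
        rec.setdefault(w, [0, 0])[0] += 1
        rec.setdefault(l, [0, 0])[1] += 1
    return rec

def resume_score(team, results):
    """Sum, over this team's wins, each opponent's current win fraction."""
    rec = records(results)
    score = 0.0
    for w, l in results:
        if w == team:
            ow, ol = rec[l]
            score += ow / (ow + ol)
    return score

# Through week 2: "Stanford" has beaten two teams that still look respectable.
early = [("Stanford", "Michigan"), ("Stanford", "NotreDame"),
         ("Michigan", "Iowa"), ("NotreDame", "Purdue")]

# Two weeks later: Stanford beats two patsies, but Michigan and Notre Dame
# keep losing, dragging the value of Stanford's old wins down with them.
later = early + [("Stanford", "PatsyA"), ("Stanford", "PatsyB"),
                 ("Iowa", "Michigan"), ("OhioState", "Michigan"),
                 ("Wisconsin", "Michigan"),
                 ("USC", "NotreDame"), ("Navy", "NotreDame")]

print(resume_score("Stanford", early))  # two wins over 1-1 teams
print(resume_score("Stanford", later))  # lower, despite Stanford being 4-0
```

Nothing about the specific formula matters; the point is only that the score is a function of every result on the board, so it can drop for a team that never stops winning.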
People also like to say that the ever-lingering positioning of teams in the preseason polls is a big problem, and that polls should therefore not start until a few weeks after the season has begun. The unfair advantage of where teams are ranked in the preseason poll is definitely a problem, but the solution doesn't have to be waiting to start the poll until later in the season. If voters ranked teams properly, it wouldn't be a problem at all. It is mostly the false idea that teams shouldn't drop when they win that makes the preseason polls a problem.
Anyway, with all the patsies most teams play these days, even if you start the poll four weeks in, there are still going to be teams whose seasons haven't really begun, and whose ranking is mere speculation. Then you'll have the same problem of teams getting an unfair advantage because they were ranked too high in the first poll. Until you are pretty late in a season, a lot of rankings are based on speculation.
Take Penn State in 2009. Their schedule was lousy, and they lost easily to the only two good teams they played in the regular season (Iowa and Ohio State). No one could really know how good they were until their bowl game, when they beat LSU. Before that, PSU's ranking all along was almost as speculative as their preseason ranking.
But the bottom line, just in case anyone out there doesn't already know: as the season progresses, stop rating teams based on how good you thought they would be before the season started. Along similar lines, and equally obvious (if equally ignored): don't rate teams more highly because they are "name" schools (Michigan, Notre Dame, Alabama, Oklahoma, Texas, Nebraska, Penn State, for example), or because you are a fan of a team or conference, or because a team comes from your region, or because you like their uniforms.
Another way you should not rank teams-- and this may sound counterintuitive-- is based on how good you think they are now. It may be all right to do so early in the season, but by the end, how good you think teams are shouldn't be much of a factor at all. The top 25 should be based on what teams have actually done, not what you think or feel they could do.
A list of how good you think teams are is a power rating. That's what most computer rankings are. A power rating is interesting, but it doesn't reflect the season as it played out so much as it reflects what you think would happen going forward (if there were more games to be played). For example, by the end of 2009, some people thought Nebraska had become a top 10 team power-wise. That would be based on their almost beating Texas in the Big 12 championship game and their dismantling of then-rated Arizona in the Holiday Bowl.
But that ignores the previous 11 games Nebraska played. Maybe Nebraska was a top 10 team by the end. Or maybe they just put together 2 great games in a row, like Stanford did against Oregon and Southern Cal in the middle of the season. And people were saying then that Stanford had become a top 10 team. Then they lost 2 of their last 3 games, and barely won the third.
That's the thing about how good you think teams are: it's just your speculation. And that's true regardless of how much of an "expert" you are. We see the "experts" being wrong every single week. There are countless examples. One of my favorites is the 1993 Sugar Bowl, when all the experts said that immortal, unbeatable Miami could not possibly lose to offensively inept Alabama. But then Alabama and their #1 defense completely destroyed Miami. And after the game, it was hard to imagine it going any other way. People suddenly remembered, in hindsight, how often Miami had looked rather ordinary during the regular season, barely beating the four best teams they played (only two of whom finished ranked).
Of course, Sagarin ended up with Florida State #1 for that season (FSU lost to Miami). That's a power rating for you.
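The difference can be made concrete. Here is a minimal sketch with invented teams and scores (loosely echoing the Miami/FSU situation, but none of these numbers are real): the "power rating" is just average point margin, a crude stand-in for predictive systems like Sagarin's, while the record-based view simply counts wins. The margin-based rating prefers the one-loss blowout team; the record favors the unbeaten team that won the head-to-head meeting.

```python
# Contrast: margin-based "power rating" vs. plain record.
# Invented games -- "CloseWins" goes unbeaten with narrow victories,
# "BigMargins" loses to CloseWins but crushes everyone else.
from collections import defaultdict

games = [  # (winner, loser, winner_pts, loser_pts)
    ("CloseWins", "BigMargins", 19, 16),  # the head-to-head game
    ("CloseWins", "TeamA", 21, 20),
    ("CloseWins", "TeamB", 17, 14),
    ("BigMargins", "TeamA", 45, 7),
    ("BigMargins", "TeamB", 52, 3),
]

margins = defaultdict(list)
wins = defaultdict(int)
for w, l, wp, lp in games:
    margins[w].append(wp - lp)
    margins[l].append(lp - wp)
    wins[w] += 1

power = {t: sum(m) / len(m) for t, m in margins.items()}

# Average margin rates the one-loss team higher...
print(power["BigMargins"] > power["CloseWins"])  # True
# ...but the record (and the head-to-head result) favors the unbeaten team.
print(wins["CloseWins"], wins["BigMargins"])     # 3 2
```

A predictive system is perfectly happy with that first answer; a ranking of what teams actually accomplished should not be.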
Your ratings should be based on facts and evidence, not speculation.
Anyone could say at the end of the 2009 season, "I have Virginia Tech ranked higher than Georgia Tech because they are a better team right now. Sure Georgia Tech beat them, but if they played again, Virginia Tech would win." That is absolutely nothing but speculation. However, when someone ranks Georgia Tech higher than Virginia Tech, their reasoning is not based on speculation, but upon the cold, hard fact that Georgia Tech did indeed defeat Virginia Tech.
When people defend overrating a team because they think that team is better than its record indicates, they sound like a fan making excuses. They say some losses should be dismissed, because the team is better now, or because there were key injuries, or bad weather, or bad calls. But here's the thing: all the games count.
And injuries, bad weather, and bad calls are a part of the game. If a team loses due to a key injury, then obviously a depth problem has been exposed for that team. It should definitely remain a part of their rating, because depth is a legitimate measure of how good a team is. And bad weather affects both teams. It does tend to favor the lesser-talented team, leveling the playing field somewhat, but it alone doesn't cause the more talented team to lose such a game. Losing does.
Bad calls are trickier, and it's harder to blame the losses that stem from them on the teams that are victimized by them, but in the end, bad calls go into the same category as unlucky bounces of the ball. Bad luck. A loss is a loss, and a win is a win. Besides, in most cases where teams lost because of a bad call, they could have won anyway by not letting the game get to the point where a bad call could win or lose it.
Take Colorado's infamous "fifth down" against Missouri in 1990. If you rank Georgia Tech #1 for that season only because you consider that game against Missouri to be a loss, you are wrong to do so. First of all, had Colorado known it was actually 4th down rather than 3rd down, as the officials and yard markers indicated, they might not have run the same play they did on that down. Secondly, Missouri still had a chance to stop the "fifth down" play. And finally, it doesn't really matter. Colorado may have gotten lucky, but the bottom line is that they won. Officially and irrevocably. The most that you can do is consider it to be a very poor performance (Missouri was not good that year).
With every decision you make, you need to ask yourself, "Am I thinking rationally? Or am I just rationalizing?"
And that wraps this guide up. For those of you who made it all the way through, be sure to pick up your diploma on the way out the door. And if you're looking for more concrete examples of all the principles I've covered, there are always plenty in my "fixing the AP poll" articles, which I will be continually adding until I've fixed them all.