
## 2007 Standings: Simple Ranking System

A couple of years ago, Doug described the Simple Ranking System, which is a basic method of ranking just about anything. You can use it to rank NFL teams, as well as NFL offenses and defenses. Here's a quick description of the system:

To refresh your memory, it's a system that's been around forever and is extremely basic compared to some of the other power rating systems out there. But I like it because it's easy to understand. An average team will have a rating of zero. An above-average team will have a positive rating, while a below-average team will have a negative rating. Every team's rating is equal to its average point margin plus the average of its opponents' ratings, so the teams' ratings are all interdependent: the Colts' rating depends upon the ratings of all their opponents, which depend upon the ratings of all their opponents (some of which are the Colts), and so on.

The '07 Eagles outscored their opponents by 36 points, or 2.3 PPG. The Eagles had a really difficult schedule, playing nine games against the Patriots, Seahawks, Packers, Giants, Cowboys and Redskins. The Eagles' average opponent was 3.0 PPG better than average, so we can estimate that the Eagles were 5.3 theoretical points (2.3 + 3.0) better than a league-average team.
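Because each team's rating depends on every other team's, the ratings can be found by simple fixed-point iteration: start each team at its raw point margin, then repeatedly add the current average of its opponents' ratings until the numbers stop moving. A minimal sketch on made-up results (the three teams and scores are hypothetical, not 2007 data):

```python
# Simple Ranking System by fixed-point iteration on toy results.
# Each game is (team_a, team_b, points_a - points_b).
games = [
    ("A", "B", 10),
    ("B", "C", 3),
    ("C", "A", -7),
]

# Collect each team's game margins and opponents.
margins, opponents = {}, {}
for a, b, m in games:
    margins.setdefault(a, []).append(m)
    margins.setdefault(b, []).append(-m)
    opponents.setdefault(a, []).append(b)
    opponents.setdefault(b, []).append(a)

# rating = average point margin + average opponent rating
ratings = {t: sum(ms) / len(ms) for t, ms in margins.items()}
for _ in range(500):
    new = {t: sum(margins[t]) / len(margins[t])
              + sum(ratings[o] for o in opponents[t]) / len(opponents[t])
           for t in ratings}
    # Re-center so the league average stays at zero.
    mean = sum(new.values()) / len(new)
    new = {t: r - mean for t, r in new.items()}
    if max(abs(new[t] - ratings[t]) for t in ratings) < 1e-9:
        ratings = new
        break
    ratings = new
```

The re-centering step is what pins the league average at zero, matching the description above; without it, the whole set of ratings could drift by a constant.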

| Team | Overall Rating | SOS |
|------|---------------:|----:|
| nwe | 20.1 | 0.4 |
| ind | 12.0 | 0.3 |
| dal | 9.5 | 1.3 |
| gnb | 9.0 | 0.0 |
| sdg | 8.8 | 0.8 |
| jax | 6.8 | 0.1 |
| phi | 5.3 | 3.0 |
| pit | 5.2 | -2.5 |
| was | 4.5 | 3.0 |
| min | 3.8 | 0.4 |
| nyg | 3.3 | 1.9 |
| sea | 1.8 | -4.6 |
| chi | 1.2 | 2.1 |
| tam | 1.2 | -2.8 |
| ten | 0.7 | 0.5 |
| hou | 0.0 | 0.3 |
| cle | -1.1 | -2.3 |
| cin | -2.4 | -2.1 |
| nor | -2.5 | -2.0 |
| det | -3.6 | 2.6 |
| nyj | -3.7 | 1.7 |
| ari | -3.9 | -4.3 |
| den | -3.9 | 1.6 |
| buf | -4.1 | 2.3 |
| kan | -5.5 | 1.4 |
| car | -5.8 | -0.8 |
| oak | -6.0 | 1.2 |
| bal | -6.7 | 0.1 |
| mia | -8.4 | 2.3 |
| atl | -10.6 | -0.9 |
| sfo | -11.9 | -2.9 |
| ram | -13.0 | -2.0 |

The Patriots' +20.1 is by far the highest of all time. The '91 Redskins were +16.6, the '85 Bears were +15.9, and only three other teams were +15.0 or higher. The Patriots had an average point differential of +19.7, which is of course amazing; but incredibly, New England accomplished that against an above-average schedule.

The Eagles, Redskins, Lions, Bills, Dolphins and Bears all faced rough schedules in 2007; I suspect that all those teams will be at least somewhat undervalued in 2008 because of that. Conversely, the Seahawks and Cardinals had *incredibly* easy schedules last year. Is it even going out on a limb anymore to say that Arizona will be overvalued this year?

The SRS has a lot of uses, including some predictive ability for the next season.

For example, the correlation coefficient between team winning percentage in Year N and team winning percentage in Year N+1 is 0.29; that means there's a mild correlation between winning percentages from year to year. But the correlation coefficient between each team's SRS score in Year N and the team's Year N+1 winning percentage is 0.32, a slight increase. Perhaps more importantly, teams with average SRS ratings and really high or really low winning percentages generally come back to the pack the next season.

We can also create a formula to predict winning percentage in Year N+1. After performing a regression, the best-fit formula to predict wins in Year N+1 is 16 * (0.503 + SRS_Yr_N * .01). That means every additional point of SRS is worth just 0.16 wins the following year. That doesn't sound very convincing, even if it may be true. No doubt part of the problem is that injuries the next year are unpredictable and compress the ratings, and the SRS of course ignores any and all off-season changes. That being said, here's a list of the projected number of wins for each team:
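As a sanity check, the best-fit formula is easy to apply directly; a minimal sketch (the function name is mine), using the Patriots' +20.1 rating from the table above:

```python
def projected_wins(srs):
    """Best-fit projection from the post: wins = 16 * (0.503 + 0.01 * SRS)."""
    return 16 * (0.503 + 0.01 * srs)

# Patriots at +20.1: 16 * (0.503 + 0.201) = 11.264, or about 11.3 wins.
print(round(projected_wins(20.1), 1))  # → 11.3
```

Note that a perfectly average team (SRS of 0) projects to 16 * 0.503 ≈ 8.0 wins, which is the regression-to-the-mean effect the post describes.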

| Team | Proj. Wins |
|------|-----------:|
| nwe | 11.3 |
| ind | 10.0 |
| dal | 9.6 |
| gnb | 9.5 |
| sdg | 9.5 |
| jax | 9.2 |
| phi | 8.9 |
| pit | 8.9 |
| was | 8.8 |
| min | 8.7 |
| nyg | 8.6 |
| sea | 8.3 |
| chi | 8.2 |
| tam | 8.2 |
| ten | 8.2 |
| hou | 8.1 |
| cle | 7.9 |
| cin | 7.7 |
| nor | 7.6 |
| det | 7.5 |
| nyj | 7.4 |
| ari | 7.4 |
| den | 7.4 |
| buf | 7.4 |
| kan | 7.2 |
| car | 7.1 |
| oak | 7.1 |
| bal | 6.9 |
| mia | 6.7 |
| atl | 6.3 |
| sfo | 6.1 |
| ram | 5.9 |

I think this list is nice to help keep my opinions in check. For example, Cleveland's a huge sleeper team this year, and people probably think "they went 10-6 last year, added Corey Williams, Shaun Rogers and Donte' Stallworth, and Joe Thomas and Derek Anderson are now in their second years starting. This team is going to be good." That may all be true, but you get a very different sense of the Browns when you start under the assumption that they project as a 7.9 win team, and not a 10 win team. They had a nice off-season, but they weren't nearly as good as their record last year. And even their SRS is inflated, because we know Josh Cribbs won't be anywhere near as valuable again this year. On the other side, the Eagles added Asante Samuel and were darn good last year -- it's not unreasonable to think Philadelphia can get 10 or 11 wins.

JKL's done some work that suggests that teams with high SRS ratings because of hard schedules might not be as good as teams with high SRS ratings because of a really high point differential. To the extent that such a phenomenon exists, this regression doesn't discover it. I ran through the numbers a second time using the SRS rating and an SOS variable to predict Year N+1 winning percentage, and the weight placed on the SOS variable wasn't even close to significant. I also ran the same regression on the 36 teams from 1993-2006 with a +8.0 or better SRS rating in Year N, and once again, their SOS variable was not close to statistically significant. I don't think this ends the discussion, but I'll need to see some more proof and theory in the comments as to whether how you got your high SRS rating matters.

I did one last study, performing a regression analysis using Year N SRS rating and Year N SOS rating to predict Year N+1 *SRS rating*. I thought that maybe SOS was somewhat consistent from year to year, and thus using just team winning percentage would hide that. It turns out that the SOS variable was given absolutely no weight. I also checked the correlation coefficient between SOS in Year N and SOS in Year N+1, and it was just 0.15. For the most part, strengths of schedule really aren't consistent from year to year.
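For anyone who wants to reproduce these checks, the correlation coefficient used throughout is just the standard Pearson formula; a self-contained sketch (the helper name and the sample SOS values are mine, not the actual 1993-2007 data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up Year N / Year N+1 SOS values for a handful of teams.
sos_year_n  = [3.0, -4.6, 1.9, -2.0, 0.4]
sos_year_n1 = [0.5, -1.0, -0.5, 1.2, 0.3]
print(round(pearson(sos_year_n, sos_year_n1), 2))
```

A value near 0.15, as reported above for the real data, means Year N SOS explains only about 2% of the variance in Year N+1 SOS (the square of the correlation), which is why it adds so little predictive value.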

This entry was posted on Monday, June 16th, 2008 at 6:05 am and is filed under History, Statgeekery.

For the regression with both SRS and SOS, you would never expect to see any significance for SOS even though it probably is very important in practical terms. 100% of the variance in SOS is already captured in the SRS variable, so I think it would be impossible for a regression algorithm to isolate the effect of SOS.

It might be like predicting next year's weight by regressing current height and weight. Height is a very important factor in weight, but it would be dominated by current weight and might appear insignificant in a regression.

But I suppose for your purposes, you've found out what you want to know. That is, it doesn't matter how a team got its SRS.

The following is not a homer post--I am a Saints fan.

I think one thing the SRS doesn't do well is account for the fact that the Rams had a multitude of OL injuries, as well as injuries to Bulger & Jackson. That affects their projected wins for the '08 season (5.9). Because they play in the weak NFC West, I see them at 7-9.

Another drawback of the SRS--the Pats scored several times at the end of games where it didn't really matter. This inflated their SRS for this year. It would be interesting to see their SRS WITHOUT the garbage time scoring (for that matter, how would garbage time scoring of all types affect ALL teams' ratings? I know it might take a while, but it would be interesting to do--at least for the 07 football year.)

You're saying that the Seahawks and Cardinals had easy schedules last year, so they're likely to be overvalued.

But you should consider that they're going to have more easy schedules in the future. If you were feeling ambitious, you'd factor strength of schedule into your win projections. This is doable in the framework of the SRS.

1. Do a regression for SRS from year N to year N+1 to project 2008 simple ratings for all the teams.

2. Feed that data into the 2008 schedules to project win totals.

Do that, and you'll probably get a 9-7 Seattle and an 8-8 Arizona, or something of the sort. Sure, these are mediocre teams that get better records than they should because of an easy schedule, but I don't think that's going to catch up to them in 2008.

Hey Brian,

I didn't expect SOS to have any significance, but I wasn't sure. JKL has shown me some convincing data to suggest otherwise. And of course, it's possible that either: 1) the SOS weight should not be 1:1 relative to the points differential; or 2) the proper adjustment could be a multiplication of points differential and SOS, instead of addition/subtraction.

You hit on three points there, Joseph. Let me go one by one.

1) The Rams SRS score doesn't know that Bulger and Jackson were hurt, or that their OL was injured. However, the regression analysis does. After all, most teams that have a really bad season have some significant injuries. I didn't say it in the post, but the numbers above were derived from the 1993-2007 NFL seasons. Only six teams had a -12.5 SRS rating or lower over that span. The 2000 Cardinals (7-9 in '01), the 2000 Browns (7-9 in '01), the '99 Browns (3-13 in '00), the '04 49ers (4-12 in '05), the '98 Eagles (5-11 in '99), and the '03 Cardinals (6-10 in '04). That's an average of 5.33 wins the next season. I'm sure those teams suffered a bunch of injuries, too, but none of them were even .500 the next season.

2) As for your thoughts on the weak NFC West, I'll address that in the next comment.

3) The Patriots. Addressing and defining garbage time is pretty difficult to do. With the Patriots in particular, I actually think keeping the garbage time in is pretty useful. Put it this way -- wouldn't everyone bet on NE winning 12, 13, 14, 15 or 16 games this year, over fewer than 11? To project 11.3 wins for NE seems conservative, to me.

Theoretically, if we want to exclude garbage time, we should also give a team a bonus or penalty for "ending" the game early. That is, if we want to say that all points scored by NE after being up 35-0 shouldn't count, well if they are up 35-0 at halftime, they should get credited for only half a game. We would then pro-rate the score to 70-0. That seems more fair to me than giving them 35-0 for a full game. But I think leaving the garbage time data in is preferable to either of the above options.
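The pro-rating in that hypothetical is just scaling the partial-game score by the fraction of the game played; a trivial sketch (the function name is mine, and the 35-0 halftime score is the hypothetical from above):

```python
def prorate_score(points_for, points_against, fraction_played):
    """Scale a partial-game score up to its full-game equivalent."""
    return (points_for / fraction_played, points_against / fraction_played)

# Up 35-0 at the half (half the game played), credited as a full-game 70-0.
print(prorate_score(35, 0, 0.5))  # → (70.0, 0.0)
```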

Yaguar,

If you go here (http://www.pro-football-reference.com/blog/?p=17), Doug has a post on predictive strength of schedule. In other words, I don't think we should assume that Seattle and Arizona will have easy schedules this year. Sure, they've had them in the past, but the NFC West will eventually get stronger. The correlation coefficient of SOS from year to year was just 0.15, which as you know is pretty weak. There might be an effect there, but it's slight.

6: With a more advanced technique for predicting strength of schedule, like this one, you'd get something a little bit more significant.

All of the stuff we've talked about looks at only one year of data to predict a second year, but generally multiple years are relevant. The NFC West has been bad for a long, long time, ever since the Rams fell apart in 2002. That slightly longer-term history is meaningful. The Jets and Raiders were both 4-12 last year, but the Jets were good the year before, so you trust the Jets more to improve.

Same deal with divisions. The NFC West is just bad, and I'm not going to be the least bit surprised when Seattle skates into the playoffs at 9-7 again.

I had always assumed that because of the way the schedules are set up, first-place teams play more difficult schedules than 4th-place teams because they play the other first-place finishers in their conference. It just dawned on me that this isn't the case -- the 4th-place finisher has to play the first-place finisher in their own division twice, so everything always balances out: 4 games against teams who finished first, second, third, and fourth the year before. Duh! And I guess that doesn't add anything to the discussion unless somebody else was under the same delusion that I was. I just never really THOUGHT about it before. I do understand that not all the divisions are equal (and I'm thankful, it means my 49ers are less likely to go 0-16 😛)

Since the schedules rotate divisions rather than just teams, would it make sense to see if the strength of the division as a whole for N and N+1 has a better correlation? Or would it make any difference at all? I lack the math savvy to intuit the answer without testing it, but my feeling that a division probably changes its overall strength more slowly and predictably, so one could perhaps make a reasonable guess at the relative difficulty of those 8 games against other divisions as a whole even if you can't guess the difficulty of the individual games. I don't even know if that makes sense.

> And I guess that doesn't add anything to the discussion unless somebody else was under the same delusion that I was. I just never really THOUGHT about it before.

I actually never realized that either. Of course, I don't think a schedule based on previous year records is likely to cause too many problems, regardless of how the games are apportioned. Even if you are facing 10 teams that made the playoffs the year before, you might still end up having a relatively easy schedule if the teams in question were either not that good in the first place (1998 Cardinals, 2006 Jets) or were just about to experience a precipitous decline (2006 Chiefs, 2006 Ravens, 2006 Bears).

I would be interested in trying to capture the N+1 SOS factor by adding order of divisional finish to the regression.

So do the teams who finished 4th in their divisions have higher predicted wins than teams that finished 1st, simply because the 1s are playing 1s and the 4s are playing 4s?

Then again, since the NFL went to 8 divisions, divisional schedules only differ by two games - probably not a lot of room for a significant impact.

Did you really have to include the 2006 Jets, Alex? Three teams from that year had lower SRS scores and made the playoffs!!!

MattyP, assuming the divisions are the same, there is no advantage in finishing lower. Fourth place has two extra games vs fourth place finishers in other divisions, but they don't have any games against fourth place finishers in their own conference because it's them. And first place has two extra first place games outside their division, but they never have to play against the first place team in their own division.

But if you finish lower than you morally "should" have, then I suppose that could be an advantage. The Eagles finished fourth but according to the SRS, they were actually second. Assuming they're still a moral second place team this year, then they would in theory face 4 firsts, 2 seconds, 4 thirds, 6 fourths instead of 4, 4, 4, 4. But that's making a lot of assumptions that the league stays static. I'm guessing the effect in a season could be significant but in the long run, it's lost in the noise created by teams just naturally getting better and worse from year to year.

This is a tangent, but have you ever used all your data and wonderful math geekery to try and figure out what causes teams to go from 4-12 to 11-5 in a year or two? Or vice versa... I'm just curious if it's possible to rank a lousy team's potential, not for this year, but a few years down the road. What is the likelihood that the Dolphins will pull a Cowboys and go from 1-15 to 13-3 in 3 years?

"

Did you really have to include the 2006 Jets, Alex? Three teams from that year had lower SRS scores and made the playoffs!!!"Seriously?

I presume the Seahawks were one of those, and DVOA agrees that they were worse, but of course that's the good ol' NFC West phenomenon. Who were the other two? The Giants and Chiefs? Because DVOA thinks both were a lot better than the Jets. I wonder what's causing the DVOA/SRS discrepancy. Maybe they both just played better than they scored.

I've been curious about the SRS and I wanted to try my own calculations based on the 2005 model. Just how many times did you have to recalculate the rankings to arrive at the proper values?

Hey Jimmy,

Some information on the SRS at these two links:

http://www.pro-football-reference.com/blog/?p=37

And somewhat relevant too, since it's the same system just using different inputs:

http://forums.footballguys.com/forum/index.php?s=&showtopic=357385&view=findpost&p=7564269

So I was poking around with a rating system for football teams for the sole purpose of predicting future wins... can't seem to get much past 65% though 🙁

From eyeballing the graph, it seems to suggest that the most dominant team since 1970 was the Bears in 1985, because they happened to be phenomenal when there were no other teams stomping opponents like that. The Colts were very good in 2007, so the Pats weren't breathing rarefied air all by their lonesome. Though the Pats did have the highest peak rating... I'm curious what the SRS says about it. The Pats are 8.1 PPG better than the Colts here -- who was #2 in 1985 when the Bears were crushing?

http://farm4.static.flickr.com/3121/2823666013_650831ed19_o.jpg

There's the picture of the ratings 🙂