
2007 Standings: Simple Ranking System

Posted by Chase Stuart on June 16, 2008

A couple of years ago, Doug described the Simple Ranking System, which is a basic method of ranking just about anything. You can use it to rank NFL teams, as well as NFL offenses and defenses. Here's a quick description of the system:

To refresh your memory, it’s a system that’s been around forever and is extremely basic compared to some of the other power rating systems out there. But I like it because it’s easy to understand. An average team will have a rating of zero. An above average team will have a positive rating, while a below average team will have a negative rating. Every team's rating is equal to its average point margin plus the average of its opponents' ratings, so the teams' ratings are all interdependent: the Colts' rating depends upon the ratings of all their opponents, which depend upon the ratings of all their opponents (some of which are the Colts), and so on.

The '07 Eagles outscored their opponents by 36 points, or 2.3 PPG. The Eagles had a really difficult schedule, playing nine games against the Patriots, Seahawks, Packers, Giants, Cowboys and Redskins. The Eagles' average opponent was 3.0 PPG better than average, so we can estimate that the Eagles were 5.3 theoretical points (2.3 + 3.0) better than a league-average team.
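To make that interdependence concrete, here's a minimal sketch of how the ratings can be solved iteratively (hypothetical Python, not from Doug's original post; the function name and the game-tuple format are my own assumptions): seed each team with its raw per-game margin, then repeatedly reset every rating to margin plus the average of its opponents' current ratings until nothing moves.

```python
from collections import defaultdict

# A minimal iterative SRS sketch. 'games' is a hypothetical list of
# (team_a, team_b, points_a, points_b) tuples covering a full season.
def simple_ranking(games, max_iter=10000, tol=1e-6):
    margin = defaultdict(float)    # total point margin per team
    opponents = defaultdict(list)  # opponents faced, one entry per game
    for a, b, pa, pb in games:
        margin[a] += pa - pb
        margin[b] += pb - pa
        opponents[a].append(b)
        opponents[b].append(a)

    # Start each team at its average per-game margin (an SRS with SOS = 0).
    mov = {t: margin[t] / len(opponents[t]) for t in opponents}
    ratings = dict(mov)

    # rating = avg margin + avg opponent rating, repeated until stable;
    # re-centering each pass keeps the league average at exactly zero.
    for _ in range(max_iter):
        new = {t: mov[t] + sum(ratings[o] for o in opponents[t]) / len(opponents[t])
               for t in ratings}
        mean = sum(new.values()) / len(new)
        new = {t: r - mean for t, r in new.items()}
        if max(abs(new[t] - ratings[t]) for t in new) < tol:
            return new
        ratings = new
    return ratings
```

The SOS column in the table below is just each team's average opponent rating, i.e., its overall rating minus its own per-game margin. Here are the full 2007 ratings: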

Tm     Ovr Rat    SOS
nwe      20.1     0.4
ind      12.0     0.3
dal       9.5     1.3
gnb       9.0     0.0
sdg       8.8     0.8
jax       6.8     0.1
phi       5.3     3.0
pit       5.2    -2.5
was       4.5     3.0
min       3.8     0.4
nyg       3.3     1.9
sea       1.8    -4.6
chi       1.2     2.1
tam       1.2    -2.8
ten       0.7     0.5
hou       0.0     0.3
cle      -1.1    -2.3
cin      -2.4    -2.1
nor      -2.5    -2.0
det      -3.6     2.6
nyj      -3.7     1.7
ari      -3.9    -4.3
den      -3.9     1.6
buf      -4.1     2.3
kan      -5.5     1.4
car      -5.8    -0.8
oak      -6.0     1.2
bal      -6.7     0.1
mia      -8.4     2.3
atl     -10.6    -0.9
sfo     -11.9    -2.9
ram     -13.0    -2.0

The Patriots' +20.1 is by far the highest rating of all time. The '91 Redskins were at +16.6, the '85 Bears at +15.9, and only three other teams have finished at +15.0 or higher. The Patriots had a point differential of +19.7 per game, which is of course amazing; but incredibly, New England accomplished that against an above-average schedule.

The Eagles, Redskins, Lions, Bills, Dolphins and Bears all faced rough schedules in 2007; I suspect that all those teams will be at least somewhat undervalued in 2008 because of that. Conversely, the Seahawks and Cardinals had incredibly easy schedules last year. Is it even going out on a limb anymore to say that Arizona will be overvalued this year?

The SRS has a lot of uses, including some predictive ability for the next season.

For example, the correlation coefficient between team winning percentage in Year N and team winning percentage in Year N+1 is 0.29; that means there's a mild correlation between winning percentages from year to year. But the correlation coefficient between each team's SRS score in Year N and the team's Year N+1 winning percentage is 0.32, a slight increase. Perhaps more importantly, teams with average SRS ratings but really high or really low winning percentages generally come back to the pack the next season.
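For what it's worth, those correlations are simple to compute; here's a hypothetical Python sketch (the arrays are placeholders standing in for the real team-season data):

```python
import numpy as np

# Placeholder data -- each index is one team-season, pairing Year N values
# with that team's Year N+1 winning percentage.
win_pct_n  = np.array([0.813, 0.625, 0.500, 0.375, 0.250])  # hypothetical
win_pct_n1 = np.array([0.688, 0.563, 0.563, 0.438, 0.375])  # hypothetical
srs_n      = np.array([9.5, 5.3, 0.0, -3.9, -6.7])          # hypothetical

# Pearson r; on the real data the post reports 0.29 and 0.32 respectively.
print(np.corrcoef(win_pct_n, win_pct_n1)[0, 1])
print(np.corrcoef(srs_n, win_pct_n1)[0, 1])
```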

We can also create a formula to predict winning percentage in Year N+1. After running a regression, the best-fit formula to predict wins in Year N+1 is 16 * (0.503 + SRS_Yr_N * 0.01). That means every additional point of SRS is worth just 0.16 wins the following year. That doesn't sound very convincing, even if it may be true. No doubt part of the problem is that injuries the next year are unpredictable and compress the ratings, and the SRS of course ignores any and all off-season changes. That being said, here's a list of the projected number of wins for each team (a quick sanity check of the formula follows the table):

Tm     Proj W
nwe      11.3
ind      10.0
dal       9.6
gnb       9.5
sdg       9.5
jax       9.2
phi       8.9
pit       8.9
was       8.8
min       8.7
nyg       8.6
sea       8.3
chi       8.2
tam       8.2
ten       8.2
hou       8.1
cle       7.9
cin       7.7
nor       7.6
det       7.5
nyj       7.4
ari       7.4
den       7.4
buf       7.4
kan       7.2
car       7.1
oak       7.1
bal       6.9
mia       6.7
atl       6.3
sfo       6.1
ram       5.9
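Those projections fall straight out of the regression formula; here's a quick hypothetical Python check, with ratings copied from the 2007 SRS table above. (The published coefficients are rounded, so one or two table entries may differ by 0.1.)

```python
# Projected Year N+1 wins per the post's best-fit formula:
#   wins = 16 * (0.503 + 0.01 * SRS_Year_N)
def projected_wins(srs):
    return 16 * (0.503 + 0.01 * srs)

# A few ratings from the 2007 SRS table.
for team, srs in [("nwe", 20.1), ("ind", 12.0), ("dal", 9.5),
                  ("phi", 5.3), ("sfo", -11.9)]:
    print(team, round(projected_wins(srs), 1))
# -> nwe 11.3, ind 10.0, dal 9.6, phi 8.9, sfo 6.1
```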

I think this list is a nice way to keep my opinions in check. For example, Cleveland's a huge sleeper team this year, and people probably think: "they went 10-6 last year, added Corey Williams, Shaun Rogers and Donte' Stallworth, and Joe Thomas and Derek Anderson are now in their second years starting. This team is going to be good." That may all be true, but you get a very different sense of the Browns when you start from the assumption that they project as a 7.9-win team, not a 10-win team. They had a nice off-season, but they weren't nearly as good as their record last year. And even their SRS is inflated, because we know Josh Cribbs won't be anywhere near as valuable again this year. On the other side, the Eagles added Asante Samuel and were darn good last year -- it's not unreasonable to think Philadelphia can get 10 or 11 wins.

JKL's done some work suggesting that teams with high SRS ratings because of hard schedules might not be as good as teams with high SRS ratings because of a really high point differential. To the extent that such a phenomenon exists, this regression doesn't detect it. I ran through the numbers a second time using both the SRS rating and the SOS variable to predict Year N+1 winning percentage, and the weight placed on the SOS variable wasn't even close to significant. I also ran the same regression on the 36 teams from 1993-2006 with a +8.0 or better SRS rating in Year N, and once again, the SOS variable was nowhere close to statistically significant. I don't think this ends the discussion, but I'll need to see some more proof and theory in the comments before believing that how you got your high SRS rating matters.
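That two-variable check can be sketched as an ordinary least-squares regression; hypothetical Python below (np.linalg.lstsq stands in for whatever tool was actually used, the arrays are placeholders, and a real significance test would also need standard errors, e.g. from statsmodels):

```python
import numpy as np

# Placeholder team-season arrays (the real study used NFL teams, 1993-2006).
srs_n      = np.array([20.1, 12.0, 5.3, -3.9, -13.0])       # Year N SRS
sos_n      = np.array([0.4, 0.3, 3.0, -4.3, -2.0])          # Year N SOS
win_pct_n1 = np.array([0.688, 0.625, 0.563, 0.438, 0.313])  # Year N+1 win%

# OLS fit: win_pct_n1 ~ intercept + b_srs * SRS + b_sos * SOS.
X = np.column_stack([np.ones_like(srs_n), srs_n, sos_n])
intercept, b_srs, b_sos = np.linalg.lstsq(X, win_pct_n1, rcond=None)[0]
print(b_srs, b_sos)  # the post found b_sos nowhere near significant
```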

I did one last study, performing a regression analysis using Year N SRS rating and Year N SOS rating to predict Year N+1 SRS rating. I thought that maybe SOS was somewhat consistent from year to year, and that using just team winning percentage would hide that. It turns out that the SOS variable was given absolutely no weight. I also checked the correlation coefficient between SOS in Year N and SOS in Year N+1, and it was just 0.15. For the most part, strengths of schedule really aren't consistent from year to year.
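This last pair of checks is the same machinery again with a different target; a compact hypothetical sketch (placeholder arrays, as before):

```python
import numpy as np

# Placeholder arrays, one entry per team across consecutive seasons.
srs_n  = np.array([9.5, 5.3, 0.7, -3.9, -6.7])   # hypothetical Year N SRS
sos_n  = np.array([1.3, 3.0, 0.5, -4.3, 0.1])    # hypothetical Year N SOS
srs_n1 = np.array([6.1, 4.0, 1.2, -2.2, -5.0])   # hypothetical Year N+1 SRS
sos_n1 = np.array([0.2, -1.1, 1.9, 1.0, -0.4])   # hypothetical Year N+1 SOS

# Regress Year N+1 SRS on Year N SRS and SOS; the post found SOS got no weight.
X = np.column_stack([np.ones_like(srs_n), srs_n, sos_n])
print(np.linalg.lstsq(X, srs_n1, rcond=None)[0])

# Year-to-year SOS correlation; the post reports just 0.15 on the real data.
print(np.corrcoef(sos_n, sos_n1)[0, 1])
```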
