Posted by Chase Stuart on November 10, 2009
Yesterday, I argued that defenses shouldn't be thought of as good against the pass or good against the run; defensive statistics should be considered fluid measures, because most defenses can choose how they'll let the offense beat them.
So is this true? Is this even possible to measure? If defensive disparities (i.e., being very good against the run but very bad against the pass, or vice versa) are random, then we shouldn't see teams consistently being good (or bad) in just one area of defense. One way to measure this is to break each team's season into two half-seasons. To avoid injury issues, I split teams into "odds" and "evens": games 1, 3, 5, 7, 9, 11, 13 and 15 go into one group, and games 2, 4, 6, 8, 10, 12, 14 and 16 go into the other. We would expect a metric like quarterback yards per pass to be relatively consistent in "odd" and "even" games, because averaging more (or fewer) Y/A than average is a repeatable skill; but we wouldn't expect teams that did well on opponent missed field goals in even games to do well again in odd games, because that's (generally) not a repeatable skill. So, are defensive disparities narrative statistics that merely describe what happened, or are they repeatable and predictable?
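For anyone who wants to replicate the split, here's a minimal sketch in Python. The per-game numbers below are made up for illustration; any per-game stat (YPC allowed, ANY/A allowed, points allowed) works the same way.

```python
# Split a team's 16 games into "odd" and "even" halves, as described above.

def split_odd_even(games):
    """Return (odd_games, even_games) using 1-indexed game numbers."""
    odds = [g for i, g in enumerate(games, start=1) if i % 2 == 1]
    evens = [g for i, g in enumerate(games, start=1) if i % 2 == 0]
    return odds, evens

# Hypothetical yards-per-carry allowed in each of 16 games
ypc_allowed = [3.8, 4.2, 3.5, 4.9, 4.1, 3.7, 4.4, 3.9,
               4.0, 4.6, 3.6, 4.3, 3.8, 4.1, 4.5, 3.7]
odds, evens = split_odd_even(ypc_allowed)
print(len(odds), len(evens))  # 8 8
```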
I looked at every team-season since 1988, giving us 21 full seasons worth of data. I measured run defense by yards per carry allowed (relative to league average) and pass defense by adjusted net yards per attempt allowed (relative to league average). I also noted the raw number of points allowed by each defense (relative to league average). This gave me run, pass, and scoring defense grades for each team half-season in the study. If defensive disparities were real and consistent, we should see them for teams in both the even and odd splits. If defensive disparities were merely a mirage, and defenses really force offenses to pick their poison, then there should be no correlation between the even and odd splits.
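For reference, adjusted net yards per attempt folds touchdowns, interceptions, and sacks into a single per-attempt number. Here's a sketch of the standard formula, applied to hypothetical half-season totals:

```python
# Adjusted net yards per attempt (ANY/A), the pass-defense metric used above.
# Standard formula: (pass yards + 20*TD - 45*INT - sack yards) / (attempts + sacks)

def any_per_attempt(pass_yards, pass_tds, ints, sack_yards, attempts, sacks):
    return (pass_yards + 20 * pass_tds - 45 * ints - sack_yards) / (attempts + sacks)

# Made-up half-season totals for one defense
allowed = any_per_attempt(pass_yards=1800, pass_tds=10, ints=8,
                          sack_yards=120, attempts=270, sacks=18)
print(round(allowed, 2))  # 5.28
```

Subtracting the league-average ANY/A for that season from this number gives the "relative to league average" figure used throughout.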
The correlation coefficient between yards per carry allowed in odd and even weeks is 0.28, indicating a small (but legitimate) correlation. This means teams that played the run well in one eight-game split tended to do pretty well against the run in the other split as well. Pass defense -- measured by ANY/A -- saw a slightly lower correlation at 0.21. The strongest correlation of our three variables was in points allowed, at 0.33. None of these correlations are very strong, but none are insignificant, and all are in the positive direction we would expect.
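The split-half correlations above are just Pearson's r computed across team-seasons, with one metric value per split. A sketch, using made-up relative-to-average YPC figures for a handful of team-seasons:

```python
# Split-half reliability: Pearson correlation between a metric in odd games
# and the same metric in even games, across many team-seasons.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Each entry: one team-season's YPC allowed relative to average (hypothetical)
odd_ypc  = [-0.5, 0.3, 0.1, -0.2, 0.4, -0.1, 0.2, -0.3]
even_ypc = [-0.2, 0.1, 0.3, -0.1, 0.2, 0.0, -0.1, -0.2]
print(round(pearson_r(odd_ypc, even_ypc), 2))
```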
However, that just answers half the question; it appears that defensive subgroups are slightly predictable. But are they predictable because the subsets are real, or because a greater force is driving both metrics? If a team was good against the run in both splits because that team's defense was simply very good, that's not inconsistent with my theory. I checked the correlation coefficient between yards per carry allowed in one half-season and adjusted net yards per pass allowed in the other half-season. It was 0.21. This means that if we're predicting how good a run (or pass) defense will be in the future, we can be nearly as accurate by using that team's pass (or run) defense in prior games as by looking at their prior performance against the run (pass)! The correlation coefficient between YPC and points allowed was 0.23, and the CC between ANY/A and points allowed was 0.26. So in terms of predicting run defense for one half-season, knowing the quality of run defense in the other split was barely more useful than knowing how the defense did against the pass or in points allowed.
I ran two other tests. I looked at all teams that, in one eight-game stretch, averaged 0.5 fewer YPC than average and 1.5 more ANY/A than average; i.e., great run Ds and bad pass Ds. Twenty teams fit this description, and on average, they allowed 0.69 fewer YPC than average to opposing runners but 1.88 more ANY/A than average to opposing passers. In the other eight-game split, those same 20 teams allowed just 0.11 fewer YPC than average and only 0.31 more ANY/A than average.
Going the other way, there were 25 teams that, in one eight-game split, were very strong against the pass (allowing at least 1.5 fewer ANY/A than average) but terrible against the run (allowing at least 0.5 more YPC than average). On average, these teams held opposing passers to 2.24 ANY/A below average but got gashed by opposing runners to the tune of 0.72 more YPC than average. In the other eight-game split? These "finesse" Ds, as they were undoubtedly labeled, allowed just 0.22 more YPC than average against the run and just 0.34 fewer ANY/A than average against the pass.
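The selection in the two paragraphs above can be sketched as a simple filter-then-average: pick the teams that were extreme in one split, then look at those same teams in the other split. The records below are hypothetical (all values relative to league average):

```python
# Regression to the mean in extreme groups: select team-halves that were
# extreme in the odd split, then average the SAME teams in the even split.

teams = [
    {"name": "A", "odd_ypc": -0.7, "odd_anya": 1.9,  "even_ypc": -0.1, "even_anya": 0.4},
    {"name": "B", "odd_ypc": -0.6, "odd_anya": 1.6,  "even_ypc": 0.0,  "even_anya": 0.2},
    {"name": "C", "odd_ypc": 0.2,  "odd_anya": -0.3, "even_ypc": 0.1,  "even_anya": -0.1},
]

# "Great run D, bad pass D" in the odd split, per the thresholds in the text
group = [t for t in teams if t["odd_ypc"] <= -0.5 and t["odd_anya"] >= 1.5]

def mean(vals):
    vals = list(vals)
    return sum(vals) / len(vals)

print(round(mean(t["odd_ypc"] for t in group), 2))   # -0.65: how extreme they looked
print(round(mean(t["even_ypc"] for t in group), 2))  # -0.05: the other split
```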
There's the typical regression to the mean at work in the two preceding paragraphs, but I think there's also pretty strong evidence that defensive coordinators adjust and tinker with their defenses in order to fortify the weakest link. Do you guys agree?