This is my theory on "why rushes matter and receptions do not" in measuring running back workload and future injury risk.
I often see people complain that measuring workload by excluding receptions is inappropriate. I think that is debatable, because it is not an apples-to-apples comparison: rush attempts are far more likely to result in tackles (rather than runs out of bounds), in tackles involving multiple tacklers, and in tackles involving really big defenders.
But setting that aside, I think there is a far more significant reason why rush attempts matter, and receptions do not, when measuring workload. Take a look at these real games turned in by NFL running backs.
Now picture those games in your head, based only on the running back statistics. What type of backs are they? And more importantly, how did the game proceed?
These two games occurred almost exactly thirteen years apart. Running Back A is Ickey Woods in the AFC Championship game played on January 8, 1989 against the Buffalo Bills, in a game that the Bengals won to advance to the Super Bowl, 21-10. The Bengals led or were tied the whole game, and Woods' fourth quarter touchdown plunge sealed the victory. Running Back B is Priest Holmes in the final game of the 2001 season (played January 6, 2002) against the Seattle Seahawks. Holmes got some carries early, but the Chiefs fell behind 14-0 in the second quarter, and the Chiefs would play from behind the rest of the game, ultimately losing 21-18.
Which leads me back to why rush attempts matter in measuring workload, and receptions do not. It has to do with correlation with winning: high rush attempt games are highly correlated with teams that won the game. Why should that matter? Because teams that are winning tend to run the ball heavily late. The distribution of runs is not uniform throughout the game. The difference between a 20 carry back and a 28 carry back may be 8 carries, but it is probably not 2 extra carries every quarter. The 28 carry back probably has a quarterly distribution more like 7-5-6-10, while the 20 carry back is more uniform. The high workload rushing attempt guy, then, is getting a greater percentage of his carries in the fourth quarter, when he may be tired. Those carries are more likely to come consecutively, when his team wants to run clock and the other team knows they are going to run the ball. Running the ball is a risk every time; running the ball fatigued is a greater risk to cause damage that will show up in the near future.
Let's confirm that my Ickey Woods/Priest Holmes example is not just an isolated one. Here, for example, are the winning percentages for all running back games since 1960, where a running back had between 28 and 30 touches (rushes + receptions), sorted by the rush attempt percentage for those touches:
%rushes   total   win pct
====================================
 >90%      593     0.823
 >80%      460     0.726
<=80%      128     0.539
====================================
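The bucketing behind that table can be sketched in a few lines. The game records below are invented stand-ins for the real game-finder output, and I am reading the middle row as the 80-90% band, since the three rows are presumably mutually exclusive:

```python
# Bucket running back games by the share of touches that were rushes,
# then compute win percentage within each bucket.
# All sample data here is hypothetical, for illustration only.

def rush_pct_bucket(rushes, receptions):
    """Label a game by the rush share of total touches."""
    pct = rushes / (rushes + receptions)
    if pct > 0.90:
        return ">90%"
    if pct > 0.80:
        return "80-90%"
    return "<=80%"

def win_pct_by_bucket(games):
    """games: list of (rushes, receptions, team_won) tuples.

    Returns {bucket: (number of games, win percentage)}.
    """
    totals = {}
    for rushes, receptions, won in games:
        bucket = rush_pct_bucket(rushes, receptions)
        wins, n = totals.get(bucket, (0, 0))
        totals[bucket] = (wins + int(won), n + 1)
    return {b: (n, wins / n) for b, (wins, n) in totals.items()}

# Hypothetical 28-30 touch games: (rushes, receptions, team_won)
sample = [(28, 0, True), (27, 2, True), (25, 3, False), (22, 7, False)]
print(win_pct_by_bucket(sample))
```

Run over the full game-finder export instead of the toy sample, this reproduces the win percentages in the table above.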
Or we could look at it another way, by holding the rush attempts constant but seeing how the number of receptions changes the winning percentage. Here are all backs who had exactly 25 rushes, sorted by number of receptions:
recept.   total   win pct
====================================
0 to 1     228     0.803
2 to 4     218     0.766
5 plus      60     0.683
====================================
So, we see that the percentage of rushing touches is an indicator of whether the back played in a win (and presumably, thus, with the lead late). Thus, some of those 25 carry backs who had a lot of receptions may actually have had a less harmful game, if they were not running the ball directly into the teeth of the defense repeatedly in the fourth quarter.
The original idea for why attempts matter more than receptions was planted by this post by Chase Stuart a few years ago. Here's what Chase said then:
I don't think there's anything groundbreaking in the data, although it's nice to get some empirical evidence. If a team is giving its RB thirty-five or more carries in a game, chances are that: 1) that team is winning and/or going to win; and 2) it's a pretty close game. Thirty-three of the 59 games were decided by seven points or less, and only six involved 20-point victories. It makes sense that you'd only keep riding your star RB if it's a close game.
And the "keep riding your star RB" line is why I think that when we measure workload in an effort to study future injury risk, it's rush attempts that matter and receptions are extraneous information. But I'm a little slow on the draw, and it's taken me a while to sit down and formulate my thoughts. Now that we have this great game finder index and individual game data back to 1960, I figure it's time to dig back into the issue. After all, you don't want to hear any more of my musings, you probably want to see some hard data.
Well, if it's getting a high workload while icing a lead that matters most in increasing injury risk, then we should see that backs who are the exception, and got a high workload rushing game while losing, stay healthier than their similar workload comrades on winning teams. And we can sort that out using the game finder. In this first case, I took all backs who had 25 or more carries in a playoff game since the 1977 post-season (hence, year N+1 would be 1978 or later). I sorted them by whether the back got those high carries in a game his team lost or won. The reason I settled on 25 before I began the research and actually looked at the data, by the way, was to be consistent with this series of posts from two years ago and this one from after the 2007 season.
81 different running back postseasons appear on this overall list. Some backs appeared on the list with more than one qualifying high carry game. If a back had a 25+ carry game in both a loss and a win in the same postseason, he was placed in the "win" group. Backs in the loss group had 25+ carries only in games their team lost that postseason.
I list the number of players in each group (No), average games played the next year (GP), the percent of backs in each group who played every game the next season (All), the percent who missed at least half the games (M-8+), and the percent who missed at least 3/4 of the games the next season (M-12+). Here is a summary of the results:
Type    No     GP    All   M-8+  M-12+
WIN     67   12.3   0.27   0.18   0.13
LOSS    14   14.9   0.64   0.00   0.00
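The summary columns can be computed directly from each back's games played the following season (a 16-game schedule for most of this era). The games-played numbers below are invented, just to show the arithmetic:

```python
def summarize(games_played, season_len=16):
    """Summarize next-season availability for a group of backs.

    games_played: list of games each back played the following season.
    Returns (No, avg GP, share playing all games,
             share missing 8+ games, share missing 12+ games).
    """
    n = len(games_played)
    avg_gp = sum(games_played) / n
    all_games = sum(1 for g in games_played if g == season_len) / n
    miss8 = sum(1 for g in games_played if season_len - g >= 8) / n
    miss12 = sum(1 for g in games_played if season_len - g >= 12) / n
    return n, avg_gp, all_games, miss8, miss12

# Invented example: four backs' games played the next season
print(summarize([16, 16, 12, 4]))  # -> (4, 12.0, 0.5, 0.25, 0.25)
```

Feeding in the actual next-season games played for each group yields the No, GP, All, M-8+, and M-12+ columns above.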
Okay, so far, it seems like the high carry games in wins might be more costly than in losses, but we only have 81 cases. Let's look at the final week of the regular season, same methodology.
Type    No     GP    All   M-8+  M-12+
WIN    108   12.9   0.43   0.15   0.08
LOSS    22   14.0   0.55   0.05   0.05
Similar story here, though not as pronounced. The one guy who missed over half the season after having a high carry game following a loss, by the way, was a dude named Scott Lockwood, who had 35 career carries with 30 coming in one game. A check of that game shows that the loss in question was in overtime and the Patriots led the entire game.
Let's pool that data, clean it up and eliminate duplicates (guys who appear on multiple lists). If a guy had 25+ carries in a loss in one group, and 25+ carries in a win in another, he was counted in the "win" group for pooled data. If a guy appears in both the week 16 and playoff groups with 25+ carries in a win, I removed the duplicate so he was only counted once (there were no duplicates on the loss lists). Here are the results:
Type    No     GP    All   M-8+  M-12+
WIN    159   12.7   0.37   0.16   0.09
LOSS    31   14.2   0.58   0.03   0.03
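The pooling rule described above, where membership in any win list trumps membership in a loss list and duplicates collapse to one entry, can be sketched like this (the player names are made up):

```python
def pool_groups(win_lists, loss_lists):
    """Pool several win/loss player lists into two disjoint groups.

    A player appearing in any win list goes to the pooled WIN group,
    even if he also appears in a loss list; duplicates count once.
    """
    wins = set()
    for group in win_lists:
        wins.update(group)
    losses = set()
    for group in loss_lists:
        losses.update(group)
    return wins, losses - wins

# Made-up example: playoff and final-week lists
playoff_win = ["A", "B"]
week_final_win = ["B", "C"]          # "B" is a duplicate, counted once
playoff_loss = ["D"]
week_final_loss = ["A", "E"]         # "A" also had a win game, so he pools as WIN
win_group, loss_group = pool_groups([playoff_win, week_final_win],
                                    [playoff_loss, week_final_loss])
print(sorted(win_group), sorted(loss_group))  # -> ['A', 'B', 'C'] ['D', 'E']
```

Using sets makes both the dedup and the win-trumps-loss rule fall out naturally.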
We are dealing with only 31 guys in the pooled Loss group, but they stayed healthier than their winning counterparts with similar rush totals, and the Loss group's early season injury rates are in line with previous research on running backs with moderate starting rush totals at the end of the previous season. The difference between the groups is mostly due to the really early serious (mostly knee) injuries from the high workload, winning game group.

Going back to 1978, several other notable injuries appear from this group. Curt Warner was the old school Jamal Lewis: Warner and Lewis were by far the most heavily worked rookie running backs over the last month of the regular season and into the playoffs, and these promising runners combined for 1 game and 10 carries in their second seasons. William Andrews was a star in the early 1980's for the Falcons; his career effectively ended with an offseason knee injury before the 1984 season, and he closed the 1983 season with a 29 carry game in a win. Former Charger Gary Anderson finished the 1988 season with 60 carries over consecutive wins to close the year, and missed the entire 1989 season with a knee injury.
I don't believe in curses or myths. And I don't necessarily believe in magic numbers, like 25 rush attempts or 28 rush attempts, where a back suddenly becomes more likely to get hurt, even though I've cited those as points where the injury rates increase. And I don't believe that every running back who has a high carry game gets hurt. Far from it, in fact: most go on just fine. It's just that the number of "mosts" is smaller than for other backs used more cautiously.
What I do believe is that running fatigued late in a game has the potential to cause injury, and backs who are playing for winning teams late in a game, and who have already played a full game, are more likely to be pushed over that line. Why do rush attempts matter and receptions do not? It's mostly not because of the different risk in injury occurring on the respective types of plays. It's because of what high rush attempts represent and typically say about how the back was used.
This entry was posted on Tuesday, September 22nd, 2009 at 7:29 am and is filed under Running Backs.