
Increased Risk Games Revisited

Posted by Jason Lisk on February 18, 2008

Last July, I posted a lengthy two part look at running back overuse and injury. Toward the end of those posts, I introduced what I will call an educated working hypothesis about the role of workload in running back injuries.

In my opinion, it is not the raw number of carries that matters. I believe the key is the number of higher stress games a back endures over a period of time. I will refer to these as Increased Risk Games (IRG).

Based on my review of the game by game rushing attempts from 1995-2006, the tipping point where "increased risk games" kicked in and we saw increased injury rates in the games that followed was at about 25 rushing attempts in a game.

Now that 2007 is complete and fresh in our minds, I thought I would look back on the past season's running back workloads and injuries to see what new information might be added. The official NFL injury reports from the past season are readily available, so we can cross-reference rushing attempt totals on a weekly basis with the injury reports, giving us more detailed information than simply looking at past seasons' game-by-game data can provide.

Here is what I did. I made a list of all games between weeks 6 and 13 in which a running back had at least 24 official rushing attempts (one less than my guesstimate of 25). Week 6, because it gets us past the start of the season, where injury rates are high and may have more to do with the previous season, injuries carried over from it, or the preseason, as players are still rounding into shape. Week 13, because it puts us four weeks from the end of the regular season. I then looked at how many of those backs appeared on the injury report as probable, questionable, doubtful, or out, or were placed on injured reserve, in the four weeks following that game.

For comparison, I also made a list of all backs who had a game with between 13 and 23 rush attempts during the same period, but did not have a higher workload game in the 3 weeks before or after the game in question, and I looked at injury reports for those players as well.
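
To make the selection rules concrete, here is a minimal sketch in Python. It is not the code used for this study; game_logs is a hypothetical list of (player, week, attempts) records standing in for the game-by-game rushing logs, "higher workload game" is read here as a 24+ attempt game, and the handling of repeat qualifiers described in the next paragraph is omitted.

def select_groups(game_logs, first_week=6, last_week=13):
    """Split weeks 6-13 games into the 24+ attempt group and the
    13-23 attempt comparison group (no 24+ game within 3 weeks)."""
    high, moderate = [], []
    for player, week, att in game_logs:
        if not (first_week <= week <= last_week):
            continue
        if att >= 24:
            high.append((player, week, att))
        elif 13 <= att <= 23:
            # Exclude backs who had a heavier workload game within
            # 3 weeks before or after the game in question.
            nearby_heavy = any(p == player and abs(w - week) <= 3 and a >= 24
                               for p, w, a in game_logs)
            if not nearby_heavy:
                moderate.append((player, week, att))
    return high, moderate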

Some players had multiple games with 24 or more carries within a short period of time; in the chart below, those count only once. However, if the same player had multiple qualifying periods at least 3 weeks apart, I counted those as separate cases. For example, if a back had 25 carries in week 6 and 32 carries in week 11, the two games are viewed independently and reported separately below. Here are the results, sorted by attempt totals in the week in question. Each case is categorized by the worst outcome on the injury report over the next four weeks: if a player was healthy (not on the report) for three weeks and listed as probable for one, he is in the P (probable) column; if a player was questionable one week and then out the next, he is in the O (out) column. H=healthy, P=probable, Q=questionable, D=doubtful, O=out, I=out for the rest of the year or placed on IR within the next four weeks.

rush att       no.      H      P      Q      D      O      I
================================================================
13-23          50       25     8     11      0      4      2
24-27          17        8     5      1      0      2      1
28+            17        4     5      1      0      3      4
================================================================

And for those who don't want to do the quick math, here are some percentages (I'll refer to the three groups as the moderate (13-23 carries), intermediate (24-27 carries), and high (28+ carries) risk groups):

Appeared on the Injury Report at least once in the next four weeks:
Moderate: 0.50
Intermediate: 0.53
High: 0.76

Listed as Out at least once (or placed on IR) in the next four weeks:
Moderate: 0.12
Intermediate: 0.18
High: 0.41

Suffered a Season-Ending Injury within the next four weeks:
Moderate: 0.04
Intermediate: 0.06
High: 0.24
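
For anyone who wants to check the quick math, here is a small Python sketch that reproduces the percentages above straight from the table counts (the column letters match the legend above); it is purely illustrative.

counts = {
    "moderate (13-23)":     {"H": 25, "P": 8, "Q": 11, "D": 0, "O": 4, "I": 2},
    "intermediate (24-27)": {"H": 8,  "P": 5, "Q": 1,  "D": 0, "O": 2, "I": 1},
    "high (28+)":           {"H": 4,  "P": 5, "Q": 1,  "D": 0, "O": 3, "I": 4},
}

for group, c in counts.items():
    n = sum(c.values())                  # number of cases in the group
    on_report = (n - c["H"]) / n         # on the injury report at least once
    out_or_ir = (c["O"] + c["I"]) / n    # listed as out, or placed on IR
    season_ending = c["I"] / n           # season-ending within four weeks
    print(f"{group}: n={n}, report={on_report:.2f}, "
          f"out/IR={out_or_ir:.2f}, season-ending={season_ending:.2f}")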

I should point out the "moderate group" is over-populated with backs recently coming back from injury, and some may continue to appear on the injury report. For example, Adrian Peterson (of Minnesota) appears in the high risk group before his knee injury (and counted as an "out" against that group), and then appears in the moderate risk group after returning from injury. He was listed as questionable the week he returned to play (and qualified for the moderate risk group), and he still appeared on the injury report as "probable-knee" the next week (thus not qualifying as healthy under my method), even though the injury did not get worse with the reduced workload. If anything, those numbers probably understate the difference that moving from a "moderate" risk number of carries to an "intermediate" risk number of carries would have on the same back.

The "High Risk" guys who suffered a season ending leg injury within 4 weeks of a 28+ attempt game were Larry Johnson, Cedric Benson, Justin Fargas, and Willie Parker. The "Intermediate Risk" guy was Derrick Ward, who had 24 attempts, and had been nursing leg injuries before that game. The "Moderate Risk" guys were Priest Holmes and Ronnie Brown. In my earlier post, I noted that no running back who played in his team's first 6 games, and had no high workload games, had suffered a season-ending leg injury within the next 2 games. Well, Ronnie Brown changed that, as his two highest games were 23 carries each. I'll note that like Ward, he was listed on the injury report as "probable-foot" the week he went down with the knee injury. If anything, I think the 2007 serious injury rate for backs with moderate workloads is higher than normal, because its not every year that a 34-year old back is going to attempt an ill-fated comeback and then have a coach that has no qualms about immediately giving him 20 and 19 carries when he has not played for two years.

A year and a half ago, Doug wrote a post entitled Questions (without answers) about running back workloads. In it, he asked a lot of good questions. I'm going to try to go through some of those questions now, evaluate where we are, and follow it up with some ideas and questions of my own going forward.

Question 1 - forgetting the empirical evidence for a minute, what is the theoretical basis of these ideas?

Okay, I am not a doctor. I do have a biology degree and took a fair amount of physiology courses, which, along with 50 cents, probably qualifies me for a can of soda. So I'll tread lightly here, and I am interested in input from others. For now, I'll just say that my theory is that it has something to do with running fatigued and carrying 300-lb linemen, after spending a whole game carrying 300-lb linemen. If a player is not getting an adequate oxygen supply to the tissue in his legs, it may cause damage. That damage may not appear right away in the form of a significant injury until the tissue is subjected to continued use, and the injury may appear random (a knee gives way on a sweep, for example). The player may also overcompensate for the original damage or discomfort, leading to poor mechanics and an acute injury in another body part (such as a sore foot changing how cuts are made, leading to a knee injury).

Also, should the injury question be separated from the loss-of-effectiveness question?

I tried to separate these out, because injuries (at least those that cause missed time) are easy to spot, while loss of effectiveness is more debatable. Is the back ineffective because of other factors, such as changes in team ability, or is it because the back himself is physically different?

However, at the end of Overuse and Injuries, Part II, I identified the five backs with the biggest risk of injury at the start of 2007. Here is what they did last season:

player            GP     Att    Yards     YPC     TD
============================================================
Steven Jackson    12     237    1002     4.23      5
Shaun Alexander   15*    231     782     3.39      5
Rudi Johnson      11     170     497     2.92      3
Larry Johnson      8     158     559     3.54      3
Ladell Betts      16      96     350     3.65      1
TOTAL             62*    892    3190     3.58     17
============================================================

*includes 2 playoff games for Alexander

And yes, two of those players suffered immediate leg injuries (as did others not on my list), and another broke a foot in game 8. But what was even more noticeable about these players was the loss of effectiveness even before any injury appeared: I was trying to project injuries, and ended up identifying a group of five players whose decline was even starker from a performance standpoint. For contrast, all other running backs in the 2007 regular season and playoffs combined for 50,580 yards on 12,069 attempts, an average of 4.19 yards per attempt. Only Jackson finished near that mean. Of the sixty-six running backs who had at least 60 rushing attempts in 2007, Rudi Johnson was dead last in yards per carry, and LJ and Shaun Alexander joined him in the bottom 10. This is not simply regression to the mean.

In looking at these issues, we need to try to separate them because it may not be right to assume they are always tied together, but there may be a fair amount of overlap between these two questions.

Question 2 - Could we learn something by looking at not only raw numbers of carries but also the distribution of those carries?

Yes. This is where the recent history of increased risk games comes in. I think that the recent distribution of carries matters, while historic workload doesn't matter all that much (unless it already caused a breakdown). For example, LaDainian Tomlinson had 63 rushing attempts in his first 2 NFL starts in 2001. He also averaged 33.5 rushing attempts per game his final year at TCU, by far the highest average for any back selected in the first two rounds of the NFL draft over the period for which Pro-Football-Reference has college stats. If we are in 2007 and he has not had a breakdown, wouldn't it be safe to assume that those carries did not in fact cause harm? And thus, aren't career rush totals relatively worthless, and might they, for "low mileage" backs, even give teams a false sense that they can work those backs more heavily with less risk?

This view would be consistent with the research done by Doug in his three part series on running back deterioration.

Question 2A - if it is true that the offseason isn’t a long enough time to heal whatever damage is done during the year, then should we weight late-in-the-season carries heavier than early-in-the-season carries?

Yes, if what we are looking at is the risk of injury the next year. The immediate workload is what appears to matter. I believe you can almost throw out anything before that, so long as there was no injury or loss of effectiveness.

Question 2B - is a consistent N carries per game less damaging than the same number of carries accumulated in a less consistent way? That is, is a 30-carry game twice as damaging as a 15-carry game? Or more so?

It depends on what N equals. If N is greater than 25, then it is more damaging to be consistent than to have a few weeks off. If N is less than 22, then that is probably better than going 30, 14, 30, 14. At this time, looking over my prior research as well as what is posted here, I would assess the risk of a serious leg injury as going up by as much as four to five times (from about 5% to about 25%) in going from 20 carries to 28. I'll also slightly revise my previous estimate of what qualifies as an increased risk game, based on this year's info and another look at the earlier data. The risk does begin to increase at around 25 attempts, but it shoots up far more dramatically in the 28-30 attempt range, so 25 attempts is not as bad as 28, on average. Interestingly, 35+ attempt games do carry a high risk, but no higher than, say, games with 28-30 attempts.
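
To make those rough figures concrete, here is a purely illustrative step function in Python; the breakpoints and values simply echo the 2007 season-ending rates in the table above and my rough 5%-to-25% estimate in this paragraph, and should not be read as fitted parameters.

def approx_serious_leg_injury_risk(attempts):
    """Very rough, illustrative chance of a serious leg injury in the
    following four weeks, keyed to single-game rushing attempts."""
    if attempts <= 24:
        return 0.05   # moderate workloads: near the baseline rate
    elif attempts <= 27:
        return 0.06   # risk begins to creep up around 25 attempts
    else:
        return 0.24   # 28+ attempts; 35+ does not appear to add much more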

And finally, the last question:

In other words, say the following factors contribute to the chance of Portis getting hurt this year: his own unique physical characteristics (genetics, etc), his workload from last year, and dumb luck. What are the relative weights on those three (or however many) factors?

In the comments to Doug's post, I answered 1) dumb luck, 2) genetics, and 3) workload. I'd like to change my answer. I obviously don't know what the relative weights are, but I am fairly certain that, at this point, a team or coach can only control and account for one of those things with any reasonable certainty.

Luck is certainly involved, but how do we quantify it? I have no doubt that every player is unique, and that genetics plays a role in what one athlete can accomplish versus another. I just don't think at this point I could predict with any certainty who is better able to handle a workload. How many would have predicted that Chester Taylor, Jamal Lewis, Kenny Watson, and Thomas Jones would be the four backs (out of 17) who had a 28+ carry game but never appeared on the injury report thereafter in 2007? And a player who has broken down in the past may not break down now: Clinton Portis had a two-game stretch of 66 attempts and only landed on the injury report once, as probable, while backs with no injury history went out with injuries after carrying a similar number of times.

Has Tomlinson been lucky or just blessed genetically? How about Carnell Williams, who started his career in similar fashion, but then immediately suffered an overuse-type foot injury, probably came back from it too soon, and now, after a knee injury, faces an uncertain career? Unlucky, or bad genetics? How do we know for sure?

Which leaves the one thing a team can control: a running back's usage in a game. Of course, this might require teams to plan ahead for various game scenarios, and to construct their rosters in a manner consistent with such a goal. I don't even think teams need to go to a "running back by committee" per se, where multiple backs evenly split the carries. Feature running backs have been able to handle a consistent 18-23 carries a game without getting injured any more than other backs. I think teams just need to be willing to give other running backs on the roster the additional carries if the team is going to run it 30+ times. Rather than an across-the-board "committee" approach, teams should use a "committee" approach in select specialized areas, and utilize the most talented back more fully in others.
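
As a toy illustration of that last point, the rule can be as simple as capping the feature back near the threshold discussed in this post and handing the overflow to the other backs (the 25-carry cap here is my rough number, not any kind of established guideline):

def split_carries(planned_team_rushes, feature_back_cap=25):
    """Give the feature back his carries up to the cap; the rest of the
    planned rushes go to the other backs on the roster."""
    feature = min(planned_team_rushes, feature_back_cap)
    others = planned_team_rushes - feature
    return feature, others

# e.g. split_carries(34) -> (25, 9): the starter stays out of the
# highest-risk range and the backups absorb the extra work.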

In conclusion, I want to add that I have purposely avoided using a word such as abuse, which may have ethical implications, instead focusing on overuse, which deals only with the physical. I am not suggesting that a team should never use a back for a high number of carries in a game. Running backs are grown men, and presumably there is some benefit (in exchange for decreased health and longevity) in getting more carries, whether it be fame, bigger potential paydays, or something else. However, I do not think coaches fully grasp how much the risk is elevated, which they would need to in order to properly weigh the risk vs. reward of a given situation. Many of the high workload situations that I've looked at that resulted in serious injury were unnecessary, because the reward did not justify the risk: generally the team could have shifted carries without appreciably reducing its win expectation, while the extra carries increased the injury risk dramatically. I do think the number of high carry games could be dramatically reduced without significant impact on the win expectation in a particular game (and it may increase win expectation overall by making it more likely the running backs are at full strength).

With that, I'll leave off with some new questions going forward.

1) If games of 25 or more carries, and particularly 28 or more, increase the injury risk, are multiple increased risk games in a short span simply separate, independent events that each carry their own increased injury risk, or is there a cumulative/additive effect, much like steps in a chemical reaction, each of which may be necessary to produce the final result? (See the small illustration after this list.)

2) Do players who are playing with an existing injury have a lower threshold (in terms of attempts) at which the injury rates begin to rise? If so, what is it, and how much riskier is it to play a back with an existing lower body injury?

3) If it is true (as it appears to me based on a limited sample size) that backs who are 24 or under tend to get hurt at higher rates than backs ages 25 to 28 when subjected to similar work rates, is this because some of the young backs are simply unable to handle the workload, and would get hurt even if they did not get their first high workloads until their mid-20's (in other words, they are "weeded out" before they get to 25), or is it because there are physical maturity differences that would allow the same back, at 25 or older, to handle the workload?

4) We've seen that career workload doesn't seem to matter, and that career prior carries are a much smaller factor than age in predicting future performance, but how far back in time do carries matter in determining injury risk? Two games, four games, eight games, a season, more? Is the answer different depending on whether we are talking about injury versus loss of effectiveness?
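
On question 1, here is a quick numerical illustration of the distinction, using the 2007 high-group season-ending rate as a stand-in probability: if two increased risk games in a short span were truly independent, their risks would simply compound; a cumulative effect would mean the second game is riskier than the first because of damage already done.

p = 0.24                                  # high-group season-ending rate from above
two_independent_irgs = 1 - (1 - p) ** 2   # about 0.42 if the games are independent
# Under a cumulative/additive model, the second game's risk would itself exceed p,
# so the combined figure would be higher than this independent-events number.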
