Archive for July, 2006

Golden Era of Coaching?

Posted by Chase Stuart on July 30, 2006

Something that's always interested me is how many active coaches own Super Bowl rings at any given time. For example, in 1995 only three active coaches had won a Super Bowl -- Bill Parcells, George Seifert and Don Shula. In 1983, 1984 and 1985, the same six coaches (Tom Flores, Joe Gibbs, Tom Landry, Chuck Noll, Shula and Bill Walsh) were the only guys on the sidelines with Super Bowl rings. Flores had already won in '80 and Walsh in '81, so their repeat titles in '83 and '84 added no new names to the list, and none of the six was fired or retired.

At this point I should probably clarify a few things.

  • I'm counting a Super Bowl winning season as that season, not the following January or February when the team actually won. So Weeb Ewbank gets his ring for coaching the 1968 Jets, and not the 1969 Jets.
  • I'm only counting seasons when the coach already had won the Super Bowl. So the NFL gets no credit because John Madden coached in 1970, since he didn't win the Super Bowl until 1976.
  • I'm not counting the year when the coach first won a Super Bowl. This is just artistic preference, since I think it gets at what I want better. So when listing all the coaches that had won a Super Bowl for the year "2005", I'm not going to count Bill Cowher. If you like, feel free to insert the phrase "the summer of" before each year. That way it's easy to remember that during "the summer of 2005", Bill Cowher had zero rings.
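
If you want to replicate the bookkeeping, here's a minimal Python sketch of the three rules above. The stints and sb_wins structures are hand-entered stand-ins for the real coaching data, not an actual database:

# Minimal sketch of the counting rules; sample data only.
stints = {
    ("Weeb Ewbank", 1968), ("Weeb Ewbank", 1969), ("Weeb Ewbank", 1970),
    ("Hank Stram", 1969), ("Hank Stram", 1970),
    ("Don McCafferty", 1970),
}
# Super Bowl wins are credited to the season, not the following winter.
sb_wins = {"Weeb Ewbank": [1968], "Hank Stram": [1969], "Don McCafferty": [1970]}

def ring_holders(summer):
    # Active that season AND won in an EARLIER season: the year a coach
    # first wins doesn't count, per rule three.
    return sorted(coach for (coach, season) in stints
                  if season == summer
                  and any(win < summer for win in sb_wins.get(coach, [])))

print(ring_holders(1969))   # ['Weeb Ewbank']
print(ring_holders(1970))   # ['Hank Stram', 'Weeb Ewbank']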

As it turns out, the summer of 2006 ranks 1st on the list, along with three other years. As of right now, eight head coaches have won the big game: Bill Belichick, Brian Billick, Bill Cowher, Joe Gibbs, Jon Gruden, Mike Holmgren, Bill Parcells and Mike Shanahan. In (the summers of) 2004 and 2005, you've got the same list except with Dick Vermeil instead of Bill Cowher. Here's the full list of the number of active coaches that had won Super Bowls as of each given season:


    Year Number of coaches
    1967 1
    1968 0
    1969 2
    1970 2
    1971 3
    1972 4
    1973 5
    1974 3
    1975 3
    1976 4
    1977 5
    1978 4
    1979 3
    1980 3
    1981 4
    1982 5
    1983 6
    1984 6
    1985 6
    1986 7
    1987 8
    1988 7
    1989 5
    1990 6
    1991 5
    1992 5
    1993 5
    1994 4
    1995 3
    1996 4
    1997 5
    1998 5
    1999 6
    2000 3
    2001 5
    2002 5
    2003 7
    2004 8
    2005 8

Here's the list from 1987: Mike Ditka, Tom Flores, Joe Gibbs, Tom Landry, Chuck Noll, Bill Parcells, Don Shula and Bill Walsh. So for you trivia buffs, Joe Gibbs and Bill Parcells are the only two coaches active during the four NFL seasons where the most active coaches had won a Super Bowl.

As you'd expect, the numbers are smaller in the early years, and after Lombardi won the first two Super Bowls and left coaching, no active coaches had won a Super Bowl. There was also a dip in 1995 and then in 2000, when Mike Ditka, Jimmy Johnson and Bill Parcells left the coaching ranks.

Five times a head coach won the Super Bowl but then did not coach the next season: Vince Lombardi (1967), Bill Walsh (1988), Bill Parcells (1990), Jimmy Johnson (1993) and Dick Vermeil (1999). Interestingly enough, only Walsh left on top, as Lombardi (Washington), Parcells (New England, New York, Dallas), Johnson (Miami) and Vermeil (Kansas City) all returned with new teams, but didn't win another Super Bowl. That might change, of course, this year in Dallas.

1999 was an interesting year. You've got six head coaches -- Mike Ditka, Mike Holmgren, Jimmy Johnson, Bill Parcells, George Seifert and Mike Shanahan -- that had won Super Bowls. But you also had four other coaches in the league that would go on to win one: Brian Billick, Bill Cowher, Jon Gruden and Dick Vermeil. That might seem like a lot, but there have been times when five active coaches hadn't yet won a Super Bowl but would go on to win one.

In 1995, Belichick, Cowher, Holmgren, Shanahan and Barry Switzer manned the sidelines without Super Bowl rings, but all would one day win one (to the dismay of Browns fans, Belichick was the only one that wouldn't win it with the team he was coaching at the time). Not surprisingly, the early years of the Super Bowl Era also had a quintet of active coaches that would go on to win their first Super Bowl rings. In 1969, Tom Landry, John Madden, Chuck Noll, Don Shula and Hank Stram hadn't yet won Super Bowls. By 1970, Stram had won, of course, but Don McCafferty joined the league in 1970 and promptly won Super Bowl V while coaching the Baltimore Colts. (George Seifert is the only other coach to win the Super Bowl in his first season. Of course, two Hall of Fame coaches -- Shula and Walsh -- left those teams in pretty good shape.)

Here's the full list showing how many coaches had Super Bowl rings at the time, how many had none but would go on to win one, and how many active that season would ever (past or future) win one.


    Year Had Would At any
    Won Win time
    1967 1 4 5
    1968 0 4 4
    1969 2 5 7
    1970 2 5 7
    1971 3 4 7
    1972 4 3 7
    1973 5 2 7
    1974 3 2 5
    1975 3 1 4
    1976 4 2 6
    1977 5 1 6
    1978 4 1 5
    1979 3 3 6
    1980 3 3 6
    1981 4 3 7
    1982 5 3 8
    1983 6 2 8
    1984 6 2 8
    1985 6 2 8
    1986 7 1 8
    1987 8 0 8
    1988 7 1 8
    1989 5 3 8
    1990 6 1 7
    1991 5 2 7
    1992 5 4 9
    1993 5 3 8
    1994 4 4 8
    1995 3 5 8
    1996 4 3 7
    1997 5 3 8
    1998 5 3 8
    1999 6 4 10
    2000 3 4 7
    2001 5 3 8
    2002 5 2 7
    2003 7 1 8
    2004 8 1 9
    2005 8 1 9

For those curious, Bill Parcells was the "1" in 1986 that would go on to win a Super Bowl. After Parcells' Giants won that season, there were no active coaches in 1987 that hadn't yet won who would go on to win. Think about that for a minute. Doug wrote Friday about how many QBs could win the Super Bowl. Teams give chances to new coaches all the time in the hopes that they'll lead them to Super Bowl victory, but none of the new guys that year would ever do it. That changed when Mike Shanahan joined the head coaching ranks in 1989.

That brings us to the question of the day. Is this the golden era of coaching? Eight active coaches have won a Super Bowl, and they have fourteen rings among them. This might not be the best group of coaches the NFL has ever seen. But if it's not, it's pretty close.

15 Comments | Posted in History

Records in close games

Posted by Doug on July 28, 2006

This is just a quick one: teams' records in close games (decided by seven or fewer points) and non-close games:


TM YR Close NonClose
=========================
pit 2005 2- 4 9- 1
sdg 2005 2- 5 7- 2
ind 2005 3- 0 11- 2
nyg 2005 3- 3 8- 2
den 2005 4- 2 9- 1
car 2005 4- 3 7- 2
was 2005 5- 5 5- 1
gnb 2005 2- 8 2- 4
sea 2005 6- 2 7- 1
atl 2005 3- 4 5- 4
ari 2005 2- 5 3- 6
buf 2005 1- 4 4- 7
det 2005 2- 5 3- 6
tam 2005 6- 3 5- 2
dal 2005 6- 5 3- 2
cin 2005 4- 1 7- 4
kan 2005 4- 2 6- 4
cle 2005 4- 5 2- 5
chi 2005 5- 1 6- 4
hou 2005 1- 6 1- 8
mia 2005 5- 3 4- 4
ten 2005 1- 4 3- 8
bal 2005 3- 4 3- 6
phi 2005 5- 5 1- 5
nor 2005 2- 5 1- 8
nyj 2005 2- 4 2- 8
oak 2005 2- 4 2- 8
min 2005 4- 1 5- 6
jax 2005 8- 2 4- 2
stl 2005 4- 4 2- 6
nwe 2005 5- 1 5- 5
sfo 2005 4- 4 0- 8

I've attempted to sort the list so that teams that did relatively better in non-close games are at the top. Though it's not important, the actual formula I used was this:


(NonCloseWins - NonCloseLosses) - (CloseWins - CloseLosses)

Fact 1: the correlation between close game winning percentage in Year N and overall winning percentage in Year N+1 is .08.

Fact 2: the correlation between non-close game winning percentage in Year N and overall winning percentage in Year N+1 is .30.

Fact 3: the correlation between overall winning percentage in Year N and overall winning percentage in Year N+1 is also .30.

Fact 4: if you regress Year N+1 overall winning percentage on the two independent variables: Year N close game winning percentage and Year N non-close game winning percentage, the first one (close games) is highly insignificant.

In other words, from a predictive standpoint, the first column is just noise. If you're thinking about, say, the Patriots and their 2006 prospects, instead of saying, "Well, they were 10-6 last year, they gained Players A, B, and C, and lost players X, Y, and Z," you should probably be saying, "Well, they were 5-5 last year, they gained Players A, B, and C, and lost players X, Y, and Z."
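
For anyone who wants to check these facts against their own data, here's a rough Python sketch of the computations (using pandas and statsmodels). The file and column names are assumptions for illustration; the input is one row per team-season:

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("team_seasons.csv")  # team, year, close_wpct, nonclose_wpct, overall_wpct

# Line up each team-season with the same team's overall record the next year.
nxt = df[["team", "year", "overall_wpct"]].copy()
nxt["year"] -= 1
pairs = df.merge(nxt, on=["team", "year"], suffixes=("", "_next"))

# Facts 1-3: simple correlations with Year N+1 overall winning percentage.
print(pairs["close_wpct"].corr(pairs["overall_wpct_next"]))     # ~ .08
print(pairs["nonclose_wpct"].corr(pairs["overall_wpct_next"]))  # ~ .30
print(pairs["overall_wpct"].corr(pairs["overall_wpct_next"]))   # ~ .30

# Fact 4: regress Year N+1 winning percentage on both Year N components.
X = sm.add_constant(pairs[["close_wpct", "nonclose_wpct"]])
print(sm.OLS(pairs["overall_wpct_next"], X).fit().summary())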

20 Comments | Posted in General

New professional football league

Posted by Doug on July 27, 2006

This isn't typically a news blog, but this story is of great interest, and I'm a little surprised that I hadn't heard anything about it before. A group of former university presidents, chancellors, and athletic directors are forming a new spring football league.

Their main schtick is that they will play in (unspecified for now) college stadiums, and that the teams will have some sort of territorial rights to the players from that college and nearby colleges. Did the USFL have a similar element to it, or am I making that up?

The season will run from April to June (starting in 2007), the players will be "owned" by the league rather than the individual teams, and they will make about $100,000 per year. Not too shabby. Now here is the weird part:

All 44-48 players per team must have graduated from college and exhausted their college eligibility.

I understand the college eligibility part --- they don't want to step on the NCAA's or the NFL's toes --- but why require graduation? They say it's "an incentive for current college players to graduate," but I just don't see that working out. Let's see, I'm a marginal NFL prospect who will probably get drafted on the second day. Oh by the way, I hate school because I never had any business being there in the first place. I'd have to take a heavy load in my last semester to graduate. Here are my choices:

A. quit school, work full-time with a speed coach, and try to shave .08 off my forty time before the combine.

B. get serious about academics so that I'll be eligible to play in the minor leagues.

If this blog is like every other spot on the internet, I must have dozens of lawyers reading this. Is it even legal to require a college degree for a job in which it is clearly not necessary?

Aside from that nonsense, I'm all for it.

25 Comments | Posted in General

They’ll never win a Super Bowl with _______ at quarterback

Posted by Doug on July 26, 2006

I was having a conversation recently with a friend who is a Broncos fan. He was pretty fired up about Jay Cutler and gave me the old "they are never going to win a Super Bowl with Jake Plummer at quarterback" line.

I have never thought much of that comment, and I told him so. "C'mon, they got to the AFC Championship Game last year. You mean to tell me that Jake Plummer's presence makes it impossible that they could have won two more games?" With rare exceptions --- last year's Denver team not being one of them in my opinion --- any team that got to a conference championship game could have won the Super Bowl had just a few things gone a little differently.

At least that's the way I've always thought about it. But I could be wrong.

In the comments to yesterday's post, the infamous commenter with the handle "monkeytime" posted a set of quarterback rankings generated by a system that differed just a bit from mine:


1t. Terry Bradshaw
1t. Joe Montana
3t. Troy Aikman
3t. Tom Brady
5t. Bart Starr
5t. Roger Staubach
5t. Bob Griese
5t. Jim Plunkett
5t. John Elway
10t. several tied.

In case you somehow missed it, that's a list of Super Bowl winning quarterbacks. And his point has some validity. Let's try it from the other end. Who are the worst quarterbacks to win a Super Bowl? In rough order,

1. Trent Dilfer
2. Mark Rypien
3. Jim McMahon
4. Brad Johnson

Then comes who? Plunkett? Warner? Theismann? Simms? The order is debatable, obviously, but it doesn't matter. If you don't agree with my top four, then substitute in your own as you read along. Now the question is: how many Super Bowl losing quarterbacks are worse than Brad Johnson? How many are worse than Rypien?

Kerry Collins, Neil O'Donnell, Stan Humphries, David Woodley, Vince Ferragamo, Craig Morton, Joe Kapp, Chris Chandler at the very least. And then there's Ron Jaworski, Boomer Esiason, Steve Grogan, Rich Gannon, Drew Bledsoe, Jake Delhomme (probably unfair to include him) and Daryle Lamonica. I think most people would agree that there are eight Super Bowl losing quarterbacks who are worse than the second-worst Super Bowl winning quarterback. So maybe there is something to this "can't win with ________ at quarterback" thinking after all, as long as it's not taken too literally.

Or maybe not. Was Phil Simms really any better than Chris Chandler? Was Joe Theismann any better than Steve Grogan? Was Plunkett better than Esiason? Or do we just think they're better because we (intentionally or unintentionally) judge quarterbacks by how many Super Bowls they've won?

Discussion questions:

1. [edited for clarity] How many quarterbacks in the NFL right now, were they the starter for the 2006 Denver Broncos, would be "capable of winning the Super Bowl in February of 2007"?

2. Is Jake Plummer one of them?

3. Was Jim McMahon a better quarterback than Neil O'Donnell? How do you know?

44 Comments | Posted in General

Ranking quarterbacks BCS style III

Posted by Doug on July 25, 2006

Here are parts I and II.

Yesterday I described the Big Chief Conference. Another interesting case is the Big Cowboy Conference, consisting of Danny White, Gary Hogeboom, Steve Pelluer, Steve Walsh, Troy Aikman, Jason Garrett, Quincy Carter, and Chad Hutchinson. Roger Staubach is also in there, but since I only looked at seasons since 1978 he is not particularly relevant.

We all know that Troy Aikman's stats were not dazzling, but most people are willing to give him a pass because of the conservative system he was playing in. But Aikman has an 0-7 record, which means that Aikman never outproduced the quarterbacks playing in (roughly) the same conservative system with (roughly) the same supporting cast. You are free to make of that whatever you want --- I am certainly not going to argue, here or elsewhere, that Quincy Carter is a better quarterback than Troy Aikman --- I am just explaining why Aikman ranks last in this conference.

Let's check the interconference record of the Big Cowboy Conference. Danny White played no interconference games. Neither did Aikman, Pelluer, or Garrett. Hogeboom was 3-2 against Jack Trudeau (twice), Chris Chandler, Timm Rosenbach, and Neil Lomax. Walsh was 1-6 against Jim Harbaugh, Bobby Hebert, and Erik Kramer. Carter's lone out-of-conference contest was a close loss to Vinny Testaverde. Hutchinson, believe it or not, had a 3-0 out-of-conference record featuring very ugly wins over Kyle Orton, Kordell Stewart, and Chris Chandler.

Overall, the Big Cowboy Conference sports an 8-8 interconference record. So they are viewed by the computer as an average conference. Hence, the teams at the top of the conference (White and Hogeboom) rank high, and those at the bottom (Aikman) rank low.

Whatever you think of this exercise, you might have some fun finding some other interesting Conferences. This detailed "game-by-game" account will help. There's a lot of stuff there, so I'll give you an example here:


77. Jon Kitna 7 -0.36
cin 2003 vs. Carson Palmer ( -2.02) cin 2004: 1.65
cin 2001 vs. Scott Mitchell ( -0.81) cin 2000: 0.54
cin 2001 vs. Akili Smith ( -1.16) cin 2000: 0.23
sea 1999 vs. Warren Moon ( 0.50) sea 1998: 0.11
sea 1998 vs. Warren Moon ( 0.50) sea 1998: -0.65
sea 2000 vs. Matt Hasselbeck ( 0.20) sea 2001: -0.77
sea 1998 vs. Warren Moon ( 0.50) sea 1997: -1.37

Kitna played 7 games and his overall rating was -.36, which means that, according to this scheme, he is .36 adjusted yards per pass worse than an average QB. Then you see a list of all the games he has played. Kitna's 2003 Bengal stats get compared to Carson Palmer's 2004 numbers, and that is a game. Kitna won that game by 1.65. The -2.02 in parentheses is Palmer's rating. If I managed to hyperlink everything properly, you should be able to click on Palmer and see exactly how in the heck he ended up with a -2.02. And so on.

You'll notice that Kitna's 1998 season gets compared to Warren Moon's 1998 season. But, because we are allowing one-year lagged comparisons, Kitna's 1999 season gets compared to the same Moon season. I probably should eliminate those kinds of things. But it's not trivial to do and I'm not convinced that there is a high probability that it will lead to "better" rankings. Anyway, I'll do it eventually and report back.

6 Comments | Posted in BCS, Statgeekery

Ranking quarterbacks BCS style II

Posted by Doug on July 24, 2006

Here is Part I. In that post, I promised a set of quarterback rankings, and I promised that they'd be meaningless. I'm going to deliver on both promises.

As I talk my way through this, I'll continue with the college football analogy. A quarterback is analogous to a team. A comparison of the stats of two quarterbacks on the same team in the same or consecutive years is analogous to a game. So I'll say, for example, that Donovan McNabb beat Mike McMahon in 2005. He beat him by 2.2 adjusted yards per pass, which would roughly be the equivalent of a 20-point win in a college football game.

Now that the stage is set, let's talk about Bill Kenney. Kenney has a record of 10-1. Sounds good, right? It even works out nicely with the college football analogy. He's the West Virginia Mountaineers. But here is where the analogy breaks down. The Mountaineers compiled their record against 11 different teams of varying quality levels. Bill Kenney did not. He was 9-0 against Todd Blackledge, 1-0 against Steve DeBerg, and 0-1 against Steve Fuller.

But it gets worse. Blackledge played no other games. Kenney is the only quarterback that directly compares to him. So what does the computer think of Blackledge? Well, it has a good idea that he's quite a bit worse than Kenney. But will it conclude that Kenney is good and Blackledge is average? Or will it conclude that Kenney is average and Blackledge is bad? That hinges on Fuller and DeBerg. Fuller only played Kenney and Mike Livingston, and Livingston only played Fuller.

So what we have here is a conference --- let's call it the Big Chief Conference --- consisting of Kenney, Fuller, Livingston, and Blackledge. They've played a bunch of intraconference games, so the computer has some confidence in its ability to rank order those teams. But where do they rank nationally? That depends on how strong the conference is.

They played exactly one out-of-conference game. It was against Steve DeBerg. Fortunately for the Big Chief Conference, Bill Kenney represented well in that game, beating DeBerg by a fair margin. DeBerg was pretty darn good, compiling a 12-6 record against what may have been the toughest schedule in the nation, including games against John Elway, Dan Marino, Joe Montana, Steve Young, Vinny Testaverde, and Dave Krieg.

Because of that, Kenney ranks #3 and Blackledge ranks #74 out of 127 quarterbacks who played at least five games. In 1988, DeBerg threw for 2935 yards with 16 touchdowns and 16 interceptions. Had he thrown instead for 3400 yards and only 13 interceptions, then he would have beaten Kenney, and Kenney would have been ranked #60 and Blackledge #124.

So for Bill Kenney, a few hundred yards and a handful of interceptions in one Steve DeBerg season are the difference between #3 and #60. That's a problem. It's not the only problem with this method, but it's the most serious one. The quarterbacks simply aren't sufficiently well connected.

If the NFL had a rule stating that no quarterback is allowed to appear in more than eight games in a season, then I think this method would produce a meaningful set of rankings. But there is no such rule, and the rankings produced by this system are worthless. Only because I promised to, I will post them here. Tomorrow I'll wrap this up by looking at some of the more interesting rankings, and I'll post a chart that details every quarterback's full "schedule." For now, here are the rankings.

Fine print: only seasons since 1978 were included, and only seasons with more than 150 pass attempts. Comparisons were based on adjusted yards per attempt, which was further adjusted for the quarterback's age at the time.
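
For the curious, here's a sketch of one way to turn the "games" into ratings. I'm not claiming this is Doug's exact program, but a least-squares solve over the game margins produces exactly the fixed point the Kitna example above (in part III) satisfies: each quarterback's rating equals his average margin plus his average opponent's rating. The McNabb-McMahon margin is the real one from part II; the other two margins are invented just to make the example run:

import numpy as np

# (qb_a, qb_b, margin): qb_a's edge in adjusted yards per attempt.
games = [
    ("Donovan McNabb", "Mike McMahon", 2.2),    # real margin, from part II
    ("Mike McMahon", "Joey Harrington", -0.5),  # invented for illustration
    ("Joey Harrington", "Jeff Garcia", -1.0),   # invented for illustration
]

qbs = sorted({name for a, b, _ in games for name in (a, b)})
idx = {name: i for i, name in enumerate(qbs)}

# One equation per game (rating_a - rating_b = margin), plus a final row
# pinning the average rating to zero (an average QB rates 0.0).
A = np.zeros((len(games) + 1, len(qbs)))
b = np.zeros(len(games) + 1)
for row, (qa, qb, m) in enumerate(games):
    A[row, idx[qa]], A[row, idx[qb]], b[row] = 1.0, -1.0, m
A[-1, :] = 1.0

ratings, *_ = np.linalg.lstsq(A, b, rcond=None)
for name in sorted(qbs, key=lambda q: -ratings[idx[q]]):
    print(f"{name:18s} {ratings[idx[name]]:+.2f}")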

5 Comments | Posted in BCS, Statgeekery

College football thoughts

Posted by Doug on July 21, 2006

Yesterday's promise of a creative set of quarterback rankings is on hold while I work out the programming. Meanwhile, I'll do a Friday rant.

When I was a kid, I followed the NFL, college football, college basketball, and major league baseball religiously. I also kept pretty close tabs on the NBA. If I had time, I would still be doing that. But when I turned about 18, I found that I no longer had enough time. Ever since then, I have had to make some tough decisions.

There was a period when I was an NBA freak, but right now I am only vaguely aware of what's going on with those guys. I have had stretches of being a rabid baseball fan and stretches where I haven't paid any attention at all. I completely lost interest in the NFL in the early 90s. That interest was revived by fantasy football in the mid-90s and is obviously still raging today. I honestly don't know if I would give a whit about the NFL right now if it weren't for fantasy football, but that's not really important. College hoops was my #1 love when I was young, but it is almost completely off the radar now. I don't even check the standings until Christmas. If I find my Oklahoma State Cowboys looking like a contender, then I'll start paying attention. Otherwise not.

College football was totally out of the picture for almost the whole decade of the 90s. But two things brought it back to my attention.

First, Sooner football was re-born, which gave me something to care about: hating the Sooners. I started watching Sooner games, hoping to see them lose. Then I found myself watching other teams' games, hoping to convince myself that one of them would be able to beat the Sooners later in the year.

Second, the BCS introduced me to some interesting mathematics. In the process of learning about the math behind the so-called computer algorithms, I gradually got to know the teams a little better and started following the action. The process was somewhat similar to that described in the ten thousand stories rant.

Hatred and math are a potent combination. My interest in college football seems to be growing by the year and it is now solidly in the #2 position. I'm sure there are lots of blogs devoted to college football. If I knew where they were I'd point you to them. But I don't. With the exception of some insight into the inner workings of the BCS algorithms, I don't think I can offer anything that those blogs don't offer. I truly am nothing more than a casual fan. So I will probably refrain from blogging too much on the college game.

Nonetheless, I bought my annual college football preview magazine this week and can't help but share some of the thoughts that went through my head while I flipped through it.


  • I love the NFL, but the TV timeouts kill me. There is a point where the ratio of commercials to game action is so high that it's just not worth watching. I don't know exactly where that point is, but Sunday afternoon NFL is getting mighty close to it. And I'm not sure Monday Night Football isn't already past it. In this regard, watching a college game (at least during the regular season) is very refreshing. Sometimes, you can even watch three straight drives or more without a break. I almost feel like I'm getting away with something when that happens.
  • On the flip side, college halftimes are eternal compared to the snappy NFL halftimes. Because of this, the college game gives back almost its entire advantage over the NFL from the previous point.
  • Memo to SEC fans: your conference is nothing special. Just like the other major conferences, it is great some years, mediocre some years, and weak some years. Sometimes it has a great team but no depth. Sometimes (like last year) it has a lot of good teams, but no great one. It's just like every other conference. I will, however, concede that SEC fans are the best fans in the country. From what I can gather, the atmosphere at a big SEC game is unparalleled. I mean that sincerely, SEC fans. Just don't go thinking that makes your teams any better than they are.
  • Question: when your team's coach has success and then moves on to a better program (I'm thinking of Les Miles here), are you supposed to root for him because he used to coach your squad? Or are you supposed to root against him because he left?
  • If you ask anyone what they think about the BCS, they'll say, "I hate the BCS." But the funny thing is, everyone seems to hate something different about the BCS. Some hate the fact that computers are involved. Some hate that Notre Dame has a special deal. Some hate that it kills the tradition of the bowls. Some hate that the at-large teams are selected on the basis of drawing power rather than merit. Me, I don't hate the BCS. I view it as being a little better than the old system and a lot worse than an 8-team playoff. But given the glacial pace of change in college football, it's about as good as we can expect right now.
  • On a related topic, March Madness is great and all, but college football's system allows for something that you just cannot get in college basketball: early-season games with serious national title implications. When LSU and Auburn play in September, it is essentially a playoff game. When Duke and UNC play in February, it's totally meaningless.
  • Doesn't it seem like the Big 12 and SEC always have a strong division and a weak division? The Big 12 North continues to be horrible while the South has two legitimate national title contenders. It used to be the other way around. The SEC is more balanced right now, but for most of its two-division history, the best two teams in the conference have come from the same half. If I were in charge, there would be some kind of clause (details to be negotiated) that would allow for, e.g., a Texas-OU rematch in the Big 12 championship game if both of those teams were in the top 10 and the North winner was outside the top 20. I realize this would take away, just a little, from what I wrote in the bullet above, but it's a price I'd be willing to pay to get some decent conference championship games.
  • Some real shockers in the preseason top 10 this year: Ohio State, Texas, USC, Oklahoma, Florida, Notre Dame, Florida State. Not far off are Georgia, Penn State, Michigan, and Miami. I still don't understand how some people can bash major league baseball for having a system where teams "buy championships" and not hold college football to the same standard. Oklahoma and Texas dominate Big 12 football to the same extent that the Yankees and Red Sox dominate the AL, and the reason is the same. I don't know this, but I'd bet the ratio of Florida State's football budget to Duke's is at least as great as the Yankee/Devil Ray ratio. But you never hear anything about the "problem" in college football.

23 Comments | Posted in College

Ranking quarterbacks BCS style

Posted by Doug on July 20, 2006

Yesterday's post was a prelude to this one, in which I will dust off one of my favorite old pieces of silly research. I've written about it before here and elsewhere, so apologies to those of you who have seen it before, but bear with me.

The main idea is that the team-dependent nature of every player's statistics makes it difficult to compare the stats of players on different teams. But we might be able to compare players across teams by comparing only players on the same team and then bootstrapping our way up. Here is what I wrote two years ago:

Wide Receiver is the only position where even small groups of players are actually competing against each other under nearly identical circumstances. Domanick Davis and Brian Westbrook are competing for statistics under very different sets of circumstances and for that reason it’s extremely difficult to say with any degree of certainty who is better. Likewise, Rod Smith and Laveranues Coles are in different environments so simply comparing their stats isn’t necessarily a reliable way of determining who’s better.

But the same does not apply to Rod Smith and Ashley Lelie. Smith and Lelie are working in the same system with the same quarterback, the same offensive line, even the same game conditions. Raw numbers probably are a good way to determine to what extent Smith is better than Lelie. Likewise, Coles and Rod Gardner [remember, this was written in 2004] can be fairly compared. Every season, every team has a group of 3 to 5 guys that can, for the most part, be rank-ordered by their numbers. This situation is unique to wide receivers.

But how does this help us compare Rod Smith to Laveranues Coles? Think college football. USC didn’t play Auburn this season. So who was better? Well, we know USC is good because, among other reasons, they crushed Oklahoma, who we suspect was pretty good; they beat Texas, for example. We know Auburn was good, in part, because they beat Tennessee, Georgia, and LSU, all solid teams. While there is unfortunately no direct evidence to help us settle the Auburn/USC debate, there are piles and piles of indirect evidence. Every game played by either team, or the opponents of either team, or the opponents of those teams, serves as a tiny sliver of indirect evidence about how good USC and Auburn were. And many very intelligent people have devoted lots of their time and talent to convincing computers to assimilate all this information.

So why not put this technology to work ranking wide receivers? Rod Smith “played” Ed McCaffrey several times, and McCaffrey was good. He also played Anthony Miller and Willie Green and Eddie Kennison (remember that?). And McCaffrey has played Jerry Rice and Stephen Baker, Willie Green has played Mark Carrier and Don Beebe, Eddie Kennison has played — well, who hasn’t Eddie Kennison played? Likewise, there is loads of indirect evidence — mind you, much of it is extremely indirect — about how good Laveranues Coles is compared to Michael Jackson and Marvin Harrison and Troy Brown and even Randy Moss.

So the question for today, prompted by a comment by MDS a couple of posts ago, is: can we apply the same idea to quarterbacks?

And the answer is no. There just aren't enough cases of more than one quarterback getting significant playing time for the same team. It would be like trying to run the BCS rankings in early September. Most of the real contenders have only played a couple of 1-AA or Sun Belt Conference teams. Possibly some teams haven't even played a game yet. Brett Favre and Peyton Manning, for instance, would both have "records" of 0-0.

But what if we stretch the assumptions just a little? Let's allow ourselves to compare two quarterbacks who played for the same team in the same year or in consecutive years. This will open things up a bit. Jeff Garcia, for instance, can be compared to Ken Dorsey and Tim Rattay (SF 2003 vs. 2004), to Kelly Holcomb and Tim Couch (CLE 2004 vs. 2003), to Trent Dilfer (CLE 2004 vs. 2005), and to Steve Young (SF 1998 vs. 1999).
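
In code, the extended comparability rule is a one-liner. Here's a sketch; the adjusted yards per attempt figures are made up, but the team-season pairings are the real Garcia ones from above:

from itertools import combinations

# (qb, team, year, adjusted yards per attempt) -- numbers are illustrative
seasons = [
    ("Jeff Garcia", "sfo", 2003, 5.5), ("Tim Rattay", "sfo", 2004, 6.0),
    ("Ken Dorsey", "sfo", 2004, 4.1), ("Jeff Garcia", "cle", 2004, 4.2),
    ("Trent Dilfer", "cle", 2005, 4.8),
]

def games(seasons):
    # Two qualifying seasons form a "game" if they came with the same
    # team in the same year or in consecutive years.
    for (q1, t1, y1, a1), (q2, t2, y2, a2) in combinations(seasons, 2):
        if q1 != q2 and t1 == t2 and abs(y1 - y2) <= 1:
            yield (q1, q2, round(a1 - a2, 1))  # margin in adjusted YPA

for g in games(seasons):
    print(g)
# ('Jeff Garcia', 'Tim Rattay', -0.5), ('Jeff Garcia', 'Ken Dorsey', 1.4), ...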

I said above that Garcia can be compared to those others, but that's a bit disingenuous. I mean, we can compare Garcia to Jim Thorpe if we want to, but that doesn't mean the comparison is meaningful. After I wrote the previous article, I found out that many people didn't even like the idea of comparing two receivers on the same team. They are certainly not going to like the idea of comparing Kurt Warner (1999 Rams) to Tony Banks (1998 Rams). And I'm not going to blame them.

But I'm going to press on. Just as some college football teams have lucky or unlucky scheduling quirks --- some of OU's opponents last year played against a healthy Adrian Peterson and some did not, for example --- some quarterbacks are going to be unfairly advantaged or disadvantaged by this scheme. Even if it's more extreme than the college football example, we've got no choice but to live with it or quit reading right here.

As I write this, I am only part way through the programming, but I'm far enough to know that it's not going to yield a reasonable set of rankings. Even with the dubious extended definition of comparability, there still are not enough pairs. Brett Favre now has two comparables, and Peyton Manning has one (a loss to Jim Harbaugh). Even if you bought into the method for wide receivers, it's just not going to work for quarterbacks.

Nonetheless, when I get the programming done, I'll post a set of rankings just for laughs. Even though they will be meaningless, they might point us in the directions of some interesting facts. The wide receiver exercise produced some rankings, but its real value (at least to me) was that it caused me to closely examine the careers of some players I hadn't really thought much about. Hopefully this will do the same.

6 Comments | Posted in BCS, Statgeekery

A good QB and a bad QB on the same team

Posted by Doug on July 19, 2006

This is just a quick data dump. It's nothing too fascinating, but it could make for some interesting trivia.

I looked at all cases since 1978 of two quarterbacks on the same team each throwing at least 150 passes. Then I computed the difference between their adjusted yards per attempt. Here are the biggest differences:


=========+==========================+==========================+=======
Year TM | Good QB ATT AYPA | Bad QB ATT AYPA | DIFF
=========+==========================+==========================+=======
1981 ram | P Haden 267 4.9 | D Pastorini 152 0.7 | 4.2
2002 stl | M Bulger 214 7.9 | J Martin 195 4.3 | 3.6
1995 cle | V Testaverde 392 6.6 | E Zeier 161 3.1 | 3.5
2002 stl | M Bulger 214 7.9 | K Warner 220 4.4 | 3.5
2004 nyg | K Warner 277 7.0 | E Manning 197 3.5 | 3.4
1994 det | D Krieg 212 7.7 | S Mitchell 246 4.3 | 3.4
1996 sea | J Friesz 211 7.2 | R Mirer 265 4.0 | 3.3
1983 sdg | D Fouts 340 7.4 | E Luther 287 4.1 | 3.2
1985 kan | B Kenney 338 6.8 | T Blackledge 172 3.6 | 3.2
1987 nyg | P Simms 282 7.1 | J Rutledge 155 3.9 | 3.2
1983 sea | D Krieg 243 7.5 | J Zorn 205 4.5 | 3.0
1980 den | C Morton 301 5.6 | M Robinson 162 2.6 | 3.0
1997 car | S Beuerlein 153 6.3 | K Collins 381 3.4 | 2.9
1996 sfo | S Young 316 7.2 | E Grbac 197 4.4 | 2.8
1981 nyg | P Simms 316 5.5 | S Brunner 190 2.8 | 2.7
1984 min | T Kramer 236 5.6 | W Wilson 195 2.9 | 2.6
2001 det | C Batch 341 5.8 | T Detmer 151 3.2 | 2.6
1998 kan | R Gannon 354 6.0 | E Grbac 188 3.5 | 2.6
1998 nor | B Tolliver 199 6.7 | K Collins 191 4.1 | 2.5
1984 atl | S Bartkowski 269 6.8 | M Moroski 191 4.3 | 2.5
1994 ram | C Chandler 176 7.6 | C Miller 317 5.2 | 2.4
1985 nwe | S Grogan 156 7.4 | T Eason 299 5.0 | 2.4
1985 was | J Schroeder 209 6.1 | J Theismann 301 3.8 | 2.4
1999 cin | J Blake 389 5.9 | A Smith 153 3.6 | 2.3
1988 min | W Wilson 332 7.5 | T Kramer 173 5.3 | 2.2
2005 phi | D McNabb 357 6.3 | M McMahon 207 4.1 | 2.2
1983 den | S Deberg 215 6.5 | J Elway 259 4.3 | 2.2
1999 bal | T Banks 320 6.1 | S Case 170 3.9 | 2.2
1997 cin | B Esiason 186 8.2 | J Blake 317 6.0 | 2.2
1998 phi | K Detmer 181 4.6 | B Hoying 224 2.5 | 2.1
1995 nyj | B Esiason 389 4.5 | B Brister 170 2.4 | 2.1
1987 chi | J McMahon 210 6.7 | M Tomczak 178 4.6 | 2.1
1984 kan | B Kenney 282 6.4 | T Blackledge 294 4.3 | 2.0
2000 cle | T Couch 215 5.3 | D Pederson 210 3.4 | 2.0
1986 rai | J Plunkett 252 6.8 | M Wilson 240 4.9 | 2.0
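
Here's a rough pandas sketch of the query behind the table above, for anyone who wants to reproduce it. The file and column names are assumptions, one row per quarterback season; adjusted yards per attempt is the formula from yesterday's post:

import pandas as pd

qb = pd.read_csv("qb_seasons.csv")  # team, year, qb, att, yds, td, int
qb = qb[(qb["year"] >= 1978) & (qb["att"] >= 150)].copy()
qb["aypa"] = (qb["yds"] + 10 * qb["td"] - 45 * qb["int"]) / qb["att"]

# Pair up every two qualifying quarterbacks on the same team in the same
# year, keep one ordering per pair, and sort by the AYPA gap.
pairs = qb.merge(qb, on=["team", "year"], suffixes=("_good", "_bad"))
pairs = pairs[pairs["qb_good"] != pairs["qb_bad"]]
pairs = pairs[pairs["aypa_good"] > pairs["aypa_bad"]]
pairs["diff"] = pairs["aypa_good"] - pairs["aypa_bad"]
print(pairs.sort_values("diff", ascending=False).head(35))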

With fewer than 200 attempts, just a couple of interceptions can brutalize your adjusted yards per attempt (two extra picks in 180 attempts cost 90 yards of penalty, a full half-yard per attempt), so let's re-do the list using plain old yards per attempt instead:


=========+==========================+==========================+=======
Year TM | Good QB ATT YPA | Bad QB ATT YPA | DIFF
=========+==========================+==========================+=======
1983 sea | D Krieg 243 8.8 | J Zorn 205 5.7 | 3.1
2002 stl | M Bulger 214 8.5 | J Martin 195 6.2 | 2.3
1983 sdg | D Fouts 340 8.8 | E Luther 287 6.5 | 2.2
1991 sfo | S Young 279 9.0 | S Bono 237 6.8 | 2.2
2004 nyg | K Warner 277 7.4 | E Manning 197 5.3 | 2.1
1981 ram | P Haden 267 6.8 | D Pastorini 152 4.7 | 2.1
2002 stl | M Bulger 214 8.5 | K Warner 220 6.5 | 2.0
1995 cle | V Testaverde 392 7.4 | E Zeier 161 5.4 | 2.0
2000 cle | T Couch 215 6.9 | D Pederson 210 5.0 | 1.9
1996 sea | J Friesz 211 7.7 | R Mirer 265 5.8 | 1.9
1984 min | T Kramer 236 7.1 | W Wilson 195 5.2 | 1.9
1997 ari | J Plummer 296 7.4 | K Graham 250 5.6 | 1.8
1994 det | D Krieg 212 7.7 | S Mitchell 246 5.9 | 1.8
1984 atl | S Bartkowski 269 8.0 | M Moroski 191 6.3 | 1.7
1984 kan | B Kenney 282 7.4 | T Blackledge 294 5.8 | 1.6
1999 cin | J Blake 389 6.9 | A Smith 153 5.3 | 1.6
1995 nyj | B Esiason 389 5.8 | B Brister 170 4.3 | 1.6
1992 atl | W Wilson 163 8.4 | C Miller 253 6.9 | 1.5
2002 was | P Ramsey 227 6.8 | S Matthews 237 5.3 | 1.5
1985 cle | G Danielson 163 7.8 | B Kosar 248 6.4 | 1.5
1988 was | M Rypien 208 8.3 | D Williams 380 6.9 | 1.5
2005 phi | D McNabb 357 7.0 | M McMahon 207 5.6 | 1.4
2002 bal | J Blake 295 7.1 | C Redman 182 5.7 | 1.4
1998 car | S Beuerlein 343 7.6 | K Collins 162 6.2 | 1.4
1996 sfo | S Young 316 7.6 | E Grbac 197 6.3 | 1.4
1980 den | C Morton 301 7.1 | M Robinson 162 5.8 | 1.3
1985 buf | B Mathison 228 7.2 | V Ferragamo 287 5.8 | 1.3
1993 min | S Salisbury 195 7.2 | J McMahon 331 5.9 | 1.3
1998 phi | K Detmer 181 5.6 | B Hoying 224 4.3 | 1.3
1994 cin | J Blake 306 7.0 | D Klingler 231 5.7 | 1.3
1985 pit | D Woodley 183 7.4 | M Malone 233 6.1 | 1.3
2000 stl | K Warner 347 9.9 | T Green 240 8.6 | 1.3
1981 nyg | P Simms 316 6.4 | S Brunner 190 5.1 | 1.3
1994 hou | B Richardson 181 6.6 | B Tolliver 240 5.4 | 1.3

6 Comments | Posted in General

When do good quarterbacks become good?

Posted by Doug on July 18, 2006

Yesterday's post on Alex Smith prompted some interesting reader comments. A reader named MDS pointed out that most of the excuses one could make for Alex Smith would also apply to Tim Rattay. And Rattay's numbers weren't as bad. That's something I want to explore further in future posts, but for now I'll turn my attention to this comment from a reader named Ben:

Typically (i think, please confirm), 2nd year is when most quarterbacks make the leap

So let's attempt to confirm for Ben.

The metric I'm going to use is Adjusted Yards per Attempt, which was devised by Pete Palmer, John Thorn, and Bob Carroll in a very good book called The Hidden Game of Football. It is computed as follows:


Adj YPA = (PassYd + 10*PassTD - 45*INT) / PassAtt

It's just yards per attempt with a 10-yard bonus for each touchdown thrown and a 45-yard penalty for each interception.

For each year since 1978, I found the 40 quarterbacks who threw the most passes and I ranked them all by adjusted yards per attempt. Then I looked at the first five seasons of every quarterback who debuted between 1978 and 2001. For each of those seasons, if that quarterback was among the top 40 in passes attempted, I computed his adjusted yards per attempt rank (1=best, 40=worst) for that year. Here are Brett Favre's first five years, for example:


====== Season ====
Quarterback 1 2 3 4 5
========================================
Brett Favre xx 17 32 13 2

The double-x indicates that he was not among the league's top 40 passers in his rookie year. In his second season he ranked 17th, in his third he ranked 32nd, and so on. Remember that all ranks are out of 40.
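
Here's a sketch of that bookkeeping in pandas, assuming a qb_seasons.csv with one row per quarterback season (qb, year, att, and the adjusted yards per attempt described above). Again, the file layout is an assumption, not the real database:

import pandas as pd

qb = pd.read_csv("qb_seasons.csv")  # qb, year, att, aypa

# Each year's 40 most frequent passers, ranked 1-40 by adjusted YPA.
top40 = qb.sort_values("att", ascending=False).groupby("year").head(40).copy()
top40["rank"] = (top40.groupby("year")["aypa"]
                      .rank(ascending=False, method="first").astype(int))

# Number each quarterback's seasons from the year of his first game.
debut = qb.groupby("qb")["year"].min().rename("debut")
top40 = top40.join(debut, on="qb")
top40["season_no"] = top40["year"] - top40["debut"] + 1

# Favre's first five years; season 1 simply won't appear (the "xx").
favre = top40[(top40["qb"] == "Brett Favre") & (top40["season_no"] <= 5)]
print(favre[["season_no", "rank"]])  # ideally matches: xx 17 32 13 2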

Here is a long list of quarterbacks. It is sorted by career passing attempts, so that, with a few exceptions, the best quarterbacks are near the top of the list.


====== Season ====
Quarterback 1 2 3 4 5
========================================
Dan Marino 5 1 11 9 10
Brett Favre xx 17 32 13 2
John Elway 34 24 25 16 5
Warren Moon 14 24 25 19 3
Drew Bledsoe 33 28 34 15 9
Vinny Testaverde 28 37 33 17 35
Joe Montana xx 15 7 7 7
Dave Krieg xx 10 19 2 17
Boomer Esiason xx 3 2 20 1
Kerry Collins 32 13 39 32 21
Steve Deberg 37 22 33 21 27
Jim Everett 24 27 7 4 11
Jim Kelly 12 21 12 8 3
Troy Aikman 40 33 5 9 3
Phil Simms 27 34 19 xx
Mark Brunell xx 22 8 5 9
Peyton Manning 31 5 6 11 13
Randall Cunningham xx 19 16 14 24
Rich Gannon xx xx 31 22
Steve Young 34 29 xx xx 1
Jake Plummer 24 23 40 35 12
Chris Chandler 29 xx xx 38 23
Jeff George 28 30 32 21 15
Jim Harbaugh xx xx 35 9 28
Steve McNair xx 1 29 20 17
Brad Johnson xx xx 10 16 xx
Ken O'Brien 21 1 13 13 21
Bernie Kosar 22 8 1 9 14
Trent Green xx 18 1 25
Steve Beuerlein 15 9 9 xx
Jeff Blake xx 11 20 16
Neil O'Donnell 13 11 13 18 8
Neil Lomax 30 18 8 4 14
Bobby Hebert 15 xx 11 23 12
Trent Dilfer xx 29 35 19 25
Donovan McNabb 39 27 14 14 12
Chris Miller xx 30 13 19 12
Jon Kitna xx 28 18 31 37
Jay Schroeder 12 15 22 24 26
Jeff Garcia 19 5 7 18 14
Aaron Brooks 9 22 16 8 18
Gus Frerotte xx 21 9 22 xx
Mark Rypien 11 6 20 2 24
Daunte Culpepper 4 16 28 4
Jim McMahon 10 17 3 9 34
Tom Brady xx 13 22 9 8
Stan Humphries xx 38 19 26
Doug Williams 23 33 18 6 20
Elvis Grbac xx 4 36 23 37
Bill Kenney 2 29 13 12 13
Wade Wilson xx xx 40 xx
Kordell Stewart xx xx 28 36 38
Tony Banks 21 20 30 12 32
Rodney Peete 20 10 31 8 36
Scott Mitchell xx 9 37 5
Kurt Warner xx 1 2 2 37
Jeff Hostetler xx xx
Mike Tomczak xx 36 33 8 32
Brian Griese xx 23 3 34 11
Erik Kramer xx 29
Bubby Brister xx xx 19 17 18
Matt Hasselbeck xx xx 28 7 6
Doug Flutie xx xx 33 39
Marc Wilson xx 34 xx 19 26
Rick Mirer 34 32 33 39 xx
Don Majkowski 15 27 11 24 34
Drew Brees xx 32 35 4 14
Jay Fiedler xx
Tim Couch 32 25 33 34 19
Billy Joe Tolliver 38 32 17 29 xx
Steve Bono xx xx xx xx xx
Mark Malone xx xx 22
Jack Trudeau 40 17 xx 28 16
Dave Brown xx 22 25 38

Now, here is a table showing how many quarterbacks ranked in the top 10 and the top 20 for the first time in their first year, in their second year, and so on. The second line, for example, shows that 20 quarterbacks ranked in the top 10 for the first time and 26 ranked in the top 20 for the first time in their second year.


YR T10 T20
=============
1 6 21
2 20 26
3 12 23
4 11 14
5 8 6

So it looks like Ben's intuition was on the money. While there are many exceptions (as always), year two does indeed look like a year when many quarterbacks make the leap.

A bit of fine print is called for here. My database does not know when a player's rookie year was. It only knows when he played his first game, which is why I keep using the term "first year" instead of "rookie year." It is unclear, for example, whether 2005 should be regarded as Philip Rivers' second season or his third. Some might even argue that it should be counted as his first.

If we ignore all seasons where the quarterback did not qualify (i.e. did not rank among the top 40 passers of the year) we do get a different picture:


YR T10 T20
=============
1 25 53
2 22 29
3 10 14
4 8 7
5 7 3

So exactly half of the quarterbacks who ranked in the top 20 at some point during their first five years did so in the first year in which they got substantial playing time.

3 Comments | Posted in General

Alex Smith

Posted by Doug on July 17, 2006

Mr. Obvious: Alex Smith was bad last year.

Voice of Reason: Yeah, but he was a rookie.

Mr. Obvious: Yeah, but he was really bad.

Voice of Reason: Yeah, but he had no help at all. The 49ers were a total wreck last year.

Mr. Obvious: Yeah, but he was really, really bad.

Let's take a quick look at just how bad Alex Smith was last year. I will look at all quarterbacks since 1978 who threw at least 100 passes in their debut year. There are 85 quarterbacks meeting those criteria. Smith threw 11 interceptions in 165 attempts, for a rate of 6.7%. That's not the worst of the 85. Tommy Maddox, Dave Wilson, and Steve DeBerg all had higher interception rates in their debut years. But all those guys played in times when interceptions were more prevalent than they are now. When you divide each player's interception rate by the league interception rate, Smith's does look the worst.


Young QB YR TM IntRate LgRate Index
================================================
Alex Smith 2005 sfo 6.7 3.1 216.0
Tommy Maddox 1992 den 7.4 3.9 193.1
Ryan Fitzpatrick 2005 stl 5.9 3.1 192.0
Ryan Leaf 1998 sdg 6.1 3.3 186.6
Eric Zeier 1995 cle 5.6 3.0 183.3
Jake Plummer 1997 ari 5.1 3.0 167.2
Kurt Kittner 2003 atl 5.3 3.2 162.3
Dave Wilson 1981 nor 6.9 4.3 162.0
Mark Rypien 1988 was 6.2 3.9 160.0
Gus Frerotte 1994 was 5.0 3.1 159.6
Troy Aikman 1989 dal 6.1 3.9 159.2
Craig Krenzel 2004 chi 4.7 3.2 148.6
Peyton Manning 1998 ind 4.9 3.3 148.4
Heath Shuler 1994 was 4.5 3.1 144.5
Kerry Collins 1995 car 4.4 3.0 144.2
Eli Manning 2004 nyg 4.6 3.2 143.7
Craig Whelihan 1997 sdg 4.2 3.0 139.2

Index is the player's interception rate divided by the league rate and then multiplied by 100. Smith's 216 indicates that his interception rate was about 2.16 times the league rate last year.

If you look at yards per attempt (compared similarly to the league average), Smith ranks 75th of the 85 young quarterbacks. If you look at touchdown rate, he ranks 84th out of 85. If you add those ranks, you get 244, which ties him for the worst among the 85.
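
Here's a sketch of both computations, the league-relative interception index and the rank sums, with assumed file layouts (one row per debut season, plus a table of league rates). For brevity it ranks raw per-attempt rates; the real comparison indexes YPA and touchdown rate to the league average the same way the interception rate is:

import pandas as pd

debuts = pd.read_csv("debut_seasons.csv")  # qb, year, tm, att, yds, td, int
league = pd.read_csv("league_rates.csv")   # year, lg_int_rate (percent)

d = debuts.merge(league, on="year")
d["int_index"] = 100 * (100 * d["int"] / d["att"]) / d["lg_int_rate"]

# Rank 1 = best of the 85 debut seasons; high YPA/TD good, high INT bad.
d["ypa_rank"] = (d["yds"] / d["att"]).rank(ascending=False, method="first").astype(int)
d["td_rank"] = (d["td"] / d["att"]).rank(ascending=False, method="first").astype(int)
d["int_rank"] = d["int_index"].rank(method="first").astype(int)
d["total"] = d["ypa_rank"] + d["int_rank"] + d["td_rank"]
print(d.sort_values("total", ascending=False).head(10))  # worst debuts first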


==== ranks =======
Young QB YR TM YPA INT TD TOT
================================================
Alex Smith 2005 sfo 75 85 84 244
Ryan Leaf 1998 sdg 79 82 83 244
Kurt Kittner 2003 atl 85 79 80 244
Craig Krenzel 2004 chi 71 74 73 218
Eric Zeier 1995 cle 73 81 63 217
Akili Smith 1999 cin 76 48 82 206
Eli Manning 2004 nyg 81 70 51 202
Jack Trudeau 1986 ind 80 38 78 196
Steve Deberg 1978 sfo 78 63 54 195
Dave Wilson 1981 nor 32 78 85 195
Kyle Mackey 1987 mia 74 51 66 191
Craig Whelihan 1997 sdg 60 69 58 187
Kyle Orton 2005 chi 82 45 60 187
Ryan Fitzpatrick 2005 stl 62 83 40 185
Steve Fuller 1979 kan 69 42 72 183
Troy Aikman 1989 dal 63 75 41 179
Chris Weinke 2001 car 70 34 75 179
Joey Harrington 2002 det 72 58 47 177
David Carr 2002 hou 57 40 77 174
Scott Zolak 1992 nwe 66 32 76 174
John Elway 1983 den 47 59 64 170
Steve Young 1985 tam 27 68 74 169
Scott Brunner 1980 nyg 77 49 37 163
Tim Hasselbeck 2003 was 59 57 45 161
Billy Joe Tolliver 1989 sdg 64 41 55 160
Steve Walsh 1989 dal 53 35 71 159
Charlie Frye 2005 cle 44 53 62 159
Kyle Boller 2003 bal 61 60 38 159
Jeff Komlo 1979 det 48 64 44 156
Oliver Luck 1983 hou 52 66 35 153
Brent Pease 1987 hou 40 43 69 152
Mike Pagel 1982 bal 65 10 70 145
Rick Mirer 1993 sea 55 37 50 142
Quincy Carter 2001 dal 45 54 43 142
Boomer Esiason 1984 cin 83 9 49 141
Koy Detmer 1998 phi 67 17 57 141
Kerry Collins 1995 car 34 71 36 141
Tommy Maddox 1992 den 41 84 15 140
Neil Lomax 1981 stl 29 30 81 140
Tom Hodson 1990 nwe 51 23 65 139
Donovan McNabb 1999 phi 84 27 26 137
David Woodley 1980 mia 68 44 24 136
Gus Frerotte 1994 was 50 76 7 133
Heath Shuler 1994 was 37 72 23 132
Cade McNown 1999 chi 38 61 30 129
Kelly Stouffer 1988 sea 36 21 67 124
Rodney Peete 1989 det 7 55 59 121
Peyton Manning 1998 ind 30 73 16 119
Cody Carlson 1988 hou 20 67 28 115
Chad Hutchinson 2002 dal 35 33 46 114
Drew Bledsoe 1993 nwe 56 36 22 114
Vinny Testaverde 1987 tam 33 25 56 114
Mike McMahon 2001 det 58 1 52 111
Chris Chandler 1988 ind 18 62 29 109
Michael Vick 2001 atl 14 15 79 108
Byron Leftwich 2003 jax 16 52 34 102
Ken Karcher 1987 den 54 31 17 102
Jay Schroeder 1985 was 24 4 68 96
Tim Couch 1999 cle 42 28 25 95
Jim Everett 1986 ram 23 65 4 92
Ken O'Brien 1984 nyj 26 18 48 92
Jeff George 1990 ind 39 39 13 91
Jake Plummer 1997 ari 5 80 5 90
Bernie Kosar 1985 cle 43 7 39 89
Tony Banks 1996 stl 12 56 20 88
Bobby Hebert 1985 nor 31 3 53 87
Warren Moon 1984 hou 11 13 61 85
Patrick Ramsey 2002 was 17 47 21 85
Doug Williams 1978 tam 46 12 27 85
Phil Simms 1979 nyg 28 46 10 84
Jeff Garcia 1999 sfo 19 20 42 81
Mark Rypien 1988 was 3 77 1 81
Shaun King 1999 tam 49 16 8 73
Steve Beuerlein 1988 rai 21 11 33 65
Don Majkowski 1987 gnb 25 6 31 62
Ben Roethlisberger 2004 pit 2 50 6 58
Dieter Brock 1985 ram 13 19 18 50
Neil O'Donnell 1991 pit 22 8 19 49
TJ Rubley 1993 ram 8 29 11 48
Charlie Batch 1998 det 9 5 32 46
Jim McMahon 1982 chi 15 14 14 43
Jim Kelly 1986 buf 6 22 12 40
Aaron Brooks 2000 nor 4 26 9 39
Marc Bulger 2002 stl 1 24 2 27
Dan Marino 1983 mia 10 2 3 15

As you can see, Smith is not in good company at the top of that list. But you don't have to scan too far down to see that some all-time greats appear in the top quartile of the list.

Two conclusions.

1. Smith wasn't just "rookie bad" last year. He was historically bad. Or at least his numbers were. Yes, the 49ers were a disaster last season, but there are a lot of quarterbacks on that list whose teams weren't very good and none of them posted numbers as bad as Smith's.

2. Despite that, I think the voice of reason is right. We can't write him off yet.

18 Comments | Posted in General

Rookie head coaches

Posted by Chase Stuart on July 14, 2006

Seven teams this year have rookie head coaches. The Detroit Lions (Rod Marinelli, age 57), Houston Texans (Gary Kubiak, 45), Green Bay Packers (Mike McCarthy, 43), Minnesota Vikings (Brad Childress, 50), New Orleans Saints (Sean Payton, 43), New York Jets (Eric Mangini, 35) and the St. Louis Rams (Scott Linehan, 43). As a Jets fan, I've heard a lot of talk about how Mangini is too young to expect the Jets to succeed in 2006. And it's true, he's the youngest of the new coaches hired this year, and the youngest coach currently in the NFL. I'm going to put the age issue on the backburner for now, and look to see how rookie head coaches actually do.

This chart covers every guy who has coached a game in the NFL since 1950, plus an incomplete list of coaches from before then. Any season in which a guy coached at least one game was counted as a season, so his first such season is his rookie season.

Year	Win%	#Coaches
1 0.417 259
2 0.451 216
3 0.499 177
4 0.544 141
5 0.541 117
6 0.527 102
7 0.525 86
8 0.522 71
9 0.551 63
10 0.568 49
11 0.507 42
12 0.533 34
13 0.600 27
14 0.583 25
15 0.525 20
16 0.575 17
17 0.441 16
18 0.545 14
19 0.621 11
20 0.514 11
21 0.656 9
22 0.626 8
23 0.563 7
24 0.685 4
25 0.604 4
26 0.527 4
27 0.473 4
28 0.464 4
29 0.375 4
30 0.600 3
31 0.500 3
32 0.550 3
33 0.525 3
34 0.571 1
35 0.643 1
36 0.857 1
37 0.357 1
38 0.643 1
39 0.429 1
40 0.536 1

To be sure we're clear on what this list means, 49 head coaches have coached at least 10 seasons in the NFL, and during that 10th season their aggregate winning percentage was 0.568. And in case you're wondering, yes, George Halas did coach for a really, really long time.
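
The aggregation itself is simple. Here's a pandas sketch, assuming one row per coach-season and that ties count as half a win (the modern convention); a coach's career years are numbered in the order of his seasons:

import pandas as pd

cs = pd.read_csv("coach_seasons.csv")  # coach, year, wins, losses, ties

cs = cs.sort_values("year")
cs["career_yr"] = cs.groupby("coach").cumcount() + 1

agg = cs.groupby("career_yr").agg(W=("wins", "sum"), L=("losses", "sum"),
                                  T=("ties", "sum"), n=("coach", "size"))
agg["win_pct"] = (agg["W"] + 0.5 * agg["T"]) / (agg["W"] + agg["L"] + agg["T"])
print(agg[["win_pct", "n"]].round(3))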

I think this list is a good starting point, but there are a few problems with it. Sure, we don't know if Rod Marinelli is going to be the next Bill Callahan or the next Bill Belichick. So maybe that 0.417 winning percentage is a good starting point for our projections for the teams with rookie coaches.

But that's not really what I want to get at. I think Eric Mangini's going to be a pretty good coach, but I don't know if he's going to be good immediately. What I want to know is what the learning curve looks like for good coaches.

This is a tricky question, of course. I'll start by using "coached at least five seasons" as a proxy for "good," and see how those guys did in each season of their careers. (Note: after year 5, the data will be the same as in the above chart.)

Year	Win%	#Coaches
1 0.460 117
2 0.514 117
3 0.555 117
4 0.571 117
5 0.541 117

That's not looking very good for my "Mangini is going to be fine right away" theory. It's only a marginal improvement from the last table.

Let's look only at coaches who debuted after 1978. I'm also going to eliminate the seven coaches who started their careers in mid-season, since they just don't feel like good comparisons to the Manginis and Marinellis of the world. That leaves us with 49 coaches that have coached at least five seasons. So how did they do?

Year	Win%	#Coaches
1 0.464 49
2 0.528 49
3 0.559 49
4 0.571 49
5 0.532 49
6 0.519 44
7 0.505 36
8 0.504 28
9 0.508 25
10 0.559 18
11 0.511 15
12 0.479 12
13 0.653 9
14 0.586 8
15 0.453 4
16 0.542 3
17 0.396 3
18 0.719 2
19 0.313 1
20 0.250 1
21 0.438 1
22 0.594 1
23 0.231 1

That doesn't look any better. Now it's time to rig the data. I won't cheat, but I'll pick some arbitrary cutoffs to see what's the best way to make rookie coaches look good. And believe it or not, it's actually not that hard.

It seems as though rookie coaches are improving over the years. If we move the cutoff date from 1978 to 1991, we're left with 25 coaches. Those rookie coaches had an aggregate winning percentage of 0.511, which looks impressive. But it gets better: since 1997, ten rookie head coaches combined for a .559 winning percentage. And the last four rookie head coaches who have gone on to coach at least five seasons all had winning records in their first years.

Here's the breakdown for all coaches since 1991.

Year	Win%	#Coaches
1 0.511 25
2 0.553 25
3 0.546 25
4 0.534 25
5 0.508 25
6 0.459 21
7 0.489 15
8 0.495 12
9 0.541 10
10 0.650 7
11 0.544 5
12 0.438 3
13 0.750 2
14 0.750 2

The two guys who comprise the bottom two rows met in the Super Bowl this year, and both began their coaching careers in 1992. Compared to the previous table, this one makes you scratch your head: the same coaches have better records as rookies than in their fifth years. Aggregate winning percentages for years six through eight are all below .500, despite the feeling that those coaches should be some of the very best in the NFL.

One last one, looking at the 10 rookie coaches since 1997.

Year	Win%	#Coaches
1 0.559 10
2 0.613 10
3 0.569 10
4 0.569 10
5 0.563 10
6 0.484 9
7 0.325 5
8 0.531 2
9 0.364 1

Once again, the rookie coaches look very successful. The game has undoubtedly changed a lot with all the advances in technology and the ever-increasing size of coaching staffs. It's hard to say, though, whether this small but recent sample is more applicable than a larger one. And of course, had I gone back to 1996, I would have included Vince Tobin's 7-9 season and Tony Dungy's 6-10 year. But here's the list of those ten recent coaches.

Coach		Year	Team	W	L	T
Jim Fassel 1997 nyg 10 5 1
Steve Mariucci 1997 sfo 13 3 0
Jon Gruden 1998 rai 8 8 0
Brian Billick 1999 rav 8 8 0
Dick Jauron 1999 chi 6 10 0
Andy Reid 1999 phi 5 11 0
Jim Haslett 2000 nor 10 6 0
Mike Martz 2000 ram 10 6 0
Mike Sherman 2000 gnb 9 7 0
Herm Edwards 2001 nyj 10 6 0

There are only two bad seasons on that list, and both of those coaches spent first-round picks on rookie QBs those years. Donovan McNabb panned out, and Andy Reid's been a success; Cade McNown didn't, and Dick Jauron was fired after the 2003 season.

The last list is promising, of course, but remember that we cherry-picked only the coaches that made it for five years. The last 50 coaches to enter the NFL at the beginning of a season (which takes us back to 1992) still had an aggregate winning percentage of only 0.467.

Random trivia

  • Amazing fact of the day: In 1986, Rod Dowhower coached the Colts' first 13 games, and he couldn't manage to win one of them. Ron Meyer replaced him for the last three games, and Indianapolis went 3-0. Rick Venturi couldn't return the favor a few years later, though. Meyer was fired after going winless in his first five games for the Colts in 1991, and Venturi went 1-7 to finish the year.
  • Everyone knows Don Shula went 14-0 in 1972. But other coaches have come close to matching that.
    • George Halas went 13-0 with the Bears in 1934 (but lost the championship game).
    • The 1961 Houston Oilers (the answer to another very good trivia question) went 1-3-1 with Lou Rymkus as head coach. But Wally Lemm took over and the Oilers won 10 straight games, including the AFL championship.
    • Curly Lambeau, after whom a certain football stadium is named, went 12-0-1 in 1929.
  • On the other side of the spectrum, most of us remember that John McKay's Bucs went 0-14 in 1976, Tampa Bay's first year in the league. And while that's still a record for futility, he's not alone in having double-digit losses and no wins. We saw Rod Dowhower already (0-13), so here are a few others:
    • Dick Nolan, whose son had a tough year on the sidelines in 2005, went 0-12 for the Saints in 1980. Interim coach Dick Stanfel went 1-3. Insert your own joke here.
    • You've got to feel for Phil Handler, who coached the Cardinals (then in Chicago) during World War II. He went 1-29 over his first three seasons, including one year when the Cardinals and Steelers merged to form one team.
    • 1976 was a really, really bad year to be a coach in New York. Bills' interim coach Jim Ringo took over a 2-3 team and went 0-9. Giants HC Bill Arnsparger went winless in his first seven games before John McVay took over. And Lou Holtz -- yes, that Lou Holtz -- resigned after going 3-10 during his first year with the Jets.
  • In 1991, a couple of guys who would eventually coach the Jets entered the coaching ranks. One went 10-6, and one went 6-10. Things are looking a little different for Rich Kotite and Bill Belichick these days, though.
  • 1989 was an interesting year for rookie head coaches. There were only two, but both went on to win Super Bowls. They couldn't have had more different starts, however: George Seifert went 14-2 and won the Super Bowl. Jimmy Johnson went 1-15. As bad as that was, Johnson's worst decision in 1989 might have come a few months earlier. After drafting Troy Aikman first overall in the NFL draft, Johnson took Miami QB Steve Walsh (the two had won an NCAA championship in 1987) in the first round of the supplemental draft. That cost the Cowboys their first-round pick in 1990, which of course would have been the number one pick in the draft. It turns out it's better to be lucky than good, though. The first pick in the draft that year was Jeff George, and the Cowboys (already with Aikman) would probably have taken RB Blair Thomas, who went second overall. Instead Dallas traded up and took a different RB later in the first round.

7 Comments | Posted in General

Home Field Advantage II

Posted by Chase Stuart on July 13, 2006

Is there such a thing as a "dome field advantage?" Whenever a dome team has a strong season and happens to be very good at home, sportswriters get to write about the special dome field advantage. Supposedly, it's tougher to win in a dome because of the loud crowd noise, and maybe the artificial turf and the absence of natural elements. So do the facts bear this out? Do dome teams do better than regular teams at home?

According to the data, the answer is no. There's an important caveat, though. The numbers just show one thing: home wins minus road wins, for all teams that played in a dome. It's certainly possible that some special advantage exists for dome teams that wasn't examined in this study. But at the end of this post I'll throw out a theory on why there might actually be a "dome team disadvantage."

Since 1983, eight teams have played in a dome: the Atlanta Falcons (1992-2005), Detroit Lions (1983-2005), Houston Oilers (1983-1996), Indianapolis Colts (1984-2005), Minnesota Vikings (1983-2005), New Orleans Saints (1983-2004), Seattle Seahawks (1983-1999) and the St. Louis Rams (1996-2005).

There was a bit of noise in the data, so I eliminated the '95 Rams and last year's Saints teams. When the Rams relocated to St. Louis in 1995, the Edward Jones Dome wasn't complete, so the Rams' first four home games were played at Busch Stadium and the remainder in the Dome. The Rams went 3-1 at Busch and 1-3 indoors. Due to Hurricane Katrina, the Saints played three games indoors at the Alamodome in San Antonio (1-2) and four home games outdoors at Tiger Stadium in Baton Rouge (0-4) last year. The remaining "home" game was played at Giants Stadium, the star of yesterday's post.

The eight dome teams won 188 games more at home than on the road during the relevant time period, spanning 139 seasons. That comes out to an average of 1.35 more home wins per season.

The Houston Texans (2002-2005) play in Reliant Stadium, which has a retractable roof. I'm not sure which games were played with the roof open and which with the roof closed. The Dallas Cowboys (1983-2005) play at Texas Stadium, which is an open-air stadium -- basically a dome with a hole in the center. I didn't know whether to count either Texas team as a "dome" team or "regular" team, so I just put them in a separate category. Interestingly enough, my classification shouldn't matter: over 26 seasons the two teams won 35 more home games than road games, an average of 1.35 more wins per season.

So how does that compare to the rest of the NFL? The league has increased from 28 to 32 teams since the first year in the study. Over the 22 seasons (remember, the 1987 strike season data were excluded), that amounts to 649 seasons. NFL franchises have won 881 more games at home than on the road, for...an average of 1.36 more wins per season. If you eliminate all the dome teams, the Cowboys and Texans, and the 2005 Saints and 1995 Rams, NFL teams average 1.37 more wins per season at home than on the road.

We have 482 non-dome seasons, and 139 dome seasons. I'm not sure what a "significant" sample size would be, but considering how close the two averages were (1.35 and 1.37), at the least the burden of proof should shift to those who think dome teams have an advantage.
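
One way to formalize that burden of proof -- my addition, not part of the original study -- is a two-sample t-test on the per-season HFA values. A sketch, with random placeholders standing in for the 139 dome and 482 non-dome seasons:

import numpy as np
from scipy import stats

# Placeholder arrays; substitute the actual per-season values of
# (home wins minus road wins) for dome and non-dome team-seasons.
rng = np.random.default_rng(0)
dome_hfa = rng.normal(1.35, 2.0, size=139)
non_dome_hfa = rng.normal(1.37, 2.0, size=482)

# Welch's t-test: is the mean HFA different between the two groups?
t_stat, p_value = stats.ttest_ind(dome_hfa, non_dome_hfa, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a big p-value means no detectable difference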

I promised you some theory in addition to the numbers. We've seen that dome teams appear to have the exact same home field advantage as regular teams. I say appear, because all the data show is that dome teams win the same number of additional games at home (relative to the road) as regular teams do. But is it possible that dome teams are actually worse at home but the numbers don't show it?

Isn't the general feeling that dome teams aren't as good on the road because they're not used to the conditions? This would artificially inflate a dome team's rating under my current system, because a team grades better at home the worse it does on the road. If dome teams actually perform poorly on the road -- as we might expect -- then the HFA of dome teams should be greater than the league average, if dome teams are equally strong at home. This leads us to one of three conclusions:

  • Dome teams are actually weaker at home; they just look equal because they have trouble winning on the road.
  • Dome teams actually aren't weaker at all on the road; it just seems that way because we hear it all the time. The flipside of all the above numbers is that dome teams only win 1.35 fewer games on the road than at home each year.
  • Something else. Maybe it's a small sample size. Maybe dome teams are below average at home when they're bad, but above average at home when they're good. Maybe schedule somehow factors in here. Maybe there's some other force driving the numbers that I haven't isolated. Who knows.

There's also the argument that dome teams are still actually better at home, but the numbers don't show it. Let's take a quick look at a few case studies. We'll use HFA rating as a shorthand for home wins minus road wins.

Atlanta's HFA rating was 9.5 during the eight years the Falcons played at Fulton County Stadium; playing indoors the last 14 seasons, Atlanta's won 22.5 more games at home. The Houston Oilers' HFA was 21 in thirteen seasons at the Astrodome; the Titans' HFA rating was 9 in nine years. But on the other hand, the Seahawks won just 23 more home games than road games in sixteen seasons in the Kingdome. Since moving to Qwest Field, Seattle's HFA is 12 in six years.

Here's an interesting one. From 1983-1994, the Raiders and Rams both won 9 more games when playing in Los Angeles than when on the road. The Raiders moved to Oakland and, despite the notoriety of Raider Nation and the Black Hole, have an HFA of only 11 in 11 years. The Rams, in ten seasons indoors, won 15 more at home.

If there's such a thing as a Dome Field Advantage, it's certainly hard to quantify. My guess is that when teams like the '98 Vikings, '99 Rams or the '05 Colts have a dominant offense and look unstoppable at home, it's a nice story to think it's the dome that helps. But in general, great teams almost always look pretty good, and usually look unbeatable at home no matter where they play. The same people that talk up how hard it is to play against a dome team because of the noise probably also mention how difficult it is to win in the cold against the Packers, Broncos and Chiefs. Even teams with no special weather advantage -- warm weather teams like Arizona, Tampa Bay and Dallas -- have above-average HFA factors.

We know that over the last 22 years, home teams have won 58.5% of all games. I'll end with a breakdown of HFA separated out by total team wins.

Wins	HFA	Teams	HFA/Teams
1 -4 6 -0.67
2 12 14 0.86
3 29 25 1.16
4 48 44 1.09
5 76 54 1.41
6 94 60 1.57
7 79 69 1.14
8 112 70 1.60
9 121 77 1.57
10 94 73 1.29
11 79 55 1.44
12 66 43 1.53
13 46 22 2.09
14 4 15 0.27
15 2 4 0.50

3.5 -1.5 1 -1.50
4.5 5.5 3 1.83
5.5 -0.5 1 -0.50
6.5 9.5 3 3.17
7.5 -2.5 1 -2.50
8.5 7 4 1.75
9.5 2 2 1.00
10.5 3.5 3 1.17
Totals 881 649 1.36

7 Comments | Posted in Home Field Advantage

Home Field Advantage

Posted by Chase Stuart on July 12, 2006

In 2009, the Jets and Giants will move into a new joint stadium. It probably will be named after some corporation (JetBlue?), which is a big change from where both teams currently play: Giants Stadium. Jets fans don't like that their team plays in a stadium named after another team, and have claimed for years that it's negatively impacted the team's success. Giants fans, of course, think their stadium gives the team a great home field advantage.

While we can't know whether the stadium's name has negatively affected the Jets' success, we have all the data we need to examine how good Giants Stadium has been to the Jets and the Giants. That wouldn't make for much of a blog post though, so let's take a look at home field advantage throughout the entire NFL.

The Jets moved into Giants Stadium in 1983. So have the Jets really been harmed by being "homeless"? Measuring home field advantage may not be easy, but I think a team's home wins minus a team's road wins is a pretty accurate metric. Last year, Denver went 8-0 at home and 5-3 on the road, for example. Cincinnati went 5-3 at home, and 6-2 on the road. Ignoring the small sample size, that's strong evidence that the Broncos are a better home team than the Bengals.

If you just look at a franchise's winning percentage at home, you're going to overvalue the good teams. By subtracting a team's road wins from its home wins, you should get a strong idea of how good that team is at home (more on this and the NFC North at the end of this post).

So how do the Jets fare? Over the past six years, the Jets and Giants have the same number of home wins (27), while the Giants (23) have two more road wins than the Jets. This doesn't prove anything of course, but it's safe to say that the Jets and Giants were pretty equivalent in terms of football ability from 2000-2005. And while both teams won 27 games at home, it's arguable that the Jets actually enjoyed the better home field advantage, since Gang Green was the worse team (based on the overall records).

Of course, I soon realized that six years wasn't enough. But this gives us a glimpse of two key ideas: home field advantage isn't consistent from year to year, and you should always be careful with your sample sizes. As Doug showed here, we should always be careful with splits.

If you look at the last four seasons, the Jets have won nine more home games than road games; the Giants just three. If we go back ten years, the Jets have won 8 more at home than away, while Big Blue has won 8.5 more. So whatever cutoff you use may seem arbitrary, and a different cutoff could very well get you a different result.

But let's use 1983, when the Jets left Shea Stadium. Because so many NFL teams have changed cities, this list is full of caveats, most of which are in the footnotes. I didn't put footnotes next to Jacksonville (1995-2005), Carolina (1995-2005) and Cleveland (1983-1995; 1999-2005), but you should note that those data are not from a full 22 seasons. One other note: I didn't include any data from the strike season of 1987.

HFA is the Home Field Advantage factor, which is simply total home wins minus total road wins. Ties were counted as half a win.
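
Here's a sketch of that computation in Python. The game-log format is an assumption for illustration; the rule is just that wins count 1, ties 0.5, and road results are subtracted from home results.

from collections import defaultdict

# Each row: (team, site, result). The sample rows are made up.
games = [
    ("nyj", "home", "W"), ("nyj", "away", "L"),
    ("nyg", "home", "W"), ("nyg", "away", "W"),
    ("nyj", "home", "T"),
]

CREDIT = {"W": 1.0, "T": 0.5, "L": 0.0}
hfa = defaultdict(float)
for team, site, result in games:
    # home wins raise the factor, road wins lower it
    hfa[team] += CREDIT[result] if site == "home" else -CREDIT[result]

for team, factor in sorted(hfa.items(), key=lambda kv: -kv[1]):
    print(team, factor)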


Team HFA

Kansas City 56
Denver 52
Detroit 41.5
Green Bay 40
Minnesota 39
Cincinnati 37
Tampa Bay 37
Buffalo 37
Chicago 35
Seattle 35
Pittsburgh 33.5
Dallas 33
Atlanta 32
Miami 31
Arizona1 28
New England 26
Washington 25.5
San Diego 24
Baltimore2 22.5
New York (N) 22
Houston3 21
San Francisco 20.5
Philadelphia 20.5
Indianapolis4 19
St. Louis5 16
Jacksonville 16
Cleveland 12.5
Oakland6 11
New York (A) 10.5
Tennessee7 9
LA Raiders9 9
LA Rams8 8
Carolina 8
New Orleans 6
St. Louis10 6
Houston11 2
Baltimore12 -1

1 Arizona (1994-2005) and Phoenix (1988-1993) Cardinals. For the St. Louis Cardinals, see footnote 10.
2 Baltimore Ravens (1996-2005). For the Baltimore Colts, see footnote 12.
3 Houston Oilers (1983-1996). For the Tennessee Titans see footnote 7; the Houston Texans, footnote 11.
4 Indianapolis Colts (1984-2005). For the Baltimore Colts, see footnote 12.
5 St. Louis Rams (1995-2005). For the Los Angeles Rams see footnote 8.
6 Oakland Raiders (1995-2005). For the Los Angeles Raiders see footnote 9.
7 Tennessee Titans (1997-2005)
8 Los Angeles Rams (1983-1994)
9 Los Angeles Raiders (1983-1994)
10 St. Louis Cardinals (1983-1986)
11 Houston Texans (2002-2005)
12 Baltimore Colts (1983)

I'll let you guys comment on the list, but there's one more thing to mention. The old NFC Central (1983-2001) and current NFC North are very well represented on this list: three teams in the top five, and two more in the top ten (of course, Tampa Bay's in the NFC South these days).

There's probably a good bit of synergy on this list. If Green Bay is dominant at home and bad on the road, when Minnesota plays Green Bay, the Vikings will probably lose at Lambeau Field but win at home. That will artificially inflate the Vikings' HFA factor. If all the NFC North teams have strong home field advantages -- and most fans probably think the Bears and Packers do -- that will drive all the NFC North teams up this list. The Bucs are an interesting study too. Tampa ranked in the top five in HFA factor from 1983-2001, with 34 more home wins than road wins. That's an average of 1.89 more wins per year playing at home. Since joining the NFC South, the Bucs have won only three more games at home than on the road in four years.

21 Comments | Posted in Home Field Advantage

Breaking down “Yards Per Carry” II

Posted by Chase Stuart on July 11, 2006

Reading yesterday's post on YPC left me with the following question. Is it more impressive to rush 50 times for 250 yards, or 300 times for 1250 yards? That's a simple question with lots of complicated answers.

Let's first look at how all RBs from 2002-2004 performed the following season.

In 2004, twenty-six RBs ran between 51 and 100 times, and as a group they averaged 4.18 YPC. Seven RBs had 250-300 carries, and that group averaged 4.19 YPC. I'm not sure exactly what you would say about the talent levels of the two groups. Are they similar because the players averaged the same YPC? Or is the high carry group better because those runners earned more carries, which is a reflection of how good they are?

You're probably expecting me to tell you that the low carry group averaged 5.0 YPC in 2005, while the high carry group averaged 3.0 YPC. Or maybe the reverse. Either way, I set you up for nothing. In 2005, the 26 members of the original low carry group averaged 4.15 YPC, and the original high carry group averaged 4.14 YPC. Here's the full chart:

Carries	   2003	   2004	   2005
01-50 4.03 4.17 3.96
51-100 3.64 4.05 3.95
101-150 4.37 4.26 4.22
151-200 3.61 4.34 3.95
201-250 4.31 3.79 3.81
251-300 4.51 4.28 4.13
301+ 4.35 4.36 4.37

Remember exactly what this is saying. It means that all RBs with 151-200 carries in 2002 averaged 3.61 YPC in 2003 (on however many carries). The two high carry groups (251-300 and 301+) in Year N (2002, 2003 or 2004) averaged the most yards per carry in year N+1 (2003, 2004 or 2005). That's pretty interesting, although maybe not entirely surprising. It appears to reaffirm what we thought before: the best RBs get the most carries. And assuming that a player's ability remains relatively constant from year to year, then it makes sense that the RBs with the most carries one year would average the most yards per carry the next.

Here's the same table as above but with carries listed instead of YPC.

Carries	   2003	   2004	   2005
01-50 2087 2131 1330
51-100 1116 1661 1997
101-150 795 1527 965
151-200 731 699 780
201-250 1414 1474 1386
251-300 2407 728 1278
301+ 2822 2866 2774

Don't be alarmed at the high number of carries from that first group. From 2002-2004, 251 RBs were in the 01-50 carries group, while only 31 RBs over those three seasons totaled 301+ carries. What's more important is that only three RBs with 01-50 carries in Year N went on to rush for 900+ yards in Year N+1. Here's the list, with the first year on the left and the second season on the right.

Player		Rush	Yards	YPC	Rush    Yards	YPC
Rudi Johnson 17 67 3.94 215 957 4.45
Reuben Droughns 6 14 2.33 275 1240 4.51
Willie Parker 32 186 5.81 255 1202 4.71

To be honest, I'm a bit surprised that only one runner per year came out of nowhere to have a big year. You probably want to temper your enthusiasm about any unproven runner unless you've got a really good reason to like him.

  • A couple more ways to look at YPC data

Yesterday I wrote that it was probably counterintuitive that the RBs with the fewest carries had the lowest yards per carry average (as a group). But just because those runners as a group had a low average, that doesn't mean it's hard for individual runners to have a high average.

Over the four seasons, the top 26 runners in yards per carry all had fewer than 40 carries. On the flip side, none of the bottom 100 runners in yards per carry even had 50 carries. So yeah, you're going to get some extreme results when you look at these small sample sizes.

But we can still play around with the numbers a little bit. First, let's look at all RBs with a small number of carries one year, and a large number the next. There are thirty-three RBs in NFL history that rushed fewer than 100 times in Year N, and 250 times or more in Year N+1.

Name			YearN	Rush	Yards	YPC	Rush	Yards	YPC
Lamont Jordan 2004 93 479 5.2 272 1025 3.8
Willie Parker 2004 32 186 5.8 255 1202 4.7
Reuben Droughns 2003 6 14 2.3 275 1240 4.5
Emmitt Smith 2003 90 256 2.8 267 937 3.5
Troy Hambrick 2002 79 317 4.0 275 972 3.5
Deuce McAllister 2001 16 91 5.7 325 1388 4.3
Fred Taylor 2001 30 116 3.9 287 1314 4.6
Shaun Alexander 2000 64 313 4.9 309 1318 4.3
Lamar Smith 1999 60 205 3.4 309 1139 3.7
James Allen 1999 32 119 3.7 290 1120 3.9
Jamal Anderson 1999 19 59 3.1 282 1024 3.6
Ahman Green 1999 26 120 4.6 263 1175 4.5
Stephen Davis 1998 34 109 3.2 290 1405 4.8
Duce Staley 1997 7 29 4.1 258 1065 4.1
Anthony Johnson 1995 30 140 4.7 300 1120 3.7
Garrison Hearst 1994 37 169 4.6 284 1070 3.8
Harvey Williams 1993 42 149 3.5 282 983 3.5
Erric Pegram 1992 21 89 4.2 292 1185 4.1
Barry Foster 1991 96 488 5.1 390 1690 4.3
Cleveland Gary 1991 68 245 3.6 279 1125 4.0
Gaston Green 1990 68 261 3.8 261 1037 4.0
Ottis Anderson 1988 65 208 3.2 325 1023 3.1
Greg Bell 1987 22 86 3.9 288 1212 4.2
Charles White 1986 22 126 5.7 324 1374 4.2
Curt Warner 1984 10 40 4.0 291 1094 3.8
Earnest Jackson 1983 11 39 3.5 296 1179 4.0
Curtis Dickey 1982 66 232 3.5 254 1122 4.4
Wendell Tyler 1980 30 157 5.2 260 1074 4.1
Terdell Middleton 1977 35 97 2.8 284 1116 3.9
Wilbert Montgomery 1977 45 183 4.1 259 1220 4.7
Otis Armstrong 1973 26 90 3.5 263 1407 5.3
Lydell Mitchell 1972 45 215 4.8 253 963 3.8
Ron Johnson 1971 32 156 4.9 298 1182 4.0
Totals 1359 5583 4.11 9440 38500 4.08

I'm not sure what you would have predicted, but the same runners that averaged 4.1 YPC on an average of 41 carries ran equally well with an average of 286 carries the next year. But that's misleading if it makes you place more value on small sample sizes.

Inside the group there wasn't much consistency: only one-third of the RBs averaged within half a yard per carry of their YPC average from Year N. The correlation coefficient (explained here) of the YPC for the RBs in Year N and Year N+1 was just 0.16. This means that the YPC average of the RBs in the second year can be "explained by" 3% their YPC average in the first year, and 97% other stuff. This is a longwinded way of saying that a small bit of data (fewer than 100 carries) just doesn't tell you very much. What about a bigger piece of data?
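
For anyone who wants to check numbers like that 0.16, here's the computation. The five pairs below are just the first five rows of the table above, so the r they produce won't match the full 33-RB figure:

import numpy as np

# (Year N YPC, Year N+1 YPC) for the first five RBs in the table;
# use all 33 pairs to reproduce the 0.16.
ypc_pairs = [(5.2, 3.8), (5.8, 4.7), (2.3, 4.5), (2.8, 3.5), (4.0, 3.5)]

x, y = zip(*ypc_pairs)
r = np.corrcoef(x, y)[0, 1]
print(f"r = {r:.2f}; r^2 = {r * r:.1%} of Year N+1 YPC 'explained by' Year N")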

There are 188 RBs in NFL history that recorded at least 250 carries in consecutive seasons. How do their numbers compare? The high workload RBs averaged 4.32 YPC in Year N, and 4.28 YPC in Year N+1. But as we saw above, that could be the result of lots of RBs cancelling each other out.

You'd expect the correlation coefficient to be higher than 0.16 here, and it is. But it's only 0.39; that means that even with RBs that we know a lot about, only 15% of each RB's YPC in Year N+1 can be "explained by" his YPC average from the previous year.

Before we get to the last way to measure the data, an analogy might help here.

Let's say you flip a regular coin ten times, and it lands on heads ten times in a row. You'd probably still say the coin is only 50% likely to land on "heads" on the next flip. Because even though a coin will land on ten straight heads only about once every 1,000 times, the odds that your regular coin is actually a weighted coin are a lot less than one in a thousand.

But now assume you take a coin with heads on both sides and put it in a bag with two other coins. If you pull out a coin without looking, flip it twice, and it lands on two heads, you won't think the odds are 50/50 anymore that the next flip will produce heads. Because even though getting two straight heads isn't very unlikely, it's more likely that you've grabbed the coin with two heads.
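
Bayes' rule makes that intuition exact. A quick check of the bag example (one two-headed coin, two fair ones):

# Posterior probability the drawn coin is two-headed, given two heads.
p_trick, p_fair = 1 / 3, 2 / 3        # one trick coin among the three
p_hh_trick, p_hh_fair = 1.0, 0.25     # chance of HH for each coin type

p_hh = p_trick * p_hh_trick + p_fair * p_hh_fair
posterior = p_trick * p_hh_trick / p_hh
print(f"P(two-headed | HH) = {posterior:.3f}")                           # 0.667
print(f"P(next flip is heads) = {posterior + (1 - posterior) / 2:.3f}")  # 0.833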

Once you think of football statistics like that, the following analogy is pretty simple. If Joe Runningback runs 75 times for 500 yards, you can either chalk it up to a combination of good luck and a small sample size, or you can rationalize the result by claiming that Joe Runningback's pretty good. It then becomes a question of what's more likely: that an average RB could do what Joe did, or that Joe's actually a very good runner? And that's why when you deal with small sample sizes, your own personal beliefs on a player become very important in how you interpret the data.

This gets us to the last idea for the day. There have been a few running backs in the NFL, so here's how I narrowed the list. Any RB that debuted before 1970, had fewer than 200 career carries, or was still active was thrown out. I then looked at all RBs who had 51-100 career carries at the end of either their first or second season while averaging at least 4.00 yards per carry. That left us with 58 RBs who fit our basic profile: runners who had success on a small number of carries very early in their careers.

What we'll add in is their draft position. Presumably, the round in which a player was drafted can serve as a proxy for "I think Joe Runningback is a good or bad runner."

I'll let you guys comment on the data. The numbers on the left represent the group's career-to-date totals after the season in which each RB passed the 50 carry mark (either their first or second); the second set of numbers show how the group performed for the remainder of their careers.

Round	# RBs	Rush	Yards	YPC	Rush	Yards	YPC
1 9 680 3,229 4.75 5,099 21,270 4.17
2-3 10 751 3,519 4.69 6,164 26,316 4.27
4-6 13 975 4,419 4.53 6,743 25,689 3.81
7+ 26 575 2,997 5.21 3,106 13,068 4.21
Totals 58 2,981 14,164 4.75 21,112 86,343 4.09

For those curious, the correlation coefficient of each RB's original YPC average to his YPC average for the remainder of his career was 0.43; the correlation coefficient for draft round (with the number 10 used for any undrafted player or player drafted after round 9) with his remaining YPC average was -0.23. We'd expect a negative correlation here: the lower (better) the round a player was drafted in, the higher his expected yards per carry average.

7 Comments | Posted in General, Statgeekery

Breaking down “Yards Per Carry”

Posted by Chase Stuart on July 9, 2006

If you only had one stat available and you had to pick between two different running backs, you'd probably want to know how many yards each runner averaged per carry. A player's rushing yards is largely a function of his total carries, and that number is dependent on certain things beyond his control (the quality of the other RBs on the team, his coach's philosophy, and the scoreboard, for example). But yards per carry helps to level the playing field, and gets at exactly what we want to know.

Fortunately, we have lots of statistics available, so we don't need to use only yards per carry. That's why, if you were building an NFL team, you'd take Clinton Portis before Rock Cartwright, despite Cartwright's 3.0+ advantage in yards per carry last season.

This gets us to the problem of small sample sizes. When Shawn Bryson averaged 5.28 YPC on 50 carries in 2004, it was easy to dismiss his success. Many would claim that achieving a high yards per carry average on a small number of carries is easy. (Of course, when Willie Parker rushed for 5.81 YPC on 32 rushes, those people probably said he wasn't as good as his numbers either.)

You'll hear this argument a lot: it's easy to record a high YPC if you don't have many carries. Maybe they don't really mean easy, but at least easier. But just because people say it doesn't make it true. It's undoubtedly true that it is easier to get all sorts of extreme YPC numbers with a small number of carries, and that includes really high (and really low) YPC averages. And RBs that are running well usually get more carries going forward than runners that don't do so well. So what do the numbers say?


Carries 2002 2003 2004 2005
01-50 3.92 3.75 4.01 3.34
51-100 4.34 3.86 4.18 4.15
101-150 3.93 4.26 4.11 3.78
151-200 3.88 3.97 4.17 3.87
201-250 3.94 4.11 4.11 3.97
251-300 4.28 4.48 4.19 4.14
301+ 4.41 4.43 4.36 4.49

Average 4.15 4.18 4.19 4.07

The above table includes every RB's YPC average over the past four years. If an RB had 87 carries, his totals were put in the "51-100" category. Some of these groups are really small -- in 2003, only three RBs had between 251-300 carries. The low carry group is by far the biggest, because about 80 RBs a year have 50 carries or fewer. Most of the other groups have between 10 and 25 players.
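
The grouping itself is easy to script. A sketch, with made-up season totals; note I'm assuming each bucket's YPC is total yards over total carries, not an average of the individual averages:

from collections import defaultdict

# (carries, yards) for each RB season; the sample rows are made up.
seasons = [(87, 360), (312, 1390), (27, 199), (160, 640), (45, 170)]

def bucket(carries):
    if carries > 300:
        return "301+"
    lo = (carries - 1) // 50 * 50 + 1
    return f"{lo:02d}-{lo + 49}"

totals = defaultdict(lambda: [0, 0])   # bucket -> [carries, yards]
for carries, yards in seasons:
    b = bucket(carries)
    totals[b][0] += carries
    totals[b][1] += yards

for b in sorted(totals, key=lambda s: int(s.split("-")[0].rstrip("+"))):
    c, y = totals[b]
    print(f"{b}\t{y / c:.2f}")          # group YPC = total yards / total carries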

So what does this table tell us? The RBs with the fewest carries also have the lowest yards per carry average. Now be careful. This does not mean that it's harder for RBs with fewer carries to obtain a high YPC average. It just means that RBs with fewer carries also tend to average fewer yards per carry. Individual low-carry RBs may still post very high YPC averages (more on that tomorrow).

As you probably noticed, the NFL average jumps around a bit. The average YPC is relatively constant, so it's probably not absolutely necessary to normalize each player's production by year, but it feels like the right thing to do. Here's the same data as above, but instead of showing each group's average YPC, we're looking at each group's average YPC as a percentage of the league average yards per carry.


2002 2003 2004 2005
01-50 94.4% 89.6% 95.8% 82.1%
51-100 104.6% 92.2% 99.7% 102.1%
101-150 94.5% 101.8% 98.1% 92.9%
151-200 93.4% 94.9% 99.5% 95.2%
201-250 95.0% 98.2% 98.0% 97.6%
251-300 103.2% 107.2% 100.0% 101.7%
301+ 106.2% 106.0% 104.0% 110.2%

I think that table captures what we want a bit better. The runners with the fewest carries clearly average the fewest yards per carry. The RBs with the most carries also average the most yards per carry.

So what does this all mean? Well, it might not mean much. It's logical to assume that coaches give the most carries to the best RBs. Less talented RBs won't get as many carries, and we shouldn't expect them to do well just because they are only running two or three times a game. Something else might skew the data as well. If Joe Scrub is lucky enough to rush 120 times for 600 yards, his coach might give him an extra 40 carries. Even if he only averages three yards per carry on those additional touches, his YPC for the season will be 4.50, and he'll be in the 150+ bracket. So the RBs in the 150+ bracket will get a boost while the RBs in the 101-150 group will "lose" Joe Scrub's stats.

Now that we know it's not common for running backs to average a high number of yards per carry without a lot of carries, let's take a quick look at all RBs last year that averaged at least 4.5 yards per carry with fewer than 100 rushes.


Name Car Yards YPC
Rock Cartwright 27 199 7.37
Jason McKie 3 22 7.33
Dan Kreider 3 21 7
Aveion Cason 10 65 6.5
Darren Sproles 8 50 6.25
Michael Pittman 70 436 6.23
Michael Turner 57 335 5.88
Justin Fargas 5 28 5.6
Terry Jackson 2 11 5.5
Damien Nash 6 32 5.33
Maurice Hicks 59 308 5.22
Adrian Peterson 76 391 5.14
Ron Dayne 53 270 5.09
Ryan Moats 55 278 5.05
Bryan Johnson 1 5 5
Shawn Bryson 64 306 4.78
Leonard Weaver 17 80 4.71
Bruce Perry 16 74 4.62
Mack Strong 17 78 4.59
Chris Perry 61 279 4.57
B.J. Askew 13 59 4.54
Patrick Pass 54 245 4.54

Darren Sproles, Michael Turner, Justin Fargas, Maurice Hicks, Adrian Peterson, Ryan Moats, Leonard Weaver and Chris Perry are all young and talented runners that haven't earned much playing time yet in the NFL. All play behind some pretty good runners, but are only an injury away from seeing significant playing time.

Michael Pittman, Ron Dayne and Shawn Bryson are NFL vets that have seen some success after changing teams, but have had generally underwhelming NFL careers. Bryson and Pittman are both versatile RBs with career YPC averages of 4.0+ and soft hands, but neither looks to be a starter anytime soon.

There's only one RB on that list that is a projected starter in 2006, and it's Ron Dayne. Dayne's always been a controversial running back that seems to polarize NFL fans. Will he revive his career in Denver? Should we care more about the much larger sample size (most of his career in New York, where he was a bust) or the significantly smaller but more relevant one (his success playing in Denver last year)? It's hard to say, and I don't think he was as impressive as his 2005 stats indicated, but the above data make me think we probably shouldn't dismiss those numbers too quickly.

10 Comments | Posted in General, Statgeekery

Wide receivers, quarterbacks, and consistency II

Posted by Doug on July 7, 2006

Following up on yesterday's post. . .

I decided it might make sense to open up the investigation a bit. What I plan to do is to consider each team's coach, their main quarterback, main running back, top two wide receivers and top pass-catching tight end (I'd like to include offensive linemen, too, but don't have sufficient data). Ideally, I'd like to see what happens to the team's offensive output when each of those factors is changed one at a time. For example, is a team's performance more consistent if they change only the coach or if they change only the quarterback? Is a team's offensive output more variable, ceteris paribus, when they switch running backs or when they switch wide receivers?

This is fairly easy to do, but it turns out that we're going to run into sample size problems. Only four teams since 1978, for instance, have kept the same main quarterback, running back, and tight end, and top two wide receivers, but changed coaches. (How many of those four can you name off the top of your head? Two of them were Super Bowl champs; you will figure those out if you give it a moment's thought. I'll be impressed if anyone can name the other two without looking it up.)

But in a lot of cases, the team's "top pass-catching tight end" is a very minor part of the offense. The same could even be said of the team's second wide receiver in a lot of cases and even of the running back in extreme cases. I'd like to make the question a little more flexible, allowing the tight end, for example, to change if he wasn't a big part of the offense in Year 1. In other words, I want to try to see what happens if three of the four most important (skill position) parts of the offense stay the same and one changes. It's not exactly clear how to best implement that, but once I get it figured out I should have some respectable sample sizes and still stay within the spirit of the original question.
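
Here's one possible encoding of that flexibility, purely as a sketch; the 5% cutoff, the spot names, and the data format are arbitrary choices of mine, not the implementation Doug eventually settles on.

# A spot only counts as "changed" if the departing player was a real
# part of the Year 1 offense; minor contributors can turn over freely.
MINOR_SHARE = 0.05   # under 5% of team offensive yards = minor (arbitrary)

def spot_changed(spot, year1, year2):
    """year1/year2 map each spot to (player, share of team yards),
    e.g. {'qb': ('Favre', 0.55), 'rb': ('Green', 0.25), ...}."""
    player1, share1 = year1[spot]
    player2, _ = year2[spot]
    if player1 == player2:
        return False
    return share1 >= MINOR_SHARE   # ignore turnover at minor spots

def exactly_one_change(year1, year2,
                       spots=("qb", "rb", "wr1", "wr2", "te")):
    return sum(spot_changed(s, year1, year2) for s in spots) == 1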

I'll think about it and get back to you on it in about ten days.

That's right. I'm taking next week off. I'll be traveling a bit and internet access will be spotty. But fear not, I have deputized my friend Chase, who claims to have five full days of posts ready for you. I'll be around enough to monitor what's happening, but Chase will be running the show.

12 Comments | Posted in General, Statgeekery

Wide receivers, quarterbacks, and consistency

Posted by Doug on July 6, 2006

I recently picked up a copy of a book called The Wages of Wins, by David Berri, Martin Schmidt, and Stacey Brook. The authors are economists and they apply their economic outlook to the world of sports. I generally find this sort of thing interesting because the economist worldview has always struck me as very sensible. And because I like sports.

I want to make clear that this post is not a review of the book. I may or may not do that in a future post, but right now I have not read enough of it, nor have I read it carefully enough, to construct an adequate review. But I did get an idea while reading through it, and that's a sign of a promising book. The purpose of this post is merely to share the idea.

Most of the book is about baseball and basketball, but there is one chapter devoted to football. Its title is How are quarterbacks like mutual funds? It starts with a game-by-game look at Brett Favre's 2004 season, in which he threw 20 touchdowns and 4 interceptions in the odd-numbered games, and 10 touchdowns and 13 picks in the even-numbered games.

The title question is then answered:

The Favre story suggests that NFL quarterbacks are not quite as consistent as NBA players. Like mutual funds, past performance is no guarantee of future returns.

The authors then go on to build a metric of quarterback performance. The discussion will surely get sidetracked if I tell you exactly what it is, so I'm not going to do that. I'll just tell you that it includes passing attempts and yards, rushing attempts and yards, interceptions and fumbles. Using that metric, the authors measure the year-to-year consistency of quarterbacks and show that quarterbacks are, in fact, not very consistent. More precisely, quarterbacks' ratings according to this measurement constructed from yards, attempts, and turnovers are not very consistent. It's not uncommon, for example, to see a quarterback's rating go from the top quintile in the league to the bottom quintile --- or vice versa --- in consecutive years. I had not realized how variable quarterback stats are.

The authors observe that one reason for this inconsistency is that the rating they attach to a quarterback in a given year is dependent on the performance of a lot of players besides the quarterback himself. This point is well-known to all aficionados of football numbers; it is simultaneously the reason football is so fun to watch and the reason it's so hard to analyze. Any measure that penalizes Brett Favre when Donald Driver fails to get open or rewards Favre when Driver steals a sure interception out of the defender's hands is going to make Favre look more inconsistent than he really is. And that's not a criticism of Berri, Schmidt, and Brook's system in particular, because every system does it.

Favre's rating is a function of his performance, his teammates' performance, and random noise (yeah, there's some other stuff in there, too. I'm trying to keep it simple.). A rating of, say, Dwyane Wade, based on his stats, would also be a function of his performance, his teammates' performance, and random noise. But since Favre has 10 teammates and Wade has only four, it would be reasonable to suspect that Favre's rating is diluted by factors other than Favre's performance to a greater extent than Wade's is. Even if Favre's performance is rock steady from game to game and year to year, his rating, because there is so much other junk mixed into it, might still tend to vary a lot.

To summarize: Berri, Schmidt, and Brook do a very nice job of showing that quarterbacks' statistics are inconsistent (compared to those of basketball and baseball players) from year to year. But the open question is whether quarterbacks' performances are inconsistent from year to year, or if the variance in the statistics is due to the weak relationship between statistics and performance.

And the issue is not limited to quarterbacks, of course. In fact, quarterbacks might be the least affected by their teammates' performance. This is a wild guess, but I'd say that Donald Driver's statistics are more influenced by Brett Favre's performance than Favre's are by Driver's performance. If Driver isn't doing his job, Favre can at least try to find someone else to throw to. But Driver has no such option if Favre isn't holding up his end of the bargain.

I am going to shift the focus from quarterback to receivers now, and try to come up with a very rough estimate of a first pass at an extremely preliminary vague notion of an idea for how to maybe possibly determine how much of a wide receiver's production is attributable to the receiver himself, the quarterback, and everyone else on the team.

The idea goes like this. Look at all wide receivers who played at least 8 games in two consecutive seasons, and divide those receivers into three groups:


  1. those who were on the same team both years, and the team had the same starting quarterback both years;

  2. those who were on the same team both years, but with different starting quarterbacks each season;

  3. those who were on different teams in the two different years.

It'd be nice to also have a different-team-same-quarterback group, but history just doesn't provide us enough examples of that. Anyway, we press on.

The next step is to compute the year-to-year correlation in receiving yards per game for each of the groups. As many of you already know, a correlation coefficient is a number between -1 and 1 that measures the strength and direction of the linear relationship between two quantities. A positive correlation indicates two quantities that vary together (i.e. when one goes up, so does the other) while a negative correlation indicates two quantities that vary inversely. A correlation of 1 (or -1) means that the two quantities are perfectly linearly related. That is, one quantity can be predicted exactly if the other is known. A correlation of 0 means that there is no linear relationship at all between the two, so knowing one is of no use to you in predicting the other.

As this pertains to pairs of consecutive seasons of the same wide receiver, the correlation coefficient tells us roughly how easy it is to predict a receiver's stats this year using only his stats from last year. Here are the numbers:

Group 1 - same team, same QB: correlation = .75

Group 2 - same team, different QB: correlation = .64

Group 3 - different team: correlation = .44

What does this mean? The standard way to interpret these numbers is as follows (from the first statistics book I grabbed off the shelf):

[the square of the correlation coefficient] is the proportion of the total variation in the y's that can be attributed to the linear relationship with x

Sometimes the square of the correlation coefficient is described in terms of "explanatory power": it's the percentage of the variation in y's that is "explained by" variation in x. The squares of those numbers are: .56, .41, and .19. So roughly speaking, what we have is this:


  • For receivers on the same team with the same quarterback, their numbers this year are "explained by" 56% their numbers from last year, and 44% other stuff.

  • For receivers on the same team but with a different quarterback, their numbers this year are "explained by" 41% their numbers from last year, and 59% other stuff.

  • For receivers on different teams, their numbers this year are "explained by" 19% their numbers from last year, and 81% other stuff.

I really don't know what that means, except in a strict mathematical sense. It's tempting, but not mathematically justifiable, to try to make some conclusions about the role of the quarterback and the rest of the team based on differences between some of the numbers above. I am, for now, just going to say what I know is true: receivers' stats are definitely more predictable if they stay on the same team, and even more predictable if that team keeps the same quarterback. Not an earth-shattering revelation, I realize, but hey, it's just a blog post.
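
If you'd like to run this split yourself, the mechanics look something like this. The row format and the sample rows are hypothetical, and with only two pairs per group the correlations are degenerate; the real samples are much larger:

import numpy as np

# (yards/game in year 1, yards/game in year 2, same team?, same QB?)
pairs = [
    (75.0, 70.1, True, True), (40.2, 55.3, True, True),
    (62.5, 30.8, True, False), (58.9, 44.2, True, False),
    (51.0, 48.7, False, False), (33.3, 61.0, False, True),
]

groups = {
    "same team, same QB": [p for p in pairs if p[2] and p[3]],
    "same team, diff QB": [p for p in pairs if p[2] and not p[3]],
    "different team": [p for p in pairs if not p[2]],
}

for name, rows in groups.items():
    x = [r[0] for r in rows]
    y = [r[1] for r in rows]
    r = np.corrcoef(x, y)[0, 1]
    print(f"{name}: r = {r:.2f}, r^2 = {r * r:.2f}")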

It might be interesting to play this game with other factors too, like coaches for instance. That is, build a same-team / same-quarterback / different-coach group and compare it to a same-team / different-quarterback / same-coach group. I'll put that on the ever growing to-do list.

5 Comments | Posted in General, Statgeekery

Is less more?

Posted by Doug on July 5, 2006

A few weeks ago, footballoutsiders linked to my ten thousand seasons series. A general theme among the comments was that the simulation was inaccurate because it was based on season-long power ratings instead of last-few-weeks power ratings. Because teams' true strengths vary so much during the course of a season, I should have used a smaller but more recent sample instead of using all the data. Less is more. I thought about that for awhile and pondered the possibility that those folks might have a good point.

That caused me to try to build a power rating system based on at-the-time strength of schedule, which I was unable to do. But a by-product of the effort was this post about at-the-time strength of schedule. Interestingly, the majority of the respondents to it felt that taking a five-week slice of data introduced too much variability into the numbers. Use all the data. More is more. I thought about that for awhile and pondered the possibility that those folks might have a good point too.

So I decided to do a quick check. I looked at all games in weeks 10-13 during the years 1990-2005. For each game, I recorded the following information:


  • the difference between the two teams' full-season at-the-time ratings according to the simple rating system.

  • the difference between the two teams' last-5-weeks at-the-time ratings according to the same system.

So if it's week 12 of 2005 and San Francisco is playing Tennessee, we look at their week 1-11 ratings (which rate the Titans as about 5 points better) and their week 7-11 ratings (which rate the 49ers a couple of points better). I chose to look only at weeks 10 through 13 because week 10 is late enough to show some differentiation between the full-season and at-the-time ratings, and week 13 is early enough that most teams haven't given up or started to rest their regulars or whatever.

Now that we've got all the data collected, we run a logit regression to build a formula that will predict the winner of each game. Result: the at-the-time rating was not significant (in the official statistical sense). That means: if you know the full-season ratings, then there is not sufficient evidence to conclude that knowing the last-5-weeks ratings helps you predict the winners of this week's games.
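
For those curious what that looks like in practice, here's roughly the setup -- my reconstruction with simulated stand-in data, not Doug's actual code or the real 1990-2005 games:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
full_diff = rng.normal(0, 6, n)                # full-season rating difference
last5_diff = full_diff + rng.normal(0, 6, n)   # noisier last-5-weeks version
# Simulate winners driven by the true (full-season) strength difference.
won = (full_diff + rng.logistic(0, 8, n) > 0).astype(int)

X = sm.add_constant(np.column_stack([full_diff, last5_diff]))
result = sm.Logit(won, X).fit(disp=0)
print(result.summary())   # the p-value on the last5_diff column answers the question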

If you build a formula that uses just at-the-time ratings, it will predict about 62% of the games correctly. If you build a formula that uses just full-season ratings, it will predict about 66.4% of the games correctly. If you build a formula that incorporates both, it will predict about 66.6% of the games correctly.

Interesting.

One problem here is that the simple rating system does not take home field advantage into account. It could be modified to do so, but I've never bothered because NFL teams always play the same number of home and road games during the course of a season. But that's not true in a 5-week stretch, so the last-5-weeks ratings have a bit of noise included in them. I'm not sure how much of a difference that makes, but it might make some.

Assuming the above paragraph doesn't invalidate the study, this looks like pretty clear evidence that, in this case, less is not more.

7 Comments | Posted in General, Statgeekery

At-the-time strength of schedule

Posted by Doug on July 3, 2006

The Steelers and the Saints had the same strength of schedule last year. The Steelers' opponents' record, after throwing out the games against Pittsburgh itself, was 121-119. The Saints' opponents' record was the same.

But it seems that Pittsburgh's opponents were playing a lot better at the time than New Orleans' were. If you look at each opponent's record in just the two weeks before and the two weeks after they played the team in question, you get a different picture of their strength of schedule.

For instance, here is Pittsburgh's:


1 ten beat bal, lost to stl
2 hou lost to buf, lost to cin
3 nwe beat oak, lost to car, lost to sdg, beat atl
5 sdg beat nyg, beat nwe, beat oak, lost to phi
6 jax lost to den, beat cin, lost to stl
7 cin lost to jax, beat ten, beat gnb, beat bal
8 bal beat cle, lost to chi, lost to cin, lost to jax
9 gnb lost to min, lost to cin, beat atl, lost to min
10 cle lost to hou, beat ten, beat mia, lost to min
11 bal lost to cin, lost to jax, lost to cin, beat hou
12 ind beat hou, beat cin, beat ten, beat jax
13 cin lost to ind, beat bal, beat cle, beat det
14 chi beat tam, beat gnb, beat atl, beat gnb
15 min beat det, beat stl, lost to bal, beat chi
16 cle lost to cin, beat oak, beat bal
17 det lost to cin, beat nor

Here's how you read that mess:

In week 1, Pittsburgh played Tennessee. In the weeks surrounding the Pittsburgh game, Tennessee beat Baltimore and lost to St. Louis. In week 14, Pittsburgh played Chicago. In the surrounding weeks, Chicago beat Tampa, Green Bay, Atlanta, and Green Bay again.

If you tally it all up, you'll find that Pittsburgh's opponents were 32-24 in the two weeks before and after playing the Steelers. Note in particular the last weeks of the season. From week 12 on, Pittsburgh's opponents were beating almost everyone except Pittsburgh (and other teams that Pittsburgh played during that stretch).

New Orleans' opponents, on the other hand, were not doing as well around the time they played the Saints.


1 car beat nwe, lost to mia
2 nyg beat ari, lost to sdg, beat stl
3 min lost to tam, lost to cin, lost to atl
4 buf lost to tam, lost to atl, beat mia, beat nyj
5 gnb lost to tam, lost to car, lost to min
6 atl beat min, lost to nwe, beat nyj
7 stl lost to sea, lost to ind, beat jax
8 mia lost to tam, lost to kan, lost to atl, lost to nwe
9 chi beat bal, beat det, beat sfo, beat car
11 nwe lost to ind, beat mia, lost to kan, beat nyj
12 nyj lost to car, lost to den, lost to nwe, beat oak
13 tam beat atl, lost to chi, beat car, lost to nwe
14 atl beat det, lost to car, lost to chi, lost to tam
15 car beat atl, lost to tam, lost to dal, beat atl
16 det lost to gnb, lost to cin, lost to pit
17 tam lost to nwe, beat atl

When you add it up, Pittsburgh's at-the-time strength of schedule was 32-24 and New Orleans' was 21-33. Even though their overall strengths of schedule were identical, it might be the case that the Steelers were actually playing a tougher slate.
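
Mechanically, the tally is simple. A sketch, assuming one row per team per game (so every game appears twice) and ignoring ties for brevity:

def at_the_time_sos(team, games):
    """games: list of (week, team, opponent, won). Count each opponent's
    record within two weeks of its meeting with `team`, throwing out
    the opponent's games against `team` itself."""
    wins = losses = 0
    meetings = [(wk, opp) for wk, t, opp, _ in games if t == team]
    for meet_week, opp in meetings:
        for wk, t, o, won in games:
            if t == opp and o != team and abs(wk - meet_week) <= 2:
                wins += won
                losses += not won
    return wins, losses

# Tiny example: Tennessee beat Baltimore and lost to St. Louis in the
# weeks around its week 1 meeting with Pittsburgh.
sample = [
    (1, "pit", "ten", True), (1, "ten", "pit", False),
    (2, "ten", "bal", True), (2, "bal", "ten", False),
    (3, "ten", "stl", False), (3, "stl", "ten", True),
]
print(at_the_time_sos("pit", sample))   # -> (1, 1)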

Here is the full list.


TM YR Local Overall Diff
=========================================
sfo 2005 35- 20- 0 126-114- 0 +0.111
nwe 2005 33- 21- 0 124-116- 0 +0.094
pit 2005 32- 24- 0 121-119- 0 +0.067
dal 2005 31- 21- 0 127-113- 0 +0.067
jax 2005 30- 25- 0 115-125- 0 +0.066
nyj 2005 29- 22- 0 123-117- 0 +0.056
buf 2005 28- 24- 0 117-123- 0 +0.051
ind 2005 28- 25- 0 115-125- 0 +0.049
min 2005 29- 26- 0 117-123- 0 +0.040
nyg 2005 31- 26- 0 121-119- 0 +0.040
bal 2005 29- 25- 0 124-116- 0 +0.020
cle 2005 29- 27- 0 120-120- 0 +0.018
kan 2005 29- 26- 0 123-117- 0 +0.015
gnb 2005 28- 25- 0 124-116- 0 +0.012
sea 2005 25- 30- 0 107-133- 0 +0.009
det 2005 27- 27- 0 118-122- 0 +0.008
ten 2005 27- 28- 0 119-121- 0 -0.005
phi 2005 28- 26- 0 126-114- 0 -0.006
sdg 2005 29- 23- 0 136-104- 0 -0.009
den 2005 27- 26- 0 125-115- 0 -0.011
chi 2005 24- 29- 0 112-128- 0 -0.014
oak 2005 28- 27- 0 126-114- 0 -0.016
hou 2005 26- 27- 0 123-117- 0 -0.022
car 2005 23- 30- 0 110-130- 0 -0.024
stl 2005 23- 31- 0 114-126- 0 -0.049
mia 2005 22- 32- 0 110-130- 0 -0.051
was 2005 27- 28- 0 132-108- 0 -0.059
tam 2005 21- 33- 0 110-130- 0 -0.069
cin 2005 22- 32- 0 117-123- 0 -0.080
atl 2005 21- 34- 0 118-122- 0 -0.110
nor 2005 21- 33- 0 121-119- 0 -0.115
ari 2005 20- 33- 0 119-121- 0 -0.118

My original intent in looking into this was to develop a rating system that would use this kind of strength-of-schedule number instead of the full season number. I ran into some technical troubles, but I'll get them sorted out sooner or later.

17 Comments | Posted in General