Posted by Doug on July 20, 2006
Yesterday's post was a prelude to this one, in which I will dust off one of my favorite old pieces of silly research. I've written about it before here and elsewhere, so apologies to those of you who have seen it before, but bear with me.
The main idea is that the team-dependent nature of every player's statistics makes it difficult to compare the stats of players on different teams. But we might be able to compare players across teams by comparing only players on the same team and then bootstrapping our way up. Here is what I wrote two years ago:
Wide Receiver is the only position where even small groups of players are actually competing against each other under nearly identical circumstances. Domanick Davis and Brian Westbrook are competing for statistics under very different sets of circumstances and for that reason it’s extremely difficult to say with any degree of certainty who is better. Likewise, Rod Smith and Laveranues Coles are in different environments so simply comparing their stats isn’t necessarily a reliable way of determining who’s better.
But the same does not apply to Rod Smith and Ashley Lelie. Smith and Lelie are working in the same system with the same quarterback, the same offensive line, even the same game conditions. Raw numbers probably are a good way to determine to what extent Smith is better than Lelie. Likewise, Coles and Rod Gardner [remember, this was written in 2004] can be fairly compared. Every season, every team has a group of 3 to 5 guys that can, for the most part, be rank-ordered by their numbers. This situation is unique to wide receivers.
But how does this help us compare Rod Smith to Laveranues Coles? Think college football. USC didn’t play Auburn this season. So who was better? Well, we know USC is good because, among other reasons, they crushed Oklahoma, who we suspect was pretty good; they beat Texas, for example. We know Auburn was good, in part, because they beat Tennessee, Georgia, and LSU, all solid teams. While there is unfortunately no direct evidence to help us settle the Auburn/USC debate, there are piles and piles of indirect evidence. Every game played by either team, or the opponents of either team, or the opponents of those teams, serves as a tiny sliver of indirect evidence about how good USC and Auburn were. And many very intelligent people have devoted lots of their time and talent to convincing computers to assimilate all this information.
So why not put this technology to work ranking wide receivers? Rod Smith “played” Ed McCaffrey several times, and McCaffrey was good. He also played Anthony Miller and Willie Green and Eddie Kennison (remember that?). And McCaffrey has played Jerry Rice and Stephen Baker, Willie Green has played Mark Carrier and Don Beebe, Eddie Kennison has played — well, who hasn’t Eddie Kennison played? Likewise, there is loads of indirect evidence — mind you, much of it is extremely indirect — about how good Laveranues Coles is compared to Michael Jackson and Marvin Harrison and Troy Brown and even Randy Moss.
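For the curious, here's a toy sketch of the kind of thing those computers are doing. Each entry records how two receivers compared while sharing a team (the margins below are made-up numbers, not real stats), and then we iterate: each player's rating is pulled toward the average of (teammate's rating + margin) over all his comparisons, which is essentially a Massey-style least-squares rating. The indirect evidence flows through the chain automatically, so Smith and Coles end up on the same scale even though they never shared a team.

```python
# Hypothetical comparisons: (player_a, player_b, margin) means player_a
# outgained player_b by `margin` yards while they were teammates.
# All names are from the article; the margins are invented for illustration.
comparisons = [
    ("Smith", "Lelie", 300),        # direct teammates
    ("Lelie", "McCaffrey", -150),
    ("McCaffrey", "Rice", -50),     # links one team's group to another's
    ("Rice", "Coles", 100),
]

# Start everyone at zero.
ratings = {}
for a, b, m in comparisons:
    ratings.setdefault(a, 0.0)
    ratings.setdefault(b, 0.0)

for _ in range(500):
    new = {}
    for p in ratings:
        # Each comparison gives one estimate of p's rating.
        ests = []
        for a, b, m in comparisons:
            if a == p:
                ests.append(ratings[b] + m)
            elif b == p:
                ests.append(ratings[a] - m)
        # Damped update: average the old rating with the new estimate
        # so the iteration settles down instead of oscillating.
        new[p] = 0.5 * ratings[p] + 0.5 * (sum(ests) / len(ests))
    # Ratings only mean anything relative to each other; anchor the
    # scale by re-centering on zero each pass.
    mean = sum(new.values()) / len(new)
    ratings = {p: r - mean for p, r in new.items()}

for p, r in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{p:10s} {r:+.0f}")
```

With only a chain of comparisons like this one, the iteration just reproduces the margins exactly; the method earns its keep when the graph has cycles and conflicting evidence to reconcile.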
So the question for today, prompted by a comment by MDS a couple of posts ago, is: can we apply the same idea to quarterbacks?
And the answer is no. There just aren't enough cases of more than one quarterback getting significant playing time for the same team. It would be like trying to run the BCS rankings in early September. Most of the real contenders have only played a couple of 1-AA or Sun Belt Conference teams. Possibly some teams haven't even played a game yet. Brett Favre and Peyton Manning, for instance, would both have "records" of 0-0.
But what if we stretch the assumptions just a little? Let's allow ourselves to compare two quarterbacks who played for the same team in the same year or in consecutive years. This will open things up a bit. Jeff Garcia, for instance, can be compared to Ken Dorsey and Tim Rattay (SF 2003 vs. 2004), to Kelly Holcomb and Tim Couch (CLE 2004 vs. 2003), to Trent Dilfer (CLE 2004 vs. 2005), and to Steve Young (SF 1998 vs. 1999).
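The stretched rule is easy to state in code: two quarterbacks are comparable if they played for the same team in the same season or in consecutive seasons. A minimal sketch, using a handful of the stints mentioned above (the list of stints is abbreviated and for illustration only):

```python
# (quarterback, team, season) stints, abbreviated from the article's examples.
stints = [
    ("Jeff Garcia", "SF", 2003),
    ("Tim Rattay", "SF", 2004),
    ("Ken Dorsey", "SF", 2004),
    ("Jeff Garcia", "CLE", 2004),
    ("Tim Couch", "CLE", 2003),
    ("Trent Dilfer", "CLE", 2005),
]

# Two QBs are comparable if they played for the same team in the same
# season or in consecutive seasons (|year difference| <= 1).
pairs = set()
for qb1, team1, yr1 in stints:
    for qb2, team2, yr2 in stints:
        if qb1 < qb2 and team1 == team2 and abs(yr1 - yr2) <= 1:
            pairs.add((qb1, qb2))

for p in sorted(pairs):
    print(p)
```

Note that Couch (CLE 2003) and Dilfer (CLE 2005) do *not* come out comparable: two seasons apart is past the cutoff, which is exactly the kind of gap that keeps the quarterback graph so sparse.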
I said above that Garcia can be compared to those others, but that's a bit disingenuous. I mean, we can compare Garcia to Jim Thorpe if we want to, but that doesn't mean the comparison is meaningful. After I wrote the previous article, I found out that many people didn't even like the idea of comparing two receivers on the same team. They are certainly not going to like the idea of comparing Kurt Warner (1999 Rams) to Tony Banks (1998 Rams). And I'm not going to blame them.
But I'm going to press on. Just as some college football teams have lucky or unlucky scheduling quirks — some of OU's opponents last year played against a healthy Adrian Peterson and some did not, for example — some quarterbacks are going to be unfairly advantaged or disadvantaged by this scheme. Even if it's more extreme than the college football example, we've got no choice but to live with it or quit reading right here.
As I write this, I am only part way through the programming, but I'm far enough to know that it's not going to yield a reasonable set of rankings. Even with the dubious extended definition of comparability, there still are not enough pairs. Brett Favre now has two comparables, and Peyton Manning has one (a loss to Jim Harbaugh). Even if you bought into the method for wide receivers, it's just not going to work for quarterbacks.
Nonetheless, when I get the programming done, I'll post a set of rankings just for laughs. Even though they will be meaningless, they might point us in the direction of some interesting facts. The wide receiver exercise produced some rankings, but its real value (at least to me) was that it caused me to closely examine the careers of some players I hadn't really thought much about. Hopefully this will do the same.