Posted by Doug on February 16, 2007
Two days ago I presented a gray ink test for football players. The name is borrowed from Bill James' analogous test for baseball players, and the purpose of the thing is to put a single number on the quality and quantity of a particular player's league-leading or near-league-leading seasons. Having had a couple of days to wade through the lists and reflect upon them, I have a few thoughts.
First, I decided the system was potentially a little too sensitive to the particular stats of the #10 (or #5) player. If there is a huge gap between #9 and #10, or between #10 and #11, then the stats of the #10 guy don't really reflect what I think they're supposed to reflect, which is the approximate production of a guy in roughly that position. So I decided to smooth things out a little by averaging the stats of the 9, 10, and 11 guys to get the baseline #10 production. Likewise, I averaged the 4, 5, and 6 players when using a #5 baseline.
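To make the smoothing concrete, here is a minimal sketch in Python. It assumes the simplest possible scoring — a player's points are just his yards above the smoothed baseline, floored at zero — which may not match the exact formula from the earlier post; the function names are my own invention for illustration.

```python
def smoothed_baseline(yards, rank=10):
    """Baseline production for a given rank, smoothed by averaging
    the players one spot above and one spot below: ranks 9, 10, 11
    for a #10 baseline, or ranks 4, 5, 6 for a #5 baseline."""
    ordered = sorted(yards, reverse=True)
    # 1-indexed ranks (rank-1, rank, rank+1) -> 0-indexed slice
    window = ordered[rank - 2 : rank + 1]
    return sum(window) / len(window)

def gray_ink_points(player_yards, yards, rank=10):
    """Yards above the smoothed baseline, floored at zero."""
    return max(0, player_yards - smoothed_baseline(yards, rank))
```

With a league of evenly spaced totals (2000, 1900, ..., 100 yards), the #10 baseline is the average of the 9th-, 10th-, and 11th-ranked totals, so a big gap at any single rank no longer swings the baseline as much.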
But that's pretty minor.
The major thing I noticed is that the really great seasons get a lot of credit. The point of this metric, of course, was to give credit to just those seasons, but I think it might go too far.
For example (working with receiving yards and a baseline of #10), Harold Carmichael's 1973 earns him 799 points while Chad Johnson's 2006 nets him 175. Both led the league in receiving, but one gets 624 points more than the other. I specifically said last time that the method's ability to distinguish between a truly great league-leading season (like Carmichael's) and a fairly weak one (Johnson in 2006 just happened to be at the top of a group of six guys separated by fewer than 100 yards) was one of its selling points. But I'm wondering if I haven't overdone it.
And in 1978, Carmichael got 306 points for finishing third in the league in receiving. Isaac Bruce got roughly half that (158) for leading the league in receiving in 1996. Is that right? One could certainly argue that it is. Even though he led the league, Bruce was just a handful of catches from finishing out of the top 10; the fact that he was at the top of a homogeneous pack instead of in the middle or at the bottom of it is not very important. But still, he did lead the league.
While the lists produce a pretty nice mixture of receivers from all eras, I look at some of those 70s seasons --- like the Carmichael seasons mentioned above, Cliff Branch's 692 points in 1974, and Drew Pearson's 684 points for finishing second in the same season --- and wonder if they aren't being over-credited. In 1974, there were 26 teams, most of which really only used two receivers. In 2006, there were 32 teams, many of which use several wide receivers extensively. The #10 receiver in 1974 was #10 of around 50 or 60 "meaningful" receivers, while the #10 receiver in 2006 is #10 of about 80. Am I wrong about this?