Yesterday's post gives me a good excuse to dust off one of my favorite pet ideas: using paired-comparison algorithms --- like those in use by the BCS --- to rank wide receivers. Just as with yesterday's method, the main idea is that, because their statistics are at the mercy of a number of team-specific factors, wide receivers should only be directly compared to other wide receivers on the same team. But receivers on different teams can be compared indirectly in the same way that two college football teams who never played each other can be compared. Essentially, you look at common opponents. I wrote this up at the sabernomics blog about 18 months ago:
Wide receiver is the only position where even small groups of players are actually competing against each other under nearly identical circumstances. Domanick Davis and Brian Westbrook are competing for statistics under very different sets of circumstances, and for that reason it's extremely difficult to say with any degree of certainty who is better. Likewise, Rod Smith and Laveranues Coles are in different environments, so simply comparing their stats isn't necessarily a reliable way of determining who's better.
But the same does not apply to Rod Smith and Ashley Lelie. Smith and Lelie are working in the same system with the same quarterback, the same offensive line, even the same game conditions. Raw numbers probably are a good way to determine to what extent Smith is better than Lelie. Likewise, Coles and Rod Gardner can be fairly compared. Every season, every team has a group of 3 to 5 guys that can, for the most part, be rank-ordered by their numbers. This situation is unique to wide receivers.
But how does this help us compare Rod Smith to Laveranues Coles? Think college football. USC didn't play Auburn this season. So who was better? Well, we know USC is good because, among other reasons, they crushed Oklahoma, who we suspect was pretty good; Oklahoma beat Texas, for example. We know Auburn was good, in part, because they beat Tennessee, Georgia, and LSU, all solid teams. While there is unfortunately no direct evidence to help us settle the Auburn/USC debate, there are piles and piles of indirect evidence. Every game played by either team, or the opponents of either team, or the opponents of those teams, serves as a tiny sliver of indirect evidence about how good USC and Auburn were. And many very intelligent people have devoted lots of their time and talent to convincing computers to assimilate all this information.
So why not put this technology to work ranking wide receivers? Rod Smith "played" Ed McCaffrey several times, and McCaffrey was good. He also played Anthony Miller and Willie Green and Eddie Kennison (remember that?). And McCaffrey has played Jerry Rice and Stephen Baker, Willie Green has played Mark Carrier and Don Beebe, Eddie Kennison has played --- well, who hasn't Eddie Kennison played? Likewise, there is loads of indirect evidence --- mind you, much of it is extremely indirect --- about how good Laveranues Coles is compared to Michael Jackson and Marvin Harrison and Troy Brown and even Randy Moss.
After that, I actually implemented the system, but I did it using an algorithm that was not well-suited to the task. Since then, I've learned a lot and have modified the algorithm to produce a set of rankings that at least passes the laugh test.
Before I tweaked the method, though, it produced some real head-scratchers. The most notable was that it ranked Joey Galloway as the 9th-best receiver of all time (this is before last season, mind you). Ranking him that high is indeed a bit ridiculous, but he ranks highly in the simpler method proposed yesterday as well. If you examine his career, you will find that he has been seriously underappreciated throughout. Here is a passage that explains the method a bit more while discussing Galloway in particular.
Well, first let me describe the method in slightly different terms, still using the college football analogy. How do you objectively rank college football teams? Whether you're using a fancy computer scheme or having a casual water-cooler conversation, the method is essentially the same: you start with the team's record and you adjust it to account for strength of schedule.
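One standard way to turn "record adjusted for strength of schedule" into an algorithm is a Bradley-Terry model, fit by the usual iterative update: a player's strength is his win total divided by a sum that weighs each matchup by the combined strength of the two players involved, so wins over strong opponents count for more. The post doesn't say this is the exact algorithm used, so take this as a minimal sketch of the family of methods, not the author's implementation.

```python
from collections import defaultdict

def bradley_terry(games, iters=200):
    """Fit Bradley-Terry strengths from a list of (winner, loser) games.

    Each iteration sets strength[p] = wins[p] / sum over p's matchups of
    n / (strength[p] + strength[opponent]), which rewards beating strong
    opponents; strengths are then rescaled for comparability."""
    players = {p for game in games for p in game}
    wins = defaultdict(int)
    pair_count = defaultdict(int)  # games played per unordered pair
    for winner, loser in games:
        wins[winner] += 1
        pair_count[tuple(sorted((winner, loser)))] += 1

    strength = {p: 1.0 for p in players}
    for _ in range(iters):
        new = {}
        for p in players:
            denom = 0.0
            for (a, b), n in pair_count.items():
                if p in (a, b):
                    opponent = b if p == a else a
                    denom += n / (strength[p] + strength[opponent])
            new[p] = wins[p] / denom if denom > 0 else strength[p]
        total = sum(new.values())
        strength = {p: v * len(new) / total for p, v in new.items()}
    return strength

# Tiny example: A beats B twice, B beats C once.
ratings = bradley_terry([("A", "B"), ("A", "B"), ("B", "C")])
```

In the toy example, A rates above B and B above C, even though A and C never "played" each other; that transitive inference is exactly what lets the method compare receivers who were never teammates.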
Galloway has played games against Brian Blades, Mike Pritchard, James McKnight, Ricky Proehl, Tim Brown, Terry Glenn, Darnay Scott, Sean Dawkins, and Rocket Ismail, and he has won most of those games. So he has a good record. What about strength of schedule? One argument made by those who champion computer ranking systems for college football is that, in order to know how Auburn compares to USC, you have to know how Oregon compares to Arkansas, how Arizona compares to Ole Miss, and so on. Intuition only gets you so far. Our brains are only capable of processing so much information, so we lazily lump Oregon and Arkansas together as "mediocre" and we call Arizona and Ole Miss "bad." Many people won't even go that far, and lump all four into a category called "unranked," even though there are significant differences among them. Same thing here. You've probably spent lots of time thinking about how Tim Brown and Cris Carter compare. You probably haven't spent any time thinking about how Mike Pritchard and Ike Hilliard compare.
Well, it turns out that maybe Brian Blades and Ricky Proehl and Mike Pritchard and Darnay Scott and Rocket Ismail and Terry Glenn aren't half bad. I won't bore you with the particulars, and I won't try to convince you that any of these guys is Steve Largent, but all of them had a fair amount of success at various points in their careers. And in almost every case, they had more success competing against other receivers than they did competing against Galloway.
Two years ago, Miami (Ohio) started appearing in the top 5 of some of the computer polls. Critics thought it was ridiculous and mocked it accordingly. But when it comes down to it, why were they ranked so darned high? Because they were 13-1. "Yeah, but who did they beat?" ask the mockers. "Bowling Green and Marshall. Pfft." But if you take a close look at it, Bowling Green and Marshall weren't so bad. Ultimately, Miami was ranked so high because they beat almost every team they played, and because the teams they played were beating other teams as often as not. I shouldn't stretch this parallel too far, but you've probably figured out that Galloway is Miami (Ohio), Brian Blades is Marshall, and so on.
Galloway has very frequently led his team in receiving, and the other receivers on his teams --- while admittedly seeming more Bowling Green-ish than Florida State-like --- have at several different times led other teams in receiving. If you think my initial premise is reasonable, then isn't this evidence that Galloway is pretty good? To press the point, what's the evidence that Galloway isn't good? His stats? As we are all aware, stats are, in part, the products of context. To take one example of that, the quarterbacks on Galloway's teams have been: Mirer, Mirer, Moon (age 41), Moon (age 42), Kitna, Aikman (his final season), Quincy Carter, Chad Hutchinson, Quincy Carter, Brian Griese. What wide receiver is going to put up numbers with those guys throwing to him? Apparently, none. But Galloway has done the best job of making something of a bad situation.
This entry was posted on Thursday, May 4th, 2006 at 4:16 am and is filed under BCS, Statgeekery.