
# Archive for December, 2006

## Shula’s record

Back in August I reviewed the new ESPN Pro Football Encyclopedia. Left on the cutting room floor was my commentary on the Foreword by Joe Theismann. In this half-page piece of work by Theismann, he lists the three records that he "think[s] will never be touched, not with the way the game is played today."

The first is Brett Favre's record for consecutive starts by a quarterback. The second is the Dolphins 17-0 season, which Theismann "guarantees" is safe. Finally he writes this:

> The final record I see standing forever is Don Shula's mark of 347 career wins. No one will coach as long as Shula did. There's too much pressure, too much impatience by owners.

When I first read that I did some quick back-of-the-envelope calculations. For some reason Jon Gruden was the first guy to pop into my mind. Gruden has averaged 9 wins per year and he is now 43 years old. At his current rate he's almost 30 years away from the record. Obviously he's a longshot but it's not completely unthinkable.

Now play the same game with Bill Cowher. At his current clip he's 17 or 18 years away from the record, which would mean he would be 66 or 67. I'm not sure Cowher's chances of getting this record are less than, say, Tomlinson's chances of breaking Emmitt Smith's career rushing record. Tomlinson would need to maintain his current pace until he's 33 to catch Emmitt. Is 66 older for a coach than 33 is for a running back? Possibly, but I'd argue that 10.5 wins per year is an easier pace to maintain at an advanced age than 1500 rushing yards per year is.

I understand Theismann's points about the game being different now than it was in Shula's time. But even if Cowher's relationship with the Steelers goes sour, it's not like he'll have a hard time finding work. It's not clear that he would be able to replicate the same success elsewhere, but it's also not clear that he wouldn't. And even if he only averages 9-and-a-half wins per year instead of 10-and-a-half, he could still retire with the record before or shortly after his 70th birthday.

Think of it this way: what has to happen in order for Cowher *not* to break the record? Roughly speaking, it would take one or both of the following (and yes, I realize they are somewhat interrelated):

1. He loses the ability to coach at his current level before age 68ish.

2. He loses the desire to coach at his current level before age 68ish.

Now, what does it take for LT to *not* break Emmitt's record?

1. He loses the ability to play at his current level before age 33.

2. He loses the desire to play at his current level before age 33.

I'd say that in Cowher's case, #2 is as likely as #1, or nearly so. In Tomlinson's case #1 is overwhelmingly more likely than #2. If you want to break a record, desire is the easy part; it's ability that ruins most people's attempts to break the all-time rushing record.

In other words, for all the major records, there are many, many people with the desire to break it, but only a few with the skill. Since the playing records are much more dependent on skill, they are in a sense safer.

That said, if you're going to name a record that will never be broken, Shula's record is one that gives you about a 99.9% chance of not looking foolish; by the time it is broken, if ever, everyone will have forgotten that you said it would never be broken. Emmitt's rushing record almost certainly will be broken before Shula's, not necessarily because it's a more difficult feat to match, but because running back careers are shorter than coaching careers, and therefore a whole lot more guys will have the chance to give it a shot. Right now, Cowher is probably the best shot to break Shula's record; a running back who is a high school senior right now could break Emmitt's record before that. Adrian Peterson's *son* could surpass Emmitt before a guy like Eric Mangini challenges Shula's record.

16 Comments | Posted in General, History

## Throw out the records

I've heard it said that, since teams in the same division know each other well and always have incentive regardless of record, in-division games tend to be closer than out-of-division matchups.

Here is a data dump from which you can draw your own conclusions. This includes all games since the merger from week 6 through the end of the regular season.

**DIFF** is the number of games that separated the two teams in the standings at the time the game was played. For instance, when the Colts and Texans met last Sunday, the Colts were 11-3 and the Texans were 4-10, making the DIFF 7.

All numbers are presented from the vantage point of the team with the better record. In the DIFF 7 row, for instance, you can see that intradivision games are won by the team with the better record 80.4% of the time, while interdivision games are won by the team with the better record 87% of the time.

            === interdivision =====|=== intradivision =====
    DIFF      N   Win%  AvMargin   |    N   Win%  AvMargin
    ======================================================
     0.5    151  0.500     0.4     |  101  0.520     0.4
     1.0    693  0.541     2.1     |  568  0.547     1.7
     1.5    116  0.591     3.4     |   88  0.523     1.2
     2.0    523  0.602     3.5     |  474  0.560     2.3
     2.5     89  0.669     5.0     |   75  0.653     4.5
     3.0    382  0.666     5.2     |  338  0.660     5.5
     3.5     39  0.667     3.3     |   48  0.708     8.4
     4.0    253  0.644     5.9     |  257  0.716     7.2
     4.5     27  0.667     9.1     |   29  0.828    13.6
     5.0    137  0.737     8.8     |  140  0.707     6.5
     5.5     15  0.933    18.1     |   18  0.944    13.5
     6.0     88  0.807    10.0     |   90  0.767     9.2
     6.5      5  0.600    10.0     |    4  1.000    15.5
     7.0     46  0.870    12.4     |   56  0.804    10.3
     7.5      3  0.667    16.0     |    1  1.000    16.0
     8.0     20  0.700     8.5     |   29  0.897    13.0
     8.5      2  1.000    18.5     |    2  1.000    22.0
     9.0      9  0.778    17.1     |    8  0.875    16.2
     9.5      0  0.000     0.0     |    1  1.000     9.0
    10.0      4  0.750    22.0     |    4  0.750    15.5
    10.5      1  1.000     1.0     |    1  1.000    10.0
    11.0      2  1.000     8.5     |    1  1.000    28.0

13 Comments | Posted in General

## Re-handicapping the AFC

A couple of weeks ago, I tried to handicap the AFC playoff picture. I'll do it again today, with a little twist in the formula.

With all due respect to our Chiefs, Bills, Steelers and Titans fans, I'm going to leave them out of the equation today. Figuring out the odds for four teams (Jets, Jaguars, Broncos and Bengals) is difficult enough without an NFL supercomputer. Random note: the second-place teams in all four AFC divisions have the same record (8-6), and the third-place teams in all four divisions *also* all have the same record (7-7).

Here are the remaining schedules for the AFC contenders.

    Team    16     17
    Cin    @Den    Pit
    Den     Cin    SF
    Jac     NE    @KC
    NYJ    @Mia    Oak

Once again, it's time to replace those teams with their ratings from Jeff Sagarin.

    Team (rating)     16       17
    Cin  (24.80)    @21.05    22.64
    Den  (21.05)     24.80     9.47
    Jac  (29.95)     29.93   @19.74
    NYJ  (22.00)    @20.69    10.54

Now we can calculate each team's chance of winning each game, using the formula:

Home team prob. of winning ≈ 1 / (1 + e^(-0.438 - 0.0826*diff)), where "diff" equals the home team's rating minus the road team's rating.

    Team    16     17
    Cin    0.47   0.65
    Den    0.53   0.80
    Jac    0.61   0.60
    NYJ    0.42   0.80

From this, we could sum the weeks and get an expected number of season wins (8 + the number above):

    Den   9.33
    NYJ   9.22
    Jac   9.21
    Cin   9.12
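As a sketch, here's that arithmetic in Python. The 0.438 and 0.0826 constants and the Sagarin ratings are the ones quoted above; nothing else is assumed:

```python
import math

def home_win_prob(home_rating, road_rating):
    """The formula above: P(home win) = 1 / (1 + e^(-0.438 - 0.0826*diff))."""
    diff = home_rating - road_rating
    return 1.0 / (1.0 + math.exp(-0.438 - 0.0826 * diff))

# Remaining schedules: (own rating, opponent rating, playing at home?).
schedules = {
    "Cin": [(24.80, 21.05, False), (24.80, 22.64, True)],   # @Den, Pit
    "Den": [(21.05, 24.80, True),  (21.05,  9.47, True)],   # Cin, SF
    "Jac": [(29.95, 29.93, True),  (29.95, 19.74, False)],  # NE, @KC
    "NYJ": [(22.00, 20.69, False), (22.00, 10.54, True)],   # @Mia, Oak
}

for team, games in schedules.items():
    probs = [home_win_prob(own, opp) if home else 1.0 - home_win_prob(opp, own)
             for own, opp, home in games]
    # Every contender is 8-6, so expected wins = 8 + sum of the two probabilities.
    print(team, [round(p, 2) for p in probs], round(8 + sum(probs), 2))
```

Rounded, this reproduces the tables above: Cin comes out to [0.47, 0.65] and 9.12 expected wins.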

As you can tell, that's pretty close. Things change pretty quickly around here, and Cincinnati went from being a playoff favorite to really being on the outside looking in...right?

Of course, the total number of expected wins is pretty irrelevant, because some teams have better tiebreaker scenarios than others. The four wildcard contenders play seven unique games over the last two weeks. Those games could end up in any of 128 different combinations. The most likely combination would be: Denver beats Cincinnati, Cincinnati beats Pittsburgh, Denver beats San Francisco, Jacksonville beats New England and Kansas City, the Jets lose to Miami and the Jets beat the Raiders. There's about a 9.5% chance the games go that way. If they do, the Broncos and Jaguars would be in with 10 wins, and the Jets and Bengals would miss out.

For those who took a hard look at the percentages above, you could probably guess the second most likely outcome: the same as before, except Cincinnati now topping Denver. In that case, the Bengals and Jaguars would make it.

The tiebreakers can get pretty complicated, and I can't promise you that I've done them 100% correctly. I'll give it my best. But suffice it to say, getting to 10 wins seems like a pretty safe bet (although the Jets would miss out if three teams get to 10).

Here are the odds that each team gets to X number of wins:

    Wins    10     9     8
    Cin    0.30  0.51  0.19
    Den    0.43  0.48  0.09
    Jac    0.36  0.48  0.16
    NYJ    0.33  0.55  0.12
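Since each team has two independent games left, the 10/9/8-win probabilities are just the cross terms of its two game probabilities. A minimal sketch (Cincinnati shown; the ratings and logistic constants are the ones from above):

```python
import math

def game_prob(own, opp, at_home):
    """Win probability from the logistic formula quoted earlier."""
    diff = (own - opp) if at_home else (opp - own)
    p_home = 1.0 / (1.0 + math.exp(-0.438 - 0.0826 * diff))
    return p_home if at_home else 1.0 - p_home

def two_game_distribution(p1, p2):
    """P(2 wins), P(1 win), P(0 wins) for two independent games."""
    return (p1 * p2,
            p1 * (1 - p2) + (1 - p1) * p2,
            (1 - p1) * (1 - p2))

# Cincinnati: @Den, then home vs Pit (Sagarin ratings from the post).
p1 = game_prob(24.80, 21.05, at_home=False)
p2 = game_prob(24.80, 22.64, at_home=True)
dist = two_game_distribution(p1, p2)
# -> 2 wins ~ 0.30, 1 win ~ 0.51, 0 wins ~ 0.19, matching the table row
```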

In terms of tiebreakers, the Jets look to be in the worst position because they'd lose out to the Jaguars via head-to-head, and would lose out to the Bengals and Broncos because of a poor conference record. So the only way the Jets can make the playoffs is by having a better record than two of the other teams. Of the 128 combinations, 36 would give the Jets a better record than two of the other three teams and put them in the playoffs. The sum of the odds of those combinations occurring is just north of 27%, making the Jets slightly better than one-in-four favorites to make it.

The Broncos are the opposite of the Jets; they seem very likely to make it if tied. If tied with the Jets or Jaguars, Denver would make it because of a better conference record. Even if Denver loses to Cincinnati, Denver will make it as long as they don't have a worse record than two other teams. There are 59 combinations where no one would have a better record than Denver, and another 39 where only one team would post a better record. The sum of the odds of any of those 98 combinations equals 74%.

Cincinnati would win a tiebreaker over the Jets and Jaguars, but might or might not against the Broncos. If Denver beats Cincinnati, Denver would get the tiebreaker. If Cincinnati beats Denver, Cincinnati would get the tiebreaker if neither the Jets nor the Jaguars have the same record as the Bengals and Broncos. If one of those teams does (i.e., a three-way tie), then Denver would be the first team in because of a better conference record, and would make it in over Cincinnati. (Then, depending on the record of the 4th team, Cincinnati may or may not get in.) I'll save you the grunt work, and just say there are 78 combinations that would put the Bengals in, and there's a 54% chance of that happening.

The Jaguars aren't in much better shape than the Jets; they'd beat out the Jets in a tiebreaker, but would not top Denver or Cincinnati. There are 56 combinations in which at least two of the following three things happen: the Jets don't have more wins than the Jags, the Jags have more wins than the Broncos, the Jags have more wins than the Bengals. The sum of those odds? 45%.

So for the two spots remaining, the Broncos lead the pack with a 74% chance of seeing the post-season. The Bengals and Jaguars are neck and neck, with Cincinnati (54%) just a bit more likely than Jacksonville (46%) to make the playoffs. The Jets have only a 27% chance, but take solace in this, Jets fans: a win over the Fins increases the Jets' chances to 59.5%, and there's less than a 30% chance the Jets win out and don't make the playoffs. The real reason New York's penalized is that the Jets only have a 1/1000 chance of making the playoffs with 9 wins, by far the lowest of these final four.

Note: Just a reminder, these percentages all sum to 200% (two playoff spots). Of course, the assumption in all of this is that the Steelers, Bills, Titans and Chiefs all miss the playoffs.

10 Comments | Posted in General

## The Greatest Fantasy Season Ever

Is LaDainian Tomlinson having the greatest season ever, and what does that have to do with John Brockington?

There's a whole lot to say about LaDainian Tomlinson. Readers of this blog probably know he set the single season scoring record this past weekend, even if Paul Hornung thinks there should be an asterisk. And everyone knows that Tomlinson set the single season touchdown record, too. So just how good of a season is he having?

Let's start by looking at pure dominance. Over a nine-game stretch last year, Larry Johnson scored an incredible 264 fantasy points. He rushed for 1,351 yards and 16 touchdowns, and gained another 276 yards and a score through the air. When it was over, I thought that might have been the greatest stretch in fantasy football history. Johnson averaged 29.3 FP/G, an absolutely unheard of number.

Marshall Faulk set the record for fantasy points in a season by a non-QB, with 374.9 in 2000. Faulk did that in just fourteen games, for an astounding 26.8 FP/G. Priest Holmes in 2002 averaged 26.6 (while also only playing fourteen games), but in the last thirty years only Holmes (2002, 2003, 2004) and Faulk (2000, 2001) averaged even 23 fantasy points per game. Emmitt Smith came closest, averaging 22.8 FP/G in 1995, with Shaun Alexander's 2005 season and Terrell Davis' 1998 season right behind him.

Then, as we all know, LaDainian Tomlinson happened. His season started innocently enough, with only one 100-yard rushing game and two scoreless games in the season's first month. But after eight straight 100-yard games and 28 more TDs, Johnson's great 2005 has been left in the dust.

Through fourteen games, Tomlinson has scored 406.1 fantasy points this year. That's an average of 29 FP/G, ever so slightly behind Johnson's great run in 2005. But let's compare apples to apples; LJ averaged 29.3 FP/G over nine games. Over his last ten games, LT is averaging 34.34 FP/G. That's the greatest stretch in the modern fantasy football era.

Tomlinson not only broke the single-season FP mark for non-QBs, he smashed it. But does that mean this is the best season in the history of fantasy football? To figure that question out, you need to know a bit more about Value Based Drafting ("VBD"). In short, we need to compare Tomlinson to his peers (other 2006 RBs), so we can compare him across eras and across positions.

LT's 406 points give him a VBD value of 266; simply, this means he's scored 266 fantasy points more than the 24th best running back, Corey Dillon. He's also scored 126 more points than the 2nd best running back, Larry Johnson. Obviously, that's really good. But is it the best of all time?
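VBD itself is just a subtraction against a baseline rank. A toy sketch (the 140.1-point baseline is not stated in the post; it's implied by LT's 406.1 points and 266 VBD):

```python
def vbd(points, baseline_points):
    """Value Based Drafting: fantasy points above the baseline player
    (here, the 24th-ranked RB of the same season)."""
    return points - baseline_points

BASELINE_2006 = 140.1          # implied: 406.1 (LT's points) - 266 (his VBD)
print(round(vbd(406.1, BASELINE_2006)))   # LT's VBD: 266
```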

In that 2000 season, Marshall Faulk had a VBD value of 216, and in 2002 Priest Holmes' VBD number was 220. Those numbers actually understate their true values, since both only played 14 games while the rest of the league played sixteen. Terrell Davis (233 VBD, 1998) and Priest Holmes (231, 2003) earned the highest VBD values of any player at any position over the last thirty years.

Now, of course, LT has passed both of them. And he's likely to add to his total, and maybe even reach a mind-boggling 300 VBD points. So why are we discussing this now, instead of in a few weeks?

Because while TD's 233 points of value was the most in the last 30 years, it wasn't the most of all-time. For many years, it's been undisputed that O.J. Simpson had the greatest single season in fantasy football history. Many of us know that Simpson ran for 2,000 yards in 1973, becoming the first player to ever do so. What many don't know, was that his 1975 season was one of the greatest of all time. Simpson *averaged* 160 combined yards per game, and reached paydirt 23 times in a fourteen game season.

Simpson scored 362 fantasy points in 1975, an unheard of number for that era. It was the record for non-QBs until 1995, when Emmitt Smith scored 365 points. Last year, Shaun Alexander's 364 points knocked The Juice to sixth all-time. But what's most impressive is how Simpson distanced himself from his peers.

Even after Larry Johnson's streak last year, I really doubted that anyone would ever challenge Simpson's 247 VBD points in a fourteen-game season. Simpson had one of the best seasons of all time, while the 24th-ranked RB that season -- John Brockington -- totaled 676 yards and eight TDs.

But once again, LT continues to amaze. Simpson's VBD pro-rated for a 16 game season is 282 points, which Tomlinson seems likely to break. So while most of us will remember Tomlinson's 2006 season for how he set the single-season touchdown record, I'll remember it for what I consider to be a much more incredible achievement.

You might think the 24th best RB is an arbitrary baseline, so here is how many more points LT (and O.J.) scored than the Xth-ranked RB did in his respective season. As you can tell, LT is better by every measure save one: the comparison to the fifth-best fantasy RB of the year.

    Rank     LT    O.J.   Diff
      2    125.7   54.2   71.5
      3    154.5   98.6   55.9
      4    164.5  150.3   14.2
      5    167.9  171.5   -3.6
      6    173.2  171.7    1.5
      7    215.5  173.8   41.7
      8    215.9  176.9   39.0
      9    219.8  180.1   39.7
     10    232.4  192.7   39.7
     11    235.2  206.6   28.6
     12    235.9  211.8   24.1
     13    237.2  211.9   25.3
     14    237.6  222.4   15.2
     15    242.5  229.8   12.7
     16    244.0  230.2   13.8
     17    244.1  231.1   13.0
     18    251.3  234.4   16.9
     19    255.9  238.2   17.7
     20    258.7  240.4   18.3
     21    259.2  243.7   15.5
     22    262.4  243.8   18.6
     23    265.0  245.5   19.5
     24    265.8  246.7   19.1
     25    267.0  249.3   17.7
     26    273.0  251.2   21.8
     27    275.3  251.7   23.6
     28    292.6  252.0   40.6
     29    293.2  252.3   40.9
     30    294.8  256.3   38.5

As I've stated a few times, I've long held O.J.'s record in high regard. But Tomlinson's going to smash another record, too. The most the number one RB has ever topped the number two RB by was 100 points, when Emmitt Smith lapped Curtis Martin and the rest of the NFL. Walter Payton (88.2, 1977), Jim Brown (76.7, 1963, based on somewhat incomplete data), Leroy Kelly (76.2, 1968, same data concern), Marshall Faulk (63.8, 2001) and O.J. were the only players to ever even beat the number two RB by 60 fantasy points. And right now, Tomlinson's topping Johnson by more than double that.

6 Comments | Posted in Fantasy

## Maximum likelihood with home field and margin of victory

Before reading this entry, make sure you've read part I and part II in the maximum likelihood series.

**How to incorporate home field into a maximum likelihood model**

In the basic model, we are trying to maximize the product of all R_i / (R_i + R_j), where this factor represents a game in which team i beat team j. In order to build home field advantage into the model, we need one additional parameter. Let's call it *h*. Think of it as a multiplier that affects the home team's rating.

Let's look at the same simple "season" we looked at last time:

    A beat B
    B beat C
    C beat A
    A beat C

In the basic model, we chose ratings A, B, and C so as to maximize:

P = A/(A+B) * B/(B+C) * C/(A+C) * A/(A+C)

Now let's assume that the home teams in those games were A, C, C, and A. If *h* is a multiplier that alters the home team's rating, then A's probability of winning that first game isn't A/(A+B), it's hA/(hA+B). And so the quantity to be maximized is:

P = hA/(hA+B) * B/(B+hC) * hC/(A+hC) * hA/(hA+C)
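To make the dial-twiddling concrete, here's a throwaway sketch that evaluates P for this four-game season and grid-searches the four dials. (A real implementation would use a proper numerical optimizer; the grid values are arbitrary, and A is pinned at 1.0 since only ratios matter.)

```python
def likelihood(ratings, h, games):
    """games: (winner, loser, home_team) triples.  The home team's rating
    is multiplied by h, per the model above."""
    p = 1.0
    for winner, loser, home in games:
        rw = ratings[winner] * (h if home == winner else 1.0)
        rl = ratings[loser] * (h if home == loser else 1.0)
        p *= rw / (rw + rl)
    return p

# The toy season: A beat B, B beat C, C beat A, A beat C;
# home teams were A, C, C, A.
games = [("A", "B", "A"), ("B", "C", "C"), ("C", "A", "C"), ("A", "C", "A")]

# Coarse grid search over the remaining three dials.
grid = [0.25, 0.5, 1.0, 2.0, 4.0]
best_p, best = 0.0, None
for b in grid:
    for c in grid:
        for h in [0.75, 1.0, 1.5, 2.0, 3.0]:
            p = likelihood({"A": 1.0, "B": b, "C": c}, h, games)
            if p > best_p:
                best_p, best = p, (b, c, h)
```

Since home teams won three of these four games, the best grid point comes out with h > 1.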

Now instead of having three dials (A, B, and C) to twiddle, we have four dials: A, B, C, and h. But it's the same game: set them all so as to maximize P. Here are the home-field-included rankings through week 14:

    TM    Rating   Record
    ======================
    sdg    5.600   11- 2- 0
    ind    4.326   10- 3- 0
    chi    4.141   11- 2- 0
    bal    3.769   10- 3- 0
    nwe    2.451    9- 4- 0
    nor    1.688    9- 4- 0
    cin    1.678    8- 5- 0
    jax    1.545    8- 5- 0
    dal    1.322    8- 5- 0
    den    1.278    7- 6- 0
    nyj    1.216    7- 6- 0
    nyg    1.197    7- 6- 0
    ten    1.078    6- 7- 0
    buf    0.981    6- 7- 0
    kan    0.861    7- 6- 0
    phi    0.775    7- 6- 0
    atl    0.720    7- 6- 0
    sea    0.719    8- 5- 0
    mia    0.688    6- 7- 0
    pit    0.679    6- 7- 0
    car    0.550    6- 7- 0
    min    0.474    6- 7- 0
    cle    0.405    4- 9- 0
    gnb    0.404    5- 8- 0
    hou    0.364    4- 9- 0
    was    0.324    4- 9- 0
    stl    0.292    5- 8- 0
    sfo    0.273    5- 8- 0
    tam    0.247    3-10- 0
    ari    0.180    4- 9- 0
    oak    0.114    2-11- 0
    det    0.088    2-11- 0

    HFA = 1.556

If you take two averagish teams, say the Bills and Titans, and plug in the numbers, you get a 63% probability of Tennessee beating Buffalo in Nashville and a 59% probability of the Bills winning that same matchup in Buffalo. If, on the other hand, you have two mismatched teams like the Colts and Lions, then homefield means very little and you get 99% Colts in Indy and 97% Colts in Detroit.

**How to incorporate margin of victory into a maximum likelihood model**

Several months ago I told you about what I call the very simple rating system. That was a rating system that included only points scored and points allowed (and schedule). It doesn't directly consider wins and losses at all. However, by tinkering just a bit, you can turn it into a system that does consider wins and losses. In fact, you can turn it into a system that *only* considers wins and losses (and schedule). In doing so, you lose the theoretical elegance of the method, but you might get a system that "works" better. And most of the time that's what you want.

The situation here is similar. Maximum likelihood is a method that only considers wins and losses and doesn't consider margin of victory at all. But with a little tweaking you can turn it into a system that does exactly the opposite or you can set it somewhere in between. Just as is the case with the simple rating system, tweaking the system in this way strips it of some of its abstract beauty. But if it turns it into a tool that is better for the purpose you have in mind, then that's OK.

To incorporate margin of victory, all you have to do is (conceptually) pretend the game is 100 games and then decide based on the final score how you want to divvy up those hundred games between the two teams. Or, to put it another way, you want to award each team some percentage of a win and some percentage of a loss.

The easiest way to do it is to award the entire game to the winner. That's just the basic margin-not-included system we've been talking about.

The other extreme would be to award something like

(1/2) * ( 1 + (WinnerPoints - LoserPoints)/(WinnerPoints + LoserPoints) )
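That split is a one-liner; a quick sketch that reproduces the worked examples:

```python
def winner_share(winner_pts, loser_pts):
    """Fraction of the game credited to the winner:
    (1/2) * (1 + margin / total points)."""
    total = winner_pts + loser_pts
    return 0.5 * (1 + (winner_pts - loser_pts) / total)

print(round(winner_share(17, 10), 2))  # 0.63
print(round(winner_share(37, 30), 2))  # 0.55
print(round(winner_share(37, 10), 2))  # 0.79
print(winner_share(21, 0))             # 1.0 -- a shutout is a full win
```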

to the winner. So for example a 17-10 win would be worth about .63 wins, while a 37-30 win would be worth about .55 and a 37-10 win would be worth .79. A shutout would always be worth one full win. Using this system, the NFL ratings through week 14 look like this:

    TM    Rating   Record
    ======================
    chi    1.940   11- 2- 0
    jax    1.777    8- 5- 0
    sdg    1.589   11- 2- 0
    bal    1.551   10- 3- 0
    dal    1.420    8- 5- 0
    cin    1.370    8- 5- 0
    nwe    1.360    9- 4- 0
    nyg    1.348    7- 6- 0
    ind    1.305   10- 3- 0
    nor    1.231    9- 4- 0
    den    1.211    7- 6- 0
    mia    1.074    6- 7- 0
    phi    1.071    7- 6- 0
    buf    1.057    6- 7- 0
    ten    0.946    6- 7- 0
    kan    0.937    7- 6- 0
    pit    0.924    6- 7- 0
    car    0.918    6- 7- 0
    atl    0.898    7- 6- 0
    nyj    0.861    7- 6- 0
    sea    0.854    8- 5- 0
    hou    0.848    4- 9- 0
    min    0.801    6- 7- 0
    was    0.738    4- 9- 0
    cle    0.699    4- 9- 0
    stl    0.639    5- 8- 0
    ari    0.617    4- 9- 0
    sfo    0.616    5- 8- 0
    det    0.604    2-11- 0
    gnb    0.589    5- 8- 0
    oak    0.521    2-11- 0
    tam    0.483    3-10- 0

    HFA = 1.158

The "predictions" now look much more intuitive. Colts over Lions, instead of being a 95+% walkover for the Colts, is now seen as a 71% chance of a Colts win in Indy and a 65% chance of a Colts win in Detroit.

It makes sense that this change in the algorithm would result in much more conservative (and more realistic) predictions of future games. By treating a win as only a partial win, we're allowing the algorithm to use information that our brains are already using when we make a quick top-of-the-head guess. For instance, when the Colts beat Buffalo 17-16 in week 10, it goes down in the standings as one win for the Colts and one loss for the Bills, and the basic maximum likelihood model likewise counts it as a 100% win for the Colts. But the modified model instead sees it as a close game that really could have gone either way but that the Colts happened to win.

The model above treats that Colts win as a 52% win for Indy and a 48% win for Buffalo. Some people might think that goes a bit too far, that winning should count for something extra beyond the point margin. Those folks might use a split like this to the winner:

.6 + .4 * ( (WinnerPoints - LoserPoints)/(WinnerPoints + LoserPoints) )

This guarantees winners at least 60% of the win. Not surprisingly, it will have the effect of making the rankings look more like the standings (but still not as much as the original margin-not-included model):

    TM    Rating   Record
    ======================
    chi    2.169   11- 2- 0
    sdg    1.905   11- 2- 0
    bal    1.795   10- 3- 0
    jax    1.755    8- 5- 0
    ind    1.600   10- 3- 0
    nwe    1.508    9- 4- 0
    cin    1.424    8- 5- 0
    dal    1.418    8- 5- 0
    nyg    1.329    7- 6- 0
    nor    1.323    9- 4- 0
    den    1.229    7- 6- 0
    buf    1.054    6- 7- 0
    phi    1.039    7- 6- 0
    mia    1.011    6- 7- 0
    ten    0.983    6- 7- 0
    kan    0.938    7- 6- 0
    nyj    0.919    7- 6- 0
    pit    0.897    6- 7- 0
    atl    0.883    7- 6- 0
    car    0.858    6- 7- 0
    sea    0.850    8- 5- 0
    hou    0.753    4- 9- 0
    min    0.748    6- 7- 0
    was    0.656    4- 9- 0
    cle    0.649    4- 9- 0
    stl    0.576    5- 8- 0
    gnb    0.564    5- 8- 0
    sfo    0.557    5- 8- 0
    ari    0.516    4- 9- 0
    det    0.461    2-11- 0
    tam    0.441    3-10- 0
    oak    0.431    2-11- 0

    HFA = 1.204

9 Comments | Posted in BCS, Statgeekery

## Cycles

Yesterday the Bills beat the Dolphins 21-0. The week before, the Dolphins beat the Patriots by the same score. Back in week 7 the Patriots beat the Bills by 22. So we have this 3-game cycle.

    2006 week  7: nwe over buf 28- 6
         week 15: buf over mia 21- 0
         week 14: mia over nwe 21- 0

Let's say this 3-team cycle has an absurdity score of 64, the sum of those three victory margins. That's not the highest absurdity score of 2006. Here is the highest (so far):

    2006 week  5: jax over nyj 41- 0
         week 12: nyj over hou 26-11
         week  7: hou over jax 27- 7
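Hunting for these cycles is mechanical. Here's a sketch that scans a result list for 3-team cycles and scores them; the team codes and margins are the 2006 games above:

```python
def absurd_cycles(results):
    """results: {(winner, loser): margin of victory}.
    Returns 3-team cycles (a beat b, b beat c, c beat a) with their
    'absurdity scores' (sum of the three margins), highest first."""
    cycles = []
    for (a, b) in results:
        for (b2, c) in results:
            if b2 != b or c == a:
                continue
            # canonicalize so each cycle is reported once, not three times
            if (c, a) in results and a < b and a < c:
                score = results[(a, b)] + results[(b, c)] + results[(c, a)]
                cycles.append(((a, b, c), score))
    return sorted(cycles, key=lambda x: -x[1])

games_2006 = {
    ("nwe", "buf"): 22, ("buf", "mia"): 21, ("mia", "nwe"): 21,
    ("jax", "nyj"): 41, ("nyj", "hou"): 15, ("hou", "jax"): 20,
}
print(absurd_cycles(games_2006))
# the hou/jax/nyj cycle scores 76; the buf/mia/nwe cycle scores 64
```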

The highest in post-merger NFL history? Here they are:

    1988 week  3: nyj over hou 45- 3
         week 15: hou over cin 41- 6
         week  6: cin over nyj 36-19

    1989 week  4: hou over mia 39- 7
         week  6: mia over cin 20-13
         week 15: cin over hou 61- 7

    1988 week  1: nwe over nyj 28- 3
         week  3: nyj over hou 45- 3
         week  4: hou over nwe 31- 6

    1989 week  9: rai over cin 28- 7
         week 15: cin over hou 61- 7
         week 11: hou over rai 23- 7

    1980 week  3: min over chi 34-14
         week 14: chi over gnb 61- 7
         week  8: gnb over min 16- 3

    1973 week  1: atl over nor 62- 7
         week  5: nor over det 20-13
         week  3: det over atl 31- 6

    1992 week  2: tam over gnb 31- 3
         week 14: gnb over det 38-10
         week  8: det over tam 38- 7

    1972 week  3: atl over ram 31- 3
         week  4: ram over sfo 31- 7
         week  7: sfo over atl 49-14

    1970 week  1: bos over mia 27-14
         week 14: mia over buf 45- 7
         week  7: buf over bos 45-10

Some of those absurdity scores are padded by a single uberblowout. The 2006 Bills/Patriots/Dolphins cycle is one of only 15 in NFL history in which all three games had margins of 21 points or more. Here is the cycle with the largest minimum point margin:

    1992 week  2: tam over gnb 31- 3
         week 14: gnb over det 38-10
         week  8: det over tam 38- 7

Several of these have happened in a span of three weeks. Here are a couple of fun ones:

    1984 week 14: hou over pit 23-20
         week 15: pit over cle 23-20
         week 16: cle over hou 27-20

    1979 week 13: nor over atl 37- 6
         week 14: atl over sdg 28-26
         week 15: sdg over nor 35- 0

8 Comments | Posted in General

## Maximum likelihood, part II

For reference, here is maximum likelihood, part I. This post won't make much sense unless you've read that one.

Remember that I likened the method of maximum likelihood to trying to twist a bunch of dials (one for each team) so that a particular quantity is as big as possible. If you're looking at a season of 1A college football, you've got 119 dials, the thing you're trying to maximize has about 800 parts to it, and each of the dials directly controls about 12 of those parts.

Suppose you're twiddling with the Florida dial. In that mess of 800 factors, you see (R_flo / (R_flo + R_vandy)). Turning up the Florida dial increases that piece. So turn it up. Likewise, cranking the Florida dial increases the (R_flo / (R_flo + R_arkansas)) bit, so you turn it up some more. But then you notice there's a (R_auburn / (R_auburn + R_flo)) piece in there. Turning up the Florida dial decreases this part. You could counteract that by turning up the Auburn dial, but you know you're going to have to pay a price for that eventually because of the (R_georgia / (R_auburn + R_georgia)) piece, among others.

The point is, there is a place which is "just right" for the Florida dial. They won a lot of games, many of them against good teams (this creates big denominators), so you want to turn their dial up. But you can't turn it up too much, or else it will turn down that Auburn/Florida piece, to the detriment of the entire product.

Now consider Ohio State's dial. Turn it up. Now turn it up some more. Now turn it up some more. Keep turning it up and, because the Buckeyes never lost a game, you'll never run into any problem. There's nothing stopping you from turning Ohio State's dial up to infinity. You can always make that product bigger by turning Ohio State's dial up. Their rating has to be infinite.
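You can watch this happen numerically. In this sketch (team names are placeholders for a made-up mini-season), the product only ever grows as the undefeated team's dial is cranked up, so there's no finite maximum:

```python
def season_likelihood(ratings, games):
    """Product of R_winner / (R_winner + R_loser) over all games."""
    p = 1.0
    for winner, loser in games:
        p *= ratings[winner] / (ratings[winner] + ratings[loser])
    return p

# "und" never loses, so its factors approach (but never reach) 1.
games = [("und", "x"), ("und", "y"), ("x", "y"), ("y", "x")]
probs = [season_likelihood({"und": r, "x": 1.0, "y": 1.0}, games)
         for r in (1, 10, 100, 1000, 10**6)]
print(probs == sorted(probs))   # True: cranking the dial always helps
```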

That's OK, you say. Ohio State was undefeated and should be ranked first, right? Right, but then note that the same thinking applies to Boise State. They must, in a sense, necessarily be tied with Ohio State with an infinite rating. Is that what we want? Maybe, and maybe not, but I'm pretty sure most people don't want a system that *mandates* that undefeated teams always rank at the top no matter what.

But the plot thickens. Michigan's only loss was to Ohio State. So the only way it hurts you to turn up Michigan's dial is because of this term: (R_osu / (R_osu + R_mich)). But if Ohio State's ranking is infinite, then you can turn up Michigan's dial without penalty. And since they won all the rest of their games, turning up the Michigan dial helps increase the product. So Michigan, it turns out, needs an infinite rating as well, though not quite as big of an infinite rating as Ohio State's [yes, I'm getting sloppy with the infinities here --- my goal is to give an impression of the way things work, not to be mathematically precise].

Now who else needs an infinite rating? Wisconsin, whose only loss was to Michigan. Once Michigan's dial is jacked up to a gazillion, it doesn't hurt you much to jack Wisconsin's up to a few million.

Rather than start talking about the technicalities of this infinity business, let's just summarize with this: **the method of maximum likelihood, in its purest form, mandates that, no matter what the schedules look like, the top ranked teams must be those that have never lost, or have only lost to teams that have never lost, or have only lost to teams that have only lost to teams that have never lost, or ....**

In many situations --- basketball, baseball, NFL --- this isn't generally a problem. For college football, it's a huge problem. It's certainly defensible to have Michigan ranked ahead of Florida. But even setting aside Boise State, I don't know too many people who think Wisconsin should be ranked ahead of Florida. Further, if you wanted to rank all 706 college football teams, then any undefeated Division III or NAIA team would have to rank ahead of Florida too.

In my opinion, maximum likelihood is one of the best rating systems around: it has a sound theoretical basis, is relatively easy to understand, and produces what most people consider to be sensible results in most cases. But all models break in some situations and this one unfortunately happens to break right when and where it's needed most: at the top of the standings of a typical college football season.

But there are some ways to fix it.

One way is simply to count a win as a 99% win and 1% loss. How do you do that? Well, the easiest way to think about it is to pretend that every game is 100 games, 99 of which were won by the winner and one of which was won by the loser. Now Ohio State isn't 12-0; they're 1188-12. But the point is that they are now in the denominator of a few terms for which they are not also in the numerator. So their rating won't be infinite. If you do this with the pre-bowl 2006 college football data, you knock Wisconsin down to #9.
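Under a 99/1 split, the undefeated team's dial finally meets resistance: the log-likelihood of its games now peaks at a finite rating (at a per-game win probability of 0.99, which means a rating 99 times its opponents' in this equal-opponents toy setup):

```python
import math

def split_loglik(r, opp=1.0, games=12, win_frac=0.99):
    """Log-likelihood of an undefeated team's games when each win counts as
    win_frac of a win and (1 - win_frac) of a loss, against equal opponents."""
    p = r / (r + opp)
    return games * (win_frac * math.log(p) + (1 - win_frac) * math.log(1 - p))

# The peak is at p = 0.99, i.e. r = 99 * opp -- not at infinity.
print(split_loglik(99) > split_loglik(10))      # True
print(split_loglik(99) > split_loglik(10000))   # True
```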

This practicality, however, is gained at the expense of elegance. In particular, why 99%? Why not count a win as 94% of a win, or 63%, or 99.99%? The higher that number is, the more your rating system will depend on wins and losses. The lower it is, the more it will depend on strength of schedule. As soon as it gets below 94%, for example, Florida starts to rank ahead of Ohio State. [Astute observers will at this point suggest varying that percentage according to the margin of victory: a 1-point win could count as 60% of a win, for example, while a 28-point win could count as 99% of a win. This indeed can be done --- and I'll do it in a future post --- but for now I'm playing by BCS rules: only Ws and Ls.]
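As a rough illustration of the 99% trick, here is some sketch code. Everything in it — the `soften` helper, the fixed-point fitting routine, the toy schedule with an extra undefeated team "D" — is made up for this example; it is not anyone's actual rating code:

```python
from math import isinf

# Count each win as 99% of a win: expand every game into 99 "wins" for the
# winner and 1 "win" for the loser, then fit maximum-likelihood ratings with
# the standard fixed point R_i = wins_i / sum over i's games of 1/(R_i + R_opp).

def mle_ratings(games, iters=300):
    """games: list of (winner, loser) pairs."""
    teams = sorted({t for g in games for t in g})
    R = dict.fromkeys(teams, 1.0)
    for _ in range(iters):
        # inside the comprehension, R still refers to the previous iterate
        R = {t: sum(1 for w, _ in games if w == t) /
                sum(1.0 / (R[w] + R[l]) for w, l in games if t in (w, l))
             for t in teams}
    return R

def soften(games, pct=99):
    # pretend every game is 100 games: pct for the winner, 100-pct for the loser
    return [g for w, l in games for g in [(w, l)] * pct + [(l, w)] * (100 - pct)]

# "D" is undefeated; with raw wins and losses its rating would run off to
# infinity, but with softened games it stays finite.
games = [("A", "B"), ("B", "C"), ("C", "A"), ("A", "C"), ("D", "A")]
R = mle_ratings(soften(games))
print(all(not isinf(v) for v in R.values()))  # True
```

Lowering `pct` shifts weight from wins and losses toward strength of schedule, which is exactly the knob discussed above.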

An arbitrary parameter just jars my sensibilities. It might "work" (depending on what you mean by "work"), but it ruins the nice clean description of this method. I have seen a couple of academic papers that employ more complicated fixes, but they also have a parameter and no objective basis for determining what that parameter ought to be.

What I prefer is the simple fix proposed by David Mease. He simply introduces a dummy team and gives every team a win and a loss against that dummy team. Problem solved; now no team is undefeated and no team will have an infinite rating. If you find this a kludgy or arbitrary solution that ruins the theoretical beauty of the method, then you can read Mease's paper, where he explains how the introduction of the dummy team can serve as a set of Bayesian priors. If you're into that kind of thing.
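Mease's fix is easy to sketch, too. Everything here other than the dummy-team idea itself — the function names, the toy schedule, the fitting routine — is my own illustration, not Mease's code:

```python
# Mease's dummy-team fix, sketched: give every real team one win and one loss
# against a fictitious "DUMMY" opponent, then run an ordinary maximum-likelihood
# fit (the standard fixed point R_i = wins_i / sum of 1/(R_i + R_opp) over i's
# games). No team is undefeated any more, so no rating blows up.

def mle_ratings(games, iters=300):
    """games: list of (winner, loser) pairs."""
    teams = sorted({t for g in games for t in g})
    R = dict.fromkeys(teams, 1.0)
    for _ in range(iters):
        # inside the comprehension, R still refers to the previous iterate
        R = {t: sum(1 for w, _ in games if w == t) /
                sum(1.0 / (R[w] + R[l]) for w, l in games if t in (w, l))
             for t in teams}
    return R

def add_dummy(games):
    teams = {t for g in games for t in g}
    return games + [g for t in teams for g in [(t, "DUMMY"), ("DUMMY", t)]]

# "D" is undefeated, but every team now has a win and a loss, so all ratings
# stay finite.
games = [("A", "B"), ("B", "C"), ("C", "A"), ("A", "C"), ("D", "A")]
R = mle_ratings(add_dummy(games))
print(all(0 < v < float("inf") for v in R.values()))  # True
```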

Mease's ratings are among my favorites and, if I were running the BCS, they'd be a part of it. Now back to Peter Wolfe, whose ratings are included in the BCS and who uses something he describes as a maximum likelihood method. He does not specify exactly how he fixes the infinite rating problem. I keep meaning to email and ask him, but for some reason I only remember to do so every year around early December, and I figure he's probably got enough emails to deal with in early December.

I have tried putting in a dummy team. I've tried counting wins as P percent wins for various values of P. But I can't replicate the order of Wolfe's rankings. That might have to do with the fact that Wolfe ranks all 706 college football teams, whereas I'm only ranking the D1 teams (with an additional "generic 1AA team" included to soak up the games against 1AA teams). Or he might have some elegant fix that I'm not aware of. Maybe in February or March I'll remember to email him and ask.

6 Comments | Posted in BCS, Statgeekery

## Another rating system: maximum likelihood

Several months ago, I spent two posts (1, 2) talking about mathematical algorithms for ranking teams. All the chatter that comes along with the BCS standings has gotten me inspired to write up another one.

This one does not take into account margin of victory, and it is very similar to one of the BCS computer polls. I'll tell you about that at the end of the post.

Let's start with a 3-team league:

A beat B
B beat C
C beat A
A beat C

So A is 2-1, B is 1-1, and C is 1-2. We want to give each team a rating R_A, R_B, and R_C. And we want all those ratings to satisfy the following property:

Prob. of team i beating team j = R_i / (R_i + R_j)

What if we just arbitrarily picked some numbers? Say R_A = 10, R_B = 5, and R_C = 1. If those are the ratings, then (assuming the games are independent) the probability of seeing the results we actually saw would be:

(Prob of A beating B) * (Prob of B beating C) * (Prob of C beating A) * (Prob of A beating C)

which would be

10/(10+5) * 5/(5+1) * 1/(1+10) * 10/(10+1) =~ .0459

To summarize: **if 10, 5, and 1 represented the "true" strengths of the three teams, then there would be a 4.59% chance of seeing the results we actually saw.** That number (4.59) is a measure of how well our ratings (10, 5, and 1) explain what actually happened. If we could find a trio of numbers that explained the actual data better, it would be reasonable to say that that trio of numbers is a better estimate of the teams' true strengths. So let's try 10, 6, and 2. That gives the real life data a 6.51% chance of happening, so 10, 6, and 2 is a better set of ratings than 10, 5, and 1.
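The arithmetic above is easy to check with a few lines of throwaway code (the `likelihood` helper and the team labels are mine, just for illustration):

```python
# Probability of the observed results (A beat B, B beat C, C beat A, A beat C)
# under a candidate set of ratings: the product over games of
# R_winner / (R_winner + R_loser).

def likelihood(r, games):
    p = 1.0
    for winner, loser in games:
        p *= r[winner] / (r[winner] + r[loser])
    return p

games = [("A", "B"), ("B", "C"), ("C", "A"), ("A", "C")]
print(round(likelihood({"A": 10, "B": 5, "C": 1}, games), 4))  # 0.0459
print(round(likelihood({"A": 10, "B": 6, "C": 2}, games), 4))  # 0.0651
```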

What we want to do is find the set of ratings that *best* explain the data. That is, find the set of ratings that produce the maximum likelihood of seeing the results that actually happened. Hence the name; this is called the *method of maximum likelihood*. Imagine you have three dials you can control: one marked A, one B, and one C. You're trying to maximize this quantity:

(R_A / (R_A + R_B)) * (R_B / (R_B + R_C)) * (R_C / (R_A + R_C)) * (R_A / (R_A + R_C))

One way to increase the product might be to turn up the A dial; that will increase the first and fourth of those numbers. But there are diminishing returns to cranking the A dial. Once it's been turned up pretty high, then turning it up further doesn't increase the first and fourth terms much. Furthermore, turning up the A dial decreases the third number in the product, because A lost that third game. So you want to stop turning when the increases in the first and fourth terms are balanced by the decreases in the third.

The game is to simultaneously set all three dials at the place that maximizes the product. How exactly we find that maximum is a bit math-y, so I'll skip it. If people are interested, I can post it as an appendix in the comments [UPDATE: here it is]. But the point is, it can be done.

If we do it in this simplified example, we get this:

Team A: 8.37
Team B: 5.50
Team C: 3.62

[Of course, if you multiplied or divided all those numbers by the same constant, you'd have an equivalent set of ratings. It's the ratios and the order that matter, not the numbers themselves.]

Using these numbers we could estimate, for example, that the probability of A beating B is 8.37/(8.37+5.5), which is approximately 60.3%. I've never seen these predictions actually tested on future games. That is, if you look at all games where this method estimates a 60% chance of one team beating another, does the predicted winner actually win 60% of the time? Maybe I'll test that in a future post, but for now it's beside the point. Perhaps the best way to interpret the 60.3% figure is not: this method predicts that A has a 60.3% chance of beating B tomorrow. Rather it's this: assigning a 60.3% probability to A beating B is most consistent with the past data.
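For the curious, here is a sketch of how a computer can set those dials. The fixed-point update below is a standard way to maximize this kind of likelihood; the code and names are my own illustration, not anything from the post or from Wolfe:

```python
# Maximum-likelihood ratings via the fixed-point update
# R_i = (wins by i) / sum over i's games of 1/(R_i + R_opponent).
# Ratings are only determined up to a common scale factor, so only
# ratios and probabilities are meaningful.

def mle_ratings(games, iters=500):
    """games: list of (winner, loser) pairs."""
    teams = sorted({t for g in games for t in g})
    R = dict.fromkeys(teams, 1.0)
    for _ in range(iters):
        # inside the comprehension, R still refers to the previous iterate
        R = {t: sum(1 for w, _ in games if w == t) /
                sum(1.0 / (R[w] + R[l]) for w, l in games if t in (w, l))
             for t in teams}
    return R

games = [("A", "B"), ("B", "C"), ("C", "A"), ("A", "C")]
R = mle_ratings(games)
print(round(R["A"] / R["C"], 2))             # ~2.31, i.e. 8.37 : 3.62
print(round(R["A"] / (R["A"] + R["B"]), 3))  # ~0.603
```

With the toy schedule above, the iteration settles on ratings in the ratio 8.37 : 5.50 : 3.62, matching the numbers in the post, and reproduces the 60.3% estimate for A over B.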

This distinction is reinforced when we look at the rankings produced by this method through week 14 of the 2006 NFL season:

TM Rating Record
======================
sdg 4.790 11- 2- 0
ind 3.716 10- 3- 0
chi 3.617 11- 2- 0
bal 3.469 10- 3- 0
nwe 2.439 9- 4- 0
cin 1.714 8- 5- 0
nor 1.666 9- 4- 0
jax 1.617 8- 5- 0
dal 1.256 8- 5- 0
den 1.232 7- 6- 0
nyj 1.209 7- 6- 0
nyg 1.097 7- 6- 0
ten 1.056 6- 7- 0
buf 0.976 6- 7- 0
kan 0.887 7- 6- 0
phi 0.851 7- 6- 0
pit 0.777 6- 7- 0
mia 0.764 6- 7- 0
atl 0.753 7- 6- 0
sea 0.712 8- 5- 0
car 0.603 6- 7- 0
min 0.469 6- 7- 0
cle 0.448 4- 9- 0
hou 0.395 4- 9- 0
gnb 0.391 5- 8- 0
was 0.362 4- 9- 0
stl 0.312 5- 8- 0
sfo 0.306 5- 8- 0
tam 0.278 3-10- 0
ari 0.192 4- 9- 0
oak 0.134 2-11- 0
det 0.101 2-11- 0

The Colts' probability of beating the Lions, according to this method, is 3.72/(3.72+.101), which is about 97.4%. That's a bit higher than my intuition says it ought to be. Part of that, remember, is that the method doesn't take into account margin of victory and therefore does not know that the Colts have squeaked by in a lot of games and were destroyed by the Jaguars. All it sees is a team that has played a very tough schedule and still has nearly the best record in the league. But the other part is that this isn't designed to predict the future, it's designed to explain the past.

I told you that this method is similar to one of those actually in use by the BCS. That method is Peter Wolfe's, and he describes the method here.

> The method we use is called a maximum likelihood estimate. In it, each team i is assigned a rating value R_i that is used in predicting the expected result between it and its opponent j, with the likelihood of i beating j given by:
>
> R_i / (R_i + R_j)
>
> The probability P of all the results happening as they actually did is simply the product of multiplying together all the individual probabilities derived from each game. The rating values are chosen in such a way that the number P is as large as possible.

That is precisely the system we've described above, but if you load up all the games and run the numbers, you won't get numbers that match up with the ones Wolfe publishes. I'll explain why in the next post.

18 Comments | Posted in BCS, Statgeekery

## LT’s touchdown record

I suspect this won't ultimately happen, but right now LaDainian Tomlinson is in position to be only the second person in post-merger NFL history to lead the league in a major category and double the output of the second-place guy. Here are the guys who have led the league in rushing touchdowns by the greatest percentage margin over second place:

Year Leader Second place %DIFF %Diff vs #10
=============================================================================
2006 LaDainian Tomlinson (26) Larry Johnson (13) 100.0 271.4
2003 Priest Holmes (27) Ahman Green (15) 80.0 170.0
1994 Emmitt Smith (21) Natrone Means (12) 75.0 200.0
1985 Joe Morris (21) Eric Dickerson (12) 75.0 133.3
1995 Emmitt Smith (25) Chris Warren (15) 66.7 150.0

In the 2006 row, the 100.0 indicates that Tomlinson has 100% more rushing TDs than second-place performer Larry Johnson. He also has 271.4% more rushing scores than the 10th ranked player in that category.

I said Tomlinson would be the second to do this. The first, not surprisingly, was Jerry Rice, who doubled Mike Quick's 11 receiving TDs in 1987 and more than doubled Quick's second-place total of combined rushing and receiving touchdowns in the same year.

**Receiving TDs**

Year Leader Second place %DIFF %Diff vs #10
=============================================================================
1987 Jerry Rice (22) Mike Quick (11) 100.0 214.3
1984 Mark Clayton (18) Roy Green (12) 50.0 125.0
1989 Jerry Rice (17) Sterling Sharpe (12) 41.7 88.9
2003 Randy Moss (17) Torry Holt (12) 41.7 88.9
1994 Sterling Sharpe (18) Jerry Rice (13) 38.5 125.0

**Rushing/Receiving TDs**

Year Leader Second place %DIFF %Diff vs #10
=============================================================================
1987 Jerry Rice (23) Mike Quick (11) 109.1 187.5
2006 LaDainian Tomlinson (29) Larry Johnson (15) 93.3 222.2
1995 Emmitt Smith (25) Carl Pickens (17) 47.1 92.3
2000 Marshall Faulk (26) Edgerrin James (18) 44.4 100.0
1985 Joe Morris (21) Roger Craig (15) 40.0 90.9

Here are the league leaders with the biggest margins in some other categories:

**Receptions**

Year Leader Second place %DIFF %Diff vs #10
=============================================================================
1987 JT Smith (91) Al Toon (68) 33.8 62.5
2002 Marvin Harrison (143) Hines Ward (112) 27.7 57.1
1970 Dick Gordon (71) Marlin Briscoe (57) 24.6 51.1
1990 Jerry Rice (100) Andre Rison (82) 22.0 40.8
1975 Chuck Foreman (73) Reggie Rucker (60) 21.7 46.0

**Rushing yards**

Year Leader Second place %DIFF %Diff vs #10
=============================================================================
1973 O.J. Simpson (2003) John Brockington (1144) 75.1 106.9
1975 O.J. Simpson (1817) Franco Harris (1246) 45.8 97.7
1977 Walter Payton (1852) Mark van Eeghen (1273) 45.5 98.9
1980 Earl Campbell (1934) Walter Payton (1460) 32.5 112.1
1984 Eric Dickerson (2105) Walter Payton (1684) 25.0 80.2

**Receiving yards**

Year Leader Second place %DIFF %Diff vs #10
=============================================================================
2002 Marvin Harrison (1722) Randy Moss (1347) 27.8 36.2
1973 Harold Carmichael (1116) John Gilliam (907) 23.0 53.1
1990 Jerry Rice (1502) Henry Ellard (1294) 16.1 49.0
1975 Ken Burrough (1063) Isaac Curtis (934) 13.8 37.0
1991 Michael Irvin (1523) Gary Clark (1340) 13.7 44.6

**Yards from scrimmage**

Year Leader Second place %DIFF %Diff vs #10
=============================================================================
1973 O.J. Simpson (2073) Calvin Hill (1432) 44.8 63.0
1975 O.J. Simpson (2243) Chuck Foreman (1761) 27.4 87.9
1974 Otis Armstrong (1812) Lawrence McCutcheon (1517) 19.4 65.5
1977 Walter Payton (2121) Lydell Mitchell (1779) 19.2 79.4
1994 Barry Sanders (2166) Chris Warren (1868) 16.0 55.8

**Passing TDs**

Year Leader Second place %DIFF %Diff vs #10
=============================================================================
1986 Dan Marino (44) Ken O'Brien (25) 76.0 131.6
1984 Dan Marino (48) Dave Krieg (32) 50.0 152.6
1974 Ken Stabler (26) Joe Namath (20) 30.0 136.4
2004 Peyton Manning (49) Daunte Culpepper (39) 25.6 122.7
1997 Brett Favre (35) Jeff George (29) 20.7 84.2

**Passing yards**

Year Leader Second place %DIFF %Diff vs #10
=============================================================================
1973 Roman Gabriel (3219) Jim Plunkett (2550) 26.2 80.2
1981 Dan Fouts (4802) Tommy Kramer (3912) 22.8 48.6
1992 Dan Marino (4116) Steve Young (3465) 18.8 32.1
1991 Warren Moon (4690) Dan Marino (3970) 18.1 51.1
1990 Warren Moon (4689) Jim Everett (3989) 17.5 54.7

8 Comments | Posted in General, History

## Franchise milestones

Which team has the oldest single-season rushing record? The Cleveland Browns, of course. Not surprisingly, the Bills and Bears are next on the list, as all three teams have only had one distinguished runner in franchise history, and each of the Hall of Famers retired many years ago.

1963 cle 1863 Jim Brown
1973 buf 2003 O.J. Simpson
1977 chi 1852 Walter Payton
1979 phi 1512 Wilbert Montgomery
1979 crd 1605 Ottis Anderson
1980 oti 1934 Earl Campbell
1981 nor 1674 George Rogers
1984 tam 1544 James Wilder
1984 ram 2105 Eric Dickerson
1985 rai 1759 Marcus Allen
1992 pit 1690 Barry Foster
1995 dal 1773 Emmitt Smith
1997 det 2053 Barry Sanders
1998 sfo 1570 Garrison Hearst
1998 den 2008 Terrell Davis
1998 atl 1846 Jamal Anderson
2000 min 1521 Robert Smith
2000 clt 1709 Edgerrin James
2002 sdg 1683 LaDainian Tomlinson
2002 mia 1853 Ricky Williams
2003 rav 2066 Jamal Lewis
2003 jax 1572 Fred Taylor
2003 gnb 1883 Ahman Green
2003 car 1444 Stephen Davis
2004 nyj 1697 Curtis Martin
2004 nwe 1635 Corey Dillon
2004 htx 1188 Domanick Davis
2005 was 1516 Clinton Portis
2005 sea 1880 Shaun Alexander
2005 nyg 1860 Tiki Barber
2005 kan 1750 Larry Johnson
2005 cin 1458 Rudi Johnson

How about receivers? I'm guessing most people wouldn't expect to see who holds the longest standing franchise receiving yards record:

1961 oti 1746 Charley Hennigan
1965 sdg 1602 Lance Alworth
1967 nyj 1434 Don Maynard
1981 atl 1358 Alfred Jenkins
1983 phi 1409 Mike Quick
1984 mia 1389 Mark Clayton
1985 sea 1287 Steve Largent
1986 nwe 1491 Stanley Morgan
1989 tam 1422 Mark Carrier
1989 cle 1236 Webster Slaughter
1995 sfo 1848 Jerry Rice
1995 ram 1781 Isaac Bruce
1995 gnb 1497 Robert Brooks
1995 det 1686 Herman Moore
1995 dal 1603 Michael Irvin
1996 rav 1201 Michael Jackson
1997 rai 1408 Tim Brown
1997 pit 1398 Yancey Thigpen
1998 buf 1368 Eric Moulds
1999 jax 1636 Jimmy Smith
1999 chi 1400 Marcus Robinson
2000 kan 1391 Derrick Alexander
2000 den 1602 Rod Smith
2001 crd 1598 David Boston
2002 nyg 1343 Amani Toomer
2002 clt 1722 Marvin Harrison
2003 min 1632 Randy Moss
2004 nor 1399 Joe Horn
2004 htx 1142 Andre Johnson
2005 was 1483 Santana Moss
2005 cin 1432 Chad Johnson
2005 car 1563 Steve Smith

Many of you probably know that Joe Namath was the first player in NFL history to throw for 4,000 yards in a season. But did you know that he's still the last Jet to reach that milestone?

1967 nyj 4007 Joe Namath
1979 pit 3724 Terry Bradshaw
1980 cle 4132 Brian Sipe
1981 sdg 4802 Dan Fouts
1983 gnb 4458 Lynn Dickey
1983 dal 3980 Danny White
1984 mia 5084 Dan Marino
1984 crd 4614 Neil Lomax
1986 was 4109 Jay Schroeder
1986 cin 3959 Boomer Esiason
1991 oti 4690 Warren Moon
1994 nwe 4555 Drew Bledsoe
1995 nor 3970 Jim Everett
1995 det 4338 Scott Mitchell
1995 chi 3838 Erik Kramer
1995 atl 4143 Jeff George
1996 rav 4177 Vinny Testaverde
1996 jax 4367 Mark Brunell
1999 car 4436 Steve Beuerlein
2000 sfo 4278 Jeff Garcia
2001 ram 4830 Kurt Warner
2002 rai 4689 Rich Gannon
2002 nyg 4073 Kerry Collins
2002 buf 4359 Drew Bledsoe
2003 tam 3811 Brad Johnson
2003 sea 3844 Matt Hasselbeck
2004 phi 3875 Donovan McNabb
2004 min 4717 Daunte Culpepper
2004 kan 4591 Trent Green
2004 htx 3531 David Carr
2004 den 4089 Jake Plummer
2004 clt 4557 Peyton Manning

There are only five records still standing from the '60s: Joe Namath and Don Maynard hold the all-time Jets passing and receiving yards records (in the same season), Jim Brown figures to hold the Cleveland single season rushing record for the foreseeable future, and Charley Hennigan should do the same for the Houston Oilers/Tennessee Titans franchise. Lance Alworth is safe too, unless Antonio Gates absolutely blows the current tight end receiving record (Dave Parks, SF, 1344) out of the water.

There are no single season receiving yardage leaders from the '70s, and only Terry Bradshaw's Steelers record keeps the passing yardage leaders from suffering the same fate. With a very good finish, Ben Roethlisberger will break that record this year. Four franchise leaders in single season rushing yards from the '70s remain, and none are in jeopardy this year.

Excluding the Texans, only four teams wiped out their record books this millennium (with apologies to all the pedantic types out there): the Colts (Manning, James, Harrison), Vikings (Culpepper, Smith, Moss), Chiefs (Green, Johnson, Alexander) and Giants (Collins, Barber, Toomer). While one of New York's teams still holds two of its three main records from the '60s, the other one cleaned house more recently than any other in the NFL: The Giants franchise single season rushing, passing and receiving records were all set in 2002 or later.

10 Comments | Posted in History

## Pro-football-reference.com on national TV

I have reports from multiple sources indicating that a screenshot of pro-football-reference.com appeared on NBC's Football Night in America last night. I didn't see any of it, but from my understanding, this is the context.

The game was Dallas / New Orleans so, as you might expect, there was a big feature on Tony Romo. In an interview with Andrea Kremer, Romo says that Bill Parcells gave him a homework assignment: do some research on former Bengal quarterback Greg Cook. During the segment, they showed Romo looking at Cook's p-f-r page.

At NBC's website, they have lots of clips of the interview, but they do not show what they showed on TV last night. If anyone happens to have that clip recorded, I would be very appreciative if you sent it to me.

12 Comments | Posted in P-F-R News

## Rule change questions

This was going to be a rule change proposal, but it's so obvious that I feel like I must be missing something.

If a receiver makes a catch near the sideline and doesn't come down in bounds, but *would have* come down in bounds (in the judgement of the officials) had he not been shoved out, then it's a catch. Why? Why is pushing a guy out of bounds not considered legitimate defense in that situation when it is considered legitimate defense in every other situation? That's never made sense to me.

In my mind, if a rule introduces the need for a speculative judgement (whether he would have landed in bounds), there had better be a good reason to have that rule in place. What is the reason for this rule? Why is it needed? Are people worried that Brian Urlacher is going to pick up Steve Smith at the hash mark after he catches a slant, carry him to the sideline, and deposit him out of bounds?

I was delivering this rant to a friend of mine, and he started a rant about his own personal sideline-catch-related rule. Why is it that two feet are required to establish possession? I evidently hadn't ever given it much thought, because I was unable to give him an answer. The more I think about it, the more I think he's got a point. The difference between zero feet and one foot is about a million times more significant than the difference between one foot and two. Why draw the line between one and two instead of between zero and one?

18 Comments | Posted in Rule Change Proposals

## The SEC: just another conference

It's not that I think the SEC is a *bad* conference. In fact, this year I'd probably vote for the SEC as the nation's strongest conference (though the PAC-10 is very close). My position is simply that the SEC is one of five conferences that are roughly of equal strength and, in any given year, might be the strongest or might be the weakest of the five. It's too early to tell whether the New Big East will join that group or not, but they seem to be on the right track.

Since the BCS was born in 1998, here are the records of those five conferences, plus the conference of one known as Notre Dame, against each other:

SEC P10 B10 B12 ACC ND1 TOTAL
SEC 6- 8 16-14 14-13 34-34 2- 3 72-72
P10 8- 6 27-21 23-23 6- 9 10-16 74-75
B10 14-16 21-27 19-17 16-12 13-14 83-86
B12 13-14 23-23 17-19 9-15 3- 4 65-75
ACC 34-34 9- 6 12-16 15- 9 8- 6 78-71
ND1 3- 2 16-10 14-13 4- 3 6- 8 43-36

[NOTE: in the ACC data, I'm including all the teams that are currently in the ACC.]

The glaring lack of evidence of SEC dominance was no surprise to me, but I am a bit surprised by the Big XII's poor showing. I figured they were also Just Another Conference, but they may actually be, in the long term, a slight notch below the other four.

Also of note, if you're comparing the SEC to, say, the Big 10 or the PAC 10, is that the Big 10 plays about 28% more games (per team) against major conference teams --- counting Notre Dame --- than does the SEC, and the PAC 10 plays about 24% more.

Now, I've heard it said that conference strength is really about strength at the top of the conference. Here are the records if we consider only interconference games between two teams who each finished over .500 within their own conference.

SEC P10 B10 B12 ND1
SEC 2- 3 8- 8 4- 5 2- 1 16-17
P10 3- 2 10- 5 11-10 10- 4 34-21
B10 8- 8 5-10 7- 4 7- 8 27-30
B12 5- 4 10-11 4- 7 2- 2 21-24
ND1 1- 2 4-10 8- 7 2- 2 15-21

I threw the ACC out of this one because of the Miami / Virginia Tech / Boston College confusion. If you include the rest of the ACC, it doesn't change things much. I also counted Notre Dame as having a winning record in its conference every year.

The USC Trojans are a ridiculous 15-3 against teams from the top halves of the other major conferences, which accounts for almost the entire over-.500-ness of the PAC 10. Still, the PAC 10's case as the strongest conference of the BCS era is strong: their best team has played more tough teams than anyone and has won almost all those games, while the rest of the conference's top-half teams are .500 against the other conferences' top-half teams.

But this post isn't about making a case for the PAC 10, it's about pointing out that, if the SEC was so good, you might think they'd win more games than they lose in the long run against the conferences that they're supposed to be better than.

OK, be honest, what do you think of this schtick: what if I start referring to the SEC as "the JAC"? Does that work, or is it too talk radio?

6 Comments | Posted in College

## The Poisson distribution

One of my original goals when starting this blog was to highlight some of the mathematics in and of the game of football. I didn't have anything groundbreaking in mind; I just thought it might be nice for a football example to show up when people googled Markov chain or Benford's Law or whatever. I was doing a little of that during the offseason, but I've gotten away from it ever since some actual football started getting played. In the comments to last week's Benford's Law post, JKL provides a nice excuse to get back to it:

Do the distribution of TD’s follow a normal or a poisson distribution, or some other distribution pattern.

For all WR’s who score exactly 5 TD’s in a season, do we have the expected number of single 3 TD games from that population as a whole, or are there more or fewer players with exactly 1 TD in 5 different games than we might otherwise expect?

Let's first imagine a receiver whose true ability level is 8 touchdowns per year. In a given season, he might score nine or six or ten, but over several (imagined) seasons he'd average eight per year. What if you wanted to simulate a year's worth of game logs for this player?

One simple model would be to note that this player should average .5 touchdowns per game and then view each game as a coin flip. Heads he scores in that game, tails he doesn't.

That's not a terrible model. It will give the guy 8 TDs per year in the long run. But it's obviously lacking. In the long run, it will predict that half his games will be 1-TD games and the other half will be 0-TD games. Our years of experience reading box scores tell us that's not realistic.

So why not break it down a bit further? Instead of viewing this guy as a .5-TDs-per-game player and then simulating 16 games each as a coin flip with probability .5, we could view him as a .25-TDs-per-half player and then simulate 32 halves each as a coin flip with probability .25. This idealized receiver will still average 8 TDs per year, but now he will have 2-TD games 6.25% of the time, 1-TD games 37.5% of the time, and 0-TD games 56.25% of the time.

Better.

But why stop there? Let's look at him as a .125 TDs-per-quarter player and simulate 64 quarters. I'll spare you the calculations, but this would result in the following:

0-TD games: 58.62% of the time
1-TD games: 33.50% of the time
2-TD games: 7.18% of the time
3-TD games: 0.68% of the time
4-TD games: 0.02% of the time

Now that's starting to look relatively realistic.
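Those quarter-by-quarter percentages are the calculations I spared you, compressed into a few lines of scratch code (4 quarters per game, .125 TDs per quarter, and the usual n-choose-k counting):

```python
from math import comb

# Probability of k TDs in a game for a .125-TDs-per-quarter player,
# treating each of the 4 quarters as an independent coin flip.
n, p = 4, 0.125
for k in range(n + 1):
    prob = comb(n, k) * p**k * (1 - p)**(n - k)
    print(f"{k}-TD games: {100 * prob:.2f}%")
# prints 58.62, 33.50, 7.18, 0.68, 0.02
```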

This "coin-flipping" model is called a *binomial model*, by the way. Let's stop here and consider a couple of the assumptions implicit in the binomial model. In order to compute the above, we have assumed that each quarter (coin flip) is independent of the others. In other words, the above assumes that Chad Johnson's scoring in the first quarter tells us nothing one way or the other about whether he'll score in the second quarter. There are all sorts of reasons why we might doubt that assumption. Scoring in the first quarter might be a clue that he's playing against a weak secondary, which would indicate an increased chance of TDs in future quarters of the same game. On the other hand, scoring in the first quarter might cause the opposing defense to start double- or triple-covering him, thereby leading to a lower probability of future TDs. And that's just the tip of the iceberg of possible ways this model fails to be literally correct.

But you know what they say: all models are imperfect, some are useful anyway. Let's press on and see what happens.

What if we look at him as a .00833-TDs-per-minute player and then simulate 960 minutes each as a coin flip with probability .00833? What if we look at him as a .0001389-TDs-per-second player and then simulate 57600 seconds?

We're getting into some obvious absurdity here, as this model would yield a chance of this receiver scoring a thousand (or much more) TDs in a season. It would be a very, very, very tiny chance --- so tiny that for all practical purposes it could never happen --- but a chance nonetheless. Furthermore, as we break the season down into more and more pieces, each of which is smaller and smaller, the calculations required are getting uglier and uglier.

Believe it or not, it turns out that the math can be simplified by breaking the season down into infinitely many pieces, each of infinitesimal length (technically, breaking it down into N pieces and then taking the limit as N goes to infinity). When you do that, what you get is this:

Prob. of having an n-touchdown game =~ e^(-1/2) (1/2)^n / n!

This is called a *Poisson distribution with parameter 1/2* (the parameter 1/2 comes from the fact that our guy averages half a TD per game). When you plug that in for various values of n, you get this:

0-TD games: 60.65% of the time
1-TD games: 30.33% of the time
2-TD games: 7.58% of the time
3-TD games: 1.26% of the time
4-TD games: 0.16% of the time
5+-TD games: 0.02% of the time
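A quick sanity check of those percentages (scratch code; plugging a different per-game average in for the 1/2 gives the probabilities for the 4-, 6-, or 12-TD seasons discussed later in the post):

```python
from math import exp, factorial

# Poisson probabilities with parameter lam = 0.5 TDs per game:
# P(n) = e^(-lam) * lam^n / n!
def poisson(n, lam=0.5):
    return exp(-lam) * lam**n / factorial(n)

for n in range(5):
    print(f"{n}-TD games: {100 * poisson(n):.2f}%")
print(f"5+-TD games: {100 * (1 - sum(map(poisson, range(5)))):.2f}%")
# prints 60.65, 30.33, 7.58, 1.26, 0.16, 0.02
```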

Note that all of the above is pure theory. At no point in making the above computations did any of the details of how NFL football is played come into the discussion. A mathematician who has never seen a football game could have built this model. It's not a good model unless it describes what happens in actual football games.

So does it?

If you look at all receivers since 1995 who played in 16 games and scored exactly 8 touchdowns, you'll find 29 such seasons. That's a total of 464 games. If the Poisson model is to be believed, we would expect about 281 zero-TD games, about 141 one-TD games, and so on. Here is a table showing the expected and actual totals:

TDs Prob. Expected Actual
================================
0 0.607 281.4 272
1 0.303 140.7 157
2 0.076 35.2 30
3 0.013 5.9 5
4 0.002 0.7 0
5 0.000 0.1 0

Whether that's close enough to claim that the Poisson really is a good model in this case is for another post. For now, let's just say it looks pretty close. The actual data shows a few more one-TD games and a couple fewer 0- and 2-TD games than expected, but overall it's a remarkably good match.

Of course, there's no reason to limit ourselves to players who scored 8 TDs. We could similarly look at players who scored 4 or 6 or 12 or whatever. All we have to do is plug in 4/16 or 6/16 or 12/16 or whatever into the formula in place of 1/2. Here is the data:

**32 receivers with 4 TDs**

TDs Prob. Expected Actual
================================
0 0.779 398.7 395
1 0.195 99.7 106
2 0.024 12.5 11
3 0.002 1.0 0
4 0.000 0.1 0
5 0.000 0.0 0

**44 receivers with 5 TDs**

TDs Prob. Expected Actual
================================
0 0.732 515.1 506
1 0.229 161.0 177
2 0.036 25.1 20
3 0.004 2.6 1
4 0.000 0.2 0
5 0.000 0.0 0

**39 receivers with 6 TDs**

TDs Prob. Expected Actual
================================
0 0.687 417.9 407
1 0.258 156.7 178
2 0.048 29.4 19
3 0.006 3.7 4
4 0.001 0.3 0
5 0.000 0.0 0

**37 receivers with 7 TDs**

TDs Prob. Expected Actual
================================
0 0.646 382.2 364
1 0.282 167.2 201
2 0.062 36.6 23
3 0.009 5.3 4
4 0.001 0.6 0
5 0.000 0.1 0

**29 receivers with 8 TDs**

TDs Prob. Expected Actual
================================
0 0.607 281.4 272
1 0.303 140.7 157
2 0.076 35.2 30
3 0.013 5.9 5
4 0.002 0.7 0
5 0.000 0.1 0

**31 receivers with 9 TDs**

TDs Prob. Expected Actual
================================
0 0.570 282.6 277
1 0.321 159.0 166
2 0.090 44.7 46
3 0.017 8.4 7
4 0.002 1.2 0
5 0.000 0.1 0

**15 receivers with 10 TDs**

TDs Prob. Expected Actual
================================
0 0.535 128.5 114
1 0.335 80.3 105
2 0.105 25.1 18
3 0.022 5.2 3
4 0.003 0.8 0
5 0.000 0.1 0

**9 receivers with 11 TDs**

TDs Prob. Expected Actual
================================
0 0.503 72.4 68
1 0.346 49.8 56
2 0.119 17.1 18
3 0.027 3.9 1
4 0.005 0.7 1
5 0.001 0.1 0

**7 receivers with 12 TDs**

TDs Prob. Expected Actual
================================
0 0.472 52.9 50
1 0.354 39.7 44
2 0.133 14.9 14
3 0.033 3.7 4
4 0.006 0.7 0
5 0.001 0.1 0

The patterns are generally the same as what we saw in the 8-TD case: the actual numbers show fewer 0- and 2-TD games than the Poisson model would predict, and more 1-TD games. Just off the top of my head, I'd guess that this is because of a general tendency to spread things around among different receivers on the same team. Whether that's forced by the defense or mandated by the coach I'm not sure. Also, there are just a shade fewer 3+ TD games than the Poisson would predict. This may be because teams that have a receiver who catches 2 TD passes generally have a comfortable lead and don't need to throw anymore. Or because defenses who give up two TDs to the same guy try to make darn sure they don't give up a third.

The bottom line is that, if you know that something --- calls from telemarketers, flat tires, power outages, touchdown catches --- will happen, on average, *x* times per game, or per month, or per day, or per decade, and you want to know the probability that it will happen *n* times in a given time period, the Poisson model can often give you a pretty good estimate.
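As an illustrative sketch (my code, not from the original post), the probability columns in the tables above can be reproduced in a few lines of Python using the Poisson formula P(n) = e^(-λ) λ^n / n!:

```python
import math

def poisson_pmf(n, lam):
    """Probability of exactly n events when events average lam per period."""
    return math.exp(-lam) * lam ** n / math.factorial(n)

# A receiver with 8 TDs over a 16-game season averages 0.5 TDs per game.
lam = 8 / 16
for n in range(6):
    print(n, round(poisson_pmf(n, lam), 3))
```

Run with λ = 0.5, this reproduces the Prob. column of the 8-TD table (0.607, 0.303, 0.076, ...).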

42 Comments | Posted in Statgeekery

## Handicapping the AFC

Here are the remaining schedules for the AFC contenders.

Team   14    15    16    17
Cin    Oak   @Ind  @Den  Pit
Den    @SD   @Ari  Cin   SF
Jac    Ind   @Ten  NE    @KC
KC     Bal   @SD   @Oak  Jac
NYJ    Buf   @Min  @Mia  Oak

To handicap the race, we need to rank the teams. I'll use Jeff Sagarin's PREDICTOR ratings, which he claims are the best predictor of which team will win future games. Replacing each team in the chart above with its rating, you get:

Team  Rating   14     15     16     17
Cin   26.00   11.96  27.57  20.60  21.65
Den   20.60   29.58  11.51  26.00   9.02
Jac   28.58   27.57  19.68  30.32  19.67
KC    19.67   29.15  29.58  11.96  28.58
NYJ   22.20   20.81  17.87  19.81  11.96

To be clear about what the chart above means: the Jets have a rating of 22.20, and the Bills (the Jets' week-14 opponent) have a rating of 20.81. According to Sagarin, we need to give the home team an extra 2.75 points, so voilà:

Team  Rating   14     15     16     17
Cin   26.00    9.21  30.32  23.35  18.90
Den   20.60   32.33  14.26  23.25   6.27
Jac   28.58   24.82  22.43  27.57  22.42
KC    19.67   26.40  32.33  14.71  25.83
NYJ   22.20   18.06  20.62  22.56   9.21

This enables us to create a legitimate point spread for each game.

Team    14      15      16      17
Cin   -16.79    4.32   -2.65   -7.10
Den    11.73   -6.34    2.65  -14.33
Jac    -3.76   -6.15   -1.01   -6.16
KC      6.73   12.66   -4.96    6.16
NYJ    -4.14   -1.58    0.36  -12.99

Now we need to convert those point spreads into likelihoods of victory. We can do this using a crude formula that I developed, which more or less jibes with what actually happens. It's unclear whether the relationship is truly linear, but a line is the simplest model and approximates what happens pretty well. The formula for winning percentage (the dependent variable) as a function of point spread is Y = -0.028X + 0.500. So a point spread of 0 would of course give a team a 50% chance of winning; a spread of -3 equates to a 58.4% chance of winning, and a spread of -7 to a 69.6% chance. Those seem pretty reasonable. So...
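The whole pipeline above --- adjust for home field, compute the spread, apply the linear rule of thumb --- can be sketched in a few lines. The function name and structure are mine, not from the post:

```python
HOME_EDGE = 2.75  # Sagarin's home-field advantage, per the post

def win_prob(team_rating, opp_rating, home):
    """Win probability for a team, given Sagarin-style ratings.

    The spread is negative when the team is favored; the post's
    rule of thumb is Y = -0.028 * X + 0.500.
    """
    edge = HOME_EDGE if home else -HOME_EDGE
    spread = opp_rating - (team_rating + edge)
    return -0.028 * spread + 0.500

# Cincinnati (26.00) at home vs. Oakland (11.96):
print(round(win_prob(26.00, 11.96, home=True), 2))  # → 0.97
```

Note that for extreme spreads the linear rule can wander outside [0, 1], so a real implementation would clamp the result.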

Team   14    15    16    17
Cin   0.97  0.38  0.57  0.70
Den   0.17  0.68  0.43  0.90
Jac   0.61  0.67  0.53  0.67
KC    0.31  0.15  0.64  0.33
NYJ   0.62  0.54  0.49  0.86

This gives the Bengals a 97% chance of winning at home against the Raiders and the Broncos a 68% chance of winning in Arizona. Those feel about right, too.

Basically, if the season was played 1000 times, we'd expect the Bengals to win against the Raiders in 970 games. So we give them .97 wins this week, 0.38 wins in Indy, 0.57 wins against the Broncos and 0.70 wins against the Steelers. That means we'd project Cincinnati to average 9.62 (7 wins already, plus the 2.62 wins over the next month) wins if the season was replayed 1000 times. How does that compare to the other teams?

Cin 2.62
NYJ 2.51
Jac 2.48
Den 2.18
KC  1.42
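Each figure above is just the sum of that team's weekly win probabilities. A minimal check, using the rounded probabilities from the table (summing rounded values can differ from the post's figures by 0.01, as with KC):

```python
# Weekly win probabilities, copied from the table above.
win_probs = {
    "Cin": [0.97, 0.38, 0.57, 0.70],
    "NYJ": [0.62, 0.54, 0.49, 0.86],
    "Jac": [0.61, 0.67, 0.53, 0.67],
    "Den": [0.17, 0.68, 0.43, 0.90],
    "KC":  [0.31, 0.15, 0.64, 0.33],
}
# Expected remaining wins = sum of the per-game win probabilities.
expected = {team: round(sum(p), 2) for team, p in win_probs.items()}
print(expected)
```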

That list is more or less the best we can do to handicap the AFC, and it reflects what my gut is telling me. The Jaguars are given 2.5 wins despite facing Indy, Ten, NE and KC, but it's easy to see why. Sagarin ranks them as the 6th-best team to date, mostly because Jacksonville killed the Jets 41-0, beat the Titans by 30, and beat the Giants and Dolphins by a combined 30. These rankings examine ONLY the margin of victory in each game; winning and losing is irrelevant. The Jaguars rank 11th in the system where the ONLY thing that matters is who wins and the margin of victory is thrown out.

We could debate which system is better, and Sagarin seems to think the margin of victory is better at predicting future games. That leads to the counterintuitive result of ranking Jacksonville ahead of Indianapolis, and giving the Jaguars a great chance to win the game since it's at home.

Either way, the Jaguars and Jets are in the big games this week. You'd expect KC to lose to Baltimore, Denver to lose in San Diego and Cincinnati to romp over the Raiders. The Jets and Jaguars have games that could go either way, and if either team comes out of this weekend with a victory, that would go a long way towards putting Denver and Kansas City in their rearview mirrors.

A couple of final notes on the Jets, who seem to be everyone's surprise team. I'm not shocked the Jets have done so well, but people are overlooking the biggest reason New York has been so successful: Chad Pennington stayed healthy. It was easy to project the Jets to go 4-12 if Pennington played only six games, but a healthy Pennington meant the Jets shouldn't have been predicted to have a losing record. From 2002-2005, the Jets were 8-18 in games where Pennington wasn't the main QB, versus 21-17 when he was. Pennington and the Jets are now 28-22 after this year's strong performance.

The other reason, of course, is the job Eric Mangini has done. It's now a two-man race for coach of the year between him and Sean Payton. Loyal PFR readers shouldn't be too surprised by the success of these first-time coaches; in July I pointed out that only two of the last ten rookie coaches had losing records. Mangini and Payton were the cream of this year's crop, and Rod Marinelli has been unsuccessful. The next four games will tell us a bit more about Gary Kubiak, Brad Childress, Mike McCarthy and Scott Linehan, but I don't think any of the four has been a disappointment so far in 2006.

7 Comments | Posted in General

## Post your playoff ideas

Last Friday I made a case for USC, had they beaten UCLA, to get the title game berth over Michigan. It's not a coincidence that I left Florida out of that discussion. I knew that the Florida-Michigan question would not be so easily settled, so I just got lazy and gambled that I wouldn't have to deal with it.

It's very close, and I frankly haven't completely sorted out whether or not I think the right call was made. Fortunately, we've got what seems like several months to talk about it between now and the games, so I'll get around to posting my thoughts as soon as I figure out exactly what they are.

In the meantime, since it's on everyone's mind right now, let's make this the place where we post our proposals for an NCAA Division I football playoff. I only have one rule: before you post your scheme, you must declare whether it is an "if I were king of the world, this is the system I'd implement" proposal, or a "this is a workable system that could actually be agreeable to all parties concerned given the actual state of affairs we find ourselves in right now" proposal.

Submissions of both kinds are encouraged, but just so we can avoid all the cross-talk (which is similar to the will-he?/should-he? cross-talk in Hall of Fame discussions), I'll set up separate posts for each category. Post practical proposals here and pie-in-the-sky dreams here.

6 Comments | Posted in BCS, College

## Post your playoff ideas: practical proposals

If you've got a playoff system that you think actually makes sense, given all the constraints currently in place, let's hear about it.

34 Comments | Posted in BCS, College, Rule Change Proposals

## Post your playoff ideas: unrealistic proposals

If you've got a college football playoff system that would be perfect, but sadly could never happen in the world we find ourselves in, let's hear about it right here.

14 Comments | Posted in BCS, College, Rule Change Proposals

## Rematch?

I say no.

My p-f-r blog colleague Chase supports the Wolverines' candidacy for the second slot in college football's championship game. Back in the comments to an old post, he put it thus:

> We’re trying to decide which team is the second best in the nation. I don’t understand how a knock against a team in the “which one of these is the second best in the country” can be “they lost to the first best team in the country.”

What Chase is saying --- and he does have a point --- is that losing to Ohio State does not provide any evidence that Michigan is not the second best team in the country. In fact, we don't have any evidence at all that Michigan is not the second best team in the country. In the case of USC and Florida, we do have some evidence that they're not #2. Namely, losses to Oregon State and Auburn, respectively.

But that's just half the equation. The other side of it is, we don't have much evidence that Michigan *is* the second best team in the country either. Or at least not nearly as much as we have for USC. According to my margin-not-included computer rankings (which are pretty vanilla), USC has beaten four top-25 teams while Michigan has only beaten two. USC has beaten eight top-50 teams to Michigan's five (and that includes #49 and #50). If losing to #1 doesn't provide evidence that you're not #2, then I claim that beating #63, #68, #77, #79, #93, and #94 doesn't provide any evidence that you are #2.

They both beat Notre Dame, so cross them off the list.

Michigan's most impressive wins have been against Wisconsin (11), Penn State (27) and Minnesota (49). USC has seven wins (not counting Notre Dame) against teams ranked higher than Minnesota. The Big Ten is simply down this year. That's not Michigan's fault, but it's a fact.

When you look at "second-order wins" (i.e. teams that were beaten by teams that were beaten by Michigan), you see just how comparatively weak the Wolverines' schedule has been:

**First- and second-order wins by Michigan** (second-order in parentheses)

11 - Notre Dame

12 - Wisconsin

(23 - Georgia Tech)

27 - Penn State

(28 - Georgia)

That's all of them in the top 30. Their next-best is a second-order win over UCLA, who USC is going to get a first-order win against tomorrow.

**First- and second-order wins by USC** (second-order in parentheses)

(8 - Auburn)

9 - Arkansas

11 - Notre Dame

(13 - Tennessee)

15 - Cal

(16 - Oklahoma)

(20 - Oregon State)

(21 - BYU)

(23 - Georgia Tech)

24 - Nebraska

(26 - Texas A&M)

(27 - Penn State)

And Florida might get added to the list tomorrow.

Of course, Michigan has no "second order losses" at all --- which is essentially Chase's point --- while USC has second order losses to Boise State as well as some Pac-10 middlers. That's certainly part of the discussion, but in my opinion the lopsidedness of the win lists trumps that. The reason I'd favor USC over Michigan is essentially the same reason I'd favor USC over Boise State, just to a lesser degree.
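The second-order-wins idea is easy to mechanize: collect every team beaten by a team you beat. A minimal sketch (the results dictionary below is illustrative, not a real 2006 schedule):

```python
# Illustrative results: each team mapped to the list of teams it beat.
results = {
    "Michigan":   ["Wisconsin", "Penn State", "Minnesota"],
    "Wisconsin":  ["Minnesota"],
    "Penn State": ["Minnesota"],
    "Minnesota":  [],
}

def second_order_wins(team, results):
    """Teams beaten by the teams that `team` beat."""
    return {opp
            for beaten in results.get(team, [])
            for opp in results.get(beaten, [])}

print(second_order_wins("Michigan", results))  # → {'Minnesota'}
```

The same traversal, iterated, is the seed of margin-free computer ranking systems: each extra hop spreads credit further through the schedule graph.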

Finally, I know the Michigan-Ohio State game was a Three Point Game, but did anyone really think, at any point after the first quarter, that Michigan was going to win it? By my count, in the second half they had the ball for only seven plays during which they trailed by less than a touchdown. They gained ten yards on those seven plays. When a team gets three turnovers, gives none, and still trails for the last 51 minutes of the game, that's not a performance, in my view, that demands a rematch.

Trojan haters, take heart. Because I wasted my time writing this before the UCLA game, they'll surely lose that one and make it all moot.

21 Comments | Posted in BCS, College