Normally when I do a thread/post on teams' Statistical Differentials I do a long, detailed post with lots of numbers. I'm not going to do that this time for three reasons: 1) I don't really have time (as evidenced by the fact that I'm just now posting this today rather than a few days ago), 2) I've added another set of split statistical categories to the model that mostly had to be figured/compared manually, and 3) I just don't really feel like it.
Even though I won't go into all of the details, I will give you a peek into some of the insight that can be gleaned from the data.
NOTE: As with all statistical analysis, and especially predictions gleaned from that analysis, this must be taken with a particularly large grain of salt: statistical analysis cannot account for outliers - results varying from the norm due to unknown factors such as injuries, emotional factors/mood, "rust", etc. - in individual events. This prediction assumes that both teams perform at a relatively equal level to what they have performed thus far.
Ok, now - for the results...
While poring over some of the differentials, a thought occurred to me in regard to a differential "split" comparison category that could provide - and does seem to provide - some very interesting insight into a team's relative performance. I call it the "Quality Unit" split. I use many of the typical splits in my analysis of the units' performances in different categories: e.g., Total Offense vs. Conference Opponents, Scoring Defense vs. BCS Opponents, Red Zone Efficiency vs. BCS 'Winning Record' Opponents, etc. The idea behind those different splits is to garner more consistent, reliable data by using results against what should be a more consistent level of competition. The "Quality Unit" split is a little more detailed and is mostly irrespective of conference affiliation and record. The reason is that a relatively good team may have a particularly bad unit, while a relatively bad team may have a particularly good unit in a certain category.
What the "Quality Unit" split does is compare a particular team's unit/category performance against those of the team's opponents whose inverse unit (e.g., scoring defense, for a scoring offense) is ranked in the Top 40 of the 'Quality Unit' rankings for that category. To be eligible for a 'Quality Unit' ranking, a unit must itself have faced at least 3 Top 40 'Quality Unit' ranked units. This puts in perspective the outliers who generally faced lesser competition - Marshall's Scoring Offense, for example. While Marshall's Scoring Offense currently ranks 7th among FBS schools at 40.9 points per game, they only faced the minimum 3 'Quality Unit' Scoring Defenses. And against those units, Marshall averaged a mediocre 26.3 points per game; they were hardly a scoring juggernaut against good scoring defenses. Compare that to Alabama's scoring offense, which is currently ranked 13th in the FBS at 38.5 points per game. Alabama's scoring offense, though, has faced 6 'Quality Unit' Scoring Defenses and averaged an excellent 31.8 points per game against those units. Notre Dame's scoring offense is currently ranked 74th in the FBS at 26.8 points per game. Notre Dame's scoring offense, though, has faced 7 'Quality Unit' Scoring Defenses and averaged a very poor 21.6 points per game against those units.
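The split described above boils down to a simple filter-and-average. Here's a minimal sketch of the idea in Python - all team names, ranks, and point totals below are illustrative placeholders, not the actual dataset or code behind the model:

```python
MIN_QUALITY_OPPONENTS = 3   # eligibility threshold described in the post
TOP_N = 40                  # opponent's inverse unit must rank in the Top 40

def quality_unit_average(games, opponent_unit_rank,
                         top_n=TOP_N, min_opponents=MIN_QUALITY_OPPONENTS):
    """Average a unit's output (e.g., points scored) against only those
    opponents whose inverse unit (e.g., scoring defense) ranks in the
    top_n. Returns None if the team hasn't faced enough quality units
    to be eligible for a Quality Unit ranking."""
    vs_quality = [result for opp, result in games
                  if opponent_unit_rank.get(opp, 999) <= top_n]
    if len(vs_quality) < min_opponents:
        return None  # not eligible for a 'Quality Unit' ranking
    return sum(vs_quality) / len(vs_quality)

# Hypothetical example: an offense with a gaudy overall average that
# managed far less against Top-40 scoring defenses.
def_ranks = {"DefA": 12, "DefB": 25, "DefC": 38,
             "Cupcake1": 90, "Cupcake2": 110}
season = [("DefA", 24), ("DefB", 28), ("DefC", 27),
          ("Cupcake1", 56), ("Cupcake2", 63)]

overall = sum(pts for _, pts in season) / len(season)
vs_quality = quality_unit_average(season, def_ranks)
print(round(overall, 1))     # 39.6 points per game overall
print(round(vs_quality, 1))  # 26.3 points per game vs. quality units
```

The gap between those two numbers is exactly the Marshall effect described above: a top-10 overall average built largely on weaker competition.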
Here is a chart with a few, select teams and their corresponding Scoring Offense 'Quality Unit' Rankings:
Scoring Offense Quality Unit Rankings

Team             QU Rank (of 87)   Points vs. Quality Units   Overall Rank (of 124)   Points Per Game
Oregon                  1                   44.1                        2                   49.6
Oklahoma State          2                   36.3                        3                   45.7
Baylor                  3                   32.0                        4                   44.5
Alabama                 4                   31.8                       13                   38.5
…
Clemson                 7                   29.3                        6                   41.0
Texas A&M              10                   28.8                        4                   44.5
Florida State          14                   28.3                       10                   39.3
Marshall               25                   26.3                        7                   40.9
Georgia                29                   25.0                       18                   37.8
Kansas State           39                   22.8                       11                   38.8
Notre Dame             57                   21.6                       74                   26.8
Oklahoma               69                   20.8                       15                   38.2
As would be expected, some high-scoring offenses had inflated overall numbers due to facing easier competition and their Quality Unit ranking is therefore lower. Others were more consistent against better competition and their ranking is therefore higher.
I previously posted a thread showing similar traits in the Statistical Differential Analysis of Notre Dame's Rushing Defense, if you'd like to re-read it:
It is interesting to note that a few of the bowl results provided further evidence for Notre Dame's Rushing Defense being overrated. As was mentioned in the above referenced thread, much of Notre Dame's impressive Rushing Defense ranking came as a result of sacks and sack yardage. In college football, of course, as opposed to the NFL, sacks and sack yardage count against the rushing totals rather than the passing totals. If you were to calculate the Rushing Defense rankings without counting sacks, then they would fall a bit differently.
Against Oklahoma, Notre Dame held the Sooners to an astounding 0.6 yards per rush. However, Landry Jones was sacked twice for a loss of 16 yards and had a snap fly over his head for a loss of 19 yards. Take away those three plays and the Sooners actually averaged around 2.4 yards per rush. Not good, by any means, but much more realistic than 0.6 yards per rush. Texas A&M held those same Sooners to only 3.6 yards per rush on 34 carries, with a third of Oklahoma's rushing yardage coming on three plays. The biggest difference between those two games, though, is that Oklahoma showed up to play in both of them: the Sooners were simply run off the field by Texas A&M, while they were tied with Notre Dame most of the way through the 4th quarter.
Against Pittsburgh, Notre Dame gave up an unimpressive 4.4 yards per rush. However, Pittsburgh gave up 5 sacks in that game for a total of 31 yards. That means that, on non-sack rushing plays, Notre Dame gave up 6.3 yards per rush to mighty Pitt. And, as you likely recall, Pitt missed a 33-yard field goal that would have given them the win over Notre Dame in the second overtime period. Incidentally, that missed field goal came on a 4th and 1 on which the officials failed to notice that Notre Dame had two players on the field with identical numbers - a penalty that should have given Pitt a 1st and 10 at the 11-yard line. This is, of course, the same Pitt team that didn't belong on the field with the mighty Ole Miss Rebels, who somehow managed to hold the mighty Pitt rushing attack to only 2.3 yards per rush on 36 carries.
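The sack adjustment in the last two paragraphs is straightforward to check. In this sketch, the per-game carry and yardage totals are back-solved from the quoted averages rather than taken from box scores, so treat them as assumed inputs - only the resulting averages come from the post:

```python
def non_sack_ypc(rush_yards, carries, sacks, sack_yards_lost,
                 extra_plays=0, extra_yards_lost=0):
    """Yards per rush after adding sack yardage (and any other named
    loss plays, like the bad snap in the Oklahoma game) back into the
    rushing totals and removing those plays from the carry count."""
    adj_yards = rush_yards + sack_yards_lost + extra_yards_lost
    adj_carries = carries - sacks - extra_plays
    return adj_yards / adj_carries

# Pittsburgh vs. Notre Dame: 4.4 ypc including 5 sacks for 31 yards.
# 33 carries for 145 yards is consistent with those averages (assumed).
print(round(non_sack_ypc(145, 33, 5, 31), 1))  # -> 6.3

# Oklahoma vs. Notre Dame: 0.6 ypc; remove the two sacks (-16) and
# the bad snap (-19). 24 carries for 15 yards is consistent (assumed).
print(round(non_sack_ypc(15, 24, 2, 16,
                         extra_plays=1, extra_yards_lost=19), 1))  # -> 2.4
```

Either way you pick the underlying totals, the direction is the same: stripping out sacks makes Notre Dame's per-rush numbers noticeably less impressive.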
The bottom line is this:
Notre Dame's impressive defensive numbers are more a product of the offenses they faced than of the overall quality of their defense, and Notre Dame's unimpressive offensive numbers do not look any better when you delve into the defenses they faced. Yes, Notre Dame's offense did seem to improve as the season went on; however, the quality of the defenses they faced went down during that stretch.
Meanwhile, Alabama's offensive and defensive numbers are impressive across the board. And when you examine the units they faced throughout the season, the quality of those numbers does not diminish. In fact, most of those numbers look more impressive when you delve into Alabama's competition.
The Statistical Differential Analysis model I'm using has evolved over the past few seasons as I've had time to put more into it, but it has been fairly accurate in each of the past three seasons. The first "official" version of the model, I guess you'd say, predicted a 30-20 Alabama win over Texas in the BCSCG three years ago. It also predicted a 24-10 Alabama win over LSU in last year's BCSCG. And, much more recently, it predicted a 34-27 Alabama win over Georgia in the SECCG - although, when I made my personal prediction, I went with my gut and said 34-17, since I thought there was no way Georgia was actually going to score 27 points on our defense.
For this BCSCG, if you add it all up, the current version of the model - including the new 'Quality Unit' ratings - predicts a score tonight of:
Notre Dame 16
Of course, for my personal prediction, I'm going to go with my gut and say that we'll do a bit better than the model predicts, for a score of:
Notre Dame 13