
Preseason Ratings

One of the main reasons I decided to build a rating system specific to DIII football was to develop preseason (and early-season) ratings that do a good job of projecting how teams will finish the season. When I looked at other rating systems' preseason ratings, I never felt like their Top 25 or Top 10 passed the eye test, and results on the field often seemed to validate my hunch more than their predictions.


While my method for determining preseason ratings doesn't differ in any especially meaningful way from other rating systems', I believe my general method for rating teams in-season does a much better job of avoiding some of the flaws that other systems (and my old one) are subject to. Division III football is very different from other levels of football in terms of parity, size, and the amount of overlap between mutual opponents. With some conferences playing only one or two non-conference games every year, and with so many of those games coming against the same conferences, a DIII rating system that doesn't strongly consider the long-term trends of a team's (or conference's) strength is prone to poor preseason ratings, and then to poor predictions in the playoffs.


For example, the ten-team UMAC plays a nine-game conference schedule, meaning the conference as a whole plays only eleven non-conference games each year (including their conference champion's playoff game). The vast majority of those games come against teams from the NACC, the MWC, or Maranatha Baptist. Those eleven games are the only benchmark we have for comparing the UMAC to other conferences, and a model that simply solves a system of equations to determine the conference's relative strength is prone to overfitting those results.


My preseason rating is derived from a combination of each team's end-of-year ratings over the previous four seasons, with a very slight regression to the mean. Because the gap between the best and worst teams in DIII is so large, my method produces better results by regressing toward each team's own long-term mean than it does by regressing toward the national average.
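
Here's a minimal sketch of that blend in Python. The recency weights and the size of the regression step are illustrative placeholders, not the values my system actually uses:

```python
def preseason_rating(last_four, long_term_mean,
                     weights=(0.40, 0.30, 0.20, 0.10), regression=0.10):
    """Blend the last four end-of-year ratings (most recent first), then
    regress the blend slightly toward the team's own long-term mean."""
    blended = sum(w * r for w, r in zip(weights, last_four))
    return (1 - regression) * blended + regression * long_term_mean

# Example: a team whose AdjO finished at 28.0, 26.5, 24.0, and 25.0 over the
# last four seasons, with a long-term mean of 25.5
print(preseason_rating([28.0, 26.5, 24.0, 25.0], 25.5))
```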


Another factor taken into consideration is the number of returning starters each team has on offense and defense (punters and kickers are ignored). My original intent was to use each team's number of returning All-Conference players to better gauge the quality of the returning players, but it became immediately clear that the variability in how many players each conference names and the criteria it uses to select them would make a nationwide comparison nearly impossible. In its final form, teams average just under seven returning starters on each side of the ball. For every starter above or below that average, a team's preseason AdjO or AdjD is adjusted by 1.15 points, with the direction depending on whether the team is over or under the average and whether it's the offense or the defense.
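
In code, that adjustment looks roughly like the sketch below; it assumes AdjD is points allowed (lower is better), so the sign flips for the defensive side:

```python
AVG_RETURNING_STARTERS = 7    # league-wide average per side of the ball (approx.)
POINTS_PER_STARTER = 1.15     # preseason AdjO/AdjD points per starter above/below average

def starter_adjustment(returning_starters, side):
    """Adjustment applied to a preseason rating for returning starters.

    Extra starters help: they raise AdjO (higher is better) and lower AdjD
    (assumed here to be points allowed, where lower is better)."""
    delta = (returning_starters - AVG_RETURNING_STARTERS) * POINTS_PER_STARTER
    return delta if side == "offense" else -delta

# Returning 9 offensive starters: +2.3 to preseason AdjO (better).
# Returning only 5 defensive starters: +2.3 to preseason AdjD (worse).
print(starter_adjustment(9, "offense"), starter_adjustment(5, "defense"))
```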


As of this writing, I have data on returning starters going back to the 2013 season (via D3Football.com's Kickoff Coverage). Before 2013 the ratings are based solely on results from previous seasons.


The last thing to consider was how to handle preseason ratings for first-year programs and teams just joining NCAA DIII (because my system is a closed loop among DIII teams, I have no prior-season data for teams moving over from the NAIA or other divisions). For teams moving to Division III from the NAIA or another NCAA division, I use a four-year regression of their offensive & defensive ratings from Massey. Teams moving up from the junior college ranks are given offensive & defensive ratings 12 ppg below average. Startup programs are given ratings 15 ppg below average (which is the average rating for JUCOs & startups over the 17 seasons I have data for).
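
A rough sketch of those rules, where a simple average of the last four Massey ratings stands in for the actual four-year regression:

```python
from statistics import mean

DIII_AVERAGE = 0.0   # ratings here are expressed relative to the DIII average, in ppg

def new_program_rating(source, massey_last_four=None):
    """Starting offensive or defensive rating for a team with no DIII history.

    source: "ncaa_naia" (moving over from the NAIA or another NCAA division),
            "juco" (moving up from junior college), or "startup"."""
    if source == "ncaa_naia":
        # stand-in for the four-year regression of Massey ratings
        return mean(massey_last_four)
    if source == "juco":
        return DIII_AVERAGE - 12   # 12 ppg below average
    return DIII_AVERAGE - 15       # startups: 15 ppg below average
```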


The standard error between my preseason AdjO & AdjD ratings and the end-of-season AdjO & AdjD ratings is only about 5.75, which means that roughly two-thirds of teams will finish the season within 6 points of their preseason rating, about 95% will finish within 12 points, and maybe one team per year will finish more than 18 points from its preseason rating. In the one season for which I have tracked this information for other sites, my system significantly outperformed all other publicly available computer ratings.
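
Those bands follow from treating the projection errors as roughly normal with a standard deviation of 5.75; the check below is an approximation, since the real errors needn't be exactly normal:

```python
from statistics import NormalDist

errors = NormalDist(mu=0, sigma=5.75)   # preseason-to-final projection error

for band in (6, 12, 18):
    inside = errors.cdf(band) - errors.cdf(-band)
    print(f"within {band:>2} points of preseason rating: {inside:.1%}")
```

With roughly 240 teams playing DIII football, that last band leaves well under one team per year outside it.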


To develop the preseason ratings for the 1998 season, I used the same methodology as for teams joining DIII from other divisions: a four-year regression of Massey Ratings. Kenneth Massey has ratings going back significantly further than 1998, but his schedules & scores from those earlier years are not "official," so they're not publicly available.