
The Value of Experience


My preseason rating system is the only one in DIII that uses data on returning starters to inform its calculations, but it's far from the only system to use such a metric. Bill Connelly's S&P+ and ESPN's FPI both use data on returning starters, and FPI even has a variable for injured starters. This season S&P+ began using returning production--measured by percentage of yards, tackles, PBU's, etc.--instead of just the number of returning starters, and that got me wondering whether there might be a better way for me to update my ratings.

Currently my ratings use the returning starter data from D3Football.com's Kickoff coverage, which asks coaches (and probably, more than sometimes, SIDs) how many players they have returning this season who started at least half of the team's games last season. Two important things to note about how D3Football does this: they ask the coaches directly, and they define a returning starter as someone who started at least half of the team's games. Using this method, each additional returning starter is worth about 1.5 points per game throughout the season. In the spirit of S&P+, I wanted to try to do something better, and the two approaches I decided to experiment with were a breakdown of returning starters by position, and a breakdown of returning Games Started and Games Played from NCAA.org's statistics site.
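Roughly speaking, that adjustment works something like the sketch below. The ~1.5 ppg per starter figure is the one above; the baseline count of seven returners and the idea of centering the adjustment around it are just illustrative assumptions, not the exact constants my model uses.

```python
# A rough sketch of the per-returning-starter adjustment.
# The ~1.5 PPG-per-starter value comes from the text above; the baseline of
# 7 returners is an illustrative assumption, not my model's exact constant.
POINTS_PER_STARTER = 1.5
TYPICAL_RETURNERS = 7

def returning_starter_adjustment(returning_starters):
    """PPG added to (or subtracted from) a team's preseason rating."""
    return POINTS_PER_STARTER * (returning_starters - TYPICAL_RETURNERS)

print(returning_starter_adjustment(10))  #  4.5 PPG boost
print(returning_starter_adjustment(4))   # -4.5 PPG penalty
```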

For a particular subset of teams--potential Top 25 teams--Pat Coleman (who operates D3Football.com) breaks down returning starters by position. He shares this data with the panel of Top 25 pollsters to inform their preseason ratings, and this year he graciously agreed to share some of that data with me.

Using this preseason breakdown of about 40-50 teams by position for the 2015 and 2014 seasons, I tried to calculate the relative value of each position over the two-year span, and for each year individually. When analyzing the two-year span, I used only teams who appeared on the list for both seasons. I kept every other team's returning starter adjustments for those seasons unchanged from my current method. I ran an optimization model that determined the most likely PPG value for each position by minimizing the standard error between my predicted AdjO/AdjD values and the teams' actual end-of-season AdjO/AdjD values. As it turns out, having a quarterback with some game experience is really important:
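For the curious, here's roughly what that kind of optimization looks like in code. This is just a sketch with made-up numbers and a simplified offensive position list, and it uses an ordinary least-squares fit rather than my exact error metric, but the idea is the same: find the per-position values that best explain the gap between a baseline projection and the actual end-of-season results.

```python
# A sketch of fitting per-position PPG values, using hypothetical data:
# each row is (returning starters by position, preseason AdjO projection
# before any positional adjustment, actual end-of-season AdjO).
import numpy as np

POSITIONS = ["QB", "RB", "WR", "OL"]  # simplified, hypothetical grouping

teams = [
    # returning starters by position       baseline  actual
    ({"QB": 1, "RB": 1, "WR": 2, "OL": 3}, 28.0,     33.5),
    ({"QB": 0, "RB": 2, "WR": 1, "OL": 4}, 30.0,     27.0),
    ({"QB": 1, "RB": 0, "WR": 3, "OL": 2}, 25.0,     29.5),
    ({"QB": 0, "RB": 1, "WR": 2, "OL": 5}, 32.0,     30.0),
    ({"QB": 1, "RB": 2, "WR": 1, "OL": 3}, 27.0,     31.0),
]

X = np.array([[ret[p] for p in POSITIONS] for ret, _, _ in teams], dtype=float)
gap = np.array([actual - baseline for _, baseline, actual in teams])

# Least-squares fit: each position's PPG value is the coefficient that best
# explains the difference between the baseline projection and actual AdjO.
values, *_ = np.linalg.lstsq(X, gap, rcond=None)
for pos, val in zip(POSITIONS, values):
    print(f"{pos}: {val:+.2f} PPG per returning starter")
```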

It's somewhat surprising to me that any of these positions would have a negative point value, especially offensive line or defensive backs. Had I not done this analysis, I would have assumed offensive linemen would be the second-most important position (behind quarterback), but here only defensive backs are less important to maintaining success. The offensive skill positions have a huge amount of variance between 2014 and 2015, which makes it hard for me to draw any meaningful conclusions, but I would guess the variance is mostly due to the large difference between returning a stud and returning a regular starter. With such a small subset of teams, it's hard to tell.

Another big surprise to me was that returning defensive backs consistently correlated with worse defenses. As an avid reader of Bill Connelly at Football Study Hall, I am conditioned to think that pass defense is the area where experience matters most. I do suspect the nature of offensive schemes in DIII is part of the reason why DBs seem to matter less than linebackers and defensive linemen: in the FBS game, passing offenses tend to be more dynamic than your typical Division III team's. It's also worth noting that many of the teams who appear on a potential Top 25 list very possibly over-performed the previous season, so the apparent negative correlation between experience and improvement in the defensive backfield is probably just standard regression to the mean.

To reiterate, this data isn't a random sample of the DIII landscape; these are teams that were likely at the top of their respective leagues the previous season. This could, and almost assuredly does, skew the results one way or another.

One hypothesis for why it's skewed: some of these teams were probably only in the discussion for Top 25 voting because they had an exceptional quarterback who would be harder to replace than an average DIII QB, thus inflating the apparent value of quarterbacks. Yes, I agree that QB is probably the most important position at any level, but a truly great (think: "outlier") quarterback could be the sole reason a team is good enough to get considered for Top 25 voting, and this subset of teams would include more of those outlier quarterbacks than a randomly-drawn sample of the entire nation.

The other data source I used to analyze returning experience was from the NCAA.org stats site, which has each team's officially-reported counts for games played and games started for each player from 2013 on. Because these are teams' official stats as they report them to the NCAA, they should be some of the most accurate counts available.

For this analysis, I calculated each team's percentage of Games Started and Games Played returning from one season to the next, counting everyone except seniors as returning. If a player didn't have a class year listed, their games started and games played were excluded entirely. So the calculation for "%GS Ret" would be:

(GS by Juniors + GS by Sophomores + GS by Freshmen) / (Total Games Started)
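In code, that calculation looks something like this sketch; the roster rows are made up, but the exclusion rules match the description above.

```python
# A sketch of the %GS Ret / %GP Ret calculation on a hypothetical roster.
roster = [
    {"name": "A", "class": "SR", "gs": 10, "gp": 10},
    {"name": "B", "class": "JR", "gs": 8,  "gp": 10},
    {"name": "C", "class": "SO", "gs": 2,  "gp": 9},
    {"name": "D", "class": "FR", "gs": 0,  "gp": 6},
    {"name": "E", "class": None, "gs": 1,  "gp": 4},  # no class year listed: excluded entirely
]

def returning_pct(roster, stat):
    # Players without a listed class year are dropped from numerator and denominator.
    known = [p for p in roster if p["class"] is not None]
    total = sum(p[stat] for p in known)
    returning = sum(p[stat] for p in known if p["class"] != "SR")
    return returning / total if total else 0.0

print(f"%GS Ret: {returning_pct(roster, 'gs'):.0%}")  # juniors + sophomores + freshmen
print(f"%GP Ret: {returning_pct(roster, 'gp'):.0%}")
```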

Most teams in DIII play ten games, so each unit (offense & defense) should have roughly 110 games started (11 players * 10 games) per season. The average number of games started returning each year is around 70, or about 64%, and the percentage of games played returning is around 74%. Unlike games started, each team will have a different number of total games played depending on how many players see the field for that team. Mount Union, thanks to their many blowout victories, typically plays over 100 players in varsity contests throughout a season, more than most DIII teams even have on their roster.

My thinking behind including games played returning is simple: a player who has varsity game experience should add more to a team than one without that experience, regardless of whether he was a starter.

If Games Started (from NCAA.org) and Returning Starters (from D3Football's Kickoff) measured the same thing, we should expect the Percentage of Games Started Returning to be worth 11 times the value of a single Returning Starter (11 starters x 1.5 ppg), or about 16.5 ppg. The Percentage of Games Played Returning should probably be worth less than that; I would expect it to be around 8 ppg.

Below you can see my regressions for how much game experience matters in improving a team's offensive and defensive performance throughout a season. The steeper the upward slope, the more important that metric is in terms of improvement.

I'll spare you the more math-y details, and just tell you that Percentage of Games Started Returning is actually only worth about half as much as I had hypothesized earlier, or about 8 points per game, and Percentage of Games Played Returning is worth only about 1.5 ppg.
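If you do want a taste of the math, the regression is just an ordinary least-squares line; here's a sketch with made-up numbers, where the fitted slope is the PPG value of going from 0% to 100% of games started returning.

```python
# A sketch of the improvement regression, with hypothetical data:
# x = fraction of games started returning,
# y = year-over-year change in adjusted offensive PPG.
import numpy as np

pct_gs_returning  = np.array([0.45, 0.55, 0.60, 0.72, 0.80, 0.90])  # hypothetical
adj_o_improvement = np.array([-2.0, -0.5,  0.5,  1.5,  2.5,  3.0])  # hypothetical

slope, intercept = np.polyfit(pct_gs_returning, adj_o_improvement, 1)
print(f"Going from 0% to 100% of games started returning is worth about {slope:.1f} PPG")
```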

I believe there are probably a few reasons for the difference between the value of a returning starter and the value of the percentage of games started returning.

Reason #1: there are a lot of schools that aren't very diligent in their weekly statistical reports to the NCAA. I don't remember which school it was, but one school one year reported a total of only 7 games started across all the offensive linemen on its roster.

Reason #2: when a starter is injured, coaches want to replace him with veteran players. Consider this scenario: a starting linebacker is injured and will miss a few weeks during the season. Who is the coach going to choose to replace him in the starting lineup for the next couple of games, the veteran senior who knows the defense inside and out, or the athletic freshman who has been playing on the scout team all season? Probably the senior.

Reason #3: self-reporting returning starters more than six months after the season skews a coach's perspective. Consider the scenario I outlined above. Does the coach view that veteran senior linebacker as a "starter" lost? Probably not--his starter was the player who was injured. I have to assume that many coaches, when asked this question about returning starters, instead tend to answer the question "How many of your eleven best players on offense are coming back this season?"

When this analysis was all said and done, I discovered something somewhat unexpected: the methods I'm currently using are probably the best way to adjust preseason ratings for returning talent.

Using my current methods for returning starters, I correctly predicted around 78.5% of games for 2014 and 2015. When I used the positional data (for the teams I had the data on), my predictions were only right about 78% of the time, and using games started/played, I was correct around 77% of the time.
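For reference, that accuracy comparison boils down to checking whether the predicted favorite actually won each game; something like this sketch, with made-up margins.

```python
# A sketch of the game-prediction accuracy check, with hypothetical margins
# (positive = home team favored / home team won).
predicted_margins = [7.5, -3.0, 14.0, -1.5, 21.0]
actual_margins    = [3.0, -10.0, -7.0,  6.0, 28.0]

# A game counts as "correct" when the predicted sign matches the actual sign.
correct = sum((p > 0) == (a > 0) for p, a in zip(predicted_margins, actual_margins))
print(f"Correct picks: {correct}/{len(actual_margins)} ({correct / len(actual_margins):.0%})")
```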

Maybe next season I'll go more in-depth and look at returning production instead of just participation, but for now, I still like my preseason ratings better than those from any other DIII computer rating system.

