
Over-react or Under-react after Week 1

Inspired by this post by Stewart Mandel at The Athletic (may be behind a paywall), in which he broke down some of the major clashes of the weekend and determined whether fanbases should over- or under-react to the results, I decided to take a peek at the outcomes of Division III games for the same reason. Because it's my birthday today, and I don't plan on spending all day breaking down film instead of going outside to enjoy the last gasps of summer, I won't be breaking down individual performances with film study. Instead I'll be taking the 50,000-foot view of every game played in Week 1 since 2013 (the first season for which I have returning starter data available).

I have a pretty simple question I want to answer: Is a team's Week 1 performance indicative of likely results going forward? In other words, if a team vastly under-performs in the first week, is the model reacting strongly enough with its post-Week 1 rating?

Take UW-Whitewater for example. They came into the season as the #4 team in my preseason ratings, and played Illinois Wesleyan, the 20th-rated team. The Warhawks went into the game as a 17-point favorite by my numbers, but ended up losing after a last-minute drive fell short due to an errant shotgun exchange. In the current ratings, UWW (#7) remains ahead of IWU (#14). Is this right? For one, that's somewhat of an unanswerable question, but in the abstract, the goal of my model is to project out future games (it does pretty well; better than any other computer model, but not necessarily any better than coaches' polls). Each weekly rating is supposed to be a best guess, a snapshot, of where a team finishes the season. Teams are supposed to be equally likely to finish above their current rating as below it, but does the first week of a new season tell us more than any other game?

To answer this, I graphed each team's rating change after its Week 1 game against that same team's rating change from there until the end of the season. The results for every rating-change pair are below.

If the first week's results were truly a bellwether for the rest of the season, one would expect to see less of a hazily-formed circle here and more of an angled blob running from bottom-left to top-right. The orange line is the trend line of a simple linear regression. The R-squared value for the trend line is only 0.003, which implies pretty much zero relationship between Week 1 results and rest-of-season rating changes.
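For anyone who wants to poke at this themselves, here's a minimal sketch of the check in Python. The file name and column names (week1_change, rest_of_season_change) are hypothetical stand-ins for however the rating data is actually stored; the idea is just a scatter plot of the rating-change pairs with a simple linear fit and its R-squared.

```python
# Sketch: regress rest-of-season rating change on Week 1 rating change.
# File and column names are hypothetical; swap in your own data source.
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats

ratings = pd.read_csv("week1_rating_changes.csv")
x = ratings["week1_change"]
y = ratings["rest_of_season_change"]

fit = stats.linregress(x, y)
print(f"R-squared: {fit.rvalue ** 2:.3f}")  # ~0.003 in the data described above

# Scatter of every rating-change pair with the trend line overlaid.
plt.scatter(x, y, alpha=0.4)
plt.plot(x, fit.intercept + fit.slope * x, color="orange")
plt.xlabel("Week 1 rating change")
plt.ylabel("Rest-of-season rating change")
plt.show()
```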

What about teams with the largest rating changes after Week 1? Do particularly surprising results tell us more than only mildly surprising ones? The graph below only shows data for the teams in either the top or bottom quartile of the data set.

The R-squared value here does double to 0.006, which means the relationship is twice as meaningful, but twice as meaningful as "meaningless" is still meaningless.
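The same sketch extends naturally to that quartile cut, keeping only the most surprising Week 1 results (again using the hypothetical DataFrame and column names from the block above):

```python
# Keep only teams whose Week 1 rating change was in the top or bottom quartile,
# i.e. the most surprising first-week results, then re-fit the regression.
q25 = ratings["week1_change"].quantile(0.25)
q75 = ratings["week1_change"].quantile(0.75)
extremes = ratings[(ratings["week1_change"] <= q25) | (ratings["week1_change"] >= q75)]

fit_ext = stats.linregress(extremes["week1_change"], extremes["rest_of_season_change"])
print(f"R-squared (extreme results only): {fit_ext.rvalue ** 2:.3f}")  # ~0.006 above
```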

The moral of the story here? Yes, the results from Week 1 tell us something about the teams that we didn't know before, but don't over-react; just temper expectations.

