
Does Quantitative Forecasting Sometimes Lead Us Astray?


Note: this post was released early for the site’s supporters on Patreon. If you’d like early access to posts and other content, you can support the site there.



The big 2018 question not yet answered by the data: So, what about candidate quality?


David writes to me:

What I’m trying to understand, with your model and as a general theory, is the degree to which qualitative inputs and indicators matter.

By way of example…

  • NJ-02 has a Republican lean and a Dem forecast of -1%. However, there are factors that suggest State Senator Van Drew is heavily favored to win. This includes his strength as a candidate (and the relative weakness of the presumptive Republican nominee) and the presumed advantages he is likely to have in campaign resources (money raised and outside spending) that will matter in the fall.

  • TX-23 has a slight Republican lean, but a Dem forecast of +10%. That said, Rep. Will Hurd’s campaign will likely have more money to spend than Gina Ortiz Jones’s, and will benefit from outside spending by the NRCC and CLF that is, at worst, at parity with their Democratic counterparts. I think they’re both good candidates, but Hurd benefits from being well known and from a favorable portrayal, locally and nationally, as a “moderate” and bipartisan Congressman. A concern for Democrats is voter turnout. The district is majority Hispanic and in the bottom 20% of educational attainment (high school and college), and this has hurt prior Democratic candidates in midterm elections (on a DCCC call they specifically mentioned TX-23 as a district where lower turnout in Bexar County cost them the congressional seat).

I’m not shilling for Hurd’s campaign, but I question the degree to which your model is overestimating how well (or poorly) Democrats will do in certain districts.

And, if so, whether that’s due to a lack of inputs with non-statistical origins that are hard to define in non-subjective mathematical terms.

And I mean “question” with the utmost respect and appreciation for your work on the House model, PA-18 special election and generic polling which I found informative and invaluable.

David has hit the nail on the head with the trouble forecasters have in House races. Candidates matter, and candidate quality isn’t well captured by the historical data we need to train statistical models. Roughly speaking, these factors find a place in the penumbras of the forecast’s probabilities, but large (20-point) errors will happen about 1% of the time, and those have always (or, at least in the past, have always) been due to candidate quality.

The bigger question is this: how can the model adjust for those? Allow me to ramble my way through an explanation.

With incumbents, it’s easy (not really) to adjust for candidate quality: we simply use their past vote margin as a guess of how “good” they are, and use the residual between their past margin and the partisan lean of the seat as the advantage they’ll have next time around (note that in TX-23, Hurd actually performs exactly according to the seat’s partisan lean).
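As a rough illustration of that residual logic (this is a minimal sketch with made-up numbers, not the model’s actual code), where all margins are Dem-minus-Rep percentage points:

```python
# Sketch of the residual-based incumbent quality guess described above.
# All figures are illustrative placeholders, not fitted model values.

def incumbent_advantage(past_margin, partisan_lean):
    """The incumbent's personal advantage: how much they over- or
    under-performed the seat's partisan lean last time."""
    return past_margin - partisan_lean

def naive_seat_forecast(partisan_lean, national_swing, past_margin):
    """A naive seat forecast: seat lean + national environment +
    the incumbent's carried-forward personal advantage."""
    return partisan_lean + national_swing + incumbent_advantage(past_margin, partisan_lean)

# Hypothetical: a Democratic incumbent who won by 6 points in an R+4 seat
# (lean = -4) gets a +10 personal advantage carried into the next cycle.
print(incumbent_advantage(past_margin=6.0, partisan_lean=-4.0))        # 10.0
print(naive_seat_forecast(-4.0, national_swing=7.0, past_margin=6.0))  # 13.0
```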

However, this method carries more uncertainty when the challenger this time around is significantly better or worse than last time. In short, the model assumes that all candidates are “generic,” for better or for worse. And in open seats, this correction doesn’t actually hold much weight in the model: the differences get explained away, and small (or large) effects of candidate quality in one district get canceled out by large (or small) effects elsewhere. One can imagine that without relevant previous vote history in open seats, this task is even harder (the model still uses previous vote in open seats, but relies on it less than in districts where incumbents are running).

We can adjust for these things statistically, but the answer is somewhat ad hoc; it is the only way to really blend qualitative assessments of candidate quality with the purely quantitative indicators in a contest.

We make room for this in the forecast with a candidate quality adjustment that reflects the boost in performance candidates got in 2014 and 2016 when their challenger didn’t previously hold elected office, had dismal fundraising, or was otherwise just a bozo (AZ-01, looking at you, Paul Babeu). This gets us part of the way to a better answer, but again, error remains in the forecast, and always will, though to varying degrees. There will be surprises in November. The 2018 forecast model has yet to incorporate these candidate quality adjustments, as most fundraising data is incomplete and many seats do not yet have challengers.
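To make the idea concrete, here is a hedged sketch of what a flat adjustment of that kind could look like; the indicator names and point values are placeholders for illustration, not the estimates fitted from the 2014 and 2016 data:

```python
# Illustrative sketch of a flat candidate-quality adjustment.
# The 2.0 and 3.0 point boosts are made-up placeholders.

def candidate_quality_adjustment(challenger_held_office, challenger_funded):
    """Boost (in margin points) to the incumbent's forecast when the
    challenger looks weak on basic quality indicators."""
    boost = 0.0
    if not challenger_held_office:
        boost += 2.0  # never held elected office (placeholder value)
    if not challenger_funded:
        boost += 3.0  # dismal fundraising (placeholder value)
    return boost

# A weak challenger on both counts would hand the incumbent +5 points here.
print(candidate_quality_adjustment(challenger_held_office=False,
                                   challenger_funded=False))  # 5.0
```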

For the record, if you’re using your assessment of NJ-02 and TX-23 to pinpoint misses the model may make, I think that’s a good use of qualitative analysis in predicting House elections. No method is perfect, and we should use all the information we can in our forecasting pursuits. That means combining the forecast with other data when we suspect it may be wrong. However, we have to be very careful about what judgments we are making in districts when saying the data-driven indicators could miss.

Keep in mind that the method missed only 3 seats in aggregate in 2016, so while more seats were directionally wrong (7), and one probabilistically “wrong” (NJ-05 had a >95% chance of Republican victory in 2016, but was won by a Democrat), all but 1 district fell within the margin of error. That means one didn’t, and we’d be wrong to expect more certainty in 2018.

Of course, another possibility entirely is to combine the forecast with seat-level polling in any given seat, correcting the error from voters feeling warmly or coolly about certain candidates. I’m currently exploring a Bayesian route for doing so.
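One simple version of that Bayesian route (a sketch only; not necessarily the approach I’ll land on) treats the model’s forecast as a normal prior on the seat margin and a district poll as a normal observation, then takes the precision-weighted posterior. The standard deviations below are assumed for illustration:

```python
# Minimal sketch: combine a model forecast (prior) with a district poll
# (likelihood) via a normal-normal update. All inputs are illustrative.

def combine_forecast_and_poll(forecast_mean, forecast_sd, poll_mean, poll_sd):
    """Posterior mean and sd for a seat margin given a normal prior
    (the forecast) and a normal observation (the poll)."""
    w_f = 1.0 / forecast_sd ** 2   # precision of the forecast
    w_p = 1.0 / poll_sd ** 2       # precision of the poll
    post_mean = (w_f * forecast_mean + w_p * poll_mean) / (w_f + w_p)
    post_sd = (w_f + w_p) ** -0.5
    return post_mean, post_sd

# Hypothetical TX-23-style case: the model says D+10 with wide uncertainty,
# while a district poll shows D+2. The posterior is pulled toward the poll.
print(combine_forecast_and_poll(10.0, 8.0, 2.0, 5.0))
```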




Stay tuned for more formal work on this topic, as well as the value of data-driven forecasting alongside (and often, above) qualitative assessments of House races.


About G. Elliott Morris
Elliott is a ("big") data journalist and data scientist who specializes in American politics, public opinion, and predictive analytics. He is a government, history, and computer science student at The University of Texas at Austin and worked previously for the Pew Research Center and Decision Desk HQ.