Forecasts like those from Decision Desk HQ, Nate Silver and 538 are now ubiquitous, but their accuracy is hard to measure.
Kaleigh Rogers has been covering election polling since 2019.
In 2008, Nate Silver’s election forecast model was so novel — and accurate — that it landed him on Time magazine’s list of the 100 most influential people the next year.
These days, election forecasts — which use data including polling, historical results and economic factors to predict the likelihood of an election outcome — are much more commonplace. There were at least 10 major forecasters this election cycle, though you could also find forecasts produced by high school statistics classes and Reddit users. (The New York Times has not published a pre-election forecast since 2016.)
While election polling has been around for more than a century, election forecasts have come to the forefront only in the past decade and a half. And while polls are intended to provide a snapshot in time, taking the pulse of how Americans are feeling about a race, forecasts go a step further, analyzing the polling and other data to make a prediction about who is most likely to win, and how likely.
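To make that extra step concrete, here is a minimal sketch of the kind of simulation that underlies a published win probability. Every number in it is hypothetical, and real models blend many polls with historical results and economic data, but the basic move, translating a polling lead and its uncertainty into a "chance of winning," looks something like this:

```python
import random

# A minimal sketch of how a forecast turns a polling lead into a win
# probability. All figures here are hypothetical, not any outlet's model.

POLL_MARGIN = 1.0    # candidate leads by 1 point in the polling average
POLL_ERROR = 3.5     # assumed standard deviation of polling error, in points
SIMULATIONS = 100_000

wins = sum(
    1 for _ in range(SIMULATIONS)
    if random.gauss(POLL_MARGIN, POLL_ERROR) > 0  # one simulated "true" margin
)
print(f"Chance of winning: {wins / SIMULATIONS:.0%}")
```

In this configuration the output lands around 61 percent, which is the sort of figure a forecast publishes as a candidate's likelihood of victory.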
But this year, with the polls already showing a razor-thin presidential race, the forecasts often indicated that the race was tied nationally or in swing states, or they gave one candidate or the other only a slight edge.
These results raise the question: What is the value of these election forecasts? Proponents say they help synthesize the available information and that more data is better than less. They also argue that the forecasts this year performed well, generally — capturing the uncertainty of a close race, while holding out for the possibility that Donald J. Trump could sweep the swing states, as he did.
But critics say forecasts add more noise than signal — and may even be doing harm.
Among the final forecasts published before Election Day, it was hard to discern a clear prediction. One forecast said Mr. Trump had a 54 percent likelihood of winning Pennsylvania, for instance, while another said Kamala Harris had a 53 percent chance of winning the same state. And while a forecast that gave Mr. Trump a 51 percent chance of victory may have technically called the race correctly, it was hardly a daring prognostication.
“How you should interpret us having Trump at 54 percent the day before the election is not that much different than Nate Silver,” who had the race as a dead heat, said Scott Tranter, the director of data science at Decision Desk HQ. Mr. Tranter’s firm partnered with The Hill to build an election forecast. “In stats class, we’re saying the same thing. To the public we’re saying something different, and I’m not necessarily sure that that is the fair way to look at it, but that’s the reality we live in.”
Those narrow margins make it difficult to judge whether any one model was really that much more accurate than another. Statistical measures such as the Brier score, which grades a stated probability against the actual outcome, can compare accuracy across models once the final results are in, but even those wouldn't reveal much about the accuracy of forecasts overall or over time.
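A minimal sketch of such a comparison, using the hypothetical Pennsylvania forecasts above:

```python
# Brier score: the squared distance between a forecast probability and what
# happened (1 if the event occurred, 0 if not). Lower is better.

def brier(forecast_prob: float, outcome: int) -> float:
    return (forecast_prob - outcome) ** 2

outcome = 1  # Trump won Pennsylvania

print(brier(0.54, outcome))  # Trump at 54 percent -> 0.2116
print(brier(0.47, outcome))  # Harris at 53 percent, i.e. Trump at 47 -> 0.2809
```

On a single race the two scores barely separate, which is why one election tells you so little about which forecaster is better.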
Justin Grimmer, a professor of public policy at Stanford, was an author of a study published this year showing that it would take decades at best, and millenniums at worst, to properly evaluate how predictive these forecasts truly are.
“It can take a long time to establish that one method is better than another method at predicting, say, the overall winner or accurately characterizing the probability of a particular candidate winning,” he said. “When forecasting models move the probability of one candidate winning from 52 percent down to 48 percent, we’re just not in a position to know if that’s a meaningful move, if this is reflecting some actual change, or if that’s just statistical noise.”
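A rough way to see the problem is to simulate it. The sketch below assumes, hypothetically, that one forecaster is genuinely calibrated at 54 percent while a rival just says 50-50, and then counts how often the better forecaster actually posts the better cumulative Brier score:

```python
import random

# Hypothetical setup: forecaster A is perfectly calibrated (the true win
# probability really is 54 percent); forecaster B always says 50-50.
TRUE_PROB, FORECAST_A, FORECAST_B = 0.54, 0.54, 0.50

def total_brier(prob: float, outcomes: list[int]) -> float:
    return sum((prob - won) ** 2 for won in outcomes)

def share_a_scores_better(n_elections: int, trials: int = 5_000) -> float:
    a_better = 0
    for _ in range(trials):
        outcomes = [int(random.random() < TRUE_PROB) for _ in range(n_elections)]
        if total_brier(FORECAST_A, outcomes) < total_brier(FORECAST_B, outcomes):
            a_better += 1
    return a_better / trials

for n in (1, 20, 100, 1000):
    print(f"{n:>4} elections: A beats B {share_a_scores_better(n):.0%} of the time")
```

In runs of this sketch, the calibrated forecaster comes out ahead only about 60 percent of the time even after 100 elections, four centuries' worth of presidential contests, which is the gap Mr. Grimmer's study describes.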
So what, exactly, is the point of all this numerical soup? Forecasters say their work helps the public interpret what the polls are signaling — in the case of the 2024 presidential race, that was a lot of uncertainty — rather than getting caught up in the “noise” of vibes, spin and punditry.
Several forecasters I spoke to also argued that forecasts are particularly useful in predicting congressional races, and that those predictions can provide a better view of how any one forecaster performed. For instance, the polling analysis site 538 predicted that the Senate races would most likely result in a 52-48 seat advantage for the G.O.P., missing by just one seat. (I previously covered polling for 538, which Mr. Silver led until last year, but I wasn't directly involved in building the site's forecast model.)
Some forecasters also noted that there is a demonstrable appetite for these models. Mr. Silver said his newsletter, Silver Bulletin, where he published his model this year, peaked as the second-highest-ranked political newsletter on Substack.
Mr. Tranter said there was a booming market for political forecasts in the private sector. Decision Desk has built models for clients, including social media influencers, news organizations and financial services businesses.
Still, there are skeptics, particularly as election models have become more common. Benjamin Recht, an electrical engineering and computer science professor at the University of California, Berkeley, has been an outspoken critic of election forecasts. When you consider all of the individual choices a forecaster has to make when building a model, he said, it’s more akin to astrology than meteorology.
The statistical framing of these predictions can give voters a false sense of certainty, he said. (Analyses that are published alongside these forecasts typically try to combat this.) A study published in 2020, for instance, argued that forecasts can increase voters’ certainty in an election outcome and even decrease turnout.
Mr. Silver — whose own forecast showed Ms. Harris and Mr. Trump in a dead heat nationally, and accurately predicted the winner in five of the seven swing states — said the size of the audience for forecasts shows they must have value at least to those who obsess over them.
“Forecasts are always going to have haters because a lot of partisans want to be spoon-fed good news, and because there’s a lot of innumeracy,” Mr. Silver said. “But you’re never going to have anything interesting to contribute if you’re worried about pleasing everyone.”