A Recap of the 2016 Election Forecasts
Article in PS: Political Science & Politics · April 2017
DOI: 10.1017/S1049096516002766




POLITICS

A Recap of the 2016 Election Forecasts

Introduction
James E. Campbell, University at Buffalo, SUNY

With one of the wildest, roughest, and nastiest presidential campaigns in modern American history now thankfully behind us and with late vote returns trickling in to a final official count, it is time to take stock of the 2016 election forecasts.1

The Presidential Forecasts
From late June to early September, eight forecasters or teams of forecasters in the PS symposium issued ten presidential election forecasts of the national two-party popular vote (along with the PollyVote composite forecast assembled from an array of different types of forecasts). To avoid any misrepresentation or misunderstanding, it should be emphasized that these forecasts were explicitly of the national two-party popular vote for the major party candidates and not their electoral vote division.2

The final vote count indicates 65.9 million votes were cast for Clinton and about 63.0 million for Trump, providing Clinton with a 2.9 million national popular vote plurality. About 7.8 million votes were cast for other candidates.3 Despite Trump’s electoral vote victory, Clinton received a narrow but clear plurality of the national popular vote.4 Clinton received 51.1% of the two-party popular vote cast nationwide to Trump’s 48.9%.

So, how did these forecasts do? With a few exceptions, the accuracy of the presidential vote forecasts ranged from impressive to extraordinary. In my introduction to the October symposium I described how the models weighed the overall conditions, weak economic performance, and general dissatisfaction with the direction of the nation against the pre-campaign preference polls of the candidates. The former favored the Republicans and the latter favored the Democrats, but all of this was within a context of an open-seat race in a period of polarized hyper-competition. On balance, the forecasts expected a Clinton plurality in a tight race. Summing up, I wrote that “the median forecast predicts that Clinton will win 51.1% of the two-party national vote” (Campbell 2016, 653). This calls for an end-zone celebration. Of course, any forecast or set of forecasts that is this accurate depends a great deal on luck, especially in a year with so many outlandish twists and turns, but one needs to be in that vicinity to be so lucky—and we were.

Table 1 presents the individual forecasts and errors from our ten statistically estimated historical-fundamentals models along with the PollyVote composite forecast that draws on alternative prediction methods. Of the forecasts produced by the fundamentals models, three missed Clinton’s national vote share by less than a half of one percentage point! These three forecasts were more accurate than either the median of the flurry of mid-campaign “Polls-Plus” forecasts churned out by Nate Silver’s 538 website or the mean of the 10 pre-election polls gathered by Real Clear Politics on the day before the election.5

Seven of the fundamentals forecasts missed the actual vote percentage by only one percentage point or less. The seven include the forecasts from the Jeromes, Lewis-Beck and Tien, Lockerbie as well as the two forecasts by Erikson and Wlezien and the forecasts from my two models. Just beyond these seven are both Holbrook’s forecast and PollyVote. They missed by just under one and a half percentage points.

Of the 10 forecasts in the collection, only two erred by more than two percentage points. Abramowitz’s “Time for a Change” forecast under-predicted the vote for Clinton by 2.5 percentage points, though he expected as much. In the write-up of his forecast, Abramowitz essentially anticipated a gap between the normal “time for a change” effect and a much less likely “time for a change to Trump” effect. Norpoth’s early March forecast of a 52.5% Trump popular vote majority produced the largest error of the ten forecasts (3.6 percentage points).6

Aside from the just noted exceptions, 2016 was a great year for the political science presidential vote forecasts. It provided further evidence that presidential elections can be predicted with a good deal of accuracy well in advance of the election with parsimonious, stable, theoretically grounded, and transparent statistical models based on the historical record of the relationship between the election’s fundamentals and the vote.

The Congressional Forecasts
Finally, though the Trump-Clinton contest took center stage (maybe center-ring), important elections were also being conducted for Senate and House seats. Conditions seemed ripe for the Democrats to make big gains in both chambers. Republicans were defending 24 Senate seats to only 10 for the Democrats and Republican numbers in the House (247 seats) were at an 85-year high point. In both chambers, there was plenty of room for Democrats to gain seats and little possibility of Republicans doing better than holding their ground.

The four House and three Senate forecasts reflected this range of possibilities. Although there was speculation during the campaign of a Democratic wave election, it never materialized. Democrats gained only two Senate seats and a mere six House seats. This preserved Republican majorities in both chambers (52 Senate Republicans and 241 House Republicans). Among the House forecasts, Lockerbie and Lewis-Beck and Tien were on target in predicting a preserved status quo. The Abramowitz forecast of a 16 seat Democratic pick-up was a bit high, but my forecast of a 32 seat Democratic seat gain was far off the mark. Among the three Senate forecasts, all expected Democrats to gain more seats than they did. The Abramowitz and the Lewis-Beck and Tien

doi:10.1017/S1049096516002766 © American Political Science Association, 2017 PS • April 2017  331



forecasts expected four seat Democratic gains and my forecast was for a seven seat Democratic gain. The errors in the different forecasts no doubt reflect differences in each model’s specifications, but the tightness of the presidential race may be a general factor in the failure of Democrats to convert their substantial pre-election exposure advantages (Republicans defending more seats) into actual seat gains.

EARLY AND CERTAIN: THE PROSPECT OF PRESIDENT TRUMP
Helmut Norpoth, Stony Brook University

At first, it sounded outlandish, then, as the general election kicked into high gear, ludicrous, and finally, on the eve of Election Day, downright delusional: the forecast that Donald Trump had an 87-percent chance of being elected President (http://primarymodel.com/). Many who saw this prediction, first posted on March 7, 2016, must have wondered if it was not a misprint. It was Hillary Clinton who had such a high chance of making it to the White House, right? Indeed, the best-known websites posting daily election forecasts—from 538, to the Times Upshot, to the Huffington Post, to betting sites like PredictIt—all agreed throughout the election year that Clinton was the prohibitive favorite.

Unlike those forecasts and others featured in PS, the prediction of a near-certain Trump victory did not come from opinion polls or economic indicators. The Primary Model, which generated it, relies solely on election results; in other words, on what voters actually do, not what they say they might do in an election, or on how outside factors might affect their vote. The model distills more than 100 years of presidential elections, primary and general, into a formula with essentially two predictors. One is the performance of the major-party nominees in primaries; the other is the swing of the electoral pendulum in general elections.

Both predictors favored Trump over Clinton. He did better in Republican primaries than she did in Democratic ones, an advantage established in the first two contests (in New Hampshire and South Carolina). And, after two terms of Democratic control of the White House, the electoral pendulum was poised to swing back to the Republican side, especially since Obama’s margin in 2012 was a lot less than it had been in 2008. Given the accuracy of the model, with only one miss among out-of-sample predictions from 1912 to 2012, I had great confidence in the 2016 forecast, no matter how much of an outlier it was.

To be honest, the Primary Model got one thing wrong. Clinton, not Trump, won the popular vote. But, as luck would have it, the predicted popular-vote margin was big enough to guarantee a Trump victory in the Electoral College. Many studies will probe what accounts for the glaring disjuncture between the popular vote and the electoral count in this election. It appears already, though, that Trump won a majority in the Electoral College by capturing three blue states (Pennsylvania, Wisconsin, and Michigan) on top of reliably red states and several battleground states. It is quite telling that the “change” theme strongly resonated in those blue states. When asked in exit polls what candidate quality mattered most, voters in Pennsylvania, Wisconsin, and Michigan picked “bring about change” by a 2–1 margin over having “the right experience.” 2016 was a change election and Trump was the stronger change candidate, especially in the states that decided the outcome.
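The jump from a 52.5% point forecast to an “87-percent chance” can be illustrated with a standard translation: treat the forecast error as normally distributed and compute the probability that the candidate clears 50% of the two-party vote. The sketch below assumes an error standard deviation of about 2.2 points (the scale of out-of-sample errors typical of models in this symposium); it illustrates the generic technique, not Norpoth's published procedure.

```python
from math import erf, sqrt

def win_probability(point_forecast: float, error_sd: float) -> float:
    """Probability the forecast candidate tops 50% of the two-party vote,
    assuming a normally distributed forecast error."""
    z = (point_forecast - 50.0) / error_sd
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Trump forecast at 52.5% with an assumed 2.2-point error SD:
print(round(win_probability(52.5, 2.2), 2))  # 0.87
```

With a closer point forecast or a larger assumed error, the stated probability shrinks toward a coin flip, which is why lead time and historical accuracy matter so much for such claims.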

Table 1
The Accuracy of the 2016 Presidential Vote Forecasts

Forecasters | Forecast Model or Method | Lead Time (days before election) | Democratic Two-Party Popular Vote Prediction | Error from 51.1% vote*
Michael Lewis-Beck and Charles Tien | Political Economy Model | 74 | 51.0% | −0.1
James Campbell | Convention Bump Model | 74 | 51.2% | +0.1
James Campbell | Labor Day Trial-Heat and Economy Model | 60 | 50.7% | −0.4
Brad Lockerbie | Economic Expectations and Political Punishment Model | 133 | 50.4% | −0.7
Robert Erikson and Christopher Wlezien | Leading Economic Indicators and Polls | 148 and 78 | both 52.0% | +0.9
Bruno and Veronique Jerome | The State-by-State Model | 121 | 50.1% | −1.0
J. Scott Armstrong, Alfred Cuzan, Andreas Graefe, and Randall Jones, Jr. | PollyVote Vote-Share Forecast | 62 | 52.4% | +1.3
Thomas Holbrook | National Conditions and Trial Heat Model | 61 | 52.5% | +1.4
Alan Abramowitz | Time for a Change Model | 102 | 48.6% | −2.5
Helmut Norpoth | The Primary Model | 246 | 47.5% | −3.6

*Note: The two-party vote for Hillary Clinton was 51.1% as calculated from data made available from official sources gathered by David Wasserman (2016).
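The error column in Table 1 follows directly from the predictions and the 51.1% actual result; a quick check in Python (values transcribed from Table 1, with Erikson and Wlezien listed once here although the article counts their two identical 52.0% forecasts separately):

```python
# Democratic two-party vote predictions transcribed from Table 1.
forecasts = {
    "Lewis-Beck and Tien": 51.0,
    "Campbell (Convention Bump)": 51.2,
    "Campbell (Trial-Heat)": 50.7,
    "Lockerbie": 50.4,
    "Erikson and Wlezien": 52.0,
    "Jerome and Jerome-Speziari": 50.1,
    "PollyVote": 52.4,
    "Holbrook": 52.5,
    "Abramowitz": 48.6,
    "Norpoth": 47.5,
}
ACTUAL = 51.1  # Clinton's share of the two-party popular vote

# Signed error for each forecast, rounded to one decimal as in the table.
errors = {name: round(pred - ACTUAL, 1) for name, pred in forecasts.items()}
print(errors["Norpoth"])     # -3.6, the largest miss in the table
print(errors["Abramowitz"])  # -2.5
```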




IT’S THE POPULAR VOTE, STUPID: ELECTORAL COLLEGE MISFIRES AND THE PERILS OF FORECASTING PRESIDENTIAL ELECTIONS
Alan I. Abramowitz, Emory University

In late July, more than three months before Election Day, the Time for Change Model predicted that the Republican candidate would win the 2016 presidential election with 51.4% of the major party vote. At that time, however, I expressed the opinion that because Donald Trump was not a mainstream Republican and was running an inept campaign, his vote share would probably fall well short of that predicted by the model, allowing Clinton to win the election.

In the end, Donald Trump received only 49% of the major party vote, 2.5% less than predicted by the Time for Change Model. This was the first time since I developed the model in 1988 that it predicted the wrong winner of the popular vote. However, the error of 2.5 percentage points was only slightly larger than the average absolute error of 2.2 percentage points for all of the elections between 1988 and 2012.

While it is tempting to claim credit for correctly predicting Trump’s victory, doing so would be fundamentally dishonest. My model, like almost all political science forecasting models, predicts the popular vote, not the electoral vote. There is a very good reason for this: predicting the electoral vote would require predicting the winner of the election in 10–15 swing states and this cannot be done weeks or months before Election Day. Indeed, the 2016 results show that this can be very difficult to do even on Election Day.

Fortunately for political science forecasting models, the popular vote is generally a very accurate predictor of the electoral vote. When the popular vote is very close, however, an Electoral College “misfire” can occur and the 2016 misfire was a doozy.

The key to the electoral vote misfire in 2016 is the fact that Donald Trump won four of the six closest states—Florida, Michigan, Pennsylvania, and Wisconsin—with a total of 75 electoral votes. Trump’s combined margin in three of those states—Michigan, Pennsylvania, and Wisconsin—was under 80,000 votes. If Hillary Clinton had won those three states, she would have won the presidency with 278 electoral votes.

Two Final Thoughts on Evaluating Forecasting Models
First, one should evaluate forecasting models based on their track record over many elections, not on their accuracy in the most recent election. Some of the models presented in this symposium might have come closer to predicting the results of the 2016 election than the Time for Change Model, but the Time for Change Model has a long track record of accuracy.

Second, and most importantly, one should evaluate political science forecasting models based on what they are designed to do, which is predicting the popular vote. We are not in the business of predicting the electoral vote. When the popular vote is very close, misfires can happen but they are essentially random events. They depend on unique and unpredictable circumstances like major voting problems in one key state or a candidate winning several swing states by extremely narrow margins. Our forecasting models cannot predict such events and we should not pretend that they can.

THE 2016 MODELS: PLEA FOR THEORY AND LEAD
Michael S. Lewis-Beck, University of Iowa
Charles Tien, Hunter College, CUNY

In election forecasting, the leading scientific rivalry plays out in the competition between the polls and the models. Forecasting the 2016 US presidential election, the polls stumbled while the models stood tall. With respect to the latter, we look to the 11 models appearing in PS: Political Science & Politics (October 2016) and their post-election evaluation by James Campbell (2016). These models all predict the two-major-party vote share to be a function of a few select independent variables, measured at the national level and over time (usually on the elections across the post-World War II period). As a body, these models forecast well, generating a median 2016 forecasting error of less than one percentage point (i.e., 0.9), an error estimate much lower than the typical poll.

This success merits underlining, since each model generated only one forecast (as opposed to the weekly, even daily, forecasts generated from polls). Moreover, this accuracy was achieved at a fair distance from election day itself. Indeed, the median lead time for the forecasts, in terms of days before the contest, is 78. These results demonstrate clearly that with serious lead time, the sine qua non of a valuable forecast, considerable accuracy remains possible. Thus, we make a plea for sustaining the distance component in future forecasting models. We also make a plea for reliance on theory, to which we now turn.

The polls, when used for forecasting, are not based on theories of the vote choice. They serve as accounting devices to monitor vote intentions at a point in time. Vote intention items are best avoided in forecasting model specifications (except perhaps as an add-on to capture omitted variables). The better models, we contend, rest on explicit, well-regarded theories of electoral behavior. In other words, the independent variables in such theory-driven models are measures that stand for the conceptual forces at work. We believe our Political Economy model, which forecast the Clinton two-party vote share at 51.0%, represents this type of model.

Our Political Economy model can be expressed, in words, as follows:

Presidential Vote = Political Popularity + Economic Growth  (Eq. 1)

where Presidential Vote = the two-major-party share of the popular vote for the president’s party, Economic Growth = GNP growth in the first two quarters of the election year, and Political Popularity = the July job approval rating for the president in the Gallup Poll.
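Once estimated, the Political Economy model reduces to a one-line computation. A minimal sketch in Python, using the OLS estimates reported below in Eq. 2 and the 2016 inputs given in the text (July Gallup approval of 51, first-half GNP growth of .20):

```python
def political_economy_forecast(popularity: float, growth: float) -> float:
    """Incumbent-party two-party vote share from the Eq. 2 estimates:
    Vote = 37.50 + .26 * Popularity + 1.17 * Growth."""
    return 37.50 + 0.26 * popularity + 1.17 * growth

# 2016: presidential approval P = 51, first-half GNP growth G = .20
print(round(political_economy_forecast(51, 0.20), 1))  # 51.0
```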




The model essentially holds the election to be a referendum on the president’s handling of major economic and non-economic issues. The better the performance on these two dimensions, the more voters prefer the incumbent party. Estimating the model with ordinary least squares (OLS) across the post-World War II period (1948–2012) gives these results:

Vote = 37.50 + .26 * Popularity + 1.17 * Growth  (Eq. 2)

Plugging in the relevant values (P = 51, G = .20), the model generates an exact forecast for 2016. In general, the model manages a median out-of-sample (jackknife) prediction error of only 2.0 percentage points, less than the usual margin of error for pre-election vote intention surveys. For the future of forecasting, national-level theory-driven models hold considerable promise.

THE 2016 TRIAL-HEAT AND SEATS-IN-TROUBLE FORECASTS
James E. Campbell, University at Buffalo, SUNY

The 2016 presidential election was defined by its negatives. Americans were dissatisfied with the direction of the nation. About 55% had overall unfavorable opinions of Hillary Clinton and even more held Donald Trump in low regard, though the difference narrowed in the campaign’s closing weeks as Trump’s unfavorables among Republicans receded from the mid-30s to the high-20s (CNN 2016, Gallup 2016).

Both the Convention Bump and Economy and the Trial-Heat and Economy forecasting models weighed the unfavorable context facing the in-party Democrats as well as their advantage of Clinton being less disliked than Trump. On August 26, the Convention Bump and Economy model predicted Hillary Clinton would receive 51.2% of the national two-party vote. Two weeks later, the Trial-Heat and Economy model predicted Clinton would receive 50.7%. With Clinton’s vote count at 51.1%, these forecasts could hardly have been more accurate.

The 2016 election outcome reaffirms the established record for accuracy of these models. In actual use, the Trial-Heat and Economy model’s mean absolute error over six elections is only 1.6 percentage points.7 The more recently constructed Convention Bump and Economy model now has a mean absolute error of only 0.8 percentage points in three elections. As well as being accurate, both models are simple, stable, transparent, and have a strong foundation in a well-articulated theory of presidential campaigns (Campbell 2008).8 Their strength supports their common premise that campaigns are not purely retrospective contests, that the candidates and the public’s pre-campaign evaluations of them are important to an election’s vote division.

Turning to the congressional forecasts, my seats-in-trouble models anticipated large Democratic seat gains in both House and Senate elections. Neither materialized. I suspect that the large forecast errors reflect a drawing back from a wave election this year as the early revulsion to the Trump nomination subsided and many reluctant Trump voters eventually came to terms with the idea of voting for the “lesser of two evils.”

LEADING ECONOMIC INDICATORS, THE POLLS, AND THE 2016 PRESIDENTIAL VOTE
Robert S. Erikson, Columbia University
Christopher Wlezien, University of Texas at Austin

We prepared forecasts of the 2016 presidential vote at different points of the election timeline. Our model contains two variables: (1) the cumulated weighted growth in leading economic indicators (LEI) through quarter 13 of the current presidential term and (2) the incumbent party candidate’s share in the most recent trial-heat polls.9 What mostly distinguishes our model from others is the reliance on leading indicators from the quarter ending in March of the election year. The early reading of LEI works well as a predictor because it summarizes growth in the economy leading up to the election year and also provides advance indication of changes in the economy during the election year (Wlezien and Erikson 1996). The exact equation and forecast change as the poll readings change during the election year.

Our forecast published in PS (Erikson and Wlezien 2016), based on trial-heat polls in August, after the two nominating conventions, was that Hillary Clinton would win 52.0% of the two-party popular vote. This turned out to be quite close to Clinton’s Election Day share of 51.1%, which of course did not translate into an Electoral College victory. Contributing to the expectation of a close election were LEI numbers through the beginning of the year, which were almost exactly the average for the previous 16 presidential cycles. Trial-heat polls pitting Clinton and Trump gave Clinton a small but persistent advantage.

Forecasts made at other points in the election year were strikingly similar to our “official” forecast (see table 2). Our July 1 forecast using the LEI measure and polls from the second quarter of 2016 was 52.2% for Clinton.10 A later forecast substituting polls from just before the first convention was 52.0%, exactly as we forecasted after the conventions. Plugging polls for the final week of the campaign (from Real Clear Politics) into our quarter 16 model predicts a little lower 51.4%.11 Although our specific predictions changed some over time, broad electoral expectations remained much the same.

Even more than usual, the media narrative about 2016 was about the (uniquely unpopular) candidates and their campaigns. Despite frequent predictions by pundits that Trump’s candidacy was doomed, the post-conventions campaign played out very much like our projections based solely on the early economy and early polling. Did the fundamentals win out? In the final analysis, Trump’s strengths and weaknesses as a candidate offset those of Clinton, whose candidacy was weighted down by WikiLeaks revelations and the fallout of FBI investigations of her e-mail problem.

Table 2
Forecasts of the Clinton Vote Share at Different Points in 2016

Date of Polls | Forecast
April–June | 52.2%
Before conventions | 52.0%
After conventions | 52.0%
Final Week | 51.4%

Note: Based on models including the quarter 13 measure of cumulative growth in leading economic indicators and also trial-heat polls measured at various points of the election timeline. The equations are all based on data from the 16 elections between 1952 and 2012. For the details, see Erikson and Wlezien (2016).
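The Erikson and Wlezien procedure is, at bottom, a two-variable regression re-estimated as new polls arrive: regress the incumbent party's vote share on cumulative LEI growth through quarter 13 and a trial-heat reading, then plug in the current year's values. The sketch below illustrates only that structure; the historical series are invented placeholders, not the authors' data, and the fitted coefficients are not theirs.

```python
import numpy as np

# Placeholder training data (NOT Erikson and Wlezien's actual series):
# cumulative LEI growth through quarter 13, incumbent-party trial-heat
# share, and incumbent-party two-party vote share, one row per election.
lei   = np.array([1.2, -0.5, 0.8, 2.0, 0.3, -1.1, 1.5, 0.6])
polls = np.array([52.0, 47.5, 50.5, 55.0, 49.0, 46.0, 53.5, 51.0])
vote  = np.array([53.1, 46.8, 51.0, 56.2, 49.5, 45.9, 54.0, 51.3])

# Ordinary least squares with an intercept.
X = np.column_stack([np.ones_like(lei), lei, polls])
coef, *_ = np.linalg.lstsq(X, vote, rcond=None)

def forecast(lei_growth: float, poll_share: float) -> float:
    """Forecast from the fitted two-variable equation."""
    return float(coef @ np.array([1.0, lei_growth, poll_share]))

# Re-running with a fresh poll reading updates the forecast; the LEI
# input is fixed after the first quarter of the election year.
print(round(forecast(0.5, 52.0), 1))
```

This captures why the authors' forecast changed only modestly over 2016: the LEI term is locked in by March, so successive forecasts differ only through the poll reading.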




THE ECONOMIC PESSIMISM MODEL


Ta b l e 3
Brad Lockerbie, East Carolina University
The National Conditions and Trial-Heat
Downloaded from https://www.cambridge.org/core. Hunter College Library, on 03 Oct 2017 at 19:35:23, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/S1049096516002766

The economic pessimism model performed strongly this year.


The model forecast the Democratic candidate to get 50.4% of the Model before and after the 2016 Election
two-party vote. The final vote count gave the incumbent party
51.1% of the two-party vote, yielding an error of less than a per- 1952–2012 1952–2016
centage point. This is especially impressive given that the fore- Slope t-score Slope t-score
cast was made 133 days before the election. The model is available National Conditions Index 0.161 3.09 .156 3.11
well before the election. Moreover, with only two independent
Early September Trial Heat 0.384 4.30 .391 4.54
variables, the model is parsimonious. Running the equation with
Constant 19.847 5.30 19.76 5.42
the additional data point does not change matters consequen-
tially; the point estimates are virtually identical. The economic Obs. 16 17
item increases by less than .001 and the time in the White House R2 0.856 .852
changes by less than .10. The model is very stable. Mean Abs. Out-of-Sample Error 2.193 2.144
The campaign featured a reality tv show personality and Root Mean Out-of-Sample Error 2.930 2.830
presumed billionaire who regularly made what were thought to
be disqualifying statements from the first day of the campaign Note: The index of national conditions is comprised of presidential approval (Gallup)
and aggregate satisfaction with personal finances (Survey of Consumers), both
through almost the end. Many pundits predicted a landslide averaged over June, July, and August of election years. The trial-heat variable is the
average of the incumbent party percent of the expressed two-party vote in Gallup
victory for the incumbent party through early in the evening of trial-heat polls taken in the first week of September (or earliest polls in September
election day. A model looking at the fundamentals of the cam- if none in the first week) of the election year. In years in which one of the nominating
conventions is held in early September, the trial-heat polls are taken from the week
paign produced a much more accurate forecast of the outcome after the last day of the convention. In 2016, Gallup did not release presidential
well in advance of the election. Political scientists, unlike many trial-heat polls, so the average of all publicly available polls during the first week in
September were used.
others, did a good job of forecasting a close election.
The results of the House model were similarly strong. The
model forecast a zero seat change in the House. The actual change
victory despite a substantial popular vote loss might illustrate the
potential folly of forecasting elections based on popular votes. At the very least, the 2016 outcome should serve as a reminder that we are predicting the national popular vote winner, not the winner of the presidential election.

is a 6 seat loss. Running the equation with the additional data point does not alter the interpretation at all. The economic item is identical to the earlier result. The open seat interaction item changes by .01. The only item that changes appreciably is the time in the White House variable. Still, in neither case is it significant.

Looking forward to 2020, as it will be a first bid for reelection, the Republicans are in a good position to retain the White House. Even if the economy is perceived as negatively as it was in the spring of 2016, they should do quite well. In fact, the incumbent party would be predicted to win with approximately 55% of the two-party vote.

POST-MORTEM: NATIONAL CONDITIONS AND TRIAL-HEAT POLLS

Thomas M. Holbrook, University of Wisconsin-Milwaukee

It may sound a bit odd to say that the 2016 election went pretty much according to form, but viewed from the perspective of the popular vote totals, that appears to be the case, at least based on predictions from the National Conditions and Trial-Heat model. The model, which incorporates an index of national conditions measured during the summer prior to the election and trial-heat poll results from the first week in September, predicted that Hillary Clinton would receive 52.5% of the two-party popular vote, whereas she ended up with 51.1% of the two-party vote. The error in this prediction (1.4 points) is somewhat smaller than the average absolute error (2.19 points) from previous out-of-sample estimates from the same model (see table 3). In the updated model, which incorporates the 2016 election result, there are only very slight changes to the slopes and fit statistics compared to the 1952–2012 model, again confirming that the 2016 outcome was very close to what the model projected.

Of course, the 2016 election did not go according to form in other important ways, and Donald Trump's Electoral College

THE STATE-BY-STATE POLITICAL ECONOMY MODEL: BETWEEN SEMI-SUCCESS AND SEMI-FAILURE

Bruno Jérôme, University of Paris II

Véronique Jérôme-Speziari, University of Paris Sud Saclay

Two and a half months before the 2016 US election, our State-by-State Political Economy Model (PEM) gave a slight advantage to Hillary Clinton, with 50.15% of the popular vote (error margin ±4.6%) and 319 electoral votes (see PS 49 (4): 680–86). In the end, on November 8, 2016, the Democratic candidate did win the (two-party) popular vote with 51.1%; however, she received only 232 electoral votes, and Donald Trump was elected the 45th President of the United States.

Thus, our model successfully predicted the correct Democratic/Republican balance of power with regard to the popular vote, but this was not enough to ensure a satisfactory result. Indeed, this is the first time our PEM has failed since its first run in 2004.

If we look at the balance between forecasts and outcomes, the model did not correctly predict the results in FL, IA, MI, OH, PA, VA, and WI (see figure 1). Two key states, FL and PA (49 EVs), are clearly within the error margin, but this does not absolve us from seeking possible explanations.

If we consider our main variables, Hillary Clinton had no rational reason to lose MI, OH, or WI. First, Obama's popularity (52.5% in August 2016) was a major plus. Second, in these Rust Belt states, we observe a faster decrease in unemployment (over 20 quarters) than at the national level. Moreover, still in MI, OH, and WI, Donald Trump underperformed his national score
at the GOP primaries (e.g., 44.95%). Finally, in each of those cases, our partisan domination index was mostly in favor of the Democratic candidate. In contrast, the decrease in unemployment was far less swift in IA, PA, and VA. Note that Donald Trump overperformed his national score at the primaries in FL and PA. Aside from VA,12 it seems that IA, PA, and FL had certain weaknesses potentially detrimental to Clinton, but the PEM was not sensitive enough to detect them.

All things considered, the treatment of both popularity and partisan domination seems to have been imperfect. If we observe state-level popularity (Gallup), Obama's job approval is significantly below his national level in IA (-5.5), OH (-4.5), PA (-3.5), and WI (-1.5). However, a small surplus is found in FL (+0.5), MI (+1.5), and VA (+0.5).

Furthermore, our partisan domination index captures long-term effects but not short-term ones. In this regard, Gallup's state-level party identification data show that Republican identification is above its national average (40.1%) in each of our error states except MI. Likewise, Democratic identification is below its national level (43%) everywhere except PA.

A preliminary simulation that includes state-level popularity and party identification would allow us to forecast correctly only IA and PA (26 EVs). Unfortunately, states such as FL, MI, or WI remain unpredictable.

Drawing on this, in which direction should the model be improved? For instance, we could add the primary-season score of the incumbent party's main rival: Bernie Sanders overperformed in IA, MI, PA, and WI. That said, would such a variable be accurate for every election since (at least) 1980? Doubts remain about this.

More specifically, we should experiment with inserting an index that synthesizes several local sociodemographic characteristics, such as those that likely helped Donald Trump win despite losing the popular vote.

Figure 1: State-by-State Political Economy Model Forecast for the 2016 Presidential Election

THE 2016 POLLYVOTE POPULAR VOTE FORECAST: A PRELIMINARY ANALYSIS

Andreas Graefe, Columbia University

J. Scott Armstrong, Wharton School of the University of Pennsylvania

Randall J. Jones, Jr., University of Central Oklahoma

Alfred G. Cuzán, University of West Florida
Since its launch last January, the 2016 PollyVote consistently predicted that Hillary Clinton would win the popular vote, which she did. In this preliminary analysis we assess how the PollyVote and its components performed in this election compared to the previous six (1992 to 2012).

Figure 2: Forecast Error by PollyVote Component Method (mean absolute error, historical vs. 2016, across the last 100 days before the election)

Ranked according to their historical accuracy from best to worst, right to left in figure 2, the six components of the PollyVote place as follows: citizen forecasts, prediction markets, index models, expert judgment, econometric models, and polls. In 2016 citizen forecasts13 defended their top position, but other methods were outliers relative to their historical record. Both polls and econometric models performed considerably better, while prediction markets did worse. This fluctuation in accuracy among methods makes it difficult to predict ex ante which method will do best.
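The ranking and combining logic described here can be sketched numerically. The four methods and forecast values below are invented for illustration; they are not the PollyVote's actual component forecasts:

```python
# Hypothetical component forecasts of Clinton's two-party share (%).
# Invented numbers -- not the real PollyVote data.
true_vote = 51.1
forecasts = {"citizen forecasts": 52.0, "prediction markets": 55.0,
             "econometric models": 50.0, "polls": 53.5}

# Rank methods by absolute error, best to worst.
ranked = sorted(forecasts, key=lambda m: abs(forecasts[m] - true_vote))

# "Typical" error: the average of the individual absolute errors.
typical_error = sum(abs(f - true_vote) for f in forecasts.values()) / len(forecasts)

# Combined error: the error of the averaged (PollyVote-style) forecast.
combined = sum(forecasts.values()) / len(forecasts)
combined_error = abs(combined - true_vote)

print("best to worst:", ranked)
print(f"typical error: {typical_error:.3f}, combined error: {combined_error:.3f}")
# The averaged forecast can never do worse than the typical forecast:
assert combined_error <= typical_error
```

The final assertion holds by the triangle inequality: the error of the mean forecast cannot exceed the mean of the individual errors, which is why a combination can be a safe choice even when no single method is reliably best ex ante.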
Combining forecasts within and across methods, as the PollyVote does, guarantees that its forecast will never come out last and will always do at least as well as the typical forecast.14 This was demonstrated this year. The PollyVote incurred an error of 1.9 percentage points, almost twice the historical average, but lower than that of three other methods.

Combining works best when the various component forecasts bracket the true value (Graefe et al. 2014a). Compared to the previous six elections, there wasn't much bracketing in 2016. Five of the combined forecasts overshot Clinton's share of the vote while only one component, the econometric models, fell short. Thus the PollyVote did not do as well as in previous elections.

The method of combining forecasts makes no claim that the PollyVote will always outperform its most accurate component, although that can happen, as was the case in 2004 (Cuzán, Armstrong, and Jones 2005) and 2012 (Graefe et al. 2014b). What is claimed is that over time, as the constituent methods' relative accuracy varies, the PollyVote will surpass them. This is demonstrated in figure 2, which displays the mean absolute error of all methods across all seven elections from 1992 to 2016. On average, then, the PollyVote continues to minimize error while avoiding making large errors.

NOTES

1. The forecasts were published in the October issue of PS. In most cases, they were initially posted on Larry Sabato's Crystal Ball website.

2. The forecast numbers are those published in each forecaster's October PS article. In a couple of cases, these numbers had been updated from what I reported in my October PS summary table, but the differences were in no case more than one or two tenths of a percentage point.

3. This was the highest third-party vote since Ross Perot's second independent run for president in 1996.

4. Before concluding that the electoral vote system "misfired," two facts should be considered. First, the popular vote was likely affected by the electoral vote system. Had the electoral system decided the presidential winner by the national popular vote, campaigns and votes would have been conducted and decided differently. The campaign and the vote are endogenous to the system (and probably growing more so in each election with the proliferation of state polling), and there are reasons to suspect that Trump's national vote would have improved more than Clinton's with such a change. The principal reason for this conjecture is that Republicans were more divided over Trump than Democrats were over Clinton, and active campaigns have the effect of reunifying parties. Second, even if we take the vote as exogenous to the system, Clinton's plurality depended heavily on running up the vote in a single state, California. Without California, Clinton's 2.9 million national popular vote plurality turns into a 1.4 million plurality for Trump in the other 49 states plus D.C. Gore's 2000 vote plurality of a half million votes similarly depended on his 1.3 million plurality in California.

5. I examined the 538 "Polls-Plus" forecasts made after the nominating conventions in late July (102 days before the election) through to the first presidential debate in late September (43 days before the election), well after the PS forecasts were in. Since the "Polls-Plus" forecasts were produced daily, I used their median for comparison. The median 538 "Polls-Plus" forecast in this span was 51.8% of the two-party vote for Clinton, an error of +0.7%. The final polls in Real Clear Politics, available only the day before the election, also predicted a 51.8% Clinton vote.

6. The median forecast of the Iowa Electronic Market in its last week before the election was as far off the vote (3.6 points) as Norpoth's forecast produced over eight months earlier.

7. The Trial-Heat and Economy model was first used in the 1992 election. The economic variable was revised after the 2000 election to make the economic effects conditional on whether the incumbent president is seeking reelection. The Convention Bump and Economy model was first used in the 2004 election. Because of the unprecedented intervention of the Wall Street Meltdown financial collapse in mid-September 2008, the 2008 election is excluded from the error calculations.

8. I take exception to the contention of Lewis-Beck and Tien that forecasting models with presidential preference survey data are not theoretical or explanatory. In both theoretical and forecasting research, both space and time are crucial to a variable's specification. Economic growth in the first quarter of a year is not interchangeable or identical to economic growth in the fourth quarter. By the same token, presidential approval when a president is seeking reelection may mean something completely different from what it means when that president is leaving office. The same is true of preference polls. Preferences expressed in polls at time 1 may or may not be related and may or may not be identical to preferences at time 2, just as other variables at different times may or may not be related to preferences at time 1 or 2. My theory of campaign effects, including competitiveness effects and the effects of other campaign contexts, provides the foundations for the two presidential models used here (Campbell 2008).

9. Our model ignores a possible time-for-a-change variable based on the in-party's number of consecutive terms, as we find that voters absorb the effect early in the campaign, when it becomes fully reflected in trial-heat polls. For details, see Erikson and Wlezien (2016).

10. An earlier, mid-June forecast for Larry Sabato's Crystal Ball predicted a 52.0% share for Clinton (http://www.centerforpolitics.org/crystalball/articles/the-political-science-election-forecasts-of-the-2016-presidential-and-congressional-elections/).

11. Forecasting based on an equation using polls from the final seven days of previous election years gives an identical prediction of 51.4%.

12. VA is predicted against current trends since 2012. This state had been a Republican stronghold for almost four decades, and that history still has a deep impact on its predicted scores.

13. On the remarkable performance of citizen forecasts, see Graefe (2014).

14. The error of the typical forecast is the average of the errors incurred by individual forecasts of a given event, such as an election. By contrast, the error of the combined forecast is the difference between the average of the forecasts and the true value of the event being predicted.

REFERENCES

Campbell, James E. 2008. The American Campaign: U.S. Presidential Campaigns and the National Vote, second edition. College Station, TX: Texas A&M University Press.

———. 2016. "The Trial-Heat and Seats-in-Trouble Forecasts of the 2016 Presidential and Congressional Elections." PS: Political Science & Politics 49 (4): 664–68.

Cuzán, Alfred G., J. Scott Armstrong, and Randall J. Jones. 2005. "How We Computed the PollyVote." Foresight: The International Journal of Applied Forecasting 1 (1): 51–52.

Erikson, Robert S., and Christopher Wlezien. 2016. "Forecasting the Presidential Vote with Leading Economic Indicators and the Polls." PS: Political Science & Politics 49 (4): 669–72.

Graefe, Andreas. 2014. "Accuracy of Vote Expectation Surveys in Forecasting Elections." Public Opinion Quarterly 78 (S1): 204–32.

Graefe, Andreas, J. Scott Armstrong, Randall J. Jones, and Alfred G. Cuzán. 2014a. "Combining Forecasts: An Application to Elections." International Journal of Forecasting 30 (1): 43–54.

Graefe, Andreas, J. Scott Armstrong, Alfred G. Cuzán, and Randall J. Jones. 2014b. "Accuracy of Combined Forecasts for the 2012 Presidential Elections: The PollyVote." PS: Political Science & Politics 47 (2): 427–31.

Wasserman, David. 2017. "2016 Popular Vote Tracker." The Cook Political Report (January 2, 2017). http://cookpolitical.com/story/10174.

Wlezien, Christopher, and Robert S. Erikson. 1996. "Temporal Horizons and Presidential Election Forecasts." American Politics Research 24: 492–505.


SYMPOSIUM CONTRIBUTORS
Alan I. Abramowitz is the Alben W. Barkley Professor of Political Science at Emory University in Atlanta, Georgia. Abramowitz has authored or coauthored six books, dozens of contributions to edited volumes, and more than fifty articles in political science journals dealing with political parties, elections, and voting behavior in the United States. He is also one of the nation's leading election forecasters; his Time for Change Model has correctly predicted the popular vote winner in every presidential election since 1988, including the 2012 election. Abramowitz's most recent book, The Polarized Public: Why American Government Is So Dysfunctional, examines the causes and consequences of growing partisan polarization among political leaders and ordinary Americans. He may be reached at polsaa@emory.edu.

J. Scott Armstrong, Professor at the Wharton School, is a founder of the Journal of Forecasting, the International Journal of Forecasting, and the International Symposium on Forecasting. He is the creator of forecastingprinciples.com and editor of Principles of Forecasting (2001). In 2010, he was listed as one of the 25 Most Famous College Professors Teaching Today. He may be reached at armstrong@wharton.upenn.edu.

James E. Campbell is a UB Distinguished Professor of Political Science at the University at Buffalo, The State University of New York, and is the guest editor of this symposium. He is the author of four books and more than 80 articles and book chapters on American macropolitics, electoral change, and other aspects of American politics. His most recently published book is Polarized: Making Sense of a Divided America (Princeton). He may be reached at jcampbel@buffalo.edu.

Alfred G. Cuzán is Distinguished University Professor at The University of West Florida. His most recent work addresses first principles of government (e.g., "Five Laws of Politics," PS: Political Science & Politics, July 2015, 415–419). On a Fulbright Grant, in spring 2016 he taught American politics and Latin American politics at the University of Tartu, Estonia. He may be reached at acuzan@uwf.edu.

Robert S. Erikson is professor of political science at Columbia University. His research on American elections has been published in a wide range of scholarly journals. He is coauthor of The Macro-Polity (Cambridge University Press), Statehouse Democracy (Cambridge), and The Timeline of Presidential Elections (Chicago). He is the former editor of the American Journal of Political Science and Political Analysis.

Andreas Graefe is a research fellow at the Tow Center for Digital Journalism at Columbia University and LMU Munich's Department of Communication Studies and Media Research. He also holds the endowed Sky Professorship in Customer Relationship Management at Macromedia University Munich. As the leader of the PollyVote project, Andreas has done extensive validation work on election forecasting methods and has developed several election forecasting models. He may be reached at graefe.andreas@gmail.com.

Thomas M. Holbrook is the Wilder Crane Professor of Government at the University of Wisconsin-Milwaukee. He has published in leading political science journals and is author of Do Campaigns Matter? (Sage, 1996) and Altered States: Changing Populations, Changing Parties, and the Transformation of the American Political Landscape (Oxford University Press, 2016). He may be reached at holbroot@uwm.edu.

Bruno Jérôme is an associate professor in the department of economics at the University of Paris 2 and a member of the Law and Economics Center of Paris 2. His interests are political economy, public economics, election forecasting, and European economics and institutions. His most recent books are Analyse Economique des Elections (Economica, 2010, co-authored with Véronique Jérôme-Speziari) and Villes de gauche, villes de droite (Presses de Sciences Po, forthcoming 2017, co-authored with Richard Nadeau, Véronique Jérôme-Speziari, and Martial Foucault). He can be reached at bruno.jerome@gmail.com.

Véronique Jérôme-Speziari is an associate professor in the department of management at the University of Paris Sud 11 and a member of the Paris 2 Laboratory of Management. Her interests are political economy, public economics, election forecasting, political marketing, and nonverbal communication. Her most recent books are Analyse Economique des Elections (Economica, 2010, co-authored with Bruno Jérôme) and Villes de gauche, villes de droite (Presses de Sciences Po). She can be reached at veronique.jerome@gmail.com.

Randall J. Jones, Jr. is Professor of Political Science, Emeritus at the University of Central Oklahoma, where he teaches election forecasting and conducts research in that field. He is the author of Who Will Be in the White House? Predicting Presidential Elections (Longman, 2002), and is a founder of the Political Forecasting Group of the American Political Science Association. He may be reached at ranjones@uco.edu.

Michael S. Lewis-Beck is F. Wendell Miller Distinguished Professor of Political Science at the University of Iowa. His interests are comparative elections, election forecasting, political economy, and quantitative methodology. Professor Lewis-Beck has authored or co-authored over 260 articles and books, including Economics and Elections, The American Voter Revisited, French Presidential Elections, Forecasting Elections, The Austrian Voter, and Applied Regression. He may be reached at michael-lewis-beck@uiowa.edu.

Brad Lockerbie is professor of political science at East Carolina University. His research on economics and elections has appeared in numerous scholarly journals. He is author of Do Voters Look to the Future? Economics and Elections (SUNY). He may be reached at lockerbieb@ecu.edu.

Helmut Norpoth is professor of political science at Stony Brook University. He is coauthor of The American Voter Revisited and has published widely on topics of electoral behavior. His book, Commander in Chief: Franklin Roosevelt and the American People, is forthcoming. He can be reached at helmut.norpoth@stonybrook.edu.

Charles Tien is a professor in the department of political science at Hunter College and the Graduate Center, CUNY. His areas of interest include Congress, race and ethnic politics, and quantitative methods. Previously he was a Fulbright Scholar at Renmin University in Beijing, China. He can be reached at ctien@hunter.cuny.edu.

Christopher Wlezien is Hogg Professor of Government at the University of Texas at Austin. His research and teaching interests encompass a range of fields in American and comparative politics, and his articles have appeared in various journals and books. He is coauthor of Degrees of Democracy (Cambridge) and The Timeline of Presidential Elections (Chicago) and coeditor of Who Gets Represented? (Russell Sage). He was founding coeditor of the Journal of Elections, Public Opinion and Parties and currently is associate editor of Public Opinion Quarterly.



