
Introduction:

In both the United States and the United Kingdom, exit polls play a key role in shaping election night coverage. For anyone who has tuned into an Election Night television broadcast in the U.S., much of the early coverage (and indeed much of the late coverage) is spent telling us "not only who won, but why they won". The "why they won" is a reference to exit polls, which ask voters whom they voted for and what shaped their vote. Oftentimes, the "why" question can provide valuable insight that would not otherwise be clear. In 2000, experts believed (and polls suggested) that Al Gore's hopes of performing strongly in Florida rested with seniors, due to his "lock box" stance on Social Security, but the exit polls showed that older voters split relatively evenly among the candidates. Instead, it was young voters, happy about the strong state of the economy, who allowed Gore to end up in a virtual dead heat with George Bush in Florida. Few remember this age-flip revelation, but many recall the fact that the early exit polls indicated Al Gore would win in Florida.

The ability of the Florida exit poll to uncover an important fact about the electorate while incorrectly projecting the winner raises the question: what is the fundamental purpose of an exit poll? Unlike in the U.S., exit polling in the U.K. is used only to forecast results. Since 1997, British exit polls have accurately estimated the winner of the British House of Commons election as soon as the polls closed (see more below). Is it more important for an exit poll to explain a victory or to predict one? What happens when exit polls disagree with the results of pre-election polls, or with the actual results? In this two-part blog series, I intend to veer off my conventional path and take a comparative look at past exit polls to try to answer these questions. In this short blog entry, I will start by looking at how British and U.S. exit polls differ. I will show that British exit-pollsters conduct surveys with the chief goal of quickly forecasting, on election night, which party (if any) has won Parliament, because actual election results are reported very slowly in Britain. My next post will focus on the differences between American and Mexican exit polls.

Key Differences between British and American Exit Polls:

As the introduction suggests, exit polls in the United States serve the dual purpose of early prediction as well as explanation of the vote. These exit polls have been conducted for media organizations for the purpose of informing/entertaining television audiences and providing valuable material to academics after the election. According to the late head of Mitofsky International, Warren Mitofsky, raw horserace numbers, as in "Gore leads Bush 50-47%", are never purposely aired on Election Night (due to a relatively high margin of error, as well as the enthusiastic-voter problem illustrated below). In the United Kingdom, on the other hand, exit polls have the sole purpose of predicting election night outcomes. As in the United States, the exit polls in Britain are compiled for, and reported by, media organizations. Unlike in the United States, however, British media organizations are very specific and tell audiences that "the Conservatives are expected to receive XXX seats in the House of Commons, while Labour is expected to receive XXX". Also different from the United States, the UK's exit polls do not aim to predict individual seats, but rather the aggregate result.

The reasons for these contrasts are rather clear. First, the British House of Commons is not decided by one election, but by elections in 650 constituencies. Of course, polling 650 individual constituencies is essentially impossible, as it would require well over 300,000 interviews to have a margin of error below +/- 5% in each seat. While it is true that presidential elections in the United States consist of 50 statewide contests (fewer for senatorial and gubernatorial contests), the significantly higher number of British constituencies presents a major challenge. Due to the single-member-district rules of the British parliament, a straight national vote-to-seat uniform swing cannot be utilized [1]. Thus, exit-pollsters face the task of trying to make an aggregate prediction without being able to poll every district. To get around this problem, exit pollsters (see Figure 1) interview at polling stations in districts that are deemed to be "swing districts" (marginal constituencies): that is, the districts one would first expect to flip to the opposition, based on the previous vote (gathered using earlier exit polls, as individual polling stations do not typically report vote totals [2]) and demographic data. Using math far too complicated for this post, the marginal-constituency data is turned into a national seat estimate. Still, none of this answers the question of why exit polls in Britain do not bother to find out "why" voters vote the way they do.

[1] According to Mark Pack, a straight uniform swing had an average error among the three main parties of 30 seats between 1997 and 2005.
[2] See Curtice and Firth, page 2, and Rallings et al., page 7. Most precinct-level data is available for only a limited time and is too resource-intensive to gather quickly enough. Some boroughs, such as Brent in London, keep precinct-level data for longer. Otherwise, exit polls and post-election surveys such as the British Election Study are the only sources of precinct-level data.
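To make the arithmetic above concrete, here is a minimal sketch in Python of two calculations: the rough simple-random-sample figure behind the "300,000 interviews" point, and the naive uniform-swing projection that footnote [1] says is not accurate enough on its own. The function names, the party shares, and the three-point swing are all hypothetical illustrations of mine; the actual British exit-poll model is, as noted, considerably more sophisticated than this.

```python
import math

def required_sample_size(moe=0.05, z=1.96, p=0.5):
    """Simple-random-sample size for a given margin of error.
    Real exit polls are cluster samples, so the practical requirement is larger."""
    return math.ceil((z / moe) ** 2 * p * (1 - p))

per_seat = required_sample_size()      # ~385 respondents per constituency
print(650 * per_seat)                  # ~250,000 even before design effects push it higher

def uniform_swing_projection(previous_results, swing_to_lab):
    """Naive uniform swing: apply one nationwide Con-to-Lab swing to every
    seat's previous result and count which party finishes ahead."""
    seats = {"Conservative": 0, "Labour": 0}
    for con_share, lab_share in previous_results:
        con, lab = con_share - swing_to_lab, lab_share + swing_to_lab
        seats["Conservative" if con > lab else "Labour"] += 1
    return seats

# Hypothetical previous (Con, Lab) vote shares in four seats, and a
# 3-point swing to Labour estimated from marginal-constituency exit polls.
previous = [(0.52, 0.40), (0.47, 0.45), (0.44, 0.46), (0.38, 0.51)]
print(uniform_swing_projection(previous, swing_to_lab=0.03))
```

The real projection replaces the single national swing with seat-level estimates built from the marginal-constituency interviews and demographic data, which is precisely the refinement described above.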

Why the Difference in Exit Polls?

The answer is both simple and complex. The pace of counting in each of the 650 British constituencies differs considerably and often takes a very long time, because votes are counted by hand (versus by machine in the United States). Even when Labour won 64% of the seats in Parliament in 1997, outgoing Prime Minister John Major did not concede until 3:25 a.m., five and a half hours after the polls closed. In the similarly lopsided 2008 American presidential election, John McCain admitted defeat before midnight Eastern Time. John Boehner, incoming Speaker of the House of Representatives (a large legislative body like the House of Commons, with 435 members), also declared victory before midnight in the one-sided 2010 election. Thus, the U.S. media do not need exit polls to declare winners at a relatively early hour [3]. In Britain, people watch the exit polls to know the results before they go to sleep, according to exit pollster John Curtice. If exit polls were not able to quickly project a winner, voters would not know who won the election until the next morning at the earliest.

[3] Even in the tight 2004 election, NBC News was able to declare Bush the winner by 1 a.m.

The long exit poll questionnaires needed to determine why voters voted the way they did would likely hamper the quick reporting of exit poll results. For example, exit pollsters in Taiwan (see page 604) found that shortening the questionnaire did lead to higher response rates [4]. Those who would take the longer exit poll might be the more enthusiastic voters. In 2004 (see points 34 and 35), U.S. exit pollsters suffered from this problem because Democratic voters were more eager to take the survey. The result was skewed exit polls that over-predicted Democrat John Kerry's vote share. Learning from their early declarations in 2000, and understanding the possible survey errors associated with exit polls (and any poll, for that matter), the media did not report these pro-Kerry exit polls. Instead, they waited for actual vote results to confirm the exit poll data. As we know, the actual results did not confirm them, and the exit polls were then weighted accordingly to reflect the true electorate.

[4] See Stephen Porter for a longer discussion of how shortening a survey leads to better response rates.
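For readers curious what "weighted accordingly" can look like in practice, here is a minimal sketch, assuming a toy 100-respondent sample and approximate official two-party shares. It simply rescales respondent weights so the weighted vote split matches the certified result; this illustrates the general idea of a ratio adjustment, not the actual procedure used by the national exit poll, and the helper name is my own.

```python
def poststratify_to_result(respondents, official_shares):
    """Rescale each respondent's weight so that the weighted share of each
    candidate's voters matches the official returns (a simple ratio adjustment)."""
    raw = {}
    for r in respondents:
        raw[r["vote"]] = raw.get(r["vote"], 0.0) + r["weight"]
    total = sum(raw.values())
    for r in respondents:
        r["weight"] *= official_shares[r["vote"]] / (raw[r["vote"]] / total)
    return respondents

# Hypothetical raw exit poll that over-represents Kerry voters (55 to 45),
# adjusted toward the approximate official 2004 shares.
sample = [{"vote": "Kerry", "weight": 1.0} for _ in range(55)] + \
         [{"vote": "Bush", "weight": 1.0} for _ in range(45)]
adjusted = poststratify_to_result(sample, {"Kerry": 0.483, "Bush": 0.507})

kerry_weight = sum(r["weight"] for r in adjusted if r["vote"] == "Kerry")
print(round(kerry_weight / sum(r["weight"] for r in adjusted), 3))  # ~0.488
```

Once the weights are corrected, the "why" tabulations can be run on the same reweighted sample, which is how the explanatory side of the poll survives an off-target raw horserace.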

The other issue regarding long questionnaires is one of time. In the United States, the standard national exit poll asks over 50 questions. Reporting this large amount of data from the field back to statisticians, who can then weight and model it properly, is a lengthy process (see page 11). British exit pollsters are not afforded that time. According to exit pollsters John Curtice and David Firth, Britons vote overwhelmingly in the late afternoon and early evening; in fact, over 60% of British exit poll data are collected after 4 p.m. Exit pollsters would not be able to transmit thousands of question-laden surveys in time to properly reflect that late rush of British voters. Compare that to the United States, where over 60% of exit poll data are collected before 3 p.m. American exit pollsters can take their time collecting data over the course of the day, and if late-day voters differ from earlier ones, the media can simply say a race "is too close to call" [5]. In Britain, short surveys are the only way to collect and relay the data quickly enough for top-line results to be released by the media during their 10 p.m. newscasts.

[5] Exit poll data is reported at 12 p.m., 3 p.m., and just before the polls close in each state. A composite of the first two "waves" is usually accurate enough to project winners in non-tossup contests.

Concluding Thoughts

I must ask whether the rush for top-line results is really worth all the fuss. Is the need for quick data so great as to sacrifice the educational detail about the electorate that exit polls like those in America supply? The obvious answer is to say that the media are free to do what they please. One could also argue that more in-depth detail can be found in alternative scientific surveys conducted by phone or Internet immediately following the election, such as the British Election Study. That said, many of these post-election polls are not nearly as accurate as exit polls. Some voters have a tendency to lie about their vote choice to fit the actual winner (a type of bandwagon effect). Other people claim to have voted when they had, in fact, not voted. Even if voters are honest, there is always the possibility that the gap in time between casting a vote and answering a survey after Election Day can affect a person's memory of why they voted the way they did. Finally, consider what would happen if these additional questions were added to the British survey: fewer people would be questioned, and the margin of error for the top-line results would increase. In the United States (unless the results are either very clear or very close), this causes the media to wait two to three hours before announcing a result. If we apply that time frame to the United Kingdom, an accurate seat count could still be predicted by midnight or 1 a.m. Perhaps that is too late, but that depends on what we want from the exit polls. If the main reason for exit polls is to anchor a discussion of the results, then the conversation generated by the "why" questions asked in the exit poll would be just as entertaining and informative. Will we ever find out what type of exit poll British voters prefer? I would not bet on it.