Third time, this time, pollsters underestimated Trump

UNPREDICTABLE: Pollsters have underestimated Trump’s vote share for the third consecutive time. REUTERS

Americans decided to restore Donald Trump to the White House. The pollsters, however, remained clueless as Trump orchestrated one of the most remarkable political comebacks in history.

In fact, this is the third consecutive time that the pollsters have wildly underestimated Trump’s vote share, both nationwide and in the crucial swing states. This time, they expected the presidential election to be nearly tied: Kamala Harris led by just one percentage point nationwide, according to the most recent opinion polls. However, Trump won the race with an overall lead of around three percentage points.

In the actual vote, Pennsylvania gave Trump a 2.1 per cent lead, North Carolina and Nevada both gave him a 3.3 per cent lead, Georgia a 2.2 per cent lead, and Arizona a 6.4 per cent lead. These had been identified as swing states because the pollsters projected a razor-thin gap between the two contenders in each of them.


What caused the pollsters to consistently underestimate Trump’s vote, though? Shouldn’t they now look back at their mistakes and attempt to make the necessary corrections for their future prediction business?

Historically, surveys have wildly misrepresented the popular vote in many previous election years, with 2012 a recent example. Opinion polls are built on statistical theory that applies when the sample is like red and blue balls drawn at random from an effectively infinite population; that idealisation is also where the quoted error margin of plus or minus three per cent comes from. People, however, do not behave like balls. Respondents may have changed their preferences or lied to the pollsters. Moreover, one does not know what proportion of respondents will actually show up to cast their votes, or whether those who answered are even comparable to those who did not. Pollsters have to make informed assumptions about the likely voters.
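To see where that “plus or minus three per cent” comes from, here is a minimal sketch of the textbook margin-of-error calculation. It assumes a simple random sample of truthful respondents, the very assumption questioned above, and the sample size of 1,000 is a typical, hypothetical figure.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95 per cent margin of error for a proportion p estimated from n random draws."""
    return z * math.sqrt(p * (1 - p) / n)

# Roughly 1,000 respondents give about +/- 3 percentage points, but only under
# the idealised 'balls from an urn' assumption: random selection, honest answers.
print(round(margin_of_error(1000) * 100, 1))  # 3.1
```

The margin shrinks only with the square root of the sample size, and it says nothing about who chose to respond in the first place.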


And the meagre proportion of people who respond is a serious problem for these surveys. Response rates to the Pew Research Center’s telephone surveys fell precipitously from 36 per cent in 1997 to six per cent in 2018. Nate Cohn of The New York Times reported in 2022 that the response rate to his surveys was as low as 0.4 per cent.

Even if appropriate statistical adjustments are made to weight the data so that it mirrors the socioeconomic composition of the electorate in suitable proportions, such non-response creates substantial concerns about what the data can tell us.
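Here is a minimal sketch of the kind of weighting adjustment meant here, with purely hypothetical groups and numbers: respondents from under-represented groups are up-weighted until the sample matches known population shares.

```python
# Hypothetical shares: young people respond far less often than the old.
population_share  = {"young": 0.30, "middle": 0.40, "old": 0.30}
sample_share      = {"young": 0.10, "middle": 0.40, "old": 0.50}
support_in_sample = {"young": 0.60, "middle": 0.50, "old": 0.40}  # backing candidate A

# Unadjusted estimate simply averages over whoever happened to respond.
raw = sum(sample_share[g] * support_in_sample[g] for g in sample_share)
# Weighted estimate re-balances each group to its share of the population.
weighted = sum(population_share[g] * support_in_sample[g] for g in population_share)

print(f"{raw:.2f}")       # 0.46
print(f"{weighted:.2f}")  # 0.50
```

The correction, however, works only if the few respondents within each group resemble that group’s many non-respondents, which is exactly what heavy non-response calls into question.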

Consider a scenario where one lakh voters are approached, but only a thousand or fewer answer. The characteristics of these thousand persons must differ considerably from those of the rest; that is precisely why they responded to the pollsters. Yet one aspires to infer the general population’s voting behaviour from these distinctly different individuals’ potentially deceptive answers!
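Putting hypothetical numbers on that scenario: suppose those who answer favour one candidate by five points more than those who stay silent. With a one per cent response rate, the published figure is driven almost entirely by the self-selected few.

```python
approached, responded = 100_000, 1_000        # one lakh approached, a thousand answer
response_rate = responded / approached        # 0.01

support_among_responders    = 0.52            # what the poll actually observes
support_among_nonresponders = 0.47            # unseen; assumed here for illustration

# True support blends the two groups in proportion to their sizes.
true_support = (response_rate * support_among_responders
                + (1 - response_rate) * support_among_nonresponders)

print(f"poll says {support_among_responders:.2f}, truth is about {true_support:.4f}")
print(f"error: {support_among_responders - true_support:.4f}")  # nearly five points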

In actuality, pollsters have historically underestimated the Conservative support base. In Britain, this has been known as the ‘shy Tory factor’ since John Major’s re-election in 1992; Conservative voters are observed to be less willing to disclose their preference to pollsters than Labour voters.

Similar issues are observed for the People’s Action Party in Singapore and the Republican Party in the US. Furthermore, people are often reluctant to disclose their support for a controversial leader such as Trump to the pollsters, a phenomenon known in the US as the “shy Trump factor”.

David Cameron’s victory in the 2015 UK election also revealed a new category of voter: the “lazy Labour” supporters, who told pollsters that they planned to vote for Labour but did not show up to cast their ballots.

Is there a significant number of lazy Democrats in America? If so, are pollsters aware of the number?

To be fair to the pollsters, Harris had to contend with two obvious obstacles this time: race and gender. Each of these factors affected the vote share in ways that could not be measured and were expected to vary greatly with the social composition of individual US states.

Some voters may tell pollsters that they are undecided or likely to vote for a minority candidate, such as a black person or a person of colour in the US context, but vote against that candidate on election day. The “Bradley effect”, named after Tom Bradley, an African-American who led the polls before the 1982 California gubernatorial contest but lost, was noticeable in several subsequent elections.

Nonetheless, some analysts argue that the “Bradley effect” has been less noticeable in recent elections. Instead, after Barack Obama’s victories over white opponents in 2008 and 2012, they proposed a “reverse Bradley effect”.

Another example is the “Chisholm effect”, named after Shirley Chisholm, the first black woman elected to the US Congress, in 1968, and her disastrously unsuccessful 1972 presidential campaign. The term “Hillary effect” was coined in a patriarchal American society after Obama’s victory over a white woman, Hillary Clinton, in the 2008 Democratic primary. Was there a notable “Hillary effect” or “Chisholm effect” working against Harris in this election? If so, how large was it, and in which states, particularly the swing states? Could the pollsters estimate that, or are they simply unaware of the magnitude of these effects?

After the results, political analysts have been busy explaining Harris’ defeat: she failed to present a viable economic strategy to her voter base, and her proposed policies did not differ significantly from Biden’s. Her steadfast support of Israel, as a crucial member of the Biden administration, may also have partially eroded the Democratic base among Arab-Americans. Amid the intensifying West Asia crisis, could the pollsters estimate how many traditionally Democratic Arab-American voters voted for Trump on November 5? And do they know how many Arab-American voters stayed at home on election day in the swing states? These factors could be decisive in a close race.

An election is a combination of all such factors and many more, and estimating the magnitude of these important but shadowy effects is undoubtedly a daunting task. Gauging human intention on one of its most sensitive subjects, political choice, is a challenging, if not impossible, assignment for the pollsters. As a result, they are doomed to get close races wrong time and again, especially in a society as intricate as America’s.
