Actually, The Polls Were Wrong
December 1st, 2012 - D. Stone
Public opinion polling in a political climate like Canada's is a challenge, and very few pollsters have ever been completely accurate. Polling through the Alberta provincial election proved that measuring public opinion across more than two political parties while aiming for a low margin of error is anything but simple. The recent by-election in Calgary Centre was likewise touted, for weeks before the polls opened, as a three-way race too close to call. It never was. The Green and NDP candidates fell far below what the polls predicted, and the Tory candidate won with over 5% more than the polls predicted, well beyond the margin of error.
In the US, Breitbart's John Nolte was quick to acknowledge that the polls were correct and that everyone who accused them of a Democratic bias was wrong. Gauging public opinion between two political parties is far less of a challenge than gauging it in a more dynamic and diverse political environment like Canada's. US pollsters also spend months before an election cooking their numbers. Predicting a winner from a field of only two candidates leaves very little room for error, and with an average spread of 2-5%, predicting the popular vote within a margin of error becomes child's play.
Looking at the timeline of the US election, we can see how wrong the pollsters were. They were right to oversample Democrats, but their results were still significantly exaggerated and inaccurate, probably to fit whatever narrative dominated the moment. To justify their inaccuracies, pollsters are always quick to cite their margin of error, which often comes to the rescue of a pollster's reputation. When the task is a 50-50 chance of predicting a winner, more emphasis should be placed on how accurately the popular vote is predicted. Comparing polls to actual results reveals significant failures in opinion polling in US elections, and looking at the timeline in retrospect shows how conveniently off the mark the polls were right up until election day.
As expected, as election day grew nearer in the US, pollsters began to sample a more realistic ratio of Democrats to Republicans. Polls from July showed Obama leading by 5-10%, with samples containing 10-13% more Democrats than Republicans and independents. By October, those same pollsters, like Ipsos-Reid and Pew Research, had shrunk the Democratic oversample to 5-7% over Republicans. For months before an election, pollsters can play it safe by distorting their results to show a lead or a dead heat, often to the liking of news networks that feed off nail-biting results and rollercoaster narratives.
By the last week of October, Ipsos-Reid, Pew Research, and various other pollsters had come up with markedly different numbers. Barack Obama's lead over Mitt Romney appeared to vanish as election day neared. The race was tightening, according to the news networks looking forward to a night of good ratings.
From August to November, we were made to believe that the American people were being indecisive as Obama and Romney traded 5% leads in the polls. Narratives emerged whenever a certain topic made news. After Obama endorsed gay marriage, his support grew. When weak job growth and the Benghazi attacks of September 11th made news, his support dropped. When Romney bruised and battered Obama in the first debate, the Governor's support grew to nearly a 5% spread over Obama. And so on. With each piece of news, the media created a perception of who was winning and who was losing on each subject.
The day after election night, Americans awoke to find that Mitt Romney had gained less than one million more votes than John McCain had, and that Obama's support had eroded, explaining his decline in the popular vote from 52% to 50%. Turnout for the 2012 election had actually dropped by nearly 4%; people were simply less interested. Comparing the 2012 race to 2008 reveals no real change in support, and a complete standstill for Republicans.
The lack of influence, and outright failure, of opinion pollsters is even more evident in the 2008 campaign, in which polls showed a dead heat between McCain and Obama. The actual results showed a near landslide, by American standards, in Obama's favour, with a 7% spread between the two candidates. Just days before the election, polls were predicting a 3% spread in favour of Obama while cautioning that the prediction fell within their 3% margin of error, making the election too close to call.
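The margin-of-error figure pollsters lean on is simple arithmetic, not a mystery. As a rough sketch (assuming a simple random sample and the standard 95% confidence formula, not any particular pollster's actual methodology):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from
    a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll samples roughly 1,000 respondents.
print(round(margin_of_error(1000) * 100, 1))  # prints 3.1
```

The widely quoted "plus or minus 3%" simply reflects a sample of about 1,000 people; note that halving the error requires quadrupling the sample, which is why a 3% spread inside a 3% margin tells voters almost nothing.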
It is highly unlikely that such massive shifts in public opinion actually took place when pollsters said they did. Comparing hard numbers from 2008 to 2012 reveals much the same picture; the only difference is a drop in enthusiasm among Democrats and 2008 Obama supporters. As in 2008, Republicans failed to show much effort.
Public opinion polls are often commissioned by news networks, and since most news networks are in the business of ratings, public opinion polls should never be relied on to predict an election.