SMI pollsters

The UK is still coming to terms with the results of last week’s snap election, but the clever people at the SMI saw it all coming, as their election predictions showed.

[Results table]

Dr Aneta Piekut and Dr Alasdair Rae will no doubt be fielding job offers from the official pollsters, having come closest with their predictions – but how did they get it right when all the others didn’t?

Sheffield Q-Step Director Dr Alasdair Rae explains his prediction method:

“In the weeks leading up to the election, I’d done some work with the Data Editor at Google on how people were searching for different political terms and who was getting the most search traffic. This helped me form an early view that things might not be as bad for Labour as some had predicted.

After the Conservative Manifesto launch and subsequent underwhelming performance by Theresa May - in addition to a revived Corbyn campaign - I felt that there was a strong chance that Labour could do a good bit better than many of the polls were predicting. But of course it was only really guesswork, so I decided to risk a prediction and tweet it out a few days before polling day. I went with 318 for the Conservatives, 250 for Labour, 11 for the Liberal Democrats, 49 for the SNP and 22 for the rest.

I got the Conservative figure spot-on and was reasonably close on Labour and the Lib Dems but under-estimated the Labour and Conservative surge in Scotland. The small consolation on the latter, being a Scot, is that nobody else seemed to see it coming. Anyway, this can all be seen as educated guesswork but I'm glad I managed to get close to the right result.”

Whilst Alasdair used social media data and some well-informed guesswork, Dr Aneta Piekut’s method drew on an understanding of human behaviour combined with expert knowledge of survey design – she explains:

"My prediction was based on the recognition that there is a tension between the nature of people and the nature of polling - while human nature seeks certainty, uncertainty lies at the heart of survey research. As with any sample survey, we use a much smaller fraction of the population to say something about an entire group.

Looking at the polls before this and previous elections, the huge variation in their results was apparent (uncertainty). At the same time, we did not have reliable survey data at the constituency level. Hence, my projection was a means of recognising the nature of polling in the case of single-winner voting. So there was no data science involved, just survey methodologist's thinking.

So on the SMI whiteboard I put 315 for the Conservatives and 260 for Labour, with both results being suspiciously close to the real result (a 3- and 2-seat underestimate, respectively). I was not so good at predicting the seats of the Liberal Democrats or the SNP. Yet my critically informed 'bet' was better than most polling results, and even the experimental poll by YouGov. Who would’ve expected that? Great exercise for my 'Survey design' class next year."
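Aneta's point about the uncertainty baked into sample surveys can be made concrete with a back-of-the-envelope calculation. The sketch below is not part of her method, just the standard 95% margin-of-error formula for a simple random sample, which is why a typical poll of around 1,000 respondents carries roughly ±3 percentage points of uncertainty on any single party's vote share:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from n respondents.

    p=0.5 gives the worst case (widest interval); z=1.96 is the
    normal critical value for 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Typical poll sample sizes and their headline margins of error
for n in (500, 1000, 2000):
    print(f"n={n}: +/- {margin_of_error(n):.1%}")
```

Note that this is sampling error alone; as the 2017 polls showed, differential turnout, weighting choices and translating vote shares into seats add further uncertainty on top.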