
ISSN: 2641-1768

Scholarly Journal of Psychology and Behavioral Sciences

Review Article | Open Access

The Methodological Consequences of Subconscious Electoral Psychology: Why Polls do not “Predict” Election Results

Volume 3 - Issue 3

Michael Bruter*

  • Department of Psychology, UK

Received: January 03, 2020;   Published: January 17, 2020

Corresponding author: Michael Bruter, Department of Psychology, UK

DOI: 10.32474/SJPBS.2020.03.000164


Abstract

In the current period of “electoral surprises”, polls are regularly criticised for wrongly predicting electoral outcomes. In this article, I suggest that this stems from a misunderstanding of how polls should be used. Polls should not be taken literally as “predicting” the state of an election. Instead, if we inform both the conception of polls and their interpretation with current electoral psychology research, they can be very useful tools to indirectly understand how an election is shaping up. In terms of poll conception, I highlight issues with question phrasing (polls should only ask questions respondents actually have an answer to), sampling (including the lag between pools of respondents and pools of registered voters) and sampling controls (notably in quota samples, which rely solely on a minimal number of social and demographic factors where psychological ones may be more critical to verify). In terms of process, we must acknowledge that about 20-30% of voters tend to make up or change their mind about their vote in the last week before an election, that casting a vote in a polling station is very different from answering a survey question from one’s home, and that citizens use polls as a source of information shaping their own electoral choice and behaviour through empathic displacement. Finally, in terms of interpretation, we must forgo the deceptive and “lazy” simplicity of taking polls as a predicted vote and look instead for more complex symptoms, such as whether the people who should be expected to support a certain party are indeed rallying behind it, poll volatility, and how the votes of referees and supporters are shaping up, as these will explain different aspects of the final electoral result [1].

Introduction

All those suggestions are direct methodological implications of recent electoral psychology research, and they suggest that whilst polls are potentially extremely powerful tools of prediction, this is only in a complex and indirect way that requires a high level of survey literacy and analysis. When the results of the 2016 Brexit referendum, the same year’s US Presidential Election, the 2017 General Election in the UK, or indeed the 2019 European Parliament elections were revealed, commentators rushed to blame incompetent pollsters for what they saw as surprising and unpredictable outcomes. This stems from a common understanding in much of the popular media that polls can be used as “predictions” of an electoral result, something that many specialists of electoral behaviour keep repeating they are not. Similarly, comments on the abundance of polls published in the run-up to the December 2019 UK General Election have focused on attempts to “translate” them into magnitudes of electoral majority.
In recent months, however, the disconnection between poll predictions and electoral results has become all the more obvious as their level of divergence and volatility in a country like the UK has been totally unprecedented. If we consider the polls that preceded the May 2019 European Parliament elections, they betrayed a range of nearly 15 points between the most optimistic and the most pessimistic when it came to the score of the Brexit Party, nearly 10 points on the scores of Labour or the Lib Dems, and even a factor of 1 to 3 on the suggested score of Change UK. In the run-up to the December 2019 election, divergence is just as profound, with some predicting a Conservative landslide with a lead of 15 points over the second largest party whilst others put the two within a few points of one another. On any given day, different polls may even conclude at the very same time that the gap between the Conservative and Labour parties is widening and that it is narrowing, suggesting that polling discrepancies do not only affect the level of support for the parties but even the dynamics of their respective strengths [2]. Inevitably, when the election finally occurs, some polls will be deemed “right” and some “wrong”. There will be the usual assortment of self-congratulation and outcry, quite possibly outlandish calls for polls to be forbidden or demands for another public enquiry about them. All this because reading polls is actually a lot more complicated than it looks. Indeed, in this article, I suggest that the common use of polls as electoral predictions is incompatible with the state of political science knowledge of the psychology of voters, both analytically and methodologically.
In this day and age, there is still a lot of misunderstanding about how to read polls, and a tendency to blame polls for what is really our own collective illiteracy in understanding what they can and cannot tell us, and our collective misconceptions about the psychology of voters. Crucial findings on the consequences of survey question design (e.g. Krosnick [3]) or confirmation bias in its various forms (Klayman [4]) are altogether ignored. Here are seven points that those interested in polling and in elections might wish to think about when considering what they have read and what they can expect.
In this article, we question how insights from electoral psychology shed light on the ways in which we should and should not ask opinion poll questions, and can and cannot interpret their answers. To put it differently, we use electoral psychology insights to highlight a number of ways in which opinion polls, as typically constructed and interpreted, tend to go wrong. In practice, we will assess a number of key reasons, grounded in electoral psychology research, why voting intentions and other typical opinion poll measures cannot be taken as predictions of election results, and then explain how those polls can be designed and interpreted instead.

Over 90% of our political thinking and behaviour is actually subconscious

The predominantly subconscious nature of human behaviour has been well studied by scholars such as Lakoff and Johnson [5]. Specialists of human behaviour are well aware of it, but it is sometimes hard to grasp the full implications of this reality and its impact on the relationship between what a respondent is asked in a poll or survey and what they can answer, and between what people believe they are signalling and what they are actually revealing. The arch-predominance of the subconscious means that when a question asks someone what they think or what they will do, even if they are completely honest, they are being asked for information that they cannot provide because ultimately, at the conscious level, they do not really know. So whilst a person’s answer to the question “who will you vote for on Thursday?” may actually be very useful, very telling, and very consequential, it is not so in a direct manner, as an accurate description of who they will be voting for on that day. What is more, the consequences of the subconscious nature of political behaviour can be harder to evaluate as times change.

For instance, many electoral analysis models are still based on the underlying idea that voters identify with parties (Campbell, Converse et al. [6]), whilst partisan identification levels have been consistently declining in many countries for several decades already (Franklin, Mackie et al. [7]). The problem is that whilst everyone knows that partisan identification has declined, “old” models of identification still implicitly underlie much opinion poll analysis, notably in the form of partisan choice being perceived as the natural metric of consistency in electoral behaviour. In other words, voters would be consistent when they repeatedly vote for the same party and inconsistent when they do not. Electoral unfaithfulness or populism are consequently interpreted as incoherent, anomalous, or indeed protest behaviour.

Yet, there may be many other sources of consistency in electoral behaviour which are not based on constant party choices. One may choose to vote for Labour when the Tories are in power for exactly the same reason that they chose to vote Tory when Labour was in power. This can be perfectly coherent and predictable, just not on the basis of a simplistic partisan framework, but instead based on our understanding of how different voters perceive the function of elections or their own role as voters - what Bruter and Harrison [9] call their electoral identity (the way citizens implicitly consider their role as voters). Switching the basis on which one evaluates electoral coherence from partisan identity to electoral identity entirely modifies the way in which one reads opinion poll results: no longer looking for partisan shares of the vote, but for both static and dynamic patterns that reveal the ways in which citizens are intending to vote in a given election - for example, whether it is based on ideological support, referee-type arbitration of parties’ proposals on given issues [10], political anger or frustration (Harrison [11]), etc.

None of those bases of evaluation are conscious, but they showed, for example, that pre-electoral intentions in the Brexit referendum, the US 2016 Presidential election, or the UK 2017 General Election (or for that matter a forthcoming potential UK snap election as of late 2019) were not based on adhesion, thereby making partisan declarations of poll respondents fragile and not altogether reflective of their future vote. Another way of saying this is that those who “should” have been enthusiastically supporting Ms Clinton in 2016 or Ms May in 2017 were not doing so [9].

If you ask people a question that they do not have an answer to, all that is measured is noise (or worse)

A natural consequence of the subconscious “iceberg” of political behaviour lies in its implications for question phrasing. Ultimately, a major finding that comes from general survey response theory [13] but is confirmed by electoral psychology research [9] is that many of the questions typically asked of citizens are based on what researchers, journalists, or politicians want an answer to rather than what citizens are able to tell. This was fairly obvious in the context of the polling which preceded a potential new Brexit referendum and snap General Election in the UK as of late 2019. In effect, voters were asked how they would vote in such a referendum or such a General Election “if” (the UK leaves with or without a deal, or does not leave at all at the end of October 2019, etc).
It is of course fully understandable that journalists and parties would love to know the answer to that question, but survey response theory makes it fairly obvious that it is not by asking those questions in the terms of what we wish to know that we can find out the answer. Indeed, asking questions in the terms of the researchers’ own analytical questioning is not only lazy but also counter-productive, because it leads respondents to use those questions as shortcuts both for what they feel they are actually being asked (almost never the question itself) and for what they wish to convey. This is actually true of both quantitative and qualitative research, as shown by Bruter [14], who found that even an apparently straightforward declarative identity question such as “where do you come from?” does not actually measure a true identity but is rather shaped by what the respondent or interviewee believes their interlocutor is actually aiming to find out. Thus, when asked where she comes from, an academic at the University of Nice who in fact originates from Leeds in the UK would likely answer that she is from the UK if meeting locals in a foreign country that she is visiting; from Leeds if instead she is meeting fellow Brits (or even from a specific part of Leeds if they are from Leeds themselves); quite likely from Nice or from France if she is talking with colleagues at an academic conference; and quite possibly from Europe if visiting a remote part of the world where most visitors are in fact American or Australian. This is not a sign of identity schizophrenia but a symptom of respondents always contextualising any question so as to try and understand what their interlocutor is “really” trying to figure out about them.

Transposed to the question of hypothetical voting situations, someone asked how they would vote in a General Election should the UK fail to leave the EU by 31st October 2019 is really more likely to interpret that question as asking whether they will feel that the Government has failed (or made voters angry) should it indeed fail to deliver on its promise of closing the Brexit episode by that date. As people cannot answer two different questions with a single answer, it also follows that their answer does not tell us how they will vote if the Government fails to leave the EU on 31st October. In other, technical words, if one asks citizens a question that they do not actually genuinely have an answer to, at best one will measure noise; but in fact, even more likely, the measure will be biased, because the way the question is used as a shortcut for something else is likely to be systematic and therefore to produce systematic error, i.e. bias.
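One way to make the distinction between noise and bias precise is the standard decomposition of a survey estimator’s mean squared error (a textbook statistics result, not from the original article): random noise sits in the variance term and shrinks as the sample grows, whereas a systematic reinterpretation of the question sits in the bias term, which no increase in sample size will average away.

```latex
\underbrace{\mathbb{E}\big[(\hat{\theta}-\theta)^{2}\big]}_{\text{mean squared error}}
  \;=\;
\underbrace{\operatorname{Var}(\hat{\theta})}_{\text{noise: shrinks as } n \text{ grows}}
  \;+\;
\underbrace{\bigl(\mathbb{E}[\hat{\theta}]-\theta\bigr)^{2}}_{\text{bias: unaffected by sample size}}
```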

(Most) people vote on Election Day for a reason

Another finding of Bruter and Harrison [9] is that when it comes to elections, people do change their minds or make them up late. In most elections, 20-30% of people make up or change their minds (on whether to vote or not, and for whom), about half of them (10-15%) on Election Day itself. In fact, in a low salience Irish referendum that took place in 2012, we found from the Referendum Commission’s official polling data that the proportion of people making up or changing their minds in the last week was close to 80% (Bruter and Harrison [15]), whilst research on the Brexit referendum confirmed those proportions even in a very high salience referendum [13]. Arguably, that example is all the more striking in that, by contrast, very small proportions of voters converged on referendum positions after the vote, while they had still done so beforehand. This critical importance of the last week - and even more so of Election Day - raises significant issues about how opinion polls are typically used as “mocks” of forthcoming electoral results.

This last week and final day volatility has many explanations, not least that electoral campaigns and atmosphere typically pick up pretty radically in the last week. Similarly, it is not the same thing to answer a question about one’s future vote on a phone, at the dinner table, or on one’s computer, as to stand in the polling booth with a ballot in one’s hand and a responsibility on one’s shoulders. From that point of view, the sum of recent research in electoral psychology on citizens’ experience and behaviour when they vote using temporal remote voting and on Election Day is striking, as are findings that people are far more sociotropic when they vote at a polling station than when they vote using geographical remote voting from home [9]. This is similarly emphasised by the impact of polling station location on the way citizens vote [1].
Usually, a lot of those late deciders and switchers will cancel each other out and so may not be visible at the aggregate level (the “result”), but sometimes that is not the case: those late deciders and mind-changers can tend to go in a given direction, in which case individual level changes add up to an aggregate level twist (e.g. the second round of the 2017 French Presidential election). This adds an additional puzzle for those intending to interpret pre-election polls in the run-up to a vote: whether the predictable individual-level volatility of the last week and of Election Day will have neutral effects at the aggregate level or will, instead, modify the aggregate result due to context [16,13].
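To see how the same individual-level volatility can either vanish or flip a result at the aggregate level, consider a toy simulation (all figures are invented for illustration; this is a sketch of the logic, not the authors’ model):

```python
import random

def final_share_a(n=100_000, initial_a=0.52, switch_rate=0.25,
                  switch_to_a=0.52, seed=1):
    """Final vote share for party A after late deciding and switching.
    switch_rate: fraction of voters who reconsider in the last week.
    switch_to_a: among those who reconsider, probability of voting A."""
    rng = random.Random(seed)
    votes_a = 0
    for _ in range(n):
        chooses_a = rng.random() < initial_a      # poll-measured intention
        if rng.random() < switch_rate:            # late decider / mind-changer
            chooses_a = rng.random() < switch_to_a
        votes_a += chooses_a
    return votes_a / n

# Symmetric churn: plenty of individual volatility, stable aggregate (~52%).
print(final_share_a(switch_to_a=0.52))
# Skewed churn: the same 25% volatility flips the race
# (roughly 0.75*52% + 0.25*35% = 47.75%).
print(final_share_a(switch_to_a=0.35))
```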

Pollsters make assumptions about what they are getting wrong

To make matters worse, unlike academic surveys, commercial polls often do not present raw responses, but rather corrected estimations based on a number of criteria effected through weightings. In other words, most pollsters do not simply ask respondents who they will vote for and report the result. If they did, the results would look very different from the polls being published across the media. Instead, pollsters make assumptions about where their raw estimates may be going wrong. The scope and basis of the corrections vary across countries and polling companies. In some cases, pollsters will simply ask respondents how sure they are about their vote - or how sure they are that they will vote at all - and they will make assumptions on that basis about which answers to count or not, how to weigh them, or which camps benefit from the safest support. In other cases, they use “trial and error”, i.e. past experience, to try and correct what they believe is the gap between measured responses and the actual picture. For instance, if for the past three years a survey company has always underestimated the country’s vote for the extreme right, it will assume that its current measure similarly under-counts those voters and will “correct” the responses it gets by simply using ad hoc weightings (i.e. multiplying the declared vote for the extreme right by a small, or sometimes not-so-small, factor) to try and reach a more realistic figure.
All of those assumptions stem from an important observation: uncorrected opinion polls describe an electoral picture which does not typically match the final vote that citizens will return. The “assumption underlying that assumption”, however, is that opinion polls would indeed be predictions of an electoral result and that any mismatch between the poll and the result is therefore error, a perspective which this article thoroughly questions. Moreover, the fact that those assumptions are made - let alone what they are and how corrections are applied - is rarely obvious to occasional readers. Many countries have no restrictive legislation on how polling companies must report their polling results and methodologies, and where such rules exist, weightings are, by nature, accepted mechanical instruments of survey analysis. Even though they are questioned by many political psychologists (in that they mean that some respondents are counted “more” than others, which can affect explanatory models), weightings are considered indispensable by those who rely on polls as description mechanisms, as they are necessary to avoid social or demographic distortion due to some groups or types of respondents being more represented than others. Similarly, restricting the analysis of an opinion poll to some types of respondents - such as those who say that they are sure that they will vote - is also commonly accepted even where specific poll reporting legislation is in place, because it is obviously true that some people who answer surveys will not vote.
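As a concrete illustration of the kind of ad hoc correction described above, the sketch below inflates a historically under-counted party’s raw share and renormalises. All parties and factors are invented; real house corrections are more elaborate and usually undisclosed.

```python
def apply_adhoc_correction(raw_shares, corrections):
    """Multiply each party's raw declared share by an ad hoc factor
    (learned from past polling error), then renormalise to sum to 1."""
    adjusted = {p: s * corrections.get(p, 1.0) for p, s in raw_shares.items()}
    total = sum(adjusted.values())
    return {p: round(s / total, 3) for p, s in adjusted.items()}

# Invented raw responses; the factor 1.3 stands in for a house correction
# for a party the pollster has historically under-counted.
raw = {"Party A": 0.38, "Party B": 0.36, "Far Right": 0.08, "Other": 0.18}
print(apply_adhoc_correction(raw, {"Far Right": 1.3}))
```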

All those assumptions and corrections, however, have major consequences for how we read poll results. For instance, as mentioned, polling companies will make assumptions about turnout. What is the point of asking someone who they will vote for, and taking that into account, if the person stays home watching television instead? So pollsters try to make assumptions about which of their respondents will actually vote, based on their own answers on their likelihood to vote, their past electoral behaviour, or their social and demographic profile, and on what polling companies believe to be the likely profile of those who will and will not vote on Election Day. When expecting a high turnout of, say, 80%, those assumptions are not very difficult to make, but the lower the expected turnout, the more fragile the guesswork becomes, making both the level and nature of turnout even harder to predict.
Survey companies will often use questions such as “how sure are you to vote?” as predictors of turnout, but as seen above, most electoral behaviour is in fact subconscious, so those questions are not actually very efficient. Many of the differences found across polls before elections may thus be partly due to different pollsters making different assumptions about which respondent categories will and will not be likely to actually vote - something many got wrong in 2017 about young people. Using questions about how sure people are of who they will vote for is equally ineffective as a predictor of who will change their minds. So there again, self-reporting based attempts at assessing which voting intentions will and will not hold, based on the claimed certainty of one’s vote, are also bound to carry - and in some cases amplify - error where polls are used as predictors of an election result.

The mismatch between pools of respondents and pools of voters

In many cases, polling companies will thus instead assess likelihood to vote (or, post election, analyse who voted or not) based on social and demographic predictors. However, in the context of the EU membership referendum and of the 2017 elections, Bruter and Harrison (2017b) have shown that most polling companies, and even some election studies, do not base those calculations on actual measures of turnout, because they try to measure who will vote out of total respondents, whilst turnout is a calculation of who votes out of registered voters instead. Of course, if the likelihood of being electorally registered were randomly distributed, this confusion would have limited consequences, but, for instance, the UK Electoral Commission (2019) recently confirmed that over a third of young people in the UK are unregistered or mis-registered, very significantly more than any other age group. This issue then combines with the point raised above about the use of weightings to try and correct “error” in raw predictions. When surveys “know” that they will get a figure wrong, they usually use weightings to return to a “credible” truth; so, as pollsters know that more people will claim to vote than actually will, they tend to overemphasise the few people who admit that they will not participate in an election in order to restore the balance.
However, as discussed above, young people aged 20-25 in particular are much less likely to be correctly registered than any other category of voters. So in a poll, when a 20-25 year old answers that they will not vote, they will automatically be counted as an abstentionist, whilst in practice there is probably a nearly one-in-three chance that they are simply unregistered to vote and therefore not included in official abstention statistics. As respondents tend to rationalise their answers on turnout, and therefore not enough people “confess” to being abstentionists, polling companies use weightings to reassess the pool of voters and non-voters in their sample by effectively over-counting the few people who accept that they are non-voters.
However, if a person is in fact wrongly counted as an abstentionist (because he/she is, in fact, unregistered and therefore not part of turnout calculations), his/her responses will be overrepresented in the pool of abstentionists and lead to an even grosser misrepresentation of who abstentionists are. It is therefore critical, based on electoral psychology findings, to ask people whether they are registered to vote and to use this information before correcting turnout estimations. Doing so, Bruter and Harrison [10] suggested that the participation of young citizens was in fact underestimated in both the 2016 referendum and, to a lesser extent, the 2017 General Election. Conversely, the “real” predictors of abstention were correspondingly understudied, as pollsters were following a partly false track based on the confusion between unregistered and abstaining young citizens in their existing pools of respondents.
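The denominator problem described in this section can be made concrete with a small sketch (invented data; a hypothetical helper, not from the article): treating every respondent who says “I will not vote” as an abstentionist overstates abstention whenever some of them are simply unregistered, because official turnout is computed over registered voters only.

```python
def abstention_rates(respondents):
    """respondents: list of (says_will_vote, is_registered) tuples.
    The naive rate treats every 'will not vote' answer as abstention;
    the corrected rate uses registered voters as the denominator,
    matching how official turnout statistics are computed."""
    naive = sum(not v for v, _ in respondents) / len(respondents)
    registered = [v for v, r in respondents if r]
    corrected = sum(not v for v in registered) / len(registered)
    return naive, corrected

# Invented sample: 70 registered voters, 10 registered abstainers,
# 20 unregistered respondents (e.g. mis-registered young people).
sample = [(True, True)] * 70 + [(False, True)] * 10 + [(False, False)] * 20
print(abstention_rates(sample))  # (0.30, 0.125)
```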

Electoral ergonomics - and how voters react to it - matters

Another obvious point, which was emphasised in both 2015 and 2017 as well as in the 2016 US Presidential election and the 2019 Australian and European Parliament elections, is that design matters. Electorates are not monolithic, and national trends do not count as much as what happens at the level of each constituency. For instance, in the Australian election, Labor performed better than in the previous election in Victoria, the Australian Capital Territory, and Western Australia, but their performance was disastrous in Queensland and Tasmania, effectively losing them the election. Similarly, in the UK, Labour was more affected by the rise of the UKIP vote in the 2015 General Election, notably in some Northern constituencies, and benefited more from its decline in 2017 in the same areas. As a result, most pollsters now try harder to look for sub-national patterns which may lead to different seat distributions than expected, and part of the debates that followed the June 2017 General Election in the UK pertained to whether some pollsters had managed to put together better micro-geographical models than others, leading to more accurate predictions.

With European Parliament elections such as those which took place in May 2019, things are also complicated by the logic of the d’Hondt method, which implies different (and far more complicated) calculations for strategic voters than what one would normally do under a plurality system (“first past the post”). Under plurality, voting strategically is pretty easy: just pick one of the strong candidates, since voting for a small party or candidate will waste your vote. Under PR and the d’Hondt method, things are a lot more complex. Often, voting for a strong list will mean a wasted vote, whilst supporting a smaller list may enable it to pass the threshold that will win it a seat. In the two 2019 Israeli General Elections, those hesitations were at the forefront of many voters’ comments, hesitating between strengthening one of the two main lists (Likud to the right, or Blue and White) so that it would finish top and get a stronger chance at leading a coalition, or some of the smaller lists (such as the right wing lists or Labour) which some expected to struggle to meet the electoral threshold.
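For reference, the d’Hondt method itself is mechanically simple even though its strategic implications are not: each seat goes, in turn, to the list with the highest quotient votes/(seats already won + 1). A minimal sketch with invented vote counts:

```python
def dhondt(votes, seats):
    """Allocate seats by the d'Hondt highest-averages method: each seat
    goes to the list with the highest votes / (seats_won + 1) quotient."""
    won = {party: 0 for party in votes}
    for _ in range(seats):
        best = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[best] += 1
    return won

# Invented figures: a small region electing 5 MEPs. The smallest list's
# 9,000 votes win nothing - the strategic dilemma discussed above.
print(dhondt({"List A": 34_000, "List B": 25_000, "List C": 9_000}, 5))
# -> {'List A': 3, 'List B': 2, 'List C': 0}
```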

Those differences, however, are not directly institutional but ergonomic, in that they pertain not to mechanical system effects but rather to how those systemic constraints interact with voters’ psychology [9]. Thus, when it came to the 2019 European Parliament elections in the UK, the “remain voter” website designed by data scientists came up with suggestions that many could have found paradoxical, suggesting that those who wanted more Remain seats should vote for the Lib Dems in Scotland and Wales (where the SNP and Plaid Cymru would get their seats anyway), the Greens in the North West and the Midlands (where the Lib Dems should have their representatives), and Change UK in London and the South East (where both the Lib Dems and the Greens had, in their view, reached their representation potential). Even then, those recommendations still relied on the polls, and whilst there was some clear evidence of adaptive strategic voting on the part of Remain voters, there was also a sense that ultimately, the Brexit Party benefited more from its own arch-dominance on the Leave side than did the divided Remain parties.

Of course, in the context of a forthcoming snap UK election, such strategic dynamics would be even more significant, especially in the absence of a sense of solidarity between the two large parties on the centre left - the Labour and Liberal Democrat parties - which suggested that they would not form an alliance given their divergences on how to handle the Brexit question and other policy priorities, both nationally and internationally. Typically, based on opinion polls, the media focus more on headline figures in terms of expected percentage of the vote, but seats matter a lot to parties and to the reality of institutions such as the European Parliament, and the translation of votes into seats can vary dramatically depending on who votes how and where, but also on how different elements of electoral organisation interface with citizens’ psychology to produce electoral ergonomics, and how those electoral ergonomics are in turn fuelled by context.

Polls do not just measure voting intentions, they shape them

One of the important concepts in Bruter and Harrison’s [9] model is that of “empathic displacement”. It means that when they vote, many people think of the rest of the country voting at the same time as them (not just of whom they are voting for, though that is of course an important part of the picture). And of course, what bigger indication is there for most citizens of what the rest of the country (or continent) must be doing than what they hear from opinion polls? There is an obvious temporality issue here. For anyone voting on Election Day, polls precede the electoral decision and can thus, by definition, inform it, whilst the opposite (the actual vote informing a poll that occurred before it) is not possible. Because votes do not tend to measure raw preferences - voters instead vote with a certain responsibility on their shoulders and a role that they assume as they cast their ballot (Bruter and Harrison [15]) - this information can be extremely important.
For instance, the authors suggest that part of the explanation for the apparent “surprise” of the 2015 General Election, which saw the Tories win an outright majority after all polls seemed to predict a hung Parliament, precisely follows from those polls predicting a hung Parliament - typically with the Tories the more popular party but Labour more likely to lead a coalition - so that many voters cast their ballot thinking that they were effectively shaping the type of coalition that could lead the country in the context of a hung Parliament they were taking for granted. Conversely, in the recent Australian election, all polls seemed to point to a Labor majority, but all also confirmed that Labor leader Bill Shorten trailed well behind incumbent Liberal Scott Morrison as preferred Prime Minister, and it is equally likely that this expected Labor win was part of what voters reacted to in their actual electoral choice.
In an era of increasingly sophisticated citizens and free-flowing information, the intuitive tendency towards empathic displacement means that citizens are effectively eager to understand what the election represents for other citizens and how they will behave. Polls thus offer an additional opportunity to feel in control of one’s electoral experience and of the power of one’s vote, including (but not only) in terms of the strategic voting considerations evoked earlier in this article. In turn, if polls inform and shape the choice of citizens, then by definition they cannot be a neutral measure of it, and cannot be interpreted as such. Indeed, commentators cannot blame citizens for trying to make the exact same sense of opinion polls that they are trying to make (and offer) themselves.
There is of course no way around this. In an era of transparency, it is actually good that all citizens have equal access to polling information, rather than it being secretly collected by parties or governments as is still the case in some countries. At the same time, however, this creates an endogeneity problem: polls do not just measure something with no ensuing consequence; instead, they become one of the prime sources of information that voters will use to make up their minds.

Polls beyond mimicry: using electoral psychology insights towards a mature conception of opinion polls?

With those problems in mind, some could be excused for thinking that polls are useless or even a threat to democracy. They would, nonetheless, be plainly wrong. First, despite popular accusations, polling is typically a very serious and ethical business. There are, of course, a number of things which we believe could be done differently - for instance, many “representative samples” are merely based on three variables, and that really is not enough to determine their true representativeness. We also sometimes see polls and surveys with question phrasings that can be really problematic (leading or ambiguous). The bottom line is that survey design is a complex job and a complex skill, and not anyone can simply improvise as a survey designer and assume that the result will be “good enough”. We should also be careful about magic solutions. For instance, in answer to waves of criticism, polling companies and journalists alike seem to swear by “margins of error” in poll predictions, but technically, most calculations of margins of error assume randomness of the sample, which is simply not the way commercial pollsters recruit their samples (many use, instead, quota samples or sometimes cluster samples, neither of which are random), so many margins of error actually do not say what people think they do.
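To make the point concrete, the familiar “plus or minus three points” figure comes from a formula that presupposes a simple random sample. A brief sketch (a standard textbook formula, not from the article):

```python
import math

def srs_margin_of_error(p=0.5, n=1000, z=1.96):
    """Textbook 95% margin of error for a proportion: z * sqrt(p(1-p)/n).
    Valid only under simple random sampling; quota and cluster samples
    break that assumption, so the reported figure can understate the
    real uncertainty (statisticians call the inflation a design effect)."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"+/- {srs_margin_of_error():.1%}")  # +/- 3.1% for n = 1,000
```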

An often-heard complaint is that one can make a survey “say anything”. That is actually not true, but designers need to be sufficiently literate in the exercise to know the potential impact of different phrasings, and sufficiently honest to choose those which will give them as accurate a vision as possible. As discussed, many questions frequently asked in commercial polls and reported in the press are most unlikely to measure what their designers and interpreters believe they have measured. More generally, whilst academic surveys are typically far more robust, commercial polls can be a very useful measure of public reaction to given prompts - but almost never in a literal, mimicry-type manner. Instead, a number of preconditions must be borne in mind when designing, reading, and inferring from commercial and political polling.
First, rigour must be invested in asking reasonable questions of quality samples. Specialist scholars and serious survey companies constantly try to improve their models and understand what they may be getting wrong. Members of reputable bodies such as the British Polling Council abide by strict rules and are used to measuring standard objects such as voting intentions or consumption behaviour. For more complex or new concepts, a lot of work always has to be invested in knowing how to measure things appropriately and accurately, and it takes significant amounts of reflection, self-criticism, skill, time and money to get things right. This is also why many survey companies work with scientists to try and be as accurate and effective as possible.
As a prerequisite, every poll item should pertain to questions that are intuitive to respondents, using words likely to make unequivocal sense to them. For example, asking citizens for their opinion on a policy measure which is not salient will always measure something other than what is intended, as do complex or projective questions based on hypothetical scenarios. Academically, such complex questions may sometimes be justified, but typically in the context of latent variables measured by multiple items, with an understanding that all of the responses received will be contaminated by some level of measurement error. By contrast, questions such as “how will you vote if x happens?” will invariably fail to measure how citizens would vote if x happened, because they ask respondents to put themselves in a situation which they cannot accurately project based on conscious anticipation alone.
Second, electoral psychology research suggests that psychological variables offer complementary (and sometimes better) predictive models of political attitudes and behaviour than socio-demographic ones. This is critical in terms of sampling, because many sampling methods (notably quota-based ones) rely on a very limited number - often three or four - of social and demographic criteria which may have little or no relevance to the phenomena a poll or survey is trying to explain, whilst allowing psychologically skewed samples to potentially distort results unchecked. If many current phenomena - including key electoral psychology concepts such as populism, frustration, and hostility - are more dependent on psychological traits than on the social and demographic criteria which used to explain a lot more in the now largely outdated world of fully aligned politics, this can lead to serious error in models and calls for a rethink of how the quality and robustness of samples is to be assessed.
Third, polls are part of what informs citizens, what makes them consider elections as a collective event, and what shapes the atmosphere of elections and citizens’ interest in them. Many models of turnout, such as those of van der Eijk and Franklin [16], show that narrow polls can lead to the perception of a tight electoral race and in turn to an increase in turnout. After all, polls are also some of the information which can be less shaped by party and media influences, if citizens are educated in and careful at reading them.
This leads to the next issue: not how to design decent polls, but how they can be used. Ultimately, polls are complex things to read, and the fact that their value is precisely not as simple as “what people say they prefer is what they will vote for” does not mean that polls do not tell us anything interesting or likely to enable us to understand how an election may turn. As mentioned earlier, it is not unusual for some specialists of electoral behaviour to expect outcomes that seem to contradict the “obvious” message of a poll regarding the likely result of an election, based on those same polls (or on more sophisticated surveys instead), as shown by Bruter [14]. Their suggestion is that, from an electoral psychology point of view, the value of a poll is not in predicting the state of an election, but rather in offering a number of elements that can be used for a more indirect diagnosis. Thus, instead of the raw state of voting intentions, they suggest focusing on whether the people we would normally expect to support a given party are indeed showing support. They believe that we should try to understand whether supporters of the various camps are equally drawn by adhesion to their party or whether some are, instead, merely expressing a desire to select the lesser of several evils.

They suggest that we should separately analyse the voting intentions of “referees” and “supporters”, because it is in many ways the former who will likely make the difference in terms of voting valence, whilst the latter will be good at predicting which sides may potentially suffer from abstention. Finally, as is well known, trends are also important, but not merely in the sense that later polls would be closer to the state of a vote than older ones. Instead, it is a case of dynamics highlighting levels of volatility in the opinion (and therefore the predictability of the vote), helping us to understand when the atmosphere of the election is setting, and showing which aspects of a potentially complex context are likely to have a heavier impact on a vote, given our earlier reference to the 20-30% of voters making up or changing their minds within a week of the vote, including half of them on Election Day. It would not take political scientists 7-10 years of study to become electoral behaviouralists if there were a magic recipe on how to read polls “right”, let alone if anyone could just read what a poll “predicts” and use it as a fair measure of a forthcoming vote. As with a medical diagnosis, cues are typically more subtle and more indirect, but exercised and experienced scholars will still be able to read them correctly given the right questions and the right information.

Conclusion

So, should we get rid of polls? Well, nobody would suggest that we should stop running chemistry experiments just because not every one of us can immediately interpret their results or self-teach how to be an effective chemist on the internet. Social sciences - including survey science - are just that: a complex body of knowledge whose design, analysis, and interpretation requires complex learning, knowledge, and experience. It also requires sufficient introspection to know the limits and conditions of the exercise. With that in mind, polls are tremendous tools and contribute very usefully to the democratic thinking of citizens. Voters not mirroring what “voting intentions” looked like in polls does not mean that polls are wrong, just that we are blaming polls for our own collective lack of understanding of what they do and how we should read them, and forgetting that voters are human beings who can and will change their minds till the last minute. This can be because of what they believe others will do based on polls themselves, but also because anyone who has ever been to a polling station knows that this is not a meaningless moment, and that there is a whole range of thoughts, memories, and emotions that come to our mind at the moment we stand in the polling booth, which we had not “seen coming” and which only occur because of the solemnity of the place and of the role (or identity) that we enact as voters.

References

  • Berger J, Meredith M, Wheeler SC (2008) “Contextual Priming: Where People Vote Affects How they Vote.” Proceedings of the National Academy of Sciences 105(6): 8846-8849.
  • Bruter M, Harrison S (2020) Inside the mind of a voter: a new approach to electoral psychology. Princeton: Princeton University Press, USA.
  • Bruter M, Harrison S (2017) ‘Understanding the emotional act of voting’. Nature Human Behaviour 1: 24.
  • Bruter M, Harrison S (2017) ‘Paradoxes of electoral behaviour in the 2017 General Election’ LSE Brexit.
  • Bruter M (2005) Citizens of Europe? Basingstoke: Palgrave.
  • Campbell A, Converse P, Miller W, Stokes D (1960) The American Voter. Ann Arbor: University of Michigan Press, USA.
  • Fink A (2002) How to ask survey questions. 1: 2-20.
  • Franklin MN, Mackie T, Valen H (1992) Electoral Change: Responses to Evolving Social and Attitudinal Structures in Western Nations. Cambridge: Cambridge University Press.
  • Harrison S (2018) Young Voters in the General Election 2017 Parliamentary Affairs 71(1): 255-266.
  • Klayman J (1995) Varieties of confirmation bias. In Psychology of learning and motivation Academic Press 32: 385-418.
  • Krosnick JA (1999) Survey research. Annual review of psychology 50(1): 537-567.
  • Lakoff G, Johnson M (2008) Metaphors we live by. Chicago: University of Chicago Press.
  • Lord Ashcroft Polls (2016) How the United Kingdom voted and Why.
  • UK Electoral Commission (2019) Study on incorrectly registered voters.
  • Van Der Brug W, Van Der Eijk C, Franklin M (2007) The economy and the vote: Economic conditions and elections in fifteen countries. Cambridge University Press, USA.
  • Van Der Eijk C, Franklin M (1996) Choosing Europe. Ann Arbor: University of Michigan Press, USA.