Imagining the pandemic public: conspiracy-minded or healthily sceptical?
by Dr Warren Pearce, iHuman/Department of Sociological Studies
Public scepticism has proved an invaluable bulwark against excessive expert authority during the COVID-19 pandemic. In the current climate, this may appear a surprising or even provocative statement. After all, do we not find ourselves in the midst of the post-truth era, where misinformation and conspiracies related to COVID-19 and other scientifically important issues (such as climate change) run riot on social media platforms, confusing the public and eroding our democracies?
Well, yes, all of those statements have some truth in them: politicians often have a ‘flexible’ relationship with facts, conspiracy theories remain in circulation regarding the origins of COVID-19 and the motivations behind vaccine programmes, and the vaunted ideal of the public having a ‘shared set of facts’ seems further away than ever. Post-truth concerns that emerged during the Brexit referendum have reached dizzy new heights in the last 12 months, for example that “[t]he pandemic has generated more […] misinformation than the assassination of JFK, the moon landings and 9/11 put together”. However, some of the misinformation and conspiracy research set in train by these developments has its own problems, both in terms of its quality and its implications for democracy.
In this post, I argue that academic research imagines the pandemic public as conspiracy-minded, failing to account for the instability and secrecy that have surrounded scientific knowledge during the pandemic, with potentially damaging political consequences for public trust in expertise.
Conspiracy thinking or healthy scepticism?
Academic researchers have been quick to respond to what the World Health Organisation has dubbed an ‘infodemic’. A remarkable 724 articles on social media misinformation were published in 2020 alone (up from 306 in 2019), helping to drive a torrent of media articles criticising social media companies. With funding awarded to multiple research projects about online Covid misinformation, we can expect this research agenda to go from strength to strength in 2021 and beyond.
However, while there is little love lost for social media companies, there is reason to be cautious about the potential impacts of misinformation research on what Noortje Marres calls the ‘knowledge democracy’, where knowledge and inquiry remain central to public life but have also been transformed by the arrival of digital societies. If research focuses too narrowly on misinformation, and is too expansive in its definition of conspiratorial thinking, then it presents a threat to the principle of experts remaining accountable to the public. In short, if the public is imagined to be irrational and conspiracy-minded, then experts are relieved of the need to prove their own trustworthiness.
That is not to say that scientists should be subjected to regular questioning by QAnon devotees. Extreme cases are easy to judge. The problem comes in the fuzzy middle, where the demarcation problem rears its head: how to differentiate between science and non-science, between legitimate challenge and ‘bad faith’ attacks. This problem, and its importance for conspiracy theory research, is neatly encapsulated in Freeman et al.’s paper “Coronavirus conspiracy beliefs, mistrust, and compliance with government guidelines in England”, published in Psychological Medicine in May 2020. Thus far it is the most-cited academic article on Covid-19 conspiracies and misinformation, and it has had significant public impact, with 141 media mentions and two citations in policy documents according to Altmetric.
In presenting evidence for a “public health information crisis” the authors argue for a distinction between conspiracies and ‘healthy scepticism’, one that I would agree needs to be made. Unfortunately, the article’s methods do the opposite, calculating “coronavirus conspiracy scores” from survey data on both obscure theories (e.g. “Coronavirus is a plot by globalists to destroy religion”) and more modest standpoints (e.g. “I don’t trust the information about the virus from scientific experts”). Not surprisingly, the levels of disagreement are much higher for the former (78% for the religion theory) than the latter (44% on distrusting information from experts). The risks of this conflation are twofold.
First, including expressions of healthy scepticism in a definition of conspiracy theorising makes the latter appear more prevalent and generates unreliable evidence for the supposed information crisis. Second, there is the clear implication that members of the public who merely ‘distrust’ experts should be put in the same basket as the most extreme fringe theories. At this point, you may be thinking that distrusting information from experts is irrational and can legitimately be grouped together with fringe conspiracists. To see the problem with this view, one only has to look at the uneven history of Covid-19 science over the last 12 months.
The instability of science
As has been widely documented, there were very good reasons to be sceptical about the public information being provided by UK government scientists in the early days of the pandemic. In particular, statements about the UK’s increase in cases being four weeks behind Italy were subject to immediate public scrutiny and challenge. As modeller James Annan noted on April 20th 2020 (two weeks before Freeman et al.’s fieldwork began):
“It’s fair to say the “4 weeks” comment was met with a bit of scepticism by the general public, eg here. And here. When the Govt’s Chief Scientist is being openly mocked for his comments, it seems to me that something is seriously wrong. For context, on the 12th March we’d had about 500 cases and 8 deaths. 15 days earlier on the 26 Feb, Italy had had very similar numbers – in fact slightly fewer cases and more deaths. In both countries, the numbers of cases and deaths were doubling roughly every 3 days, meaning we would get to Italy’s then current values of 20,000 cases and 144 deaths in about a fortnight or so (5 doublings = 32x). 4 weeks was obviously risible.”
As I discussed last year in “Trouble in the Trough”, statements about the growth rate in cases were a key reason provided by government experts for not rapidly following Italy into stringent lockdown measures, a decision that scientists have since admitted “cost a lot of lives”.
Case growth rates were not the only example of controversy surrounding government experts. Here I briefly mention two more. First, in their mapping of Long Covid’s emergence, Felicity Callard and Elisa Perego contrast science advisers’ “[e]mollient descriptions of mild illness” with the “often overwhelming experiences” of patients suffering a range of debilitating and unpredictable symptoms for months on end. A year on, and following sustained pressure and organising from Long Covid patients, the condition is now recognised by experts and the subject of widespread public concern. Second, Emily So and Hannah Baker show how, in the space of two months, UK advice went from dismissing face coverings as something ‘wired into’ southeast Asian cultures to making them mandatory on public transport. Here, public pressure again had a role to play, alongside those experts arguing that government advisors should adopt a more precautionary interpretation of the evidence for face mask effectiveness.
I raise these examples not to criticise government advisers for changing their mind based on the available evidence, or to downplay the difficulty of providing scientific advice in a pandemic. Rather, these controversies illustrate the instability of scientific knowledge, and how the government has found itself under public pressure to introduce more stringent virus suppression measures.
Science in secret
To make matters worse, these controversies took place against a backdrop of scientific secrecy. As Sheila Jasanoff reminds us, “experts should be seen as authorized to act only on behalf of their public constituencies and only within parameters that are continually open to review”. Yet during the crucial months of March and April, when the fate of the nation was so dependent on how scientific evidence was interpreted and acted upon, minutes and reports from SAGE and other advisory committees were published either with considerable delay or not at all. Even the memberships of advisory committees (with the notable exception of NERVTAG) were kept secret until a leak to The Guardian in late April.
This lack of transparency provides crucial context for understanding public attitudes to experts during the pandemic. Believing that important information was being kept secret was an understandable conclusion to reach given the situation at the time. Yet, rather than acknowledging such views as being within the bounds of reasonable discourse, Freeman et al.’s conspiracy research treats a belief that important events are kept secret from the public as a measure of “conspiracy mentality” and “excessive mistrust”. A failure to acknowledge these twin prevailing conditions of unstable knowledge and scientific secrecy risks inflating the idea that a conspiratorial mindset is at large within the public.
What to do with a conspiratorial public?
This all matters because the imagination of a conspiratorial public in academic research, media articles and policy documents leads to some worrying conclusions. For example, prominent sociologist and government scientific adviser Professor Melinda Mills recently argued in the British Medical Journal that spreading Covid-19 misinformation should be criminalised.
While Professor Mills acknowledged the challenge of defining misinformation, and that legislation would have a chilling effect on public criticism of the government, she argued that it should still be considered due to the potential impact of misinformation on vaccine take-up. The lesson taken here from COVID-19 is that the public is vulnerable to misinformation and unable to reflect on the content and motivations of information from experts and non-experts. This is a disappointingly one-dimensional approach to the relationship between experts and publics, one all too familiar to scholars from the early 1990s to the present day. In fact, the pandemic has shown scientific knowledge to be unstable, with expertise in key areas subject to consistent and often persuasive public challenge.
Experts may be sorely tempted to restrict what my iHuman colleague Stevienna de Saille calls unruly publics and their ability to challenge the received wisdom. Such a move would be in tune with the authoritarian restrictions on public protest we are seeing elsewhere in the UK, but we should think long and hard about how and why academic research is presenting the public as conspiracy-minded, and the consequences for democracy.
A more productive response to the challenges facing knowledge democracies would be to learn lessons from previous public knowledge controversies, and pursue a more cosmopolitan approach to science that is open to uncertainty and embraces epistemic diversity and public reasoning. The presentation and use of expert knowledge is an important concern, particularly when new media technologies have transformed the democratic dynamics around expertise. However, those doing research on these issues should resist the lazy option of imagining the public as irrational and unreflective, and instead reflect themselves on what their own research, and the potential political consequences, will do to public trust in expertise.
If you have any comments on this piece, please get in touch via email or on Twitter @WarrenPearce.
Thanks to Tom Stafford for comments on this post. Responsibility for the views expressed, and for any shortcomings in the argument, is my own.