
Health-related mis- and disinformation has increased dramatically since the pandemic, coupled with a broader decline in trust in institutions. In parts of Europe, trust in medical professionals fell by 30% during the COVID-19 pandemic, and we are now seeing significant real-world impacts on public health. One example is the fall in measles, mumps, and rubella (MMR) vaccination coverage among children up to five years of age, down to 83.9% from 88.6% a decade ago and well below the WHO recommendation of 95%. Other examples come from around the world: in the UK, mis- and disinformation circulated about birth control pills and statins; in Uganda, some church groups made false claims about the malaria vaccine; and in Samoa, antivax campaigners seized on two fatal vaccine administration errors to spread conspiracy theories about the vaccine, contributing to a large outbreak and more than 80 deaths.
Health-related mis- and disinformation also poses a significant risk to the pharmaceutical industry at both the product and organisational levels. False claims about a company’s ethics, values, or behaviour can erode trust in the organisation as a whole and reduce uptake across all product lines.
Such trends can only worsen with rapid developments in AI capabilities. Generative AI is already being used to create pervasive and highly effective mis- and disinformation through audio and video deepfakes, as well as text content. A recent study showed that AI enabled an individual to create more than 100 highly realistic false blog articles in only an hour. AI has also made dissemination and amplification easier and more effective than ever before. AI-created bots can evade traditional detection mechanisms on social media platforms and can be used to distribute disinformation widely, as well as to engage through provocative comments or to game platform algorithms so that content is artificially boosted.
Pharmaceutical companies and healthcare professionals (HCPs) may be medical and scientific experts, but this does not make them experts in communications or in countering disinformation. Optimising counter-disinformation campaigns requires a deep understanding of influence operations (information campaigns designed to sway the public), psychological manipulation techniques, cognitive fallacies, the target audience, and the networks used to disseminate and amplify mis- and disinformation. Since industry may lack the resources or expertise to identify mis- and disinformation and understand how best to counter it, a closer working relationship with the counter-disinformation sector can help it adopt best practices and learn from counter-disinformation campaigns in other industries.

On the identification side, early detection of mis- and disinformation strengthens the efficacy of counter-efforts, as does triage: knowing what to ignore and what to respond to. Computational network and sentiment analysis (which uses natural language processing to classify text by the mood or mentality it expresses) can gauge a narrative's virality, efficacy, and spread, highlighting which mis- and disinformation narratives pose the greatest risk and warrant a response. We can also identify those at higher risk of falling for mis- and disinformation by understanding the information environments in which they operate, for example, social media networks and forums, and whom they trust to deliver a message.
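To make the triage step concrete, the sketch below shows one minimal way sentiment scoring might be combined with a simple engagement signal to flag narratives for review, using Python and the NLTK VADER sentiment analyser. The post data, thresholds, and the flag_for_review helper are illustrative assumptions rather than a prescribed monitoring pipeline; in practice such scoring would sit alongside network analysis of how a narrative spreads.

```python
# Illustrative triage sketch: score collected posts with VADER sentiment
# analysis and surface those combining strongly negative sentiment with
# high engagement. Fields and thresholds are assumptions for demonstration.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download

# Hypothetical posts gathered by an upstream social-media monitoring step
posts = [
    {"text": "They are hiding the real side effects of this vaccine!", "shares": 1200},
    {"text": "Got my second dose today, mild sore arm, feeling fine.", "shares": 15},
    {"text": "Statins are poison pushed to keep you sick.", "shares": 800},
]

analyser = SentimentIntensityAnalyzer()

def flag_for_review(post, sentiment_cutoff=-0.4, share_cutoff=500):
    """Flag posts that are strongly negative AND widely shared (illustrative rule)."""
    compound = analyser.polarity_scores(post["text"])["compound"]
    return compound <= sentiment_cutoff and post["shares"] >= share_cutoff

for post in posts:
    if flag_for_review(post):
        print(f"Review: {post['text']} (shares={post['shares']})")
```

The design point is the combination of signals: a hostile but obscure post may be safe to ignore, while the same claim spreading rapidly through a trusted network is the one that warrants a response.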
There is also a range of interventions we can use to counter mis- and disinformation. Approaches such as technique-based inoculation (exposing individuals to commonly used manipulation techniques in a weakened form, similar to a vaccine, so they can identify and resist them later) and the use of question-and-answer formats in corrective messaging, rather than the provision of facts alone, can empower the pharmaceutical industry and HCPs to counter disinformation more effectively. Evidence-based responses that incorporate an understanding of the psychology of the target audience, their pre-existing beliefs and values, and their trusted networks and channels increase the likelihood that counter-campaigns dispel mis- and disinformation. For example, academic studies consistently show that pre-bunking outperforms debunking: pre-empting false claims or the psychological manipulation techniques likely to be used, and educating people about them, not only improves health literacy but reduces the likelihood that people will believe mis- and disinformation. This approach was used successfully during the COVID-19 pandemic in the face of vaccine hesitancy.
Equip yourself with the tools to counter mis- and disinformation
The essentials of pharmacovigilance communications, a self-paced course from UMC, teaches you how to communicate pharmacovigilance to the public and how to lead successful pharmacovigilance campaigns, such as those addressing mis- and disinformation. Find the course here.
Connecting the pharmaceutical and healthcare industries with counter-disinformation experts can help ensure an effective response to mis- and disinformation campaigns, with responses rooted in lessons from the literature. Counter-disinformation experts can also guide the design of smart monitoring programmes to identify mis- and disinformation, and of proactive communications such as pre-bunking that reduce the public's susceptibility to misinformation, limiting its spread and effectiveness.
Although the healthcare industry is competitive, there are collaborative steps it can take to reduce the impact of mis- and disinformation, notwithstanding the boost generative AI gives to its power and spread. Collaborating with counter-disinformation experts to educate HCPs on how best to counter mis- and disinformation, using natural language processing to prioritise which misinformation campaigns to respond to, and focusing on pre-bunking rather than debunking are effective tools at our disposal that we must become acquainted with sooner rather than later. With coordinated effort and shared responsibility, we can turn the tide against health misinformation and ensure that accurate, trusted information reaches those who need it most.