Efforts to achieve herd immunity to COVID-19 in the U.S. have stalled. Currently, only about 68% of U.S. adults have received at least one dose, despite vaccines being widely available. This is due in large part to a “misinformation pandemic” that now threatens public health.
With many people still spending considerable time working from home and online during a time of continuing uncertainty, misinformation can spread like a virus. Social media, in particular, has become a regular source of health information for much of the population. It has helped fuel vaccine hesitancy both in the U.S. and worldwide.
Vaccine hesitancy, defined by the World Health Organization as “the reluctance or refusal to vaccinate despite the availability of vaccines,” is one of the top ten threats to global health. And the U.S. Surgeon General recently issued an advisory [pdf] labeling vaccine misinformation a “serious threat to public health.” This misinformation pandemic presents perhaps the most significant challenge to ending the COVID-19 pandemic.
Vaccine hesitancy is not a new problem. But the anti-vaccine movement—composed of anti-vaccine activists, conspiracy theorists, alternative health practitioners, and wellness entrepreneurs—now capitalizes on the ability to mass-promote misleading, dangerous content online. Its members use internet marketing strategies and have learned how to navigate algorithms and data voids. As a result, the anti-vaccine movement has grown its support by an estimated 8 million followers since 2019. And research suggests that anti-vaccination views may dominate overall opinions on vaccination within ten years.
How algorithms drive vaccine misinformation
Understanding the recent rise of the anti-vaccination movement requires understanding how Big Tech’s algorithms work to fulfill one goal: keeping users engaged. The longer users stay on a site, the more advertisements they are shown and the greater the revenue generated. Users are shown content based on the patterns of their online interactions. When a user responds positively to a specific type of content, the algorithm ensures they see more of it, and more like it.
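To make this feedback loop concrete, here is a minimal sketch in Python, assuming a crude per-user affinity counter stands in for a real recommendation model. The class name, weights, and data are all invented for illustration; no platform’s actual system is this simple.

```python
from collections import defaultdict

# Toy, hypothetical engagement-driven feed ranker. Names, weights,
# and data are invented; real recommendation systems are far more complex.

class ToyFeedRanker:
    def __init__(self):
        # Per-user affinity scores, keyed by content topic.
        self.affinity = defaultdict(lambda: defaultdict(float))

    def record_engagement(self, user, topic, weight=1.0):
        """Each like, share, or long view raises the user's affinity for a topic."""
        self.affinity[user][topic] += weight

    def rank(self, user, candidate_posts):
        """Order candidate posts so topics the user engaged with before come first."""
        return sorted(
            candidate_posts,
            key=lambda post: self.affinity[user][post["topic"]],
            reverse=True,
        )

ranker = ToyFeedRanker()
ranker.record_engagement("alice", "vaccine-skeptic", weight=3.0)  # repeated clicks
ranker.record_engagement("alice", "recipes", weight=1.0)          # one casual like

posts = [
    {"id": 1, "topic": "recipes"},
    {"id": 2, "topic": "vaccine-skeptic"},
    {"id": 3, "topic": "local-news"},
]
# The vaccine-skeptic post now leads Alice's feed, which invites more
# engagement, which raises its affinity further: a feedback loop.
print(ranker.rank("alice", posts))
```

The design point is simply that nothing in the loop measures accuracy; the only signal being optimized is engagement.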
On Facebook—where 70% of U.S. adults are daily users—anti-vaccine organizers purchase ads to promote their content to targeted audiences. With the help of targeting algorithms, those audiences can include new parents and pregnant women. More generally, the ads can reach anyone searching the word “vaccine” on the site.
Instagram, owned by Facebook since 2012, is also used by anti-vaccine organizers to reach potentially receptive audiences. For instance, including #GMO in a group of hashtags might drive their content into the feeds of anti-GMO supporters. And research has shown that users following health authorities and conspiracy accounts are more likely to be pushed towards anti-vaccine content [pdf] than those who exclusively follow health authorities.
Data voids create opportunities for exploitation
Search engines are also vulnerable to algorithmic blind spots known as “data voids,” which promoters of vaccine misinformation exploit. These occur where content for a specific search term is scarce, nonexistent, or difficult to come by. Searches on such terms then return whatever content a niche group has intentionally produced. Anti-vaccine advocates have learned to leverage data voids by creating keyword-specific websites.
They then encourage their followers to seek information using those same keywords. For example, if someone searches “Pfizer vaccine causes nerve damage,” misinformation confirming the query will likely top the search results. Scientifically backed information from the CDC, which may not address that specific claim, remains relatively hidden several results below.
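The mechanics can be sketched with a toy search scorer, assuming naive term-frequency ranking over an invented two-page corpus. Real search engines are far more sophisticated, but the effect of sparse authoritative content is similar.

```python
# Toy illustration of a "data void." The documents, URLs, and naive
# term-frequency scoring are invented assumptions, not any real engine.

corpus = [
    {"url": "cdc.example/vaccine-safety",
     "text": "covid vaccine safety data and common side effects"},
    {"url": "niche.example/nerve-damage-claims",
     "text": ("pfizer vaccine causes nerve damage "
              "pfizer vaccine causes nerve damage")},  # keyword-stuffed page
]

def score(query, doc):
    """Naive relevance: count how often each query word appears in the page."""
    words = doc["text"].split()
    return sum(words.count(term) for term in query.split())

query = "pfizer vaccine causes nerve damage"
for doc in sorted(corpus, key=lambda d: score(query, d), reverse=True):
    print(score(query, doc), doc["url"])

# Because the authoritative page never uses the void's exact phrasing,
# the keyword-stuffed niche page outscores it and tops the results.
```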
Public health professionals and medical organizations should learn to identify data voids so they can strategically counter them with evidence-based content. Proactively finding and filling these voids would also help public health authorities control the spread of medical misinformation during future crises with tailored responses.
The need for de-platforming, regulation, and education
Following a series of measles outbreaks in 2019, several social media platforms pledged to act against the growing anti-vaccine movement. This past February, Facebook doubled down on that pledge. Its updated policy included four specific actions: updating its misinformation policies, improving its COVID-19 information center, providing ad credits to health agencies, and making it harder to find vaccine misinformation. However, these efforts have not been enough. Facebook reportedly continues to recommend anti-vaccine groups and pages to users.
By failing to “de-platform” influential anti-vaccine figures and remove false content, Facebook and other social media sites make it difficult to address this misinformation pandemic. They leave much of the duty of countering misinformation to individual organizations. For example, the Mississippi State Department of Health recently began blocking comments on its own COVID-19 Facebook posts to prevent the spread of misinformation there.
It is perhaps not surprising that these sites have been slow to address the problem. The Center for Countering Digital Hate currently estimates the anti-vaccination base is worth $1 billion in annual revenue to social media platforms. And profiting this way remains relatively safe because Big Tech is not currently legally liable for the effects of misinformation.
Section 230 of the Communications Decency Act is an outdated statute that shields online platforms from lawsuits over user-generated content. While legislative proposals to update Section 230, such as the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act, continue to circulate, the government seems unable to take action. Policymakers should nonetheless view Section 230 reform as a public health strategy for confronting vaccine hesitancy.
The race to stop vaccine misinformation
As we race to control the COVID-19 pandemic through vaccination, we must also fight the spread of online health misinformation. Today’s anti-vaccination movement recycles some of the same anti-vaccine messaging used 100 years ago. A big difference, however, is the amplification provided by modern communication methods. The movement’s highly skilled use of social media, combined with Big Tech’s profit-motivated carelessness, endangers us all. If this misinformation pandemic continues to spread, so may fatal, vaccine-preventable diseases.