
What “death panels” can teach us about health misinformation

In 2009, former Governor Sarah Palin took a provision of the Affordable Care Act (ACA) that allowed doctors to spend time talking with patients about end-of-life care and spun it into the claim that there were “death panels” in Obamacare. It was like pouring gasoline on the already fierce partisan battles over the law. Republican politicians began repeating the myth, and death panels were quickly all over the news. In a thoroughly well-intentioned attempt to play referee for the American people, the news media tried to provide the facts, particularly by debunking the death panel lie. Experts, reporters and professionals on TV news and in newspaper articles brought the truth to light: there were no death panels in the law. Some of the facts may have reached the public, but what people mostly “heard” was a constant refrain: DEATH PANELS. In the end, the lie was amplified, not debunked, and the news agenda was hijacked by the misinformation it sought to correct.

For Palin’s disinformation ploy to succeed, it had to draw on something real; disinformation usually has some sliver of fact or belief behind it that people can latch onto. In this case, it was the far right’s dislike of President Obama and the federal government. And there was a kernel of truth: nearly everyone in health care agreed that doctors should have end-of-life conversations with seniors, conversations meant to respect patients’ wishes, though notably the provision authorizing Medicare payment for them was not included in the final bill.

One way we know the death panel myth persisted is from our polls (we now track belief in it in our surveys on health care misinformation). In 2010, a remarkable 41% of the public said they believed the ACA had death panels. And the lie endured: in 2014, the same share said the law had death panels, and in 2019 the number was still 38%. By 2023, as the law became more popular and Obama faded from the scene, the share who believed the lie dropped to 8%. But the myth still lingered: 70% said they weren’t sure whether the ACA had death panels.

The Palin death panels example is now eerily familiar. Most health misinformation today is initially generated by a small number of actors and, while it may seem to be everywhere, is seen by only a relatively small number of people on social media. An even smaller number actively engage with it by posting about it or sharing it with others. It is only when misinformation is incorporated into policy debates and news media coverage and amplified that it can spread, reach significant numbers of people and have a larger national impact.

The big, if somewhat unique, example of this multiplier effect was the COVID-19 vaccine. Former President Trump, some Republican governors, and conservative media outlets turned the vaccine into a symbol of resistance to the heavy hand of the federal government and turned not getting vaccinated into an affirmation of personal freedom, sharply dividing the country along partisan lines on the COVID vaccine (see “Understanding the US Failure on Coronavirus,” an essay by Drew Altman in The BMJ). The result: in our monthly Vaccine Monitor polls throughout the pandemic, party affiliation was the strongest predictor of nearly every position we asked about on COVID. However, most cases of health misinformation are not fueled by a president and do not capture the attention of the entire nation. Vaccines are also a special case because there has long been a well-organized anti-vaccination movement.

Take the “Meet Baby Olivia” video as another example. Baby Olivia is a video about fetal development posted to Facebook in 2021 by the anti-abortion group Live Action. The American College of Obstetricians and Gynecologists said the video was “designed to manipulate viewers’ emotions rather than convey evidence-based, scientific information about embryonic and fetal development.” At its peak in June 2022, the Baby Olivia video generated 4,700 comments on Facebook and, judging from engagement on similar social media posts, perhaps three or four times as much total engagement in the form of likes, shares or comments. That’s a lot of people at a town hall meeting or campaign rally, but a vanishingly small number both on Facebook on any given day and in terms of potential impact on the public. But then, in a smaller version of the dynamic we saw with death panels, Baby Olivia became a political issue. In North Dakota and then in nine other states, bills were introduced that would have required schools to show Baby Olivia or a similar video to their students. Media coverage of the controversy surrounding the bills exploded, and Baby Olivia moved beyond its initial niche on Facebook to become a much larger phenomenon.

We may be exaggerating the impact of much of the sensationalist, untruthful, and ideologically motivated misinformation on social media, in part because it can be so egregious, when it actually reaches only a small number of already like-minded people who go looking for it. More important is how it sometimes spreads from social media into politics, finding prominent political surrogates and attracting general media attention. It then reaches much larger audiences who may be uncertain about what is true and what is not, and who can be persuaded by it. Media segmentation along partisan lines and the hunt for clicks create a perverse incentive that can amplify misinformation even further: the more egregious and sensational the misinformation, the more attention it and its spreaders are likely to receive. Amplification by politics and the news media then drives renewed attention on social media, creating a pernicious cycle of misinformation.

The best solution is to prevent misinformation and its spreaders from gaining a foothold on social media in the first place. But combating misinformation on social media is largely the job of the social media platform companies, which have been pulling back from self-regulation. In the US, the government does not have the authority to regulate misinformation on the platforms, although a recent Supreme Court ruling allows the government to continue communicating with the platform companies about misinformation, at least for now.

The news media will and should cover newsworthy and timely health policy issues, such as anti-Obamacare rallies or state laws requiring anti-abortion videos in schools. As the death panels episode shows, it can sometimes be difficult to do this without inadvertently promoting misinformation and its spreaders. But the primary focus of reporters and editors is on their issues and stories, not on addressing misinformation. Fact-checking in news organizations is organized as a separate function and product, and its purpose is narrower: it is primarily about holding candidates and elected officials accountable for false numbers and statements. A few journalists have made misinformation a beat or a regular focus, but very few cover it in the health space (several of them are in our newsroom).

Though it did not involve health misinformation, the media’s struggle to walk the fine line between denouncing extreme misinformation on social media and spreading it was on display in the wake of the attempted assassination of former President Trump.

More fundamentally, the news media generally sees itself as being in the reporting business, not as a source of information for the public or a vehicle for addressing public knowledge deficits. How to navigate the minefield of disinformation is one of the issues we hope to address with the journalism community across the country in our new Health Misinformation and Trust initiative.

View all Beyond the Data columns by Drew