When then-President Donald Trump announced his controversial decision to pull out of the Paris Climate Agreement, social media accounts around the country engaged in a passionate debate about the merits of combating global warming. Nearer the end of his term, dual waves of racial and civil justice activism crashed through virtual discussions across Twitter, Facebook, and beyond.
These conversations were not new; modern climate activism predates Twitter by decades, and present-day racial activism predates the internet by well over a century. An analysis from Pew Research found that the #BlackLivesMatter hashtag has been a relatively consistent presence on Twitter for the past five years, with periodic surges in usage around key events. The nature of these discussions has changed, however, and they have ostensibly been democratized in the era of mass social communication. As these long-overdue reckonings played out in this virtual arena, they brought with them a new voice, one often underappreciated by those most involved.
Social media disinformation bots are computer programs that post autonomously on social media platforms, amplifying information or disinformation across hundreds of thousands of digital voices. Through practices such as “astroturfing” and “Twitter bombing,” these bots are deployed to post tens or hundreds of thousands of messages that directly display false information or direct users to websites carrying fake news. Social media bots are responsible for as much as 19% of the content on platforms such as Twitter, and analyses show they play a significant role in influencing decision making. A long-awaited report from the Senate Select Committee on Intelligence, which devoted 85 pages to “Russia’s Use of Social Media,” found that Russian interference, including the use of disinformation bots, influenced even the 2016 U.S. presidential election. In particular, the report emphasized the “high volume, multiple channel” strategy enabled by social media disinformation bots.
An analysis published in the journal Climate Policy assessed 880,000 tweets posted in the two months following the U.S. withdrawal from the Paris Climate Agreement, finding that bots were “not just prevalent, but disproportionately so in topics that were supportive of Trump’s announcement or skeptical of climate science and action.” Although social media bots contribute as much as a quarter of climate-related tweets, many users fail to recognize their prevalence, and no major medical organization has publicly acknowledged the threat they pose.
Another bot-tracking project, Bot Sentinel, detected “an uptick in inauthentic activity by accounts that were already active and also new accounts created” to spread disinformation during the height of #BlackLivesMatter. Oumou Ly, a staff fellow at Harvard University’s Berkman Klein Center, observed that the majority of the disinformation appeared to come from the far right. “The online information environment is very asymmetric,” she told the publication Digital Trends. “The right participates more often and in a more sustained way in spreading disinformation because they have more to gain politically from it. It’s part of their political strategy.”
The spread of information on the internet is an inconsistent phenomenon, with select content “going viral” through social media shares and links. Online disinformation and misinformation may hold a competitive advantage over more accurate, less sensational content. When researchers at the Massachusetts Institute of Technology (MIT) analyzed over 125,000 tweeted stories, they found that misleading or false stories traveled six times faster and reached more users than true ones. “No matter how you slice it, falsity wins out,” said Deb Roy, who runs MIT’s Laboratory for Social Machines and previously served as chief media scientist at Twitter.
In the past decade, APA has increasingly recognized the vital role that civil rights activism, anti-racism, and climate advocacy play in the overall health of patients. Expanding our efforts into these areas opens a broader conception of what it means to be a psychiatrist or to advocate for psychiatric patients. This reconsideration of our role casts a spotlight on issues previously untouched by physicians as advocates, with social media an increasingly frequent target. With these and other psychiatric advocacy efforts negatively impacted by social media disinformation bots, we find a broader risk to advancing the goals of American psychiatry.
Americans, including our patients, increasingly use social media: Pew Research finds that 70% of Americans turn to these platforms to connect with one another, find news, entertain themselves, and exchange information. High-quality research increasingly ties social media use to hyperpolarization, cyberbullying, depressive symptoms, addictive behavior, and more, suggesting that social media bots are just one contributor to a larger risk to mental health.
As American psychiatrists look to take on these and other challenges in an increasingly digital landscape, we must start by publicly acknowledging these new threats. APA must call upon governing bodies and social media platforms to further research on social media bots and to work to limit the damage they can inflict. APA and APAPAC have an opportunity to distinguish themselves as leaders on this issue by supporting legislative policies that constrain astroturfing and other deceptive bot behaviors. As psychiatrists, we may also work to better understand this threat through our own research tying disinformation bots more directly to psychiatric outcomes. Ultimately, protecting our patients from online misinformation and disinformation will require a multifaceted approach. ■
This column was adapted from its earlier publication in the Missouri Psychiatric Physicians Association’s Show Me Psychiatry newspaper and is reprinted with permission.