AI Chatbots Cut Conspiracy Beliefs by 20%, and the Effects Last For Months

SuchScience

In a new study, researchers from MIT and Cornell have found that a single short conversation with an AI chatbot can reduce a person’s belief in conspiracy theories by roughly 20% on a 100-point scale, with the effect persisting for at least two months.

These findings suggest that personalized, interactive debunking can be a powerful tool in the fight against misinformation.

Challenging Deep-Rooted Beliefs with Personalized Dialogues

The study, conducted by Thomas Costello and David Rand of MIT and Gordon Pennycook of Cornell University, involved 2,190 participants across two experiments.

The participants were recruited through CloudResearch’s Connect platform.

The sample included about 48% men and 52% women, aged 18–80, with an average age of 46 in Study 1 and 42 in Study 2.

Participants’ education levels varied, with the most common categories being “Some College” and “Bachelor’s Degree.”

The study also collected data on participants’ political affiliation, religiosity, and other demographic factors to examine their potential influence on the effectiveness of the AI-led debunking.

The sample was quota-matched to the U.S. census demographics for age, gender, race, and ethnicity to ensure representativeness.

The participants all completed the experiment from their own devices, in their chosen environment.

What Do You Believe To Be True?

The participants were asked to describe a conspiracy theory they believed in and to provide the evidence supporting that belief.

These conspiracies spanned a wide range of topics, from COVID-19 being a bioweapon to the moon landings being faked.

Other popular theories included 9/11 being an inside job, the Illuminati secretly controlling world events, and Princess Diana’s death being orchestrated by the royal family.

To quantify the participants’ belief in conspiracy theories, the researchers used a 100-point scale.

The participants rated their belief in their chosen conspiracy from 0 (“Definitely False”) to 100 (“Definitely True”), with 50 indicating uncertainty.

Participants then engaged in a three-round conversation with a chatbot powered by GPT-4, an AI model developed by OpenAI.

The chatbot drew on the information each participant had provided to generate personalized counterarguments and evidence rebutting that participant’s particular conspiracy theory.
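The article doesn’t reproduce the study’s prompts or code, but the setup it describes (a fixed number of exchanges, with the participant’s own theory and evidence fed into the model’s instructions) maps onto a short loop like the sketch below, written against OpenAI’s Python client. The system-prompt wording, the debunking_dialogue helper, and the use of input() to stand in for participant replies are all illustrative assumptions, not the authors’ actual implementation.

```python
# Minimal sketch of a three-round personalized debunking loop.
# Assumptions (not from the article): the system-prompt wording, the
# helper names, and the interactive input() step are illustrative;
# the study's real prompts and pipeline are not shown in this write-up.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def debunking_dialogue(conspiracy: str, evidence: str, rounds: int = 3) -> list[dict]:
    """Run a short back-and-forth in which the model rebuts the
    participant's own stated conspiracy and supporting evidence."""
    messages = [
        {
            "role": "system",
            "content": (
                "You are a factual, respectful assistant. The user believes "
                "the following conspiracy theory and has offered this "
                "evidence. Engage their specific claims with accurate "
                "counterevidence.\n"
                f"Theory: {conspiracy}\nTheir evidence: {evidence}"
            ),
        },
        {"role": "user", "content": "Here is why I believe this. Convince me otherwise."},
    ]
    for _ in range(rounds):
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        messages.append({"role": "assistant", "content": reply.choices[0].message.content})
        # In the study, the participant would type a response at this point;
        # input() stands in for that step in this sketch.
        messages.append({"role": "user", "content": input("Your reply: ")})
    return messages
```

The design choice the article emphasizes is personalization: the participant’s own theory and evidence go into the prompt, so the model’s rebuttals address their specific claims rather than a generic version of the conspiracy.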

Lasting Impact Across Demographics

The results were striking: participants who chatted with the AI showed an average 20% reduction in belief in their chosen conspiracy.

This effect held across age groups, education levels, and political ideologies; demographic factors such as religiosity and ethnicity likewise had no significant influence on the size of the reduction.

In the first experiment, involving 774 participants, pre-chat belief in the given conspiracy averaged 83.8 points.

After the AI conversations, these beliefs dropped by an average of 16.5 points, a 21.4% decrease relative to participants’ individual starting beliefs.

The second experiment, with 1,553 participants, replicated these findings.

Conspiracy belief levels dropped by an average of 12.4 points, or 19.4%.

More than a quarter of participants in the debunking groups became uncertain about their conspiracy belief (dropping below 50 points) after the conversations.

And the impact of the AI conversations extended beyond the targeted conspiracy: participants’ belief in other, unrelated conspiracy theories also decreased, suggesting a shift in overall conspiratorial thinking.

The participants also reported increased intentions to unfollow social media accounts promoting conspiracies, and to challenge conspiracy believers in discussions.

AI as a Tool Against Misinformation

“We find robust evidence that the debunking conversation with the AI reduced belief in conspiracy theories by roughly 20%,” the authors write. “This effect did not decay over 2 months time, was consistently observed across a wide range of different conspiracy theories, and occurred even for participants whose conspiracy beliefs were deeply entrenched and of great importance to their identities.”

The study’s findings challenge the notion that conspiracy believers are unwaveringly resistant to counterevidence.

By engaging believers in interactive, personalized dialogue, AI chatbots were able to deliver compelling, targeted debunking that led to meaningful, lasting belief change.

“These findings profoundly challenge the view that evidence and arguments are of little use once someone has ‘gone down the rabbit hole’,” the authors conclude. “Instead, our findings are more consistent with an alternative theoretical perspective whereby epistemically suspect beliefs — such as conspiracy theories — primarily arise due to a failure to engage in reasoning, reflection, and careful deliberation.”

This study highlights how even people who strongly believe in conspiracy theories can change their minds when presented with sufficiently compelling evidence tailored to their specific beliefs.

And AI chatbots offer “a promising new tool,” the authors write, “for delivering such personalized debunking at scale.”


Originally published at https://suchscience.net on May 5, 2024.
