Can artificial intelligence help solve hard problems? It's a question you hear constantly these days, and the answer is usually no. But a new study published in the journal Science suggests that large language models may actually be a useful tool for changing the minds of conspiracy theorists who believe some truly outlandish things.
If you've ever talked to someone who believes in a ridiculous conspiracy theory (that the Earth is flat, say, or that humans never actually landed on the moon), you know how stubborn they can be. They resist changing their minds, insisting that wildly implausible theories explain real-world events, and over time those beliefs only become more entrenched.
A new paper, titled "Durably reducing conspiracy beliefs through dialogues with AI," tests whether artificial intelligence can engage people who believe in conspiracy theories and persuade them to reconsider their worldview on a specific topic.
The study comprised two experiments with 2,190 Americans, who described in their own words conspiracy theories they sincerely believed. Participants were encouraged to explain the evidence they felt supported their theory, then engaged in a conversation with a chatbot built on the large language model GPT-4 Turbo, which responded to the evidence they provided. In a control condition, participants talked with the chatbot about something unrelated to conspiracy theories.
The study authors turned to artificial intelligence because they hypothesized that part of what makes conspiracy theories hard to combat is their sheer variety: debunking them requires a level of specificity that is difficult to achieve without specialized tools. The authors, recently interviewed by Ars Technica, are encouraged by the results.
"The ability of AI chatbots to sustain tailored rebuttals and personalized in-depth conversations reduced conspiracy beliefs for months, challenging research suggesting that such beliefs do not change," editor Ekeoma Uzogara wrote in a summary of the study.
Dubbed Debunkbot, the chatbot is now publicly available for anyone who wants to try it out. While more research into these kinds of tools is needed, it's encouraging to see them prove useful in combating misinformation. Because, as anyone who has been online lately can tell you, there is a lot of nonsense on the Internet.
From Trump's lies about Haitian immigrants eating cats in Ohio to claims that Kamala Harris wore secret headphones disguised as earrings during the presidential debate, countless new conspiracy theories have surfaced on the Internet this week alone, and there is no sign the pace is slowing anytime soon. If artificial intelligence can help push back against them, that can only be good for the future of humanity.