Meta’s artificial intelligence assistant incorrectly stated that the recent assassination attempt on former President Donald Trump did not happen, an error company executives now attribute to the technology that powers its chatbot and other AI systems.
Meta’s head of global policy, Joel Kaplan, called Meta AI’s responses to questions about the shooting “regrettable” in a company blog post on Tuesday. Meta AI was initially programmed not to answer questions about the assassination attempt, but the company removed that restriction after people started to notice, he said. He also acknowledged that “in a small number of cases, Meta AI continued to provide incorrect answers, including sometimes asserting that the event didn’t happen – an issue we are working to address quickly.”
“These types of responses are referred to as hallucinations, which is an industry-wide issue we see across all generative AI systems, and is an ongoing challenge for how AI handles real-time events going forward,” Kaplan continued. “Like all generative AI systems, models can return inaccurate or inappropriate outputs, and we’ll continue to address these issues and improve these features as they evolve and more people share their feedback.”
Meta wasn’t the only company in hot water: On Tuesday, Google also had to push back on claims that its search autocomplete feature was censoring results about the assassination attempt. “Here we go again, another attempt to RIG THE ELECTION!!!” Trump wrote in a post on Truth Social. “GO AFTER META AND GOOGLE.”
Since the emergence of ChatGPT, the technology industry has been grappling with how to limit generative AI’s propensity to fabricate. Some players, including Meta, have tried to ground their chatbots with high-quality data and real-time search results as a way of compensating for hallucinations. But as this episode shows, it is still hard to overcome what large language models are inherently designed to do: make things up.