Meta apologizes after its AI chatbot said Trump shooting didn’t happen
Meta’s AI assistant incorrectly said that the recent attempted assassination of former President Donald Trump didn’t happen, an error a company executive now attributes to the technology powering its chatbot and other generative AI systems.
In a company blog post published on Tuesday, Joel Kaplan, Meta’s global head of policy, calls its AI’s responses to questions about the shooting “unfortunate.” He says Meta AI was initially programmed not to respond to questions about the attempted assassination, but the company removed that restriction after people started noticing. He also acknowledges that “in a small number of cases, Meta AI continued to provide incorrect answers, including sometimes asserting that the event didn’t happen – which we are quickly working to address.”
“These types of responses are referred to as hallucinations, which is an industry-wide issue we see across all generative AI systems, and is an ongoing challenge for how AI handles real-time events going forward,” continues Kaplan, who runs Meta’s lobbying efforts. “Like all generative AI systems, models can return inaccurate or inappropriate outputs, and we’ll continue to address these issues and improve these features as they evolve and more people share their feedback.”
It’s not just Meta that is caught up here: Google on Tuesday also had to refute claims that its Search autocomplete feature was censoring results about the assassination attempt. “Here we go again, another attempt at RIGGING THE ELECTION!!!” Trump said in a post on Truth Social. “GO AFTER META AND GOOGLE.”
Since ChatGPT burst onto the scene, the tech industry has been grappling with how to limit generative AI’s propensity for falsehoods. Some players, like Meta, have tried to ground their chatbots in quality data and real-time search results to compensate for hallucinations. But as this episode shows, it’s still hard to overcome what large language models are inherently designed to do: make stuff up.
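For readers curious what “grounding” means in practice, here is a minimal, hypothetical sketch of the retrieval-based approach described above: fetch fresh search results and fold them into the model’s prompt so it answers from current sources rather than from stale training data. The function names and example data are illustrative assumptions, not Meta’s actual implementation.

```python
# Hypothetical sketch of retrieval-grounded prompting: prepend retrieved
# snippets to the prompt so the model answers from current sources instead
# of relying on its (possibly outdated) training data.

def build_grounded_prompt(question: str, search_results: list[str]) -> str:
    """Assemble a prompt instructing the model to answer only from the sources."""
    sources = "\n".join(
        f"[{i + 1}] {snippet}" for i, snippet in enumerate(search_results)
    )
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not cover it, say you don't know.\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Example usage with placeholder data; a real system would call a search API
# and pass the assembled prompt to an LLM endpoint.
results = [
    "News wire, July 13, 2024: Former President Donald Trump was injured in "
    "a shooting at a campaign rally in Butler, Pennsylvania.",
]
prompt = build_grounded_prompt("Did the Trump rally shooting happen?", results)
print(prompt)
```

Even so, grounding reduces rather than eliminates hallucinations: the model can still ignore or misread its sources, which is consistent with Kaplan’s point that real-time events remain an ongoing challenge for these systems.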