AI chatbots have become ubiquitous tools for delivering news and information, but a recent international study coordinated by the European Broadcasting Union (EBU) and led by the BBC has shed light on a concerning trend. The study, which encompassed 22 public service broadcasters across 18 countries and 14 languages, found that AI assistants distort news content at an alarming rate, getting it wrong 45% of the time.
Professional journalists meticulously reviewed more than 3,000 AI responses from platforms including ChatGPT, Copilot, Gemini, and Perplexity. The findings were sobering: 31% of responses lacked proper source citations, 20% contained significant factual errors such as outdated information or fabricated details, and 45% had at least one serious error of some kind.
Among the AI assistants analyzed, Google’s Gemini emerged as the weakest performer, with 76% of its responses marred by significant issues, primarily a lack of proper source attribution. EBU Media Director and Deputy Director-General Jean Philip De Tender emphasized the gravity of these findings, warning that systematic flaws in AI-generated news content erode public trust and could lead to widespread disengagement from information altogether.
In response to these findings, the EBU and the BBC have launched the “News Integrity in AI Assistants Toolkit,” which aims to equip AI developers and users with resources to improve response quality and promote media literacy around AI-generated news. The organizations have also called on EU and national authorities to enforce existing regulations on information integrity, digital services, and media pluralism, and have proposed ongoing independent reviews of AI assistants to ensure accountability.
Despite these significant revelations, major players in the AI industry such as OpenAI, Microsoft, Google, and Perplexity AI have remained silent on the study’s outcomes, raising questions about their commitment to addressing the pressing issues of accuracy and reliability in AI-generated news dissemination.
This study serves as a stark reminder of the challenges that accompany the integration of AI technologies into critical functions such as news delivery. While AI chatbots offer unmatched speed and efficiency, the importance of ensuring accuracy, credibility, and transparency in news content cannot be overstated. As we navigate this era of rapid technological advancement, stakeholders must prioritize the ethical and responsible deployment of AI in the dissemination of information, ultimately safeguarding the integrity of news and the public’s trust in the media landscape.