To stop Russian propaganda in chatbots, AI designers must promote reliable news sources
Several recent studies by watchdog organisations have highlighted how chatbots tend to regurgitate disinformation and propaganda, especially from Russian sources. Reporters Without Borders (RSF) calls on governments and the designers of these artificial intelligence systems to adopt radical measures to safeguard the right to reliable news and information.
A disturbing study published in June by NewsGuard, a watchdog that identifies disinformation sources, found that chatbots built on generative artificial intelligence (AI) are wide open to propaganda and disinformation campaigns. The ten leading chatbots audited by NewsGuard, including OpenAI's ChatGPT, all provided false information in their answers. Mis- and disinformation were particularly prevalent in the chatbots' responses to prompts about the US elections, as answers often repeated Russian disinformation narratives that had been planted on fake news sites.
NewsGuard's audit is the latest of many reports sounding the alarm about the risks chatbots pose to reliable online information – and to healthy democracies. The NGO AlgorithmWatch published a study in October highlighting how large language models, the deep learning technology that underpins generative AI, tend to generate false information during elections. These technologies are “immature and dangerous,” the study found. In RSF's view, it is clear that generative AI chatbots indiscriminately recycle disinformation and propaganda picked up online, without giving the public the means to distinguish these falsehoods from reliable information.
“Chatbots can be massive propaganda vectors,” RSF warns. “It is imperative to treat them as high-risk systems under the terms of the European Union's AI Act, and to impose strict technical standards so that they respect the public's right to reliable information. These chatbots and the search engines they use must guarantee the promotion of trustworthy sources of information in accordance with standards for journalistic ethics, such as the Journalism Trust Initiative (JTI). And their designers must give a firm undertaking to support media that respect journalism's fundamental principles – accuracy, impartiality and accountability.”
The spread of propaganda via chatbots raises crucial questions about the reliability of AI-generated information. If a chatbot's training data is riddled with erroneous or misleading information – often the case with generative AI bots, whose training corpora sweep up vast amounts of content from the internet – the bot is highly likely to regurgitate that information in its answers. For this reason, RSF has called on the European Union to classify AI systems that distribute general information as “high risk” under its AI Act. In line with its recommendations on AI and the right to information, RSF calls for a firm and immediate response from authorities and legislators worldwide to compel AI designers to build systems that do not violate the public's right to access reliable information. Frenzied innovation and competition among AI designers must not overshadow the most fundamental problem with this technology: without quality training data, there is no reliable AI.