AI “foundation models” must be regulated to protect the right to information
Reporters Without Borders (RSF) calls on those negotiating the final form of the European Union's proposed Artificial Intelligence Act to regulate “foundation models,” the large machine-learning models that are the cornerstones of the AI industry. Left unregulated, these models will threaten the right to reliable news and information. The EU negotiators are scheduled to hold their next “trilogue” meeting on 6 December 2023.
Foundation models such as GPT-4 and Llama have been designed by leading AI companies such as OpenAI to be reused by developers to build applications dedicated to specific tasks, such as ChatGPT. In the media domain, they are already being used to produce content. But these models, which are trained on vast amounts of data, contain biases and are capable of producing false information. Without regulation, they pose a threat to the integrity of the news and information available to the public.
RSF, which has already called for the AI Act to include measures to protect the right to information, is convinced that these models must be regulated. But some European countries, including France, are advocating self-regulation based on codes of conduct. This will not suffice, because such codes of conduct are non-binding and rely solely on the goodwill of AI companies.
If accepted, self-regulation would threaten the right to information and would obstruct the ethical use of AI in the media as set out in the commitments of the Paris Charter on AI and Journalism.
“Under the Paris Charter on AI and Journalism, AI systems must be fully audited to verify their compatibility with journalistic ethics before they can be used. The AI Act must therefore make it a requirement for foundation models to comply with standards of openness, explainability of operation and transparency of systems as well as with measures to protect the right to information.”
Vincent Berthier
Head of RSF’s Tech Desk
In May, RSF denounced the existence of fully automated fake news sites, some of which had published false news reports on international political matters. Algorithms that do not incorporate the necessary safeguards are dangerous and should not be allowed onto the market.