Reuters Institute: What the 2026 AI and Future of News conference revealed
Published March 18, 2026, this article from the Reuters Institute for the Study of Journalism recaps its second annual “AI and the Future of News” conference, held on March 17. The event drew more than 3,000 participants and ran five panels: how journalists cover AI as a beat, how AI is used in investigative work, the changing nature of fact-checking, AI’s broader societal impact, and the Guardian’s editorial approach.
Covering AI as a beat
Panelists observed that most AI reporting defaults to framing the technology as either alarming or inevitable, language that discourages journalists from applying the scrutiny they bring to other complex subjects. The journalists on the panel recommended collaborating with academic researchers and following up on claims that AI companies announce publicly, tracking whether those claims materialize as described.
AI in investigative reporting
The investigative panel drew a clear line between AI’s capacity to analyze data at scale and the accountability journalists carry for how that analysis is used. Tools that surface patterns in large datasets are practically useful, but reporters who use them remain editorially and legally accountable for how those patterns are interpreted and reported. The panel’s message was that AI assistance in investigative work does not reduce the verification burden and may increase it.
Fact-checking under pressure
One of the most concrete findings from the conference came from a Brazilian fact-checking organization, which reported that claims involving AI-generated content rose from 7% to 16% of its total workload year over year. Fact-checkers are building AI-assisted detection tools to identify false claims at scale, while contending with the added challenge that AI-generated disinformation is often optimized for emotional resonance rather than surface plausibility.
The Guardian’s approach
Rather than deploying public-facing chatbots or AI-generated content, the Guardian built its AI program around mandatory staff training covering how large language models work and what their failure modes are. The framing was explicit: treat AI as a tool with known limitations, not as a capability to be maximized. This is a deliberate position, prioritizing editorial accountability over feature deployment, rather than a cautious default.
Who it is useful for
Journalists, editors, and newsroom managers who want a current-state survey of how AI is affecting the field, drawn from practitioners across investigative, editorial, and fact-checking functions. The conference is one of the primary venues where practitioners share specific operational data and approaches, which makes the recap more concrete than most general commentary on AI and journalism.