
Poynter: Why poorly run AI rollouts fail newsrooms — and what to do instead

“Why can’t newsroom leaders just be normal about AI?” by Alex Mahadevan, published in Poynter on April 24, 2026, examines a pattern of high-profile AI experiments at news organizations that went wrong not because the technology failed but because the decision-making process surrounding them did.

Mahadevan focuses on three cases. At the Cleveland Plain Dealer, an initiative to produce AI-generated vertical videos featuring a talking building and reporter avatars met with immediate audience rejection; commenters specifically asked for the human content creators to return. At McClatchy, a “content scaling agent” was introduced without staff consultation; when reporters raised concerns about their bylines appearing on AI-generated content, management said it would use reporters’ names regardless of their objections. Nota News, a hyperlocal startup, shut down after it was found to have run more than 70 stories, plagiarized from 29 outlets and 53 journalists via AI tools and published under editors’ names.

Mahadevan’s argument is not that AI tools are unsuitable for editorial work; it is that leadership failures, not technology failures, drove each of these outcomes. He identifies six recurring breakdowns: failing to identify a real problem the initiative addresses, not consulting the audience before launch, ignoring or dismissing internal skeptics, lacking transparency with readers, failing to secure internal organizational buy-in, and framing AI initiatives as novelty rather than as editorial decisions judged by editorial criteria.

The practical framework the article proposes treats AI initiatives as editorial decisions subject to the same process as any other: identify the specific reader or reporter need, consult stakeholders (including those with objections), be transparent about what the technology is doing and why, and test before announcing. The cases Mahadevan examines are characterized by speed and unilateral decision-making, qualities that are normal in technology product launches but that create predictable failure modes in editorial environments where trust is the primary asset.

For writing teams, content operations, and editorial managers outside journalism, the same failure modes apply. Organizations that introduce AI writing tools by top-down mandate, without staff consultation, without clear policies on when AI output may be published and how it is disclosed, and without a genuine problem definition, tend to produce versions of the outcomes Mahadevan describes. The article is specific enough to serve as a discussion document for any team evaluating an AI writing integration.