Poynter: Journalism students are more skeptical of AI than you might think
Dan Kennedy is a journalism professor at Northeastern University and a longtime observer of the media industry. His March 2026 piece for Poynter describes an experiment he ran with a graduate ethics seminar: students used Claude to complete a set of journalism tasks and then reflected, in writing, on what AI could and could not do for their reporting.
The results surprised Kennedy. Rather than enthusiasm for the time savings, most students expressed skepticism that was both personal and professional. One rejected AI outright, describing it as something she had never opened and was not willing to use. Others framed their concerns around specific losses: the skills built through repeated drafting, the human texture in language that comes from lived experience, and the editorial judgment that develops only through the friction of writing something badly and figuring out why. Several students concluded that even when AI could handle a task efficiently, they would rather do it themselves — because doing it themselves was how they learned.
There was broad agreement on one boundary: any AI involvement in a piece should be disclosed to readers. Students treated this not as a legal question but as an ethical one — a matter of honesty with an audience that has a legitimate interest in knowing how a piece was made.
Kennedy does not frame the experiment as evidence that students are right to be skeptical. He notes that AI can perform useful supplementary functions — organizing data, generating initial frameworks, summarizing background documents — and that blanket refusal forecloses tools that will likely become standard in some newsroom contexts. But he takes the skepticism seriously as information. His students are prioritizing the formation of writing skills and community relationships over efficiency gains, and that prioritization reflects something real about what journalism is for.
The article is useful for editors and journalism educators thinking about how to introduce AI tools into writing programs without positioning AI as a replacement for the practices that build competence. It is also worth reading for working journalists who feel pressure to adopt AI at every stage of their process but are uncertain whether that adoption serves the work or merely the timeline. Kennedy’s students offer a model of engagement that is neither technophobic nor credulous: willing to test the tools, and clear about what the tools cannot replace.