
Slate: AI writing detectors are solving the wrong problem

On April 17, 2026, Slate published an investigative essay by Tim Requarth examining the growing controversy around AI writing detection — and arguing that the entire framework for detection is pointed at the wrong target.

What the essay argues

Detection tools, including Pangram, focus on the final prose output of a piece of writing. If the text reads as human-written, it passes; if it reads as AI-generated, it gets flagged. Requarth’s argument is that this approach misses where AI most changes journalism and professional writing: not in the last draft, but in the research, framing, and story selection that precede any writing at all.

His example: a journalist who uses an AI-generated summary of a scientific study to frame a story will produce prose that sails through any detector, because the prose itself is human. Meanwhile, the AI’s framing — which questions the study raised, which findings it highlighted, which context it omitted — has already shaped the entire piece. By contrast, a journalist who used AI only to polish the final draft might fail a detection test despite having done more of the original intellectual work.

Why this matters for editorial teams

The current institutional response to AI in newsrooms has largely been to draw red lines around published prose. Requarth argues this creates a false sense of control: organizations appear to be managing AI use while leaving the harder question — how AI shapes what journalists choose to investigate and how they frame it — entirely unaddressed.

The essay also documents a fairness problem: AI detection accuracy varies across demographic groups, with non-native English speakers facing substantially higher false-positive rates. Writers whose prose patterns differ from the detector’s training distribution face a disproportionate risk of inaccurate flagging regardless of whether they used AI.

Context

The essay appeared during a period of active controversy over AI detection tools in professional publishing — specifically around the Grammarly Expert Review feature and several cases where established writers were flagged incorrectly. The broader point Requarth makes is that detection itself is an arms race: as more humans adopt AI tools and as AI tools produce more human-like output, the signals that detection tools rely on become less reliable over time.