
Nieman Lab: Grammarly's CEO defends Expert Review, and what the case reveals about AI consent

Joshua Benton’s March 23 Nieman Lab piece reconstructs the full arc of Grammarly’s Expert Review feature, from launch through lawsuit, shutdown, and CEO defense, and analyzes both what Grammarly’s CEO actually said in his public response and what he left unsaid.

Expert Review launched in late July or early August 2025 as part of Grammarly Pro. The feature generated AI writing suggestions and attributed them to named real-world writers and journalists: Stephen King, Carl Sagan, Kara Swisher, Nilay Patel, Julia Angwin, and hundreds of others, including Benton himself. None were contacted. None were compensated. Grammarly charged $12 per month for the subscription that included the feature. Writer Ingrid Burrington coined “sloppelgangers” to describe the phenomenon — AI impersonators of real, named individuals.

On March 9, 2026, journalist Julia Angwin filed a class-action lawsuit in the US District Court for the Southern District of New York, alleging violations of New York and California laws that bar commercial use of someone’s name without consent. Minimum damages sought were $5 million, with actual damages to be calculated from the feature’s revenue. Her attorney reported hearing from 40 to 50 people who objected to being listed. Angwin’s own statement: “I have worked for decades honing my skills as a writer and editor, and I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise.”

Grammarly disabled the feature on March 11, after eight months of operation. CEO Shishir Mehrotra appeared on Nilay Patel’s Decoder podcast around the same time as Benton’s piece. On the podcast, Patel asked in three different ways: “How much should you pay me to use my name?” Mehrotra kept returning to disclosure language — that suggestions were “inspired by” named experts — without addressing the consent question directly. Mehrotra also acknowledged he had not personally used the feature before the backlash.

Benton’s analysis focuses on what the CEO’s framing avoided: Mehrotra consistently described Expert Review as an execution problem (“off-strategy”) rather than an ethics problem. He proposed a replacement where writers could opt in and be paid — a “YouTube model” — without addressing whether past use required any remedy. Mehrotra’s emailed statement: “The feature was not a good feature. It wasn’t good for experts, it wasn’t good for users.”

The piece matters to any writer or editor who thinks professionally about AI tools. Benton traces exactly how a writing-assistance product moved from a plausible use case (expert-framed feedback) to an unauthorized identity product, without the company appearing to register that distinction. The legal and ethical boundaries the case draws, around commercial use of a writing professional’s name, consent, and compensation, will likely be contested again as AI writing tools expand.