
TechRadar: Grammarly's CEO admits Expert Review was 'not a good feature'

On March 24, 2026, TechRadar published Lance Ulanoff's account of Grammarly CEO Shishir Mehrotra's public admission that the Expert Review feature, launched in August 2025 as part of Grammarly Pro, was a failure. Speaking on The Verge's Decoder podcast, Mehrotra said: "The feature was not a good feature. It wasn't good for experts, it wasn't good for users."

Expert Review generated AI writing feedback attributed by name to real writers and public figures: Stephen King, Kara Swisher, Neil deGrasse Tyson, Carl Sagan, Casey Newton, and others. None were contacted, compensated, or asked for consent. Some, including Carl Sagan, are deceased. A fine-print disclaimer stated the names did not indicate “affiliation or endorsement,” but the feature presented the AI output as if it reflected how those individuals actually think and write.

When Platformer's Casey Newton tested the feature, advice attributed to a fake "John Carreyrou" suggested he add "sensory imagery" and "chalk dust," guidance with no connection to how Carreyrou actually writes. A fake "Kara Swisher" recommended an unrelated narrative digression. The advice was generic and disconnected from the named experts' actual styles. Mehrotra acknowledged that, despite the feature having been live since August, he had never used it himself before the backlash erupted.

Journalist Julia Angwin filed a class-action lawsuit in the US District Court for the Southern District of New York, alleging violations of privacy and right-of-publicity law; the suit seeks damages exceeding five million dollars. Grammarly disabled the feature on March 11, 2026.

Mehrotra proposed a replacement built on voluntary participation, a "YouTube model" in which writers opt in, control their involvement, and can be paid for contributions. He did not acknowledge any obligation to compensate experts for past use. Analysts quoted in the piece suggest Superhuman may pivot toward generic AI personas such as "The Academic" or "The Copy Editor" rather than named individuals, to reduce legal exposure going forward.

Ulanoff's analysis extends the concern beyond Grammarly: he demonstrated that Gemini, given a TechRadar article and asked what "Lance Ulanoff would say," produced a detailed editorial response attributed to him. The persona-impersonation issue is not specific to Grammarly's implementation; it is possible on any major AI platform, and it raises questions about what protections, if any, professional writers have against synthetic representations of their voice.