Nieman Lab · March 2026

Grammarly AI expert review controversy — Nieman Lab analysis

Grammarly’s Expert Review feature, launched in August 2025, offers users AI-generated writing feedback framed as coming from specific subject-matter experts, including well-known journalists, editors, and authors. The feature uses these individuals’ real names without their permission, a practice that has drawn criticism from the journalism community and prompted a class action lawsuit.

Context

The feature presents revision suggestions as if they came from identifiable people. Nieman Lab’s Laura Hazard Owen tested it and found AI editing suggestions attributed to prominent journalism figures including Marty Baron, Margaret Sullivan, and Penny Abernathy. None of these individuals were consulted or compensated.

Grammarly’s parent company Superhuman defended the feature by saying these experts are mentioned “because their published works are publicly available,” and that references are “for educational purposes only.” Historian C.E. Aubin responded in Wired: “These are not expert reviews, because there are no ‘experts’ involved in producing them.”

In March 2026, journalist Julia Angwin filed a class action lawsuit over the feature, whose outputs Grammarly has described as “sloppelgangers” in internal discussions.

Key takeaway

For writers and editors, this case raises a direct question: when an AI tool claims to provide feedback “in the style of” a known writer, what exactly is being offered? Here, the answer is AI-generated suggestions with a human name attached for credibility. That distinction matters for anyone evaluating AI writing feedback tools.

Who should read this

Writers who use AI editing tools, editors evaluating feedback software for their teams, and anyone interested in the ethical boundaries of AI in writing assistance.