Mohit Aggarwal: AI-assisted writing authenticity — keeping your voice when using Claude or ChatGPT
Mohit Aggarwal, a product manager of eight years who builds AI workflows for teams, published this piece in March 2026 as a first-person account of what actually determines whether AI-assisted writing sounds authentic. The article offers no prompt template and no tool ranking. Instead it recounts a specific moment, the near-withdrawal of a piece he had written with Claude's help, and uses that moment to examine why readers responded to the piece as they did.
The article's starting point is a piece Aggarwal almost did not publish. He had drafted it with substantial AI assistance and felt uncertain whether it should be attributed to him. He published it anyway, and readers described it as "the most honest piece of writing" they had encountered on the subject. The disconnect between his anxiety about authenticity and the readers' experience of it pushed him toward the article's central question: what actually creates a reader's sense that a piece is genuine?
His answer centers on what he calls an "honest workflow." The authenticity readers respond to does not come from minimizing AI involvement; it comes from having something specific to say and using AI to express it more clearly rather than to supply the ideas. The framing matters: if the author's intent is present and the editing refines rather than replaces that intent, the output carries the author's perspective regardless of how many tools were involved.
The article is critical of the common alternative approach — trying to make AI output sound human by introducing deliberate imperfections, varying sentence length artificially, or removing phrases that “sound like AI.” Aggarwal argues this treats the symptom rather than the cause. AI-generated text that lacks a specific perspective will feel inauthentic regardless of how well it is humanized at the surface level; text with a clear authorial intent will feel authentic even if it is grammatically polished.
The piece is grounded in practice rather than principle, which makes it more actionable than most writing-and-AI discussions. It is most relevant for professional writers and content creators who already use AI tools and want to understand why some results feel like their own voice while others do not. It will also interest anyone drafting internal guidelines on AI-assisted writing who wants a framework that goes beyond surface-level disclosure rules.