Medium: Why AI isn't equipped to replace human writers
What the article is about
Elainna Ciaramella, a professional writer with 16 years of experience, documents her hands-on testing of AI writing systems and concludes that the replacement threat is overstated — but that the gap between writers who learn to use AI well and those who do not is real and growing.
Context
When ChatGPT became widely available, Ciaramella’s response was to test rather than panic. She worked across multiple platforms — ChatGPT, Gemini, Copilot, Grok, Perplexity, NotebookLM, and Claude — and focused specifically on how each performed in professional writing contexts: research-dependent articles, sourced journalism, and content that requires accuracy verification.
The practical problem she describes most clearly is citation reliability. In testing, AI systems produced citations to sources that had no meaningful connection to the claims they were attached to. The sources existed, but their content did not support what the AI had written. For journalism and research-dependent content, this is not a minor editing problem — it creates a verification burden that may exceed the time savings from using AI drafts in the first place.
Key takeaway
Ciaramella’s argument is not that AI tools are useless. She uses them and finds value in aspects of her workflow. Her argument is more specific: AI does not replace the judgment that makes professional writing credible. It cannot distinguish a reliable source from an unreliable one in context, catch its own factual drift, or apply the kind of critical attention that experienced writers use to question their own work.
The career implication she draws is pragmatic. Writers who use AI to handle mechanical tasks while applying their own judgment to accuracy, sourcing, and quality will be more productive than peers who work without AI. Writers who hand verification responsibility to AI will have a higher error rate than in their pre-AI work. The skill being tested is knowing which tasks to delegate and which to keep.
The article is honest about the limits of its scope: it reflects one writer’s professional experience and does not claim to be a controlled study. The value is in the specific examples and the practitioner’s-eye view of what the failure modes actually look like in daily work.
Who should read this
Freelance writers, journalists, and content professionals evaluating whether and how to incorporate AI tools into their work. Particularly useful for those working where source accuracy is non-negotiable — journalism, research writing, and technical content — since those are the areas where the stakes of AI citation errors are highest.