
Contently: Why human editorial judgment still sets elite brands apart

What the article is about

Published in December 2025 on Contently’s Content Strategist blog, this piece makes a specific argument about where content strategy is heading: as AI-assisted production scales up, the cost of factual errors is rising simultaneously, and the brands that maintain strong editorial governance will be better positioned for AI-mediated discovery. The article is a Contently-authored piece and reflects its position as a platform for managed content creation, but the data it cites comes from external sources.

Context

The article opens by establishing the traffic context that shapes the rest of the analysis. Adobe reported that AI-referred traffic surged 1,200% between mid-2024 and early 2025. Gartner projected that traditional search traffic will decline 25% by 2026. Semrush found that 86% of high-commercial-intent queries now trigger AI-generated responses. Against this backdrop, the channels through which content is discovered are changing faster than many editorial teams have adapted their processes.

The failure rate cited for AI-generated content is the article’s sharpest data point: MIT research found that 15–20% of AI-generated content contains significant factual errors when published without human review. This means that teams scaling AI production without corresponding editorial review are generating errors at a predictable, measurable rate — not as edge cases, but as a structural outcome.

Key argument and method

The article argues for positioning human editorial judgment at specific critical points in production rather than distributing review evenly across the full content pipeline. The suggested model: let AI handle mechanical tasks — formatting, initial drafts, metadata, tagging — while editors focus their time on strategy, accuracy validation, and final polish. This is a different question from whether to use AI at all; it is about where human review creates the most value per hour spent.

The article also addresses AI citation mechanics. Large language models and AI search systems use E-E-A-T signals — Experience, Expertise, Authoritativeness, Trustworthiness — to select sources they surface in responses. Content that lacks clear expert attribution, consistent fact-checking records, or entity disambiguation is less likely to be cited. Editorial governance, in this framing, is not just a quality question but a discoverability question in AI-mediated search.

A case study from a Fortune 500 healthcare company illustrates the argument: by combining AI production with structured expert editorial oversight, the organization achieved a 47% increase in organic traffic, a 34% improvement in content-to-lead conversion, and a 94% reduction in factual errors over four months. The article attributes these results specifically to the workflow structure — not to any particular AI tool.

Who it is useful for

The analysis is most relevant for content directors, editorial strategists, and marketing leaders at organizations that have already begun AI-assisted production and are evaluating how to structure human involvement. The framing around AI-referred traffic and E-E-A-T signals is particularly useful for teams updating their content strategy to account for how discovery patterns are changing, rather than optimizing for traditional search alone.