Article Medium Mar 2026

Medium: giving writers control over AI training use of their work

What the announcement is about

On March 19, 2026, Medium published an announcement addressed to writers on the platform, explaining new account controls over how AI companies can use published work for model training. The announcement is notable not as a legal settlement or regulatory response but as a platform-level decision to give individual writers an active choice rather than a default policy applied to everyone.

Context: how AI training and web content intersected

Most major AI language models were trained on web content collected at scale, without explicit consent from writers. Medium hosts a large volume of long-form writing, and like many publishing platforms, it became a training data source for models from companies including OpenAI, Anthropic, Microsoft, and Google. The new controls acknowledge this directly and apply to future use of published content.

The two options

Writers now choose between two settings in their account:

The first, “Prioritize Maximum Reach,” is the default. It requests that AI companies avoid training on a writer’s content unless they provide attribution and direct traffic back to the author. This is the option for writers who want their analysis or writing to appear as a cited source when AI assistants answer related questions.

The second, “Minimize Third-Party Training,” requests that all listed AI companies refrain from training on the content entirely, even if this means the work is cited less often in AI-generated responses.

Medium frames the policy around what it calls the “3Cs”: consent, credit, and compensation. The controls address the first two directly. Compensation — payment for past or future use of training data — is not part of this announcement.

What this means for writers

The practical trade-off is between discoverability in AI systems and protection from AI training. A writer who wants their work cited in ChatGPT or Gemini responses should keep the default "Prioritize Maximum Reach" setting, which permits training conditioned on attribution and traffic back to the author. A writer primarily concerned with not having their voice absorbed into a model without compensation should choose "Minimize Third-Party Training."

Neither choice is obviously correct, and Medium acknowledges this by making both options available. The controls do not retroactively address content already used in training; they apply going forward.

Who should read this

Every writer who publishes on Medium or any platform with similar policies. More broadly, this announcement is a useful reference point for understanding how AI data licensing discussions are evolving from platform-level defaults toward writer-level choices.