Phys.org News, Apr 2026

Phys.org: DraftMarks makes AI's role in student writing visible

Researchers from Georgia Tech and Stanford University have released DraftMarks, an open-source tool that layers visual annotations onto documents to show where AI was involved in the writing process. The tool was designed for academic contexts, where the question of how students use AI is distinct from whether they use it at all.

DraftMarks uses a set of visual markers to indicate different types of AI involvement. Eraser crumbs mark heavily revised passages. Masking tape marks AI-generated text that was kept with minimal changes. Glue residue indicates content that was AI-generated and then removed. Ghost text shows AI suggestions that were produced but not used. Different fonts distinguish between passages the writer composed and passages the AI produced. The visual language is meant to make the writing process itself legible, not just the final output.
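The marker scheme above amounts to a mapping from a passage's provenance (who wrote it, whether it survived, how much it changed) to a visual annotation. The article does not describe DraftMarks' internals, so the following is only a minimal sketch of that mapping; the function, parameter names, and the `revision_ratio` threshold heuristics are all hypothetical:

```python
from enum import Enum
from typing import Optional

class Marker(Enum):
    """Visual markers described in the article (identifiers are hypothetical)."""
    ERASER_CRUMBS = "heavily revised passage"
    MASKING_TAPE = "AI text kept with minimal changes"
    GLUE_RESIDUE = "AI text generated, then removed"
    GHOST_TEXT = "AI suggestion produced but never used"

def classify_passage(ai_generated: bool, accepted: bool,
                     kept_in_final: bool, revision_ratio: float) -> Optional[Marker]:
    """Map a passage's provenance to a marker.

    revision_ratio: fraction of the passage changed after it was first
    written (a made-up metric; DraftMarks' real heuristics are not
    described in the article).
    """
    if ai_generated and not accepted:
        return Marker.GHOST_TEXT       # suggested by the AI, never inserted
    if ai_generated and accepted and not kept_in_final:
        return Marker.GLUE_RESIDUE     # inserted, then deleted by the writer
    if ai_generated and kept_in_final and revision_ratio < 0.2:
        return Marker.MASKING_TAPE     # kept nearly verbatim
    if revision_ratio >= 0.5:
        return Marker.ERASER_CRUMBS    # heavily reworked, by either party
    return None                        # human-written, lightly edited: no marker
```

A passage can thus render differently depending on its whole history, not just its final text, which is the point of making the process legible. Passages the AI suggested but the writer never accepted, for example, would surface as ghost text rather than disappearing from the record.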

The rationale is that current AI detection tools answer a binary question: was AI used or not? DraftMarks is built around a different one: how did the human and AI interact, and at what stages? At a time when 90 percent of college students report using AI in coursework, according to the researchers' survey data, the binary detection approach has limited utility. The pattern of human-AI collaboration matters more than its presence or absence.

For professional writing contexts, the underlying idea translates to editorial transparency. As AI-assisted drafting becomes common in content production and journalism, the question of how to represent AI involvement to readers and editors is open. DraftMarks does not solve that problem for professional contexts — it was designed for academic use — but it provides a concrete model for what granular AI attribution could look like in practice. The tool is open source and available for adaptation.

The project was led by Momin Siddiqui and computing PhD student Adam Coscia.