Video · Nielsen Norman Group · Mar 2026

NN/g: 3 tips to make AI a better editor

Taylor Dykes, a user experience specialist at Nielsen Norman Group, makes a seven-minute case that AI editing does not remove the need for deliberate prompting. The premise is that large language models tend toward a kind of averaged prose: clear and grammatically correct, but often tonally generic. The gap between passable output and genuinely useful editing comes down to how the prompt is constructed.

The video is aimed at writers, editors, and content professionals who use AI tools for revision and who have noticed that results vary unpredictably. It is especially relevant for people who treat AI editing as a one-step operation — paste in text, get back a revised version — and find the results inconsistent.

Key takeaways:

  1. Include tone words or reference examples in the prompt. When the prompt specifies a target tone — “direct and conversational,” “formal but accessible,” “the tone of this example” — AI editors produce output that is closer to what the writer intended. Without tonal guidance, the model defaults to an average of the training data, which rarely matches any specific publication’s voice. The companion article from NN/g explores the mechanics of this more thoroughly, but the core instruction is simple: name the tone or show it.

  2. Request multiple alternatives rather than a single revision. When you ask an AI editor for one revised version, you get one possible interpretation. When you ask for three or four alternatives, you get a range that reveals how the model is interpreting the editing task and gives you material to combine or select from. This approach also surfaces tonal options you might not have anticipated, which can be more useful than a single edit that goes in the expected direction.

  3. Treat prompting as part of the editing workflow, not a preliminary step. The default assumption is that you describe what you need once and the AI delivers it. Dykes argues that good AI-assisted editing is iterative — you prompt, evaluate, refine the prompt, and prompt again. This is not a limitation of current models but a description of how precise editing actually works, whether human or AI-assisted. Treating the first output as a draft to respond to rather than a final result changes what you can achieve.
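The video describes a prompting technique rather than code, but the three tips can be sketched as plain string assembly. The function and variable names below (`build_edit_prompt`, `refine`) are illustrative assumptions, not anything from the video or a real tool's API; the sketch only shows how a prompt for any chat-based AI editor might encode the tone (tip 1), request alternatives (tip 2), and fold feedback into a follow-up prompt (tip 3).

```python
# Illustrative sketch only — names and structure are assumptions, not an
# API described in the video. Shows how the three tips shape a prompt.

def build_edit_prompt(text, tone=None, example=None, n_alternatives=1):
    """Assemble an editing prompt that names a target tone (tip 1)
    and asks for several alternatives instead of one revision (tip 2)."""
    parts = ["Edit the text below."]
    if tone:
        # Tip 1: name the tone explicitly...
        parts.append(f"Target tone: {tone}.")
    if example:
        # ...or show it with a reference example.
        parts.append(f"Match the tone of this example:\n{example}")
    if n_alternatives > 1:
        # Tip 2: a range of options reveals how the model interprets the task.
        parts.append(f"Return {n_alternatives} distinct revisions, "
                     "varying in tone and structure.")
    parts.append(f"\nText:\n{text}")
    return "\n".join(parts)

def refine(prompt, feedback):
    """Tip 3: treat the first output as a draft — append revision notes
    and prompt again rather than accepting a single pass."""
    return f"{prompt}\n\nRevision notes: {feedback}"

# One iteration of the prompt–evaluate–refine loop:
first = build_edit_prompt("Our Q3 numbers were good.",
                          tone="direct and conversational",
                          n_alternatives=3)
second = refine(first, "Option 2 was closest; make it shorter.")
```

The point of the sketch is structural: each tip becomes an explicit line in the prompt rather than an unstated hope, and refinement extends the conversation instead of starting over.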

The video is short enough to use as a practical reference and specific enough to change behavior directly. It does not survey AI editing tools or compare platforms — the focus is entirely on prompting technique, which makes it applicable across tools.

Worth watching if you are using AI to edit or revise writing and the results feel inconsistent or tonally flat. Also useful for teams that are standardizing AI writing workflows and want to establish shared prompting practices.