
Nielsen Norman Group: Outcome-oriented design in the era of AI

Kate Moran and Sarah Gibbons of Nielsen Norman Group recorded this three-minute video in March 2026. It introduces a concept they call outcome-oriented design and argues that AI shifts the fundamental question designers and researchers are trying to answer — from “what works for the typical user” to “what does this specific user need to accomplish.”

Who the video is for: UX researchers, product designers, and research leads who are working on AI-powered products and want a concise reframe of how research goals should shift when the product itself is adaptive. The video assumes familiarity with standard UX practice. It is short and conceptual — useful as a starting point for a team discussion rather than as a how-to guide.

Key takeaways:

  1. Average-user optimization is less relevant when the product adapts. Traditional UX research identifies what works for a representative range of users and informs a design that accommodates that range. When the product can adjust to individual users in real time, designing for the average misses the point. The research question shifts from “what is the right design” to “what are the different goals users bring, and how should the system recognize and respond to them.”

  2. Outcome-oriented design means defining adaptive frameworks, not fixed interfaces. Rather than specifying a single flow or layout, designers working on AI products are defining the parameters within which the system adapts — what it can change, under what conditions, toward which user goals. Research informs those parameters rather than a single solution.

  3. Understanding individual goals becomes the core research task. If the product is going to respond to what individual users are trying to accomplish, researchers need richer goal-level data than standard usability studies typically produce. The question is not whether users can complete a task — it is what they are trying to achieve and whether the system’s interpretation of that goal is accurate.

  4. This reframe applies to researchers evaluating AI systems, not only to designers building them. When testing an AI product, standard task-completion metrics are insufficient because the product’s behavior varies from user to user. Evaluation needs to account for whether the system is correctly reading user intent and adapting appropriately — which requires research methods that capture goal and context alongside behavior.

The video is part of NN/g’s 2026 content series on AI’s impact on UX practice. It is paired with related articles and links to training courses. At three minutes, it covers a single idea rather than a broad argument, which makes it easy to share with stakeholders who are not embedded in UX research but need to understand how AI products differ from conventional ones.

Worth watching if: Your team is starting to design or evaluate AI-powered features and needs a concrete reframe for how research goals shift when the product is adaptive. Also useful if you are making the case to stakeholders for why AI product research requires different methods than standard usability testing.