NN/g: Don't start with AI, start with the problem
Caleb Sponheim, a user experience specialist at Nielsen Norman Group, presents a 4-minute argument for a principle that sounds obvious but gets violated constantly in practice: when building AI features, start with the problem you are solving, not with the technology available to you.
The video is aimed at designers, product managers, and strategists who are being pushed to add AI to existing products or build AI-native features. It is most useful early in a project, before tooling decisions have been made, and for teams facing internal pressure to ship something AI-powered without a clear user rationale.
Key takeaways:
- Starting with technology inverts the design process. When a team begins with “we should use AI for this,” the default mode becomes finding a use case that fits the technology rather than finding technology that fits the use case. Sponheim describes this as structurally difficult to recover from, because early framing shapes every subsequent decision.
- The question to ask first is what users actually struggle with. This is not a novel idea, but Sponheim makes a specific point about AI: because large language models and generative AI can produce something plausible-sounding for almost any task, they create an unusually strong pull toward solution-first thinking. A prototype that looks convincing early in the process can make it easy to skip the problem definition step.
- Problem clarity protects against feature drift. When the user problem is documented and agreed on before any technical exploration begins, teams have a reference point for evaluating whether a proposed AI approach is actually the right fit. Without that reference point, feature scope tends to expand to match whatever the model turns out to be capable of.
- Not all user problems are AI problems. Sponheim does not argue against AI; he argues against treating AI as the answer before the question is known. Some problems are better solved through better information architecture, clearer copy, or a simpler interaction flow. Identifying which problems genuinely benefit from AI capability is itself a design skill.
- The principle applies to teams at any level of AI maturity. Whether a team has shipped multiple AI features or is evaluating its first, Sponheim describes the habit of starting with user problems rather than technical possibilities as the single most consistent predictor of whether AI adds genuine value or produces a feature that ships and then quietly goes unused.
Worth watching if your team is under pressure to add AI to a product and the conversation has started with the technology rather than the user. Also useful as a short, shareable reference for anyone who needs to make the case internally for doing discovery work before committing to an AI-based solution.