
Nielsen Norman Group: Don't start with AI, start with the problem

Caleb Sponheim, User Experience Specialist at Nielsen Norman Group with a background in computational neuroscience and quantitative research, recorded this four-minute video in January 2026. The argument is deliberately compressed: starting with AI as the assumed answer to a product or research question is a structural failure, not a matter of taste or enthusiasm.

Sponheim’s background gives the argument particular weight. He is not approaching AI from a humanist critique of technology — he has the statistical and computational grounding to evaluate AI capabilities directly. His case against AI-first thinking is that it wastes effort on features that deliver no real user value, not that AI is generically overhyped.

Who the video is for: Product managers, UX researchers, and strategists who face pressure to incorporate AI into their work before the problem to be solved has been clearly defined. It is also relevant for teams reviewing feature proposals that begin with an AI capability rather than a user need.

Key takeaways:

  1. Technology-first thinking is a structural design failure. Asking “how can we use AI here?” before defining the problem inverts the sequence that produces useful outcomes. Sponheim frames this as a process error: the issue is that the wrong question is being asked first, not that people are too enthusiastic about AI.

  2. AI features without grounded user needs deliver no value. Effort spent building AI capabilities that do not correspond to defined user problems does not become valuable by virtue of being technically sophisticated. The result is wasted development time and features that users ignore or distrust after initial exposure.

  3. The correct starting question is “what problem are we solving?” Once the problem is clearly articulated, the decision about whether AI is the right tool can be made on the merits — and sometimes the answer will be that it is. The video is not anti-AI; it is an argument for rigor in deciding when AI applies.

  4. Teams under pressure to “do AI” face a predictable failure mode. When AI adoption is driven by business mandate rather than user need, the result is features that generate initial attention but fail to sustain engagement. The video is in part an argument that research and design teams should push back on AI-first mandates by returning the conversation to problem definition.

The video is accompanied by a companion study guide on designing AI products and features. The video's brevity is intentional: the argument is simple enough to state in four minutes, and Sponheim does not extend it beyond what needs to be said.

Worth watching if: You are in a planning session where a proposed project has started from an AI capability rather than a user problem, or if you need a clear, short reference point for arguing that a feature proposal lacks a defined use case before AI is introduced.