Article · Nielsen Norman Group · March 2026

Nielsen Norman Group: GenUI vs. vibe coding — who's designing?

What the article is about

Kate Moran of Nielsen Norman Group makes a distinction that is easy to miss in casual conversation about AI and design: generative UI and vibe coding are not synonyms. The difference between them determines who holds design responsibility and what kind of failure to expect.

Context

Moran’s core argument is that agency — who initiates the design decision — separates these two approaches. In generative UI (genUI), the AI system decides that an interactive element would serve a user better than plain text, and it generates that element without being asked. In vibe coding, a person describes what they want and the AI builds it.

This sounds like a technical distinction, but it has practical consequences. Vibe coding places the design burden on the user: they must be able to conceptualize what they need, describe it with some specificity, and evaluate whether the output matches their intent. Extensive user research has consistently shown that most people are not good at this. They can identify problems with an existing interface but struggle to specify a new one from scratch.

GenUI carries the design burden inside the system. The AI must make sound judgments about when interactive elements actually help users, rather than creating friction or noise. This requires design maturity of a different kind — not prompt-following ability, but the capacity to read context and make defensible decisions on behalf of users who haven’t specified a preference.
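
To make the locus of that judgment concrete, here is a minimal TypeScript sketch of a hypothetical genUI decision gate; the names, component types, and the simple rule are invented for illustration and are not drawn from Moran's article:

```typescript
// Hypothetical shape of a genUI decision: the system, not the user,
// chooses whether an interactive element is warranted.
type GenUIDecision =
  | { kind: "plain-text"; text: string }
  | { kind: "component"; component: "table" | "chart" | "form"; rationale: string };

interface ResponseContext {
  inferredGoal: string;                     // e.g. "compare two pricing plans"
  dataShape: "scalar" | "tabular" | "none"; // what kind of data the answer contains
}

// The design judgment lives here: generate a component only when it plausibly
// advances the user's goal; otherwise fall back to plain text to avoid noise.
function decidePresentation(ctx: ResponseContext, answer: string): GenUIDecision {
  if (ctx.dataShape === "tabular" && ctx.inferredGoal.includes("compare")) {
    return {
      kind: "component",
      component: "table",
      rationale: "a comparison over structured data reads better as a table",
    };
  }
  // Default: no unrequested interactive element.
  return { kind: "plain-text", text: answer };
}
```

However the gate is implemented, the point stands: the rule inside it is a design decision made on the user's behalf, not a response to a prompt.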

Within vibe coding, there is a further spectrum. A vague prompt gives the AI substantial design latitude; a detailed prompt positions it more as executor of user intent. Neither is inherently better, but they distribute accountability differently and fail differently when they go wrong.

Key takeaway

The discussion of failure modes is the most practically useful part of the article. Vibe-coded outputs fail through execution problems: the generated artifact doesn’t match what the user wanted. GenUI fails through judgment errors: the system generates an interactive element that wasn’t needed or appropriate. These require different evaluation methods. Vibe-coding quality can be checked by matching output against intent. GenUI quality requires traditional UX methods: user research, task-completion measurement, and analysis of whether the system’s decisions actually advanced the user’s goals.
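
A rough sketch of how that evaluation difference might look in practice, with all interfaces and function names below hypothetical rather than anything the article prescribes: vibe-coded output can be checked against explicit acceptance criteria, while genUI decisions have to be logged and joined with task outcomes.

```typescript
// Vibe coding: check the generated artifact against the stated intent.
interface Intent {
  description: string;
  acceptanceChecks: Array<(output: string) => boolean>;
}

function matchesIntent(output: string, intent: Intent): boolean {
  return intent.acceptanceChecks.every((check) => check(output));
}

// GenUI: a single output can't be graded in isolation; record each decision
// and join it with task outcomes for later UX analysis.
interface GenUIDecisionRecord {
  sessionId: string;
  generatedComponent: string | null; // null means the system fell back to text
  taskCompleted: boolean;
}

// Share of sessions where an unrequested component coincided with task completion.
function componentHelpRate(records: GenUIDecisionRecord[]): number {
  const withComponent = records.filter((r) => r.generatedComponent !== null);
  if (withComponent.length === 0) return 0;
  return withComponent.filter((r) => r.taskCompleted).length / withComponent.length;
}
```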

For designers, the article suggests that genUI — where it appears in products — represents a significant expansion of what the design function needs to define. Setting the parameters, rules, and intent that guide AI-generated interfaces is itself a design activity. It is less visible than wireframing, but it shapes what users actually encounter.
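
One way to picture that less visible design activity is as a policy the design team owns and reviews. The following TypeScript sketch and its field names are assumptions made for illustration, not a format Moran proposes:

```typescript
// Hypothetical policy object: the "less visible" design work of defining
// rules and intent that bound what the system may generate on its own.
interface GenUIPolicy {
  allowedComponents: string[];      // component types the system may emit unprompted
  maxComponentsPerResponse: number; // cap on unrequested UI to limit noise
  minConfidenceToGenerate: number;  // 0..1 threshold below which it stays plain text
  designIntent: string;             // prose guidance owned by the design team
}

const checkoutAssistantPolicy: GenUIPolicy = {
  allowedComponents: ["table", "date-picker"],
  maxComponentsPerResponse: 1,
  minConfidenceToGenerate: 0.8,
  designIntent:
    "Generate UI only when it shortens the path to a completed task; never for decoration.",
};
```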

Who should read this

UX designers, product managers, and design leaders trying to think clearly about how their work is changing as AI-generated interfaces become more common. The distinction Moran draws is foundational for any team evaluating whether they are building genUI features, vibe-coded tools, or some combination of both.