Smashing Magazine: Designing for agentic AI — practical UX patterns for control, consent, and accountability
Published in February 2026, this article by Victor Yocco, a UX researcher at ServiceNow, addresses a gap in most writing about AI in design: the difference between designing with AI tools and designing for AI systems. Where most articles focus on how designers can use AI to accelerate their own work, this one focuses on what the interface of an agentic system needs to provide in order to be trusted by the people using it.
The central argument is compact: “Autonomy is an output of a technical system. Trustworthiness is an output of a design process.” A system that acts autonomously is not automatically trustworthy just because it works correctly. Trust has to be designed in.
Six UX patterns
Yocco describes six patterns, each addressing a point where agentic systems tend to break user trust.
Intent preview asks the system to surface what it is about to do before doing it, giving the user a chance to approve, modify, or take over. This establishes informed consent rather than presenting users with outcomes they had no chance to influence.
Autonomy dial lets users set their preferred level of involvement per task, from “observe and suggest” at one end to “act autonomously” at the other. The idea is that risk tolerance varies by task, not by person — the same user might want to review every email draft but fully delegate calendar scheduling.
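The per-task autonomy dial can be sketched as a small data structure plus a lookup. This is a hypothetical illustration, not code from the article; the level names and task keys are my own assumptions.

```typescript
// Illustrative sketch: per-task autonomy levels with a user-set default.
// Level names ("observe" … "autonomous") are assumptions for this example.
type AutonomyLevel = "observe" | "suggest" | "preview" | "autonomous";

interface AutonomySettings {
  defaultLevel: AutonomyLevel;               // applies when no per-task override exists
  perTask: Record<string, AutonomyLevel>;    // overrides keyed by task name
}

// Returns how much autonomy the agent has for a given task.
function gateAction(task: string, settings: AutonomySettings): AutonomyLevel {
  return settings.perTask[task] ?? settings.defaultLevel;
}

// Example matching the article's scenario: review every email draft,
// but fully delegate calendar scheduling.
const settings: AutonomySettings = {
  defaultLevel: "preview",
  perTask: {
    "email-draft": "suggest",
    "calendar-scheduling": "autonomous",
  },
};
```

The point of the structure is that risk tolerance attaches to the task key, not to a single global switch, so the same user can hold different settings side by side.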
Explainable rationale requires the system to anchor its reasoning in the user’s own stated preferences: “Because you said X, I did Y.” This prevents users from experiencing autonomous actions as arbitrary or random.
Confidence signal asks the system to surface its own uncertainty, so users know when to scrutinize a decision more carefully. This counters automation bias — the tendency to accept AI output uncritically when the interface gives no indication of uncertainty.
Action audit and undo provides a persistent log of what the system has done, with easy reversal. This is the most direct mechanism for reducing the perceived risk of delegation.
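A minimal sketch of the audit-and-undo pattern, assuming each agent action registers an inverse operation when it executes. The class and method names are illustrative, not from the article.

```typescript
// Illustrative sketch: a persistent action log where each entry carries
// its own undo function, registered at execution time.
interface AuditEntry {
  id: number;
  description: string;       // what the UI shows the user
  timestamp: Date;
  undo: () => void;          // inverse operation for this action
  undone: boolean;
}

class ActionAudit {
  private entries: AuditEntry[] = [];
  private nextId = 1;

  // Called whenever the agent performs an action.
  record(description: string, undo: () => void): number {
    const id = this.nextId++;
    this.entries.push({ id, description, timestamp: new Date(), undo, undone: false });
    return id;
  }

  // The persistent log the interface renders for the user.
  log(): ReadonlyArray<AuditEntry> {
    return this.entries;
  }

  // Reverses one action; returns false if it was already undone or not found.
  undoById(id: number): boolean {
    const entry = this.entries.find(e => e.id === id && !e.undone);
    if (!entry) return false;
    entry.undo();
    entry.undone = true;
    return true;
  }
}
```

Keeping the inverse operation on the entry itself is one way to make "easy reversal" a structural guarantee rather than a per-feature afterthought.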
Escalation pathway describes how the system should ask for help when uncertain, rather than guessing. An agent that recognizes its own limits and asks for clarification reads as more reliable than one that always produces an answer.
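The confidence signal and the escalation pathway combine naturally: the agent's self-reported confidence decides whether it acts, acts while surfacing uncertainty, or stops and asks. The sketch below is my own illustration with assumed thresholds; the article does not prescribe numbers.

```typescript
// Illustrative sketch: route a decision by the agent's self-reported
// confidence. The 0.9 / 0.6 thresholds are assumptions for this example.
type Route =
  | { kind: "act" }                          // confident enough to proceed
  | { kind: "flag"; confidence: number }     // proceed, but show the uncertainty
  | { kind: "escalate"; question: string };  // too uncertain: ask the user

function routeByConfidence(confidence: number, question: string): Route {
  if (confidence >= 0.9) return { kind: "act" };
  if (confidence >= 0.6) return { kind: "flag", confidence };
  return { kind: "escalate", question };
}
```

The "flag" branch is what counters automation bias: below full confidence the interface shows the signal instead of presenting the output as settled, and below the lower bound the agent asks a concrete clarifying question rather than guessing.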
Governance and implementation
The article closes with a section on how to implement these patterns in practice, including a phased rollout (intent preview and undo first, then autonomy controls, then autonomous actions for low-risk pre-approved tasks) and the recommendation to form a cross-functional AI ethics group that includes UX, legal, compliance, and support — not just engineering and product.
Who it is useful for
Product designers and UX leads working on any product that incorporates agentic AI features — copilots, automated workflows, AI assistants with task-execution capabilities. The patterns are general enough to apply across contexts but specific enough to use directly in design reviews or product specifications.