
Smashing Magazine: When to show users what an AI agent is doing

Victor Yocco’s April 2026 article on Smashing Magazine addresses a problem that has become urgent as AI agents take on complex multi-step tasks: deciding not whether to be transparent, but when transparency is warranted. The piece introduces a structured approach to identifying the moments when users genuinely need to see what an agent is doing, rather than attempting to surface everything or hiding all activity behind a generic progress indicator.

The core method is the Decision Node Audit — an eight-step process that brings designers and engineers into the same room to map backend logic and locate what Yocco calls “ambiguity points.” These are moments when an AI makes a probabilistic choice rather than a deterministic one. Not every step an agent takes needs to be surfaced; the audit exists to find the ones that do.
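
The article doesn't prescribe a data format for the audit's output, but a minimal TypeScript sketch (field names are illustrative, not Yocco's) shows the distinction the audit is hunting for:

```typescript
// Hypothetical shape for one decision node captured during the audit;
// the field names are illustrative, not from the article.
type DecisionKind = "deterministic" | "probabilistic";

interface DecisionNode {
  id: string;
  description: string; // what the agent decides at this point
  kind: DecisionKind;  // probabilistic nodes are the "ambiguity points"
  inputs: string[];    // information the agent consults to decide
}

// The audit surfaces only the ambiguity points, not every step.
const ambiguityPoints = (nodes: DecisionNode[]): DecisionNode[] =>
  nodes.filter((n) => n.kind === "probabilistic");
```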

Once decision nodes are mapped, an Impact/Risk Matrix categorizes each one along two dimensions: how high the stakes are and how reversible the outcome is. An action that can be undone, like reordering a product recommendation list, requires less transparency than one that cannot, like submitting a claim or executing an irreversible payment.
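
A hedged sketch of how a team might encode the matrix, assuming a simple two-level scale on each axis; the numeric scoring is an illustration, not part of the article's method:

```typescript
// Sketch of the Impact/Risk Matrix as data. The two axes come from the
// article; the scoring scheme is an assumed illustration.
type Stakes = "low" | "high";
type Reversibility = "reversible" | "irreversible";

interface AssessedNode {
  id: string;
  stakes: Stakes;
  reversibility: Reversibility;
}

// More transparency as stakes rise and reversibility falls.
function transparencyPriority(node: AssessedNode): number {
  return (node.stakes === "high" ? 2 : 0) +
         (node.reversibility === "irreversible" ? 2 : 0);
}
```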

The article illustrates the method with an insurance processing case study. A vague status message — “Calculating Claim Status” — becomes three specific, visible steps: assessing damage photos, reviewing a police report, and verifying policy details. This specificity is where transparency actually lives. Generic progress indicators signal activity; specific steps build trust by showing what the system is actually doing and what information it is using.
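
As a rough illustration, the three case-study steps might be modeled like this; the step labels are from the article, while the data shape and rendering are assumptions:

```typescript
// Specific, visible steps replacing the single opaque
// "Calculating Claim Status" message from the case study.
interface Step {
  label: string;
  done: boolean;
}

const visibleSteps: Step[] = [
  { label: "Assessing damage photos", done: true },
  { label: "Reviewing police report", done: true },
  { label: "Verifying policy details", done: false },
];

// Render each step with its completion state so users can see
// what the system is doing and what information it is using.
const renderStatus = (steps: Step[]): string =>
  steps.map((s) => `${s.done ? "✓" : "…"} ${s.label}`).join("\n");
```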

The “Wait, Why?” test adds an empirical check: observing real users while an agent completes a task on their behalf and noting the moments when they express confusion. These are the transparency moments that matter in practice, and they often differ from what teams anticipate when planning in isolation.

A pattern selection rubric ties the audit to actual UI choices. High-stakes, irreversible decisions call for Intent Preview — showing the user what the agent plans to do before doing it, creating an opportunity to pause or redirect. Reversible decisions suit the Action Audit pattern, which makes completed steps reviewable after the fact without interrupting the workflow.
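
A minimal sketch of the rubric as code, encoding only the two assignments named above; defaulting the remaining quadrants to the lighter pattern is my assumption, not the article's full rubric:

```typescript
type Stakes = "low" | "high";
type Reversibility = "reversible" | "irreversible";
type Pattern = "intent-preview" | "action-audit";

function selectPattern(stakes: Stakes, rev: Reversibility): Pattern {
  // Show the plan before acting when a mistake cannot be undone.
  if (stakes === "high" && rev === "irreversible") return "intent-preview";
  // Otherwise act, and keep completed steps reviewable afterward.
  return "action-audit";
}
```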

The methodology requires cross-functional collaboration. Engineers need to expose the right decision points in the interface, and product managers need to define what counts as high stakes for a given context. The framework is most useful for teams early in building an agentic feature, before UI patterns have calcified around whatever the team assumed users would accept.

This is part one of a two-part series. Designers and product managers working on agentic AI features in insurance, finance, healthcare, or any domain where autonomous decisions carry real consequences will find that the audit process gives the discussion a concrete starting point.