Smashing Magazine: Five patterns for showing what AI agents are doing

Victor Yocco’s May 2026 follow-up on Smashing Magazine moves from identifying when transparency is needed to filling those moments with concrete UI patterns. The premise is direct: when a generic spinner appears while an AI agent works through a complex task, users have no way to tell whether the system is stuck, processing normally, or handling something unexpectedly complicated. Effective agentic interfaces replace ambiguity with specificity.

The article introduces five patterns, each suited to a different level of risk and required visibility.

The Living Breadcrumb works for low-stakes background tasks. It appears as a subtle status indicator that updates incrementally — from “Reading email” to “Drafting reply” — without demanding the user’s attention. It signals progress without creating interruption.

The Dynamic Checklist suits high-stakes workflows. It shows completed steps, what the agent is currently handling, and what remains. Unlike a progress bar, it tells users exactly where the agent is in a process, which makes it easier to spot problems early and reduces the anxiety that comes from watching an opaque timer.
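The checklist described above can be modeled as an ordered list of steps where exactly one step is active at a time. A minimal sketch in TypeScript — the type names, labels, and rendering markers are illustrative, not from the article:

```typescript
// Each step in the checklist is pending, active, or done.
type StepState = "pending" | "active" | "done";

interface ChecklistStep {
  label: string; // e.g. "Drafting reply"
  state: StepState;
}

// Render the checklist as plain text lines a UI could display.
function renderChecklist(steps: ChecklistStep[]): string[] {
  const marker: Record<StepState, string> = {
    done: "[x]",
    active: "[>]",
    pending: "[ ]",
  };
  return steps.map((s) => `${marker[s.state]} ${s.label}`);
}

// Advance the checklist: mark the active step done, activate the next.
function advance(steps: ChecklistStep[]): ChecklistStep[] {
  const i = steps.findIndex((s) => s.state === "active");
  return steps.map((s, j) => {
    if (j === i) return { ...s, state: "done" as StepState };
    if (j === i + 1) return { ...s, state: "active" as StepState };
    return s;
  });
}
```

Because the model names each step rather than reporting a percentage, the UI gets the article's claimed benefit for free: a stalled agent is visible as a step that stays active too long, not as a frozen bar.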

The Thinking Toggle gives expert users access to sanitized processing logs on demand. Rather than surfacing full detail by default, this pattern makes deep transparency available without forcing it on users who have no need for it.

The Audit Trail is a persistent record of the agent’s decision sequence. After a task completes, users can review what the agent did and why. This matters in any domain where accountability outlasts the interaction itself — legal work, financial decisions, medical triage.

Partial Success Design handles the case where an agent completes some parts of a task but not others. Instead of a binary pass/fail message, it reports which steps succeeded and which did not, giving users a clear picture of what requires follow-up.
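One way to sketch Partial Success Design is to give each sub-task its own outcome instead of collapsing the run into a single flag. The shape below is my own illustration of the idea, not code from the article:

```typescript
// A per-step outcome: success or failure, with an optional reason.
interface TaskResult {
  step: string;
  ok: boolean;
  detail?: string; // e.g. why the step failed
}

// Summarize a mixed run as text the UI could show the user,
// separating what completed from what needs follow-up.
function summarize(results: TaskResult[]): string {
  const done = results.filter((r) => r.ok).map((r) => r.step);
  const failed = results.filter((r) => !r.ok);
  const lines = [`Completed: ${done.join(", ") || "none"}`];
  for (const f of failed) {
    lines.push(`Needs follow-up: ${f.step}${f.detail ? ` (${f.detail})` : ""}`);
  }
  return lines.join("\n");
}
```

The design choice is that failure details travel with the step that failed, so the follow-up list is actionable rather than a generic error banner.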

Running through all five patterns is what Yocco calls the Agentic Update Formula: an action word, a specific item, and the relevant limits or rules. The difference between “Loading…” and “Scanning Lufthansa and United prices to find anything under $600” illustrates the principle — the second tells the user what the agent is acting on, what it is doing to it, and what constraint it is working within.
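The three-part formula lends itself to a small template. This sketch assumes a connector phrase ("to find") between the item and the constraint, which is my own addition for readability:

```typescript
// Agentic Update Formula: action word + specific item + limits/rules.
interface AgentUpdate {
  action: string;      // e.g. "Scanning"
  item: string;        // e.g. "Lufthansa and United prices"
  constraint?: string; // e.g. "anything under $600"
}

// Compose the three parts into one status line; the constraint is optional
// for updates that have no meaningful limit to report.
function formatUpdate(u: AgentUpdate): string {
  const base = `${u.action} ${u.item}`;
  return u.constraint ? `${base} to find ${u.constraint}` : base;
}
```

Applied to the article's example, `formatUpdate({ action: "Scanning", item: "Lufthansa and United prices", constraint: "anything under $600" })` reproduces the specific status line, while the same structure forbids ever emitting a bare "Loading…".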

The article draws on Perplexity AI and Devin AI as examples of these patterns done well, and uses ChatGPT’s opaque memory system as a cautionary case of transparency deferred.

A secondary point concerns tone. In low-stakes moments, a friendly, conversational voice reduces friction. In high-stakes decisions, a more neutral, precise voice signals reliability. The appropriate voice for canceling a meeting differs from the appropriate voice for executing a financial transaction, and designers need to calibrate that difference deliberately.

Read alongside part one of the series — which covers the Decision Node Audit for identifying when transparency is needed — this piece gives designers a complete toolkit for the when and the what.