John Maeda: Design in Tech Report 2026 — from UX to AX
What the report is about
John Maeda’s 2026 Design in Tech Report argues that the discipline is passing through a structural transition: from UX (user experience) to AX (agentic experience). The shift is not cosmetic. In a UX paradigm, designers help users execute tasks; the goal is reducing friction in human action. In an AX paradigm, AI agents execute the tasks, and designers must help users evaluate whether those agents did them correctly. The core design problem shifts from supporting execution to supporting evaluation.
Context
Maeda has published the Design in Tech Report annually since 2015. The 2026 edition is the first to center entirely on the implications of AI agents rather than AI tools. The distinction matters: tools augment human action, while agents act on behalf of humans. This moves design responsibility toward what Maeda calls “the gulf of evaluation” — borrowing from Don Norman’s framework of cognitive gulfs — rather than the historically more familiar “gulf of execution.”
The report draws on interviews and data from design leaders at major technology companies, and situates the current moment as comparable in structural importance to the transition from desktop to mobile.
Key argument
The report introduces the concept of the feedback loop as the defining design artifact of the agent era. An AI agent operates as a loop: action, outcome, feedback, correction. The designer’s job is to make that loop legible and manageable for the human overseeing it. This means designing evaluation interfaces, correction mechanisms, and transparency layers — not just task flows.
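The action → outcome → feedback → correction loop the report describes can be sketched as a small data structure. This is an illustrative sketch only — the class and field names (`AgentLoop`, `LoopStep`, and so on) are assumptions of this summary, not anything Maeda specifies — but it shows the design point: every step the agent takes is recorded where a human can inspect and correct it.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

# Hypothetical sketch of the action -> outcome -> feedback -> correction
# loop; all names here are illustrative, not taken from the report.

@dataclass
class LoopStep:
    action: str
    outcome: str
    feedback: Optional[str] = None    # the human's evaluation of the outcome
    correction: Optional[str] = None  # the adjustment applied on the next pass

class AgentLoop:
    """Keeps every step legible so a human overseer can audit the agent."""

    def __init__(self, act: Callable[[str], str]):
        self.act = act                     # the agent's task-execution function
        self.history: List[LoopStep] = []  # transparency layer: nothing hidden

    def run(self, task: str) -> LoopStep:
        step = LoopStep(action=task, outcome=self.act(task))
        self.history.append(step)
        return step

    def evaluate(self, step: LoopStep, feedback: str,
                 correction: Optional[str] = None) -> None:
        step.feedback = feedback           # the human closes the loop
        step.correction = correction

# Usage: a trivial "agent" that uppercases copy, then a human correction.
loop = AgentLoop(act=lambda task: task.upper())
step = loop.run("write headline")
loop.evaluate(step, feedback="too shouty", correction="use sentence case")
```

The design interface here is `history` plus `evaluate`: the evaluation surface and the correction mechanism are first-class, not bolted on after the task flow.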
A secondary argument concerns design systems. In an agent-first environment, design systems serve a dual audience: human designers and AI agents that read the system to generate new components and layouts. Maeda argues that well-structured design systems with consistent naming and documented conventions will produce better AI-generated output, which raises the business case for investing in system quality.
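The dual-audience point can be made concrete with a token sketch. Assuming a design system expressed as machine-readable tokens (the token names below, like "color.surface.primary", are illustrative, not from any real system), consistent and documented naming is what lets an agent look up the right value instead of guessing:

```python
# Hypothetical design system expressed as machine-readable tokens.
# Consistent naming conventions are the point: an agent can resolve
# "color.surface.primary" deterministically, but only if the names
# follow a documented scheme.

DESIGN_TOKENS = {
    "color.surface.primary": "#FFFFFF",
    "color.surface.inverse": "#111111",
    "color.text.primary": "#1A1A1A",
    "spacing.sm": "8px",
    "spacing.md": "16px",
}

def resolve_token(name: str) -> str:
    """Fail loudly on unknown names so agent-generated output cannot drift."""
    if name not in DESIGN_TOKENS:
        raise KeyError(f"Unknown design token: {name}")
    return DESIGN_TOKENS[name]

# An agent generating a component references tokens, never raw values.
button_style = {
    "background": resolve_token("color.surface.primary"),
    "padding": resolve_token("spacing.md"),
}
```

A system with this degree of structure produces better AI-generated components than an ad-hoc one, which is the business case the report makes for investing in system quality.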
The report also notes that design is harder for AI to automate than code because design quality depends on subjective judgment — taste, contextual appropriateness, tone — where correctness is not verifiable the way software tests can verify code behavior.
Who should read this
Designers, design leads, and product leaders who want a strategic framework for understanding where design expertise remains necessary in an AI-heavy product environment. Also useful for those who need to make the case internally for continued investment in design thinking as AI takes over more execution tasks.