Figma: Release Notes May '26 — live demos of AI agents co-designing in Figma
Figma’s monthly release notes video for May 2026 is a livestream from the official Figma channel that walks through the month’s product updates with live demonstrations. Unlike typical feature announcement posts, the format here is practical: engineers and designers show the tools working in real projects rather than in isolated demos.
The video is aimed at working designers and frontend developers who want to understand what Figma’s AI features can actually do in a professional context, rather than in a controlled showcase. It is most useful for teams that have been watching Figma’s agent capabilities from a distance and want a concrete sense of what has changed before adopting new workflows.
Key takeaways:
- Co-designing with AI agents is the framing shift Figma is making explicit. The video positions recent releases (the MCP server, Figma Make, agent-generated canvas content) not as individual features but as parts of a single workflow in which human designers and AI agents work on the same file simultaneously. The agent handles scaffolding and repetitive generation; the designer controls direction and quality.
- Vibe-coded prototypes can now move into Figma without manual recreation. The new `/prototype-to-figma` skill captures each unique screen from a code-based prototype running in a local environment and places it on the Figma canvas as a design frame. This closes a gap that previously required rebuilding browser output by hand, one of the more time-consuming steps in early-stage handoff.
- Design-system-to-code synchronization is now bidirectional. The `/figma-generate-library` skill can read a coded design system and produce a matching Figma variable and component library, keeping design and engineering references in sync without manual maintenance. The demonstration shows this working with a real token structure rather than a synthetic example.
- Figma Make's new question cards introduce deliberate review into AI-assisted building. The video demonstrates how Make can now pause mid-generation to offer structured options, each with a short tradeoff description, before proceeding. This is a meaningful change for teams that have found continuous AI generation produces too much output to evaluate efficiently.
- FigJam is being positioned as an interface for coding agents, not just human collaborators. The May update adds the ability for agents to generate architecture diagrams and entity-relationship diagrams directly in FigJam using the updated `generate_diagram` tool. The demonstration uses a real codebase, with the agent reading the structure and producing a visual representation without manual input.
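The library-generation takeaway above implies a mapping from coded design tokens to Figma variables. A minimal sketch of what that mapping involves, assuming a nested token JSON of the kind the demo gestures at; the token names and structure here are illustrative placeholders, not Figma's actual import format:

```python
# Illustrative: map a nested coded-token tree to Figma-style variable
# names. Figma groups variables with slash-separated paths, so a nested
# structure flattens into names like "color/surface/default".
# The token values below are invented for the sketch.
tokens = {
    "color": {"primary": "#1E6FFF", "surface": {"default": "#FFFFFF"}},
    "spacing": {"sm": 8, "md": 16},
}

def flatten_tokens(node, prefix=""):
    """Walk a nested token tree, yielding (path, value) pairs using
    Figma's slash-separated variable-naming convention."""
    for key, value in node.items():
        path = f"{prefix}/{key}" if prefix else key
        if isinstance(value, dict):
            yield from flatten_tokens(value, path)
        else:
            yield path, value

variables = dict(flatten_tokens(tokens))
print(variables)
# {'color/primary': '#1E6FFF', 'color/surface/default': '#FFFFFF',
#  'spacing/sm': 8, 'spacing/md': 16}
```

The point of the skill, per the video, is that this kind of translation (and its inverse) no longer has to be maintained by hand.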
Worth watching if your team is evaluating whether Figma’s AI features are mature enough to change how you run design sprints or prototype review. Also useful as a starting point for developers who want to understand the MCP integration before setting it up.
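For developers sizing up that MCP integration, it may help to see the shape of a Model Context Protocol tool invocation. MCP clients call server-side tools with a JSON-RPC 2.0 `tools/call` message; the sketch below builds one for a diagram-generation tool. The tool name `generate_diagram` comes from the video, but the argument names (`source_path`, `diagram_type`) are hypothetical and will not match Figma's documented schema:

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message.

    MCP exchanges JSON-RPC 2.0 messages; a client invokes a server-side
    tool via the "tools/call" method, with params naming the tool and
    carrying its arguments object.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical arguments; the real Figma MCP server defines its own schema.
request = build_tool_call(
    "generate_diagram",
    {"source_path": "./src", "diagram_type": "entity-relationship"},
)
print(request)
```

The envelope (`jsonrpc`, `id`, `method`, `params.name`, `params.arguments`) is fixed by the MCP specification; everything inside `arguments` is tool-specific, which is what the server's tool listing describes.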