Article, Figma blog, March 2026

AI image tooling and interactive prototyping — Figma workflow case study

What the article is about

This Figma blog post is a hands-on workflow guide demonstrating how to combine Figma’s AI image generation tools with its interactive prototyping capabilities. Instead of treating image generation and prototyping as separate features, the guide shows a workflow where AI-generated images are created, refined, and embedded into interactive prototypes that can be tested with real users.

Context

Figma has been rolling out AI image features (including generation and editing powered by OpenAI’s models) alongside its existing prototyping tools. This guide bridges the two, showing designers how AI-generated visual content can be used within interactive prototypes rather than being limited to static mockups.

Key takeaway

The workflow centers on using AI image generation to solve a common prototyping problem: realistic content. Prototypes filled with stock photos or placeholder images skew user-testing results because participants react to the content as much as to the interface. By generating contextually appropriate images with AI, designers can build prototypes whose visual content matches the intended use case.

The guide walks through prompting techniques for generating on-brand imagery, editing generated images to fit specific layout constraints, and integrating the results into interactive prototypes with working transitions and states. The practical result is a more realistic testing artifact that takes less time to assemble than sourcing and editing stock photography would.

Who should read this

Product designers who build prototypes for user testing and want to improve the realism of their test artifacts without spending hours on content sourcing and image editing.