Aliena Cai: how I use (and don't use) AI to design better in 2026
What the video covers
Aliena Cai is a product design lead with 130,000 YouTube subscribers, a former senior designer at eBay, and the creator of the Fast Track UX course. In this video published March 11, 2026, she maps the specific stages of her design workflow where she uses AI tools and the stages where she deliberately does not. The video includes a live demo with Figma Make and references her free AIUX Design Guide.
The framing is practical rather than promotional: Cai is explicit about the current limits of generative tools and treats the question of when to use AI as a design decision in itself, not a default.
Who it’s for
Product and UX designers who have experimented with AI tools and found the results inconsistent, or who are unsure how to integrate AI without undermining the quality of their final output. Also useful for designers who have not yet experimented with Figma Make and want a grounded introduction from a working practitioner rather than a product demo.
Key takeaways
- Figma Make works best downstream of design thinking. Cai demonstrates that Figma Make produces better results when the designer has already done the layout thinking and component structure upstream. The tool translates design intent into working prototypes, but the intent still has to come from the designer. Feeding Make a vague description produces generic output; feeding it a specific component structure produces something closer to the actual product.
- Early ideation is where AI saves the most time without quality loss. Cai identifies rough concept generation — quick layout directions, mood explorations, structural variations — as the phase where AI tools speed up work most meaningfully. Generating five rough directions in a short session is faster than sketching them by hand, and the quality difference at that stage is small enough not to matter.
- AI-generated UX copy tends toward patterns users have stopped noticing. Cai avoids using AI for final interface copy. Her reasoning is that AI-generated text defaults to familiar phrasing — error messages, empty states, CTAs — that users process without reading. Good UX copy often works by being specific and unexpected, which requires knowing the exact context of use in a way that a general-purpose model does not.
- Final interaction design states still require human judgment. Cai describes a category of design decisions — the exact behavior of a hover state, the precise timing of a loading animation, the specific wording of an inline error — where AI output is directionally useful but never final. These decisions require close attention to the real product context rather than pattern-matching from training data.
- The AIUX Design Guide provides a structured starting point. The guide Cai references is organized around practical decisions rather than general principles, and she treats it as a workflow scaffold for designers new to AI tools rather than a set of best practices to memorize.
Worth watching if
You are a UX or product designer who has tried AI tools and been disappointed by the results, and you want to understand whether the problem is the tool or how you are using it. Cai’s approach to Figma Make illustrates that the quality gap between AI-assisted and traditionally produced design work is often a prompting and sequencing problem rather than a capability problem.