dscout: A six-step framework for embedding AI into your UX practice
Rose Beverly, Senior Staff UX AI Researcher at PayPal, wrote this piece for dscout’s People Nerds blog in December 2025. It addresses a specific problem: practitioners either avoid AI entirely out of caution or adopt it broadly without asking which tasks should actually be automated. The MASTER framework is a structured answer to that question.
The six steps, which give the framework its name, are Map, Audit, Scan, Trial, Embed, and Repeat.
The first two are diagnostic. Map asks researchers to document their full workflow — every phase, every repeatable task — before touching any AI tool. Audit turns that map into an inventory with enough granularity to evaluate individual decisions and micro-tasks.
The third step, Scan, introduces a governance matrix. Tasks are plotted on two axes: complexity and automation risk. Low-complexity, low-risk tasks — reformatting transcripts, drafting routine emails, generating consent form templates — are safe to automate. High-complexity, high-risk tasks — managing stakeholder relationships, running live presentations — should stay entirely in human hands. The two mixed quadrants, where one axis is high and the other low, call for a hybrid approach: AI drafts thematic analysis, humans refine it; AI generates interview guides, humans review them before use.
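The quadrant logic of the Scan matrix can be sketched as a small lookup. This is a hypothetical illustration of the decision rule, not code from the article; the task names are the article's examples, while the function name and labels are invented for clarity.

```python
# Sketch of a governance-matrix lookup: complexity x automation risk -> handling mode.
# The rule mirrors the article's quadrants; everything else here is illustrative.

def recommend(complexity: str, risk: str) -> str:
    """Return a handling mode for a task rated 'low' or 'high' on each axis."""
    if complexity == "low" and risk == "low":
        return "automate"    # e.g. reformatting transcripts, routine emails
    if complexity == "high" and risk == "high":
        return "human-only"  # e.g. stakeholder relationships, live presentations
    return "hybrid"          # mixed quadrants: AI drafts, a human reviews

# Example inventory from an Audit step (ratings are illustrative)
tasks = {
    "reformat transcripts":    ("low", "low"),
    "draft thematic analysis": ("high", "low"),
    "generate interview guide": ("low", "high"),
    "run live presentation":   ("high", "high"),
}

for name, (complexity, risk) in tasks.items():
    print(f"{name}: {recommend(complexity, risk)}")
```

The point of encoding it this way is that the matrix is a rule, not a vibe: any task that gets rated on both axes gets a definite handling mode, which is exactly the "decision framework rather than general principle" quality the article emphasizes.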
Trial and Embed move from analysis to action. Beverly’s advice is to pick one low-risk experiment on a real project, evaluate the output critically, and only then build it into a scalable process. The final step, Repeat, is a standing instruction to revisit and update the system as tools evolve and workflows change.
The article also frames a role Beverly calls the “orchestrator” — a practitioner who directs AI agents, combines tools and prompts with strategic judgment, and brings generalist thinking back to a discipline that has fragmented into narrow specialisms. The framing is constructive rather than threatening: AI reintroduces breadth into professional practice rather than replacing depth.
Who this is useful for: UX researchers and designers with some existing workflow experience who want a structured method for evaluating which parts of their practice can be augmented by AI — without outsourcing judgment to the tool. The governance matrix is the most immediately applicable part; it gives practitioners a decision framework rather than a general principle to interpret case by case. Mid-level and senior practitioners will find it most actionable; beginners may not yet have the workflow experience to use the matrix meaningfully.
Beverly does not review specific tools in depth, and the article does not address study design validity or participant ethics — those require separate consideration. What the framework offers is procedural clarity for a decision that many practitioners are currently making informally.