How to conduct a competitive analysis: a practical guide with AI prompts
What is competitive analysis?
Competitive Analysis is the systematic evaluation of competitor products and services to identify their strengths, weaknesses, design patterns, and strategic choices. The researcher examines competitor interfaces, user flows, content strategies, and positioning by walking through the products as a user would, then organizes findings into a structured comparison that reveals opportunities for differentiation. Competitive Analysis is one of the most accessible research methods because it requires no participant recruitment and can be conducted with publicly available products.
What question does it answer?
- How do competitors solve the same design problem, and which approaches work better than others?
- Where are the gaps in competitor experiences that represent opportunities for our product?
- What are the current design conventions and user expectations in this category?
- How does our product’s usability compare to competitors on the same core tasks?
- What features do competitors offer or lack, and how do users feel about those choices?
- Which competitor patterns should we adopt, adapt, or deliberately avoid?
When to use
- At the start of a new project or redesign, to understand the competition before making design decisions — this prevents designing in a vacuum.
- Before a major feature investment, to learn how competitors have already solved the problem and what works or fails in practice.
- When entering a new market or vertical, to map the established players, their positioning, and the experience standards users already expect.
- When conversion, retention, or satisfaction metrics are declining, to check whether competitors have introduced superior experiences that are pulling users away.
- As a recurring practice (quarterly or semi-annually) to track how competitors evolve and catch new trends before they become table stakes.
- When stakeholders disagree about design direction, to provide evidence-based comparisons instead of opinion-driven debates.
Not the right method when the question is about your own users’ specific behaviors, attitudes, or pain points — that requires primary methods like interviews or usability testing. Competitive Analysis tells you what competitors do, not why your users struggle. It is also insufficient as a standalone justification for copying a competitor’s feature: without understanding why the competitor built it and whether it actually works for users, imitation produces shallow solutions.
What you get (deliverables)
- Competitor matrix: a structured spreadsheet or table comparing competitors across consistent evaluation criteria (features, flows, content, performance).
- Annotated screenshots or screen recordings: visual documentation of key competitor flows with notes on what works, what fails, and what is notable.
- SWOT analysis: strengths, weaknesses, opportunities, and threats organized per competitor and aggregated across all competitors.
- Opportunity map: a prioritized list of gaps and weaknesses in competitor experiences that your product can exploit.
- Design pattern library: a collection of effective patterns observed across competitors that the team can reference during design.
- Executive summary: a 2-5 page brief with actionable recommendations tied to specific findings.
Participants and duration
- Participants: none — this is a no-respondents method. The analyst evaluates competitor products directly through expert walkthrough.
- Competitors to analyze: 3-5 (NN/g recommends 2-4 direct competitors plus 1-2 indirect or aspirational benchmarks).
- Setup time: 2-4 hours to define objectives, select competitors, build the evaluation framework, and set up test accounts.
- Execution time: 3-10 days depending on depth. A focused analysis of 3-4 competitors across 2-3 key flows takes about 3-5 days. A full audit covering end-to-end user journeys takes 1-2 weeks.
- Synthesis and reporting: 2-3 days to organize findings, create the matrix, write the summary, and prepare the presentation.
- Total timeline: 1-3 weeks from kickoff to deliverable.
How to conduct competitive analysis (step-by-step)
1. Define objectives and evaluation scope
Write down what you want to learn. Broad objectives (“understand the competition”) produce unfocused results; specific ones (“compare onboarding flows for time-to-first-value”) drive sharp insights. Decide whether you are evaluating the full experience or specific flows. NN/g recommends focusing on the core tasks users hire the product to do, typically 2-5 key flows.
2. Identify and categorize competitors
Build a list of 3-5 competitors divided into direct (same product category, same audience) and indirect (different product, same user problem). Include 1-2 aspirational benchmarks from adjacent industries that solve a similar interaction problem exceptionally well. Avoid analyzing more than 5 in depth — the insight-to-effort ratio drops sharply beyond that point.
3. Build the evaluation framework
Create a consistent rubric that every competitor will be evaluated against. Common dimensions include: first-time user experience, core task completion (step count, friction points, clarity), information architecture, content quality, error handling, mobile responsiveness, accessibility, and visual design consistency. Score each dimension on a standard scale (1-5 or 1-10). Using Nielsen’s 10 usability heuristics as evaluation criteria is a proven starting point for beginners.
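If the rubric lives in a spreadsheet, a small script can keep scoring consistent across analysts. Below is a minimal Python sketch, assuming the dimensions above and a 1-5 scale; the dimension names, the ScoreSheet class, and the example score are illustrative, not a prescribed tool.

```python
from dataclasses import dataclass, field

# Dimension names follow the framework above; the list is illustrative.
# Adapt it to the flows chosen in Step 1.
DIMENSIONS = [
    "first_time_ux", "core_task_completion", "information_architecture",
    "content_quality", "error_handling", "mobile_responsiveness",
    "accessibility", "visual_consistency",
]

@dataclass
class ScoreSheet:
    """One rubric per competitor, scored on a shared 1-5 scale."""
    competitor: str
    scores: dict = field(default_factory=dict)    # dimension -> score
    evidence: dict = field(default_factory=dict)  # dimension -> note or screenshot ref

    def rate(self, dimension: str, score: int, note: str = "") -> None:
        if dimension not in DIMENSIONS:
            raise ValueError(f"unknown dimension: {dimension}")
        if not 1 <= score <= 5:
            raise ValueError("scores use a 1-5 scale")
        self.scores[dimension] = score
        if note:
            self.evidence[dimension] = note

sheet = ScoreSheet("Asana")
sheet.rate("first_time_ux", 4, "clear signup; template gallery on first run")
print(sheet.scores)
```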
4. Walk through each competitor as a user
Create real accounts and complete the key tasks you defined in Step 1. Record your screen and take annotated screenshots at every decision point. Document not just what you see on screen but how the interaction feels: moments of confusion, delight, friction, and unnecessary steps. Complete the evaluation framework scoring as you go, while the experience is fresh.
5. Collect external evidence
Supplement your walkthrough with outside perspectives. Read user reviews on G2, Capterra, App Store, or Trustpilot to find recurring complaints and praise that your own walkthrough might miss. Check competitor help centers, changelogs, and blog posts for strategic signals. Review social media mentions and community discussions for unfiltered user sentiment.
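The review pass is also where an LLM helps most (see “What AI can do” below). Here is a minimal sketch using the OpenAI Python SDK; the CSV export with a "text" column and the model name are assumptions, and any LLM provider or review source works the same way.

```python
# Minimal sketch: summarize recurring themes in exported competitor reviews.
# Assumptions: reviews were exported to competitor_reviews.csv with a "text"
# column, and an OpenAI API key is set in the environment.
import csv

from openai import OpenAI  # pip install openai

client = OpenAI()

with open("competitor_reviews.csv", newline="", encoding="utf-8") as f:
    reviews = [row["text"] for row in csv.DictReader(f)][:200]  # cap the batch

prompt = (
    "Below are user reviews of a competitor product. Identify the top 5 "
    "complaint themes and the top 5 praise themes. For each theme, estimate "
    "the share of reviews mentioning it and quote one representative review.\n\n"
    + "\n---\n".join(reviews)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```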
6. Score and compare
Fill in the evaluation matrix with scores and evidence. The matrix should make patterns visible at a glance: where does each competitor excel, where do they fall short, and where is there convergence (all competitors solving the problem the same way — a signal that users expect this pattern).
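As a sketch of how rubric scores roll up into the matrix, here is a pandas version; the competitors, dimensions, and scores are illustrative.

```python
import pandas as pd

# Illustrative rubric scores on the shared 1-5 scale. Rows are evaluation
# dimensions, columns are competitors, so patterns read at a glance.
matrix = pd.DataFrame({
    "Asana":   {"onboarding": 4, "task_creation": 5, "reporting": 3},
    "ClickUp": {"onboarding": 3, "task_creation": 4, "reporting": 4},
    "Linear":  {"onboarding": 5, "task_creation": 5, "reporting": 2},
})

print(matrix)
# Low spread across a row means convergence: every competitor solves the
# problem the same way, signaling an expected pattern, not an opportunity.
print(matrix.std(axis=1).round(2))
# Each competitor's weakest dimension marks a differentiation candidate.
print(matrix.idxmin())
```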
7. Run a SWOT analysis
For each competitor and for the competitive field as a whole, organize findings into Strengths (what they do well that we should learn from), Weaknesses (where their experience fails and we can differentiate), Opportunities (unmet needs or gaps across the entire competitive set), and Threats (competitor moves that could undermine our position if we do not respond).
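One lightweight way to keep this organized is to tag findings as you collect them, then roll the tags up into SWOT buckets per competitor and for the field as a whole. A minimal sketch; the tagged findings below are illustrative.

```python
from collections import defaultdict

# Each finding is tagged with its source (a competitor, or "(field)" for
# the whole competitive set) and a SWOT bucket. Findings are illustrative.
findings = [
    ("Asana", "strengths", "template gallery shortens time to first project"),
    ("Asana", "weaknesses", "progress reporting buried three levels deep"),
    ("ClickUp", "weaknesses", "settings-first onboarding delays first value"),
    ("(field)", "opportunities", "no competitor surfaces reports on mobile"),
]

swot = defaultdict(lambda: defaultdict(list))
for source, bucket, note in findings:
    swot[source][bucket].append(note)

for source, buckets in swot.items():
    print(source)
    for bucket, notes in buckets.items():
        print(f"  {bucket}: {'; '.join(notes)}")
```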
8. Synthesize and prioritize opportunities
The most valuable output is not the matrix — it is the set of prioritized opportunities that emerge from it. Rank opportunities by two axes: impact (how much this would improve the user experience or business metric) and feasibility (how difficult it is to implement). Focus the recommendation on 3-5 high-impact actions, not a laundry list of 30 observations.
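A minimal sketch of the ranking step, assuming illustrative 1-5 estimates on both axes; the first opportunity echoes the example later in this guide.

```python
# Rank opportunities on the two axes from this step. Impact and
# feasibility values are illustrative 1-5 estimates, not measured data.
opportunities = [
    {"name": "Defer workspace setup until after the first project",
     "impact": 5, "feasibility": 4},
    {"name": "Add progress-report templates", "impact": 3, "feasibility": 5},
    {"name": "Rebuild mobile navigation", "impact": 4, "feasibility": 2},
]

# The product of the two axes gives a simple priority score; weight
# impact more heavily if strategy calls for it.
for opp in sorted(opportunities,
                  key=lambda o: o["impact"] * o["feasibility"], reverse=True):
    print(f'{opp["impact"] * opp["feasibility"]:>2}  {opp["name"]}')
```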
9. Present findings and align the team
Prepare a visual, scannable report — not a 50-slide deck. Lead with the top 3-5 actionable recommendations, then provide the supporting evidence (matrix, screenshots, scores) for those who want to dig deeper. The goal of the presentation is to change a design decision or roadmap priority, not to showcase how thorough the analysis was.
How AI changes this method
AI compatibility: full — Competitive Analysis is one of the methods most accelerated by AI because the inputs (public websites, app stores, reviews, documentation) are all digital and text-based. AI can gather, structure, and compare this data far faster than a human analyst. The human role shifts from data collection to strategic interpretation and validation.
What AI can do
- Automated competitor monitoring: Tools like Crayon, Klue, and custom LLM workflows can track competitor website changes, pricing updates, feature launches, and content additions continuously, replacing the manual periodic check (see the sketch after this list).
- Review and sentiment analysis at scale: An LLM can process hundreds of G2, Capterra, or App Store reviews for a competitor and extract the top complaints, praise themes, and feature requests in minutes, surfacing patterns a manual review of 20 reviews would miss.
- Feature and content comparison: AI can crawl competitor websites and structure the information into comparison tables automatically, including pricing tiers, feature sets, and content topics covered.
- SWOT draft generation: Given the evaluation data, an LLM can produce a first-draft SWOT analysis organized by competitor, which the analyst then validates against their walkthrough experience.
- Screenshot annotation and flow documentation: Tools like Figr and Page Flows use AI to capture and annotate competitor flows, reducing the manual screenshot-and-comment process from hours to minutes.
- Heuristic scoring assistance: An LLM can evaluate competitor interfaces against Nielsen’s 10 heuristics based on screenshots or descriptions, providing a baseline score the expert then adjusts.
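For teams without a dedicated monitoring tool, the change detection described in the first bullet above can start as a small script. A minimal sketch, assuming a placeholder URL and a local snapshot file; real monitoring would add scheduling, multiple pages, and notifications.

```python
# Minimal sketch: fetch a competitor page, diff it against the last saved
# snapshot, and flag changes for review. URL and cache path are placeholders.
import difflib
import pathlib

import requests  # pip install requests

URL = "https://competitor.example.com/pricing"
CACHE = pathlib.Path("pricing_snapshot.html")

current = requests.get(URL, timeout=30).text
previous = CACHE.read_text(encoding="utf-8") if CACHE.exists() else ""

diff = list(difflib.unified_diff(
    previous.splitlines(), current.splitlines(), lineterm=""))
if diff:
    print(f"{URL} changed ({len(diff)} diff lines); review for pricing or feature updates")
    CACHE.write_text(current, encoding="utf-8")
else:
    print("no change since last check")
```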
What requires a human researcher
- Experiencing the product as a user: AI can analyze screenshots, but it cannot feel the friction of a confusing flow, the delight of a well-timed animation, or the trust created by consistent visual language. The experiential walkthrough is irreplaceable human work.
- Strategic interpretation: AI can identify that a competitor lacks a feature, but it cannot judge whether that gap represents an opportunity worth pursuing or a deliberate product choice. Strategic context requires human business judgment.
- Validating findings against real user needs: A competitor weakness is only an opportunity if your users actually care about it. Connecting competitive findings to primary user research data requires the analyst’s judgment.
- Stakeholder communication: Framing recommendations in a way that aligns with business goals, addresses stakeholder concerns, and drives roadmap decisions is human work that requires organizational knowledge and persuasion.
AI-enhanced workflow
Before AI, a competitive analysis for a UX or product team took 2-3 weeks: setting up accounts, walking through flows, manually taking and annotating screenshots, reading dozens of reviews, filling spreadsheets, and writing the report. The analyst spent about 60% of the time on data collection and organization, and 40% on actual analysis and insight generation.
With AI integrated, the collection phase compresses dramatically. The analyst still performs the core walkthroughs (1-2 days for 3-5 competitors) because the experiential judgment is irreplaceable, but everything around it accelerates. Review analysis that took a day becomes a 20-minute LLM task. Feature comparison that required manual spreadsheet work gets automated by crawlers. SWOT drafting that took half a day becomes a 30-minute edit-and-validate cycle. The total timeline shrinks from 2-3 weeks to 5-7 days, with the analyst’s time concentrated on walkthroughs, strategic interpretation, and stakeholder alignment.
The risk is over-relying on AI-generated comparisons without doing the walkthroughs. An LLM can tell you that a competitor “has an onboarding wizard,” but it cannot tell you that the wizard asks four irrelevant questions before delivering value. The walkthrough is where the actionable insight lives, and skipping it turns the analysis into a feature checklist — the exact failure mode every expert warns against.
Beginner mistakes
Comparing features instead of experiences
The most common mistake is building a feature checklist — “they have dark mode, we don’t” — without evaluating how each feature fits into the user experience. A feature list tells the team what competitors built; an experience analysis tells them how it feels and whether it actually works. The fix is to walk through competitor products as a real user, completing real tasks, and documenting friction and delight rather than presence or absence of features.
Analyzing too many competitors
Trying to cover 8-10 competitors in depth leads to shallow analysis across the board. Each additional competitor adds complexity to the matrix and dilutes attention from the patterns that matter. Start with 3-5 carefully chosen competitors and go deep. If time and budget remain, add one more, but resist the urge to trade depth for breadth.
Treating it as a one-time exercise
The competitive field changes continuously. A one-time analysis produces a snapshot that becomes outdated within months as competitors ship new features and redesign flows. Build competitive analysis into a recurring process (quarterly light reviews, annual deep dives) and maintain a living document that the team can reference and update.
Copying without understanding
Seeing a competitor do something well and immediately replicating it is the fastest path to a mediocre product. A competitor’s design choice may work because of their specific user base, brand positioning, or technical architecture — factors that do not apply to your product. Always validate competitive findings against your own user research before acting on them.
Skipping the external evidence
Relying only on your own walkthrough gives you one person’s subjective experience. User reviews, community discussions, and support forums reveal problems and successes that a single expert walkthrough cannot surface. Triangulating your walkthrough with user reviews and published data produces far richer findings than any single source alone.
Example from practice
A mid-size B2B SaaS company building project management software noticed that trial-to-paid conversion had dropped from 12% to 8% over two quarters, while competitors in the same category were growing. The product team suspected the onboarding experience was the problem but had no evidence.
The UX researcher ran a competitive analysis of four direct competitors (Asana, Monday.com, ClickUp, Notion) and one aspirational benchmark (Linear). She created accounts on each platform, completed the same five tasks (create a project, invite a team member, set a deadline, assign a task, view a progress report), and scored each on a 10-dimension rubric covering onboarding clarity, time to first value, help availability, and visual design quality. She also analyzed 200 reviews per competitor on G2, using an LLM to extract the top complaints and praises.
The analysis revealed that all four competitors reached “first meaningful task completed” within 3-5 minutes of signup, while the company’s product took 11 minutes because it required configuring workspace settings before any project creation. The review analysis confirmed this: the company’s G2 reviews mentioned “complicated setup” in 34% of negative reviews, versus 8-12% for competitors. The researcher recommended a redesigned onboarding that deferred workspace configuration until after the user completed their first project. After implementation, trial-to-paid conversion recovered to 11% within one quarter.
Tools
Flow capture and comparison:
- Figr — AI-powered side-by-side UX comparison from screenshots or HTML
- Page Flows — library of 5,000+ recorded user flows from real products
- Loom or OBS Studio — screen recording for documenting walkthroughs
Review and sentiment analysis:
- G2, Capterra, Trustpilot — primary sources for B2B and B2C product reviews
- App Store and Google Play — mobile app review sources
Market and traffic intelligence:
- SimilarWeb — traffic estimates, engagement metrics, and audience overlap
- Ahrefs or SEMrush — SEO analysis, keyword gaps, and content strategy comparison
Evaluation and documentation:
- Notion or Airtable — structured comparison matrices and team collaboration
- Miro or FigJam — visual mapping of flows, patterns, and opportunities
AI-assisted analysis:
- ChatGPT or Claude — review analysis, SWOT drafting, framework generation
- Crayon or Klue — automated competitive intelligence monitoring
AI prompts for this method
4 ready-to-use AI prompts with placeholders: copy, paste, and fill in your context. See all prompts for competitive analysis →