How to conduct desk research: a practical guide for product and UX teams

What is desk research?

Desk research (also called secondary research) is the practice of gathering insights from existing data sources — academic studies, industry reports, government datasets, competitor publications, internal company documents, news archives, and online communities — instead of running new field studies. The method works by defining a research question, identifying credible sources, extracting information across multiple perspectives, and synthesizing the findings into a structured summary that orients the team to a new topic, market, or audience. Desk research is the standard first step before any primary research because it tells the team what is already known, surfaces existing answers, and points the primary work toward the gaps that still matter.

What question does it answer?

  • What is already known about this topic, and what gaps remain that primary research should fill?
  • Who are the main competitors in this space, and how do they position their products?
  • What are the documented user behaviors, pain points, and trends in this category?
  • What regulatory, demographic, or market context shapes the decisions users make in this domain?
  • What benchmarks (conversion rates, retention, satisfaction scores) are typical in this industry?
  • What language do users and competitors use to talk about this problem, and which terms should the team adopt?

When to use desk research

  • At the start of any new project where the team is unfamiliar with the domain, market, or user.
  • Before recruiting interview participants, to refine the research questions and avoid asking things already documented.
  • When the budget or timeline does not allow primary research and the team needs the best evidence available right now.
  • When entering a new market or vertical and the team needs to map competitors, regulatory context, and user expectations before investing.
  • When stakeholders disagree about the size of an opportunity or the maturity of a competitor — published evidence settles the argument faster than opinion.
  • When refining a primary research plan after a kickoff, to focus interview questions and surveys on the unknowns rather than the known.

Desk research is not the right method when the question is specific to the team’s own users in their own product context — secondary data covers averages and broad patterns, not your particular users. It is also a poor substitute when the team needs fresh, current behavioral data, because published reports lag behind reality by months or years. Finally, desk research can become a never-ending rabbit hole if the team does not set scope and a deadline upfront.

What you get (deliverables)

  • Annotated source list: 20–40 credible sources with author, publication, date, key findings, and a relevance note.
  • Topic map: a visual or written breakdown of subtopics that emerged from the literature.
  • Synthesis brief: 3–10 pages summarizing the key findings, organized by research question rather than by source.
  • Competitive map (when applicable): a comparison of competitors on the dimensions that matter for the project.
  • SWOT analysis (when applicable): strengths, weaknesses, opportunities, and threats relevant to the team’s own product.
  • List of open questions and gaps: what desk research could not answer, which becomes the input for the primary research plan.

Participants and duration

  • Participants: none — this is a no-respondents method.
  • Sources to gather: 20–40 credible sources for a typical UX or product project; 50+ for a deep market entry study.
  • Setup time: 1–2 hours to define the research question and topic map.
  • Execution time: 2–10 days depending on scope.
  • Synthesis and writing: 1–3 days for the brief and deliverables.

How to conduct desk research (step-by-step)

1. Define the research question and scope

Write down the specific question the desk research should answer — “What do parents of preschoolers expect from a meal-planning app?” rather than “Tell me about meal-planning apps.” Set a deadline upfront (typically 3–10 days) and a target source count (20–40 for most projects). Without scope and deadline, desk research expands until the team gives up.

2. Talk to stakeholders before searching

Spend 30–60 minutes with each project stakeholder. Ask what they already know, what reports they have already seen, what they suspect is true, and what would change their mind. Stakeholder conversations reveal the internal evidence the team forgot existed and surface assumptions to check.

3. Map subtopics and source types

Build a quick mind map starting from the core question. What subtopics affect the answer? What types of sources cover each subtopic — academic studies, industry reports, government data, competitor blogs, online communities, internal company documents? Decide which source types you will use before searching.

4. Search systematically, not opportunistically

Go through each source type in order. For academic literature, use Google Scholar or Semantic Scholar. For industry reports, search Nielsen Norman Group, Baymard, Gartner, Forrester, and Statista. For competitor coverage, search the competitor’s blog, press releases, and case studies directly. For user voices, search Reddit, Quora, and product review sites. Save every relevant source with the author, date, and URL as you go.
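For the academic-literature pass, the search can even be scripted. The sketch below builds a request URL for the public Semantic Scholar Graph API; the query string is a placeholder for your own research question, and you would fetch the URL with any HTTP client.

```python
from urllib.parse import urlencode

# Endpoint of the public Semantic Scholar Graph API paper search.
BASE = "https://api.semanticscholar.org/graph/v1/paper/search"

def build_search_url(query: str, limit: int = 20) -> str:
    """Return a search URL requesting title, year, authors, and URL per paper."""
    params = {
        "query": query,
        "limit": limit,
        "fields": "title,year,authors,url",
    }
    return f"{BASE}?{urlencode(params)}"

# Placeholder query — substitute your own research question's keywords.
url = build_search_url("meal planning app parents preschoolers")
# Fetch this URL, then save each hit with author, date, and URL as you go.
```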

5. Evaluate source credibility

Check the author’s qualifications, the publication date, the funding source (a vendor’s white paper has a strong incentive to support its product), and whether the source cites primary data or just other secondary sources. Use a quick rubric (CRAAP test, 5W questions, or your own) to drop sources that fail. Drop anything older than 2–3 years for fast-moving topics.
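The rubric can be made explicit so every analyst applies the same drop rules. This is a minimal sketch loosely modeled on the CRAAP-style checks above; the field names and the 2–3 year threshold are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Source:
    title: str
    year: int
    author_credentialed: bool  # named author with relevant qualifications
    cites_primary_data: bool   # reports its own data, not just other summaries
    vendor_funded: bool        # white paper with an incentive to sell

def keep(source, max_age_years=3, today=None):
    """Apply the drop rules; True means the source survives the rubric."""
    current_year = (today or date.today()).year
    if current_year - source.year > max_age_years:
        return False  # too old for a fast-moving topic
    if not source.author_credentialed:
        return False  # no accountable, qualified author
    if source.vendor_funded and not source.cites_primary_data:
        return False  # marketing claims without underlying evidence
    return True
```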

6. Extract findings, not summaries

For each kept source, extract the specific findings that relate to your research question. Quote numbers, dates, and direct claims rather than paraphrasing. Note who funded the study, the sample size, and the geography. The goal is a structured database of findings tied to sources, not a stack of article summaries.
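One way to keep the extraction structured is a fixed record per finding. The schema below is a sketch — the field names are illustrative assumptions, and the sample record is a hypothetical example, not real data.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    source: str       # citation key or URL of the kept source
    subtopic: str     # branch of the topic map this finding belongs to
    claim: str        # the specific claim, quoted or near-verbatim
    figure: str = ""  # exact number or date quoted from the source
    sample: str = ""  # sample size and geography, if reported
    funder: str = ""  # who paid for the study

findings: list[Finding] = []

# Hypothetical example record — every value here is illustrative.
findings.append(Finding(
    source="example-report-2024",
    subtopic="adoption",
    claim="Weekly meal-planning app use among parents rose year over year",
    figure="31% -> 38%",
    sample="US, n=2,000",
    funder="(unknown)",
))
```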

7. Synthesize across sources

Group findings by subtopic and look for patterns. Where do multiple sources agree? Where do they contradict? Where is there a gap that no source addresses? The synthesis is the value of desk research — anyone can list 30 articles, but only the analyst can say “five sources agree on X, two contradict, and the question of Y is completely undocumented.”
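The agree/contradict tally can be sketched mechanically: group findings by subtopic and count distinct supporting sources. Here each finding is a (source, subtopic, stance) tuple, where stance is relative to a working hypothesis for that subtopic — all names below are illustrative.

```python
from collections import defaultdict

def synthesize(findings):
    """Return, per subtopic, how many distinct sources agree vs. contradict."""
    tally = defaultdict(lambda: {"agree": set(), "contradict": set()})
    for source, subtopic, stance in findings:
        tally[subtopic][stance].add(source)
    # Collapse the source sets into counts for the brief.
    return {
        topic: {stance: len(sources) for stance, sources in counts.items()}
        for topic, counts in tally.items()
    }

example = [
    ("src1", "pricing", "agree"),
    ("src2", "pricing", "agree"),
    ("src3", "pricing", "contradict"),
]
# synthesize(example) -> {"pricing": {"agree": 2, "contradict": 1}}
```

Subtopics where every source lands in one column are your confirmed findings; subtopics with entries in both columns are the contradictions worth flagging, and subtopics missing entirely are the gaps for primary research.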

8. Run a SWOT or competitive comparison if it fits

For projects entering a new market or comparing competitors, structure part of the synthesis as a SWOT analysis or a feature comparison spreadsheet. These structured outputs make the findings easier to act on.
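A feature-comparison spreadsheet like this can start as a simple CSV export. The sketch below renders a competitor-by-feature matrix ready to paste into a spreadsheet; the competitor names and features are placeholders.

```python
import csv
import io

# Placeholder data — replace with the competitors and dimensions
# that matter for your project.
competitors = {
    "Competitor A": {"integrations": "yes", "mobile app": "yes"},
    "Competitor B": {"integrations": "partial", "mobile app": "no"},
}
features = ["integrations", "mobile app"]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["feature"] + list(competitors))  # header: one column per competitor
for feat in features:
    writer.writerow([feat] + [competitors[name][feat] for name in competitors])

print(buf.getvalue())
```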

9. Write the brief and the open questions

Produce a 3–10 page brief organized by the original research question, with the synthesis up front and the source list at the end. End with the explicit list of questions desk research could not answer — this is the input for the primary research plan.

How AI changes this method

AI compatibility: full — Desk research is the method most fundamentally transformed by AI. Tools like Perplexity Deep Research, Elicit, Claude with web search, ChatGPT Deep Research, and Semantic Scholar can search, retrieve, summarize, and cite sources at a speed no human can match. The human role shifts from “find and read everything” to “frame the question, judge credibility, and synthesize across sources.”

What AI can do

  • Search and retrieve sources at scale: Perplexity Deep Research, Elicit, ChatGPT Deep Research, and Claude with web search can retrieve 20–50 cited sources for a research question in minutes.
  • Summarize and extract findings: An LLM can read a 30-page report and produce a structured summary with key claims, sample sizes, methods, and limitations in 5 minutes per source.
  • Cluster sources by theme: Tools like Atlas, Elicit, and custom Claude prompts can take a list of 30 sources and group them by topic, finding, or methodological approach.
  • Cross-reference and flag contradictions: AI can compare claims across sources and surface where they agree, where they disagree, and where one source’s finding contradicts another’s.
  • Translate sources from other languages: For multi-market desk research, AI handles first-draft translation of foreign-language sources well enough to extract findings.
  • Draft the synthesis brief: Given the structured extraction, an LLM can produce a first-draft brief organized by research question, which the analyst then edits.

What requires a human researcher

  • Framing the research question: AI will happily answer a vague question with vague results. Choosing the precise question, the scope, and the deadline is human work.
  • Judging source credibility: LLMs cite sources but do not reliably distinguish a peer-reviewed study from a vendor white paper or a Reddit post. The human still has to verify each citation.
  • Catching hallucinations: AI tools sometimes invent citations or misquote real sources. Every claim that will appear in the final brief must be checked against the original. This is the single biggest reason “AI desk research” goes wrong.
  • Interpreting cultural and market context: A finding from a US study may not generalize to Europe or Asia. Knowing when to trust a generalization and when to set it aside requires market judgment AI cannot provide.
  • Stakeholder management: Talking to stakeholders before and after the desk research, anticipating objections, and tying findings to business decisions is human work.

AI-enhanced workflow

Before AI, a desk research project for a UX or product team took 1–2 weeks: manual Google Scholar searches, manual reading of 20–40 articles, manual extraction of findings into a spreadsheet, and writing the synthesis brief. The analyst spent 70% of the time on assembly and 30% on insight.

With AI in the loop, the workflow inverts. The analyst spends an hour framing the question, then runs Perplexity Deep Research or ChatGPT Deep Research to retrieve 20–30 cited sources in 10–15 minutes. They use Claude or ChatGPT to extract structured findings from each source in under an hour total, then ask the model to cluster and cross-reference. The analyst then spends most of the remaining time on the part AI cannot do: verifying citations, judging credibility, interpreting findings against the business context, and writing the brief in a voice a human stakeholder will trust. A 1–2 week project compresses to 2–3 days.

The catch is hallucinations. Every AI desk research workflow needs a verification pass where the analyst opens each cited source and confirms the claim. Skipping this step leads to confidently wrong briefs that erode stakeholder trust irreversibly.

Tools

AI deep research: Perplexity Deep Research, ChatGPT Deep Research, Claude with web search, Elicit, Consensus, Semantic Scholar AI, Atlas Workspace, Kompas AI, AnswerThis.

Academic and scholarly databases: Google Scholar, Semantic Scholar, JSTOR, ACM Digital Library, PubMed, ResearchGate.

Industry research and benchmarks: Nielsen Norman Group, Baymard Institute, Forrester, Gartner, Statista, Pew Research, dscout’s People Nerds.

Government and public data: data.gov, Eurostat, World Bank Open Data, OECD Data, national statistical agencies.

Competitor and market intelligence: SimilarWeb, SEMrush, Crunchbase, BuiltWith, G2 and Capterra reviews, Owler.

Source management: Zotero, Mendeley, Notion, Obsidian, Roam Research.

Works well with

  • In-depth Interview (Di): Desk research orients the team and refines the interview questions; interviews then fill the gaps desk research could not cover. This is the canonical pairing.
  • Benchmarking (Bm): Desk research often produces a competitive map as one output, and Benchmarking takes that map further by structured comparison against your own product.
  • Survey (Sv): Desk research reveals the language users use about a topic, which the survey then uses to write better questions and answer options.
  • Persona Building (Ps): Desk research provides demographic, behavioral, and contextual data that informs persona drafts before primary research validates them.
  • Literature Review (Lr): Literature Review is a more rigorous, systematic flavor of desk research focused on academic sources; the two methods overlap heavily.

Example from practice

A B2B SaaS company was considering entering the European market with its employee onboarding tool, which was already established in North America. The exec team was unsure whether the product needed major adaptation or could be sold as-is. They had two weeks before the next strategy meeting and no budget for primary research with European customers.

The lead PM ran a desk research sprint. She defined the question as “What HR onboarding regulations and cultural expectations differ between North America and key European markets (UK, Germany, France, Netherlands)?” and set a 7-day deadline. She used Perplexity Deep Research to surface 35 sources covering EU data protection rules (GDPR), country-specific labor law on probation periods, HR technology adoption rates from Eurostat and Statista, and competitor positioning from European HR tech blogs. She extracted findings into a structured spreadsheet and used Claude to cluster the findings by country.

The synthesis revealed three things the exec team did not expect: GDPR required significant changes to how employee data was stored and which regions hosted servers, German co-determination law required works council involvement in any HR technology rollout, and the dominant European competitors already had strong feature parity on the basics — the differentiation would have to come from integrations with European HRIS platforms the US product did not yet support. The team postponed the European launch by six months to address GDPR compliance and integration work first, avoiding what would have been a costly false start. The desk research cost zero in primary recruitment and saved an estimated $400K in wasted go-to-market spend.