Literature review checklist: questions, search, screening, synthesis, gap analysis

This literature review checklist covers the full project, from defining focused research questions to writing the brief with prioritized recommendations. Use it as a working document: copy it into your project notes, tick off each item as you go, and add comments where your project deviates from the standard flow. The checklist assumes a UX scoping or rapid review with one researcher and a one- to two-week time box; for a full systematic review the same items still apply, but plan for two researchers, dual screening, and a much longer extraction phase.

Before

  • Write 2–4 focused research questions, each tied to a real design or research decision
  • Pick the type of review (narrative / scoping / rapid / systematic) based on time and stakes
  • Set the scope in writing: time window, source types, languages, minimum and maximum source count
  • Define inclusion and exclusion criteria for screening (what counts as a relevant source)
  • Audit available access: paid databases, institutional logins, free sources, internal research repository
  • Pick the documentation tool (Zotero + spreadsheet, Notion database, Airtable) and set up the extraction template
  • Time-box the review explicitly (5 hours for a micro-review, 1–2 weeks for a UX scoping review, longer for systematic)
  • Brief stakeholders on what the review will and will not answer, so expectations match the scope
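The extraction template from the setup steps above can be sketched as a flat, one-row-per-source schema. The column names here are assumptions, not a standard; adapt them to your questions and your tool (a CSV works the same way whether it ends up in a spreadsheet, Notion, or Airtable):

```python
import csv

# Hypothetical column set: "relevance" and "credibility" hold the
# High/Medium/Low scores used at screening; "themes" holds the tags
# applied during extraction.
EXTRACTION_COLUMNS = [
    "citation", "year", "source_type", "methods", "findings",
    "limitations", "implication", "relevance", "credibility", "themes",
]

def init_extraction_log(path):
    """Create an empty extraction log with just the header row."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerow(EXTRACTION_COLUMNS)

init_extraction_log("extraction_log.csv")
```

Keeping the template flat from day one is what makes the later by-theme synthesis cheap: every finding already carries its citation, its quality scores, and its theme tags in one row.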

Execution

  • Search internal sources first: research repository, past usability tests, support tickets, design docs
  • Run the external search across academic databases (Google Scholar, ACM, IEEE) and UX knowledge bases (NN/g, Baymard)
  • Use AI literature tools (Elicit, Consensus, SciSpace) for semantic search and first-pass extraction
  • Use citation mapping (Litmaps, Connected Papers) to find adjacent sources the keyword search missed
  • Screen by abstract first; do not read every paper in full at the screening stage
  • Score each source High / Medium / Low for relevance and credibility
  • Extract structured findings into the spreadsheet for High and Medium sources: citation, methods, findings, limitations, implication
  • Read the methods sections of the top 5–10 highest-impact sources in full
  • Verify any AI-surfaced citation against the real database before trusting it
  • Stop adding sources at saturation — when new ones stop yielding new themes
  • Tag each extracted finding with the theme(s) it relates to
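The saturation rule above can be made operational with a simple count: track the themes each new source contributes and stop once several sources in a row add nothing you have not already seen. A minimal sketch, where the window of 3 consecutive sources is an assumption to tune for your review:

```python
def reached_saturation(theme_sets, window=3):
    """Return True when the last `window` sources contributed no theme
    that earlier sources had not already surfaced.

    theme_sets: list of sets of theme tags, one set per source,
    in the order the sources were extracted."""
    if len(theme_sets) <= window:
        return False
    seen = set().union(*theme_sets[:-window])
    recent = set().union(*theme_sets[-window:])
    return not (recent - seen)

# Illustrative history: the last three sources only repeat known themes.
history = [
    {"trust", "navigation"}, {"navigation", "onboarding"},
    {"trust"}, {"navigation"}, {"trust", "onboarding"},
]
print(reached_saturation(history))  # True
```

Run the check after each extraction rather than at fixed intervals; it gives you a defensible stopping point to report in the brief's method section.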

After

  • Reorganize the extracted findings by theme, not by source
  • Write a synthesis paragraph for each theme with convergent findings, contradictions, and strength of evidence
  • Build the explicit gap analysis: which research questions the literature does not answer
  • Draft concrete recommendations tied to the team’s specific design or research decision
  • Flag any recommendation that contradicts stakeholder assumptions and frame it carefully
  • Write the 5–10 page brief: research questions, method, themes, gaps, recommendations, source log
  • Present in person to design, product, and research leads; do not email the spreadsheet
  • Archive the source log and the search strategy so future reviews can build on them
  • Schedule a follow-up review on the same topic in 12–18 months when new evidence accumulates
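The first item in the list above, reorganizing by theme, amounts to inverting the source-major extraction log into a theme-major index. A sketch assuming each extracted row carries a list of theme tags (field names and the sample rows are illustrative placeholders, not real sources):

```python
from collections import defaultdict

def findings_by_theme(rows):
    """Invert source-major rows into {theme: [(citation, finding), ...]}.
    A finding tagged with several themes appears under each of them."""
    themes = defaultdict(list)
    for row in rows:
        for theme in row["themes"]:
            themes[theme].append((row["citation"], row["finding"]))
    return dict(themes)

# Placeholder rows standing in for the real extraction log.
rows = [
    {"citation": "Source A", "finding": "Users skip carousels",
     "themes": ["navigation"]},
    {"citation": "Source B", "finding": "Trust cues raise completion",
     "themes": ["trust", "checkout"]},
]
grouped = findings_by_theme(rows)
```

Each key in the result is one synthesis paragraph waiting to be written: the convergent findings, contradictions, and evidence strength for that theme are all in one list.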