How to conduct a diary study: a practical guide with AI prompts
What is a diary study?
A diary study is a longitudinal self-reporting research method in which participants document their experiences, behaviors, and emotions over a period of days or weeks as they interact with a product, service, or situation in their natural environment. Unlike single-session methods, a diary study captures how behavior evolves over time, revealing patterns, habits, and contextual triggers that a one-time interview or usability test would miss.
What question does it answer?
- How do users interact with a product across days or weeks in their real environment?
- What triggers specific behaviors, and how do those triggers change over time?
- What emotional highs and lows do users experience throughout their journey?
- How do habits form, break, or evolve around a product or process?
- What contextual factors (location, mood, time of day, competing activities) shape usage?
- Where does the product fit — or fail to fit — into users’ daily routines?
When to use
- When behavior unfolds over time and cannot be captured in a single session — for example, onboarding that spans a week, habit formation, or recurring workflows.
- When you need to understand real-world context: where, when, and under what circumstances people use your product.
- When recall bias is a concern — participants log experiences close to the moment they happen rather than reconstructing them later in an interview.
- When you suspect there is a gap between what users say in interviews and what they actually do day-to-day.
- When tracking feature adoption, content engagement, or behavior change after a product launch or update.
- When exploring an unfamiliar problem space where you do not know what patterns to look for — the diary structure lets patterns emerge from longitudinal data.
Not the right method when you need quick answers (the shortest useful diary study takes 3-5 days), when the behavior is a one-time event, when participants cannot commit to repeated self-reporting, or when you need controlled conditions (use usability testing instead). If you already know the patterns and need to measure their frequency, a survey is faster and cheaper.
What you get (deliverables)
- Diary entries: timestamped text, photos, videos, voice recordings, and screenshots documenting real experiences
- Behavioral timeline: a chronological map of how each participant’s behavior evolved across the study period
- Thematic analysis: recurring themes, patterns, and anomalies coded across all participants
- Contextual triggers map: the conditions (time, place, mood, external events) that prompt or prevent specific behaviors
- Journey insights: pain points, workarounds, and moments of delight as they occur in the wild
- Participant quotes and media: raw evidence to support insights and bring stakeholder presentations to life
- Recommendations: research-backed suggestions grounded in observed longitudinal behavior
Participants and duration
- Participants: 10-15 per segment. Diary studies require more participants than interviews because dropout rates are higher (expect 15-25% attrition) and individual entries vary in depth. For a single homogeneous group, 10-12 active participants typically produce saturation; for 2 segments, recruit 20-30 in total to account for dropouts.
- Entry frequency: 1-3 entries per day depending on the behavior’s natural frequency. For daily-use products, one entry per day is standard. For event-triggered studies (e.g., “log every time you experience X”), frequency depends on how often the event occurs.
- Study duration: 1-4 weeks.
  - 3-5 days: short behavior tracking, product trials, event-based studies
  - 1-2 weeks: recurring habits, onboarding journeys, feature adoption
  - 3-4 weeks: behavior change, long-term engagement, seasonal patterns
- Total project timeline: 3-6 weeks including all phases.
  - Design and setup: 3-5 days
  - Recruitment and onboarding: 3-7 days
  - Data collection (diary period): 1-4 weeks
  - Follow-up interviews (optional but recommended): 2-3 days
  - Analysis and synthesis: 3-7 days
How to conduct a diary study (step-by-step)
1. Define the research objective and study parameters
Write down 2-4 specific questions the study must answer. Then determine: what behavior are you tracking? How often does it naturally occur? This shapes your study duration and entry frequency. A study about morning routines needs 7-14 days of daily entries. A study about customer support interactions might need 3-4 weeks but only event-triggered entries.
2. Design diary prompts and entry structure
Create a structured template that participants will complete for each entry. Effective prompts are specific enough to guide participants but open enough to capture unexpected behavior. A strong entry template includes:
- Context: Where are you? What time is it? What were you doing before this?
- Trigger: What prompted this experience or interaction?
- Action: What did you do? Walk through the steps.
- Outcome: What happened? Did you accomplish what you intended?
- Emotion: How did you feel during and after?
- Media: Attach a photo, screenshot, or short video if relevant.
Vary prompts across days to prevent repetition fatigue. Day 1 might focus on first impressions, day 4 on workarounds, day 7 on reflections about the overall experience.
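As a minimal sketch, the entry template and day-varying focus prompts can be expressed as data that a form builder or reminder script consumes. The field names and focus wording below are illustrative, not prescribed by any platform:

```python
# Base fields mirror the entry template above; wording is illustrative.
BASE_FIELDS = [
    ("context", "Where are you? What time is it? What were you doing before this?"),
    ("trigger", "What prompted this experience or interaction?"),
    ("action", "What did you do? Walk through the steps."),
    ("outcome", "What happened? Did you accomplish what you intended?"),
    ("emotion", "How did you feel during and after?"),
    ("media", "Attach a photo, screenshot, or short video if relevant."),
]

# Rotating focus questions prevent repetition fatigue (hypothetical examples).
FOCUS_BY_DAY = {
    1: "First impressions: what stood out on your first use today?",
    4: "Workarounds: did you do anything differently than intended?",
    7: "Reflection: how has the overall experience felt this week?",
}

def build_daily_prompts(day: int) -> list[str]:
    """Return the ordered list of prompts for a given study day."""
    prompts = [question for _, question in BASE_FIELDS]
    if day in FOCUS_BY_DAY:
        prompts.append(FOCUS_BY_DAY[day])
    return prompts
```

Keeping the template as data rather than hard-coded form copy makes it easy to adjust prompts mid-study without rebuilding the form.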
3. Choose a diary platform and set up the study
Select a tool based on your needs. Options range from simple (Google Forms, email) to specialized (dscout, Indeemo, EthOS, Recollective). Consider: Does it support multimedia? Can participants submit entries from their phone? Does it send automated reminders? Can you moderate and ask follow-up questions on individual entries?
Set up the platform with your prompts, configure reminder schedules, and prepare an onboarding guide for participants.
4. Recruit and screen participants
Diary studies demand sustained effort from participants, so recruitment quality matters more than in most methods. Screen for:
- Behavioral fit (they actually do the thing you are studying)
- Willingness to commit to the full study period
- Comfort with self-reporting, including photo/video capture
- Access to a smartphone or computer for submitting entries
Offer incentives that reflect the extended commitment: $100-300 for consumer studies lasting 1-2 weeks, higher for B2B or longer durations. Use milestone-based incentives (partial payment at midpoint, full payment at completion) to reduce dropout. Over-recruit by 20-30%.
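The recruiting and incentive arithmetic above fits in a few lines. The default over-recruit rate, incentive total, and midpoint split below are illustrative picks within the stated ranges, not fixed rules:

```python
import math

def plan_recruitment(target_active, over_recruit_rate=0.25,
                     incentive_total=200, midpoint_share=0.4):
    """Sketch the recruiting and incentive numbers for one segment.

    Defaults are illustrative: over-recruit 20-30%, consumer
    incentive $100-300, partial payment at the midpoint.
    """
    to_recruit = math.ceil(target_active * (1 + over_recruit_rate))
    midpoint_payment = round(incentive_total * midpoint_share)
    completion_payment = incentive_total - midpoint_payment
    return to_recruit, midpoint_payment, completion_payment

print(plan_recruitment(12))  # (15, 80, 120)
```

For 12 active participants needed, this plans 15 recruits and splits a $200 incentive into $80 at the midpoint and $120 at completion.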
5. Onboard participants
Run a brief onboarding session (15-30 minutes, live or recorded) covering:
- The purpose of the study (framed broadly to avoid biasing responses)
- How to submit entries: where, when, what format
- Examples of good and poor entries — showing what level of detail you expect
- The reminder and check-in schedule
- Consent, privacy, and data handling
Send a written summary after the session. The most common reason for low-quality entries is unclear expectations, and onboarding is your one chance to set them.
6. Monitor, moderate, and maintain engagement
This is the most labor-intensive phase for the researcher. During the diary period:
- Review incoming entries daily. Ask follow-up questions on entries that are thin or particularly interesting (“You mentioned frustration here — can you describe what happened in more detail?”).
- Send varied reminder messages. Avoid identical pings — rotate between prompts, encouragement, and specific focus areas for each day.
- Watch for dropout signals: participants who miss 2+ consecutive entries need a personal check-in.
- Use milestone incentives or small “thank you” messages at the halfway point to sustain motivation.
The Nielsen Norman Group recommends keeping daily entry time under 10-15 minutes to prevent fatigue.
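The missed-entry dropout signal can be automated with a small daily check. The sketch below assumes one expected entry per day, which is a simplification; adjust it for event-triggered or multi-entry cadences:

```python
from datetime import date, timedelta

def flag_dropout_risk(entry_dates, study_start, today, threshold=2):
    """Return True if the participant's current streak of missed
    daily entries has reached `threshold` consecutive days.

    entry_dates: the set of dates on which the participant submitted.
    Assumes one expected entry per day.
    """
    submitted = set(entry_dates)
    consecutive_missed = 0
    day = study_start
    while day <= today:
        if day in submitted:
            consecutive_missed = 0
        else:
            consecutive_missed += 1
        day += timedelta(days=1)
    return consecutive_missed >= threshold

# Example: a participant submitted on days 1-3, then went quiet.
start = date(2024, 5, 1)
entries = {start, start + timedelta(days=1), start + timedelta(days=2)}
print(flag_dropout_risk(entries, start, start + timedelta(days=4)))  # True
```

A flagged participant gets a personal check-in rather than another automated reminder.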
7. Conduct follow-up interviews
After the diary period ends, interview 5-8 participants whose entries revealed the most interesting patterns, contradictions, or gaps. Use their own diary entries as discussion material: “On day 3 you mentioned X, and on day 9 you described something different. What changed?” These interviews add the “why” that diary entries alone cannot always provide.
8. Analyze and synthesize
- Export all entries and organize them by participant and by day.
- Read through each participant’s full diary arc — their individual journey across the study period.
- Code entries for recurring themes: behaviors, emotions, triggers, pain points, workarounds.
- Look for temporal patterns: what changes from day 1 to day 14? What stays consistent?
- Build cross-participant themes: which patterns appear across 3+ participants?
- Write insights as “observation + implication” statements.
- Select representative quotes and media to support each insight.
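The cross-participant step above can be sketched as a count over coded entries. The participant IDs, themes, and the coding itself are hypothetical placeholders for the output of your first-pass coding:

```python
from collections import defaultdict

# Hypothetical coded entries: (participant_id, day, theme) tuples
# produced by a first-pass coding of the diary data.
coded = [
    ("p1", 3, "frustration"), ("p1", 5, "workaround"),
    ("p2", 3, "frustration"), ("p2", 9, "delight"),
    ("p3", 4, "frustration"), ("p4", 2, "workaround"),
]

def cross_participant_themes(coded_entries, min_participants=3):
    """Return themes reported by at least `min_participants` distinct participants."""
    participants_by_theme = defaultdict(set)
    for participant, _day, theme in coded_entries:
        participants_by_theme[theme].add(participant)
    return {
        theme: sorted(people)
        for theme, people in participants_by_theme.items()
        if len(people) >= min_participants
    }

print(cross_participant_themes(coded))  # {'frustration': ['p1', 'p2', 'p3']}
```

Counting distinct participants, not entries, is the point: one prolific diarist repeating a theme does not make it a cross-participant pattern.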
9. Report findings
Structure the report around the longitudinal dimension — this is what makes diary study findings distinct from interview findings. Show how behavior evolved, not just what behavior existed. Use participant timelines, before-and-after comparisons, and contextual quotes. Lead with insights and recommendations, not with a description of the method.
How AI changes this method
AI compatibility: partial — AI accelerates the planning phase (prompt design, screener creation, onboarding materials) and the analysis phase (coding entries, identifying patterns across hundreds of data points, summarizing participant arcs). The data collection phase, however, remains human-driven: participants self-report in their own environments, and the researcher moderates entries in real time. AI cannot observe a participant’s morning routine or ask a follow-up question at the right moment during the study.
What AI can do
- Diary prompt generation: Given a research objective and target behavior, an LLM can draft varied daily prompts that prevent repetition fatigue — producing a week’s worth of unique prompts in minutes instead of hours.
- Reminder message crafting: AI can generate a set of engagement messages that vary in tone and focus, keeping participants motivated without repetitive nudging.
- Entry coding and thematic analysis: After the study, AI can process hundreds of diary entries, tag them by theme, and identify patterns across participants and time periods — replacing the first pass of manual coding that might take days.
- Participant arc summaries: An LLM can summarize each participant’s full journey across the study, highlighting turning points, emotional shifts, and behavioral changes — giving the researcher a structured overview before deep analysis.
- Screener questionnaire drafting: AI can generate behavioral screening questions based on the study’s criteria, including disqualifying questions, in 10-15 minutes.
- Cross-entry contradiction detection: AI can flag inconsistencies between a participant’s early and late entries, or between what they reported and what their photos/screenshots show, directing the researcher to the entries that need closer attention.
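As one concrete example of AI-assisted coding, a researcher might assemble a first-pass coding prompt from exported entries and a starter codebook. The prompt wording, entry format, and theme names below are illustrative assumptions, and the LLM's output still needs human review:

```python
def build_coding_prompt(entries, themes):
    """Assemble a first-pass coding prompt for an LLM.

    entries: list of (participant_id, day, text) tuples from the export.
    themes: a starter codebook. The instruction wording is illustrative.
    """
    lines = [
        "You are assisting with qualitative coding of diary-study entries.",
        f"Tag each entry with zero or more of these themes: {', '.join(themes)}.",
        "Propose a new theme only if no existing theme fits, and flag any",
        "entry that contradicts the same participant's earlier entries.",
        "",
    ]
    for participant, day, text in entries:
        lines.append(f"[{participant}, day {day}] {text}")
    return "\n".join(lines)

prompt = build_coding_prompt(
    [("p1", 3, "Gave up on the export feature and emailed the file instead.")],
    ["frustration", "workaround", "delight"],
)
```

Keeping the codebook and entries as structured inputs, rather than pasting raw exports ad hoc, makes the AI pass repeatable across batches of entries.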
What requires a human researcher
- Study design decisions: Choosing the right duration, entry frequency, prompt structure, and participant criteria requires research judgment that depends on the specific context. AI can suggest options, but the researcher decides.
- Real-time moderation during the study: Reading entries as they come in, asking follow-up questions, spotting dropout signals, and adjusting prompts mid-study require contextual awareness that AI lacks. The quality of diary data depends heavily on how well the researcher moderates.
- Follow-up interviews: The most valuable insights often emerge when a researcher walks a participant through their own entries and asks “why.” This requires rapport, active listening, and adaptive probing.
- Ethical judgment: Deciding how to handle sensitive disclosures in diary entries, when a participant’s submission reveals distress, or how to manage privacy when entries include photos of their environment — these require human judgment.
- Final synthesis and interpretation: AI can identify that “frustration” appeared in 40% of entries on days 3-5, but the researcher determines whether that frustration stems from the product, the study itself, or external life circumstances.
AI-enhanced workflow
The biggest time savings come in the analysis phase. A 2-week diary study with 12 participants generates 150-250 individual entries, many with photos or videos. Before AI tools, a researcher would spend 3-5 days reading, coding, and synthesizing this volume of data. With AI-assisted coding and summarization, the first-pass analysis can happen in a day, freeing the researcher to spend more time on interpretation and follow-up interviews.
The planning phase also benefits. Designing varied daily prompts that keep participants engaged without being repetitive used to take half a day of careful writing. An LLM can produce a draft set of 14 daily prompts in 15 minutes, which the researcher then edits for tone, specificity, and alignment with the research questions. The same applies to onboarding materials, reminder messages, and screener questionnaires.
Where AI does not change the workflow is the data collection period itself. The researcher still needs to review entries daily, ask follow-up questions, manage engagement, and make judgment calls about when to adjust the study. This 1-4 week moderation period remains the most time-consuming part of a diary study, and it cannot be compressed by AI.
Tools
- Diary platforms: dscout (mobile-first, multimedia, structured missions), Indeemo (video diaries, in-context capture), EthOS (ethnographic diary research), Recollective (community-based diary studies), Yazi (WhatsApp-based diary studies)
- Lightweight alternatives: Google Forms (structured entries via links), SurveyMonkey (scheduled diary surveys), email prompts with templates, Notion or Google Docs (shared diary templates)
- Recruitment: User Interviews, Respondent, Ethnio, Prolific
- Analysis: Dovetail (coding, tagging, repository), Looppanel (AI-assisted diary analysis), Reframer (qualitative coding), Miro (affinity diagrams, timelines)
- AI-assisted: Speak AI (transcription + theme detection), ChatGPT/Claude (entry coding, prompt generation, summarization), Notably (AI-powered qualitative analysis)
- Communication: Calendly (scheduling onboarding and follow-ups), Slack or WhatsApp (check-ins with participants)
Works well with
- In-depth Interview (Di): Follow-up interviews after a diary study add the “why” behind diary entries. Participants walk through their own logged experiences with a researcher, producing richer explanations than either method alone.
- Journey Mapping (Jm): Diary data provides timestamped, real-world touchpoints that feed directly into journey maps — unlike maps built from recalled experiences, these reflect what actually happened.
- Survey (Sv): After a diary study reveals behavioral patterns, a survey can measure how widespread those patterns are across the full user base. The diary study generates the right questions; the survey answers “how many.”
- Contextual Inquiry (Ci): A diary study identifies when and where interesting behavior happens; a contextual inquiry then lets the researcher observe that behavior in person at the right moment.
- Persona Building (Ps): Diary data reveals behavioral segments based on how people actually use a product over time, not based on demographics or self-reported attitudes. These segments form the basis of evidence-based personas.
Example from practice
A fitness app company noticed that 60% of new users stopped opening the app within the first two weeks, but analytics could not explain why — the features were being used, completion rates were normal, and NPS scores from onboarding surveys were positive.
They ran a 14-day diary study with 15 new users, asking them to log each interaction with the app: what triggered it, what they did, how they felt, and what they did afterward. By day 5, a pattern emerged that no survey or interview had surfaced: participants were motivated on days they exercised, but on rest days they opened the app, found nothing relevant to their current state, and closed it feeling vaguely guilty. The app had no rest-day content — no recovery tips, no progress reflections, no encouragement to take a break. Each “empty” visit eroded the habit.
Based on this finding, the team designed a rest-day experience with recovery content, progress summaries, and next-workout previews. Two-week retention increased from 40% to 58% over one quarter, and participants in a follow-up diary study reported that rest days felt like part of their fitness routine rather than a gap in it.
Beginner mistakes
1. Writing vague or repetitive prompts
“How was your experience today?” produces thin, unhelpful entries day after day. Participants need specific guidance to produce detailed data. Instead of one generic question, give them structured prompts that target context, behavior, emotion, and outcome. Vary the prompts across days — repetition causes fatigue, and participants start copying their previous answers.
2. Under-moderating the study
Setting up the diary and waiting for entries to roll in is the most common mistake. Without active moderation — reading entries daily, asking follow-up questions, flagging thin responses — data quality degrades steadily. By the end of an unmoderated 2-week study, most entries are one-sentence answers. Treat moderation as a daily research activity, not an afterthought.
3. Running too long without enough incentive structure
A 4-week diary study with a single payment at the end will lose half its participants by week 2. Milestone-based incentives (partial payment at midpoint, bonus for complete participation) keep attrition manageable. Match the study length to the behavior’s natural cycle — if the behavior repeats weekly, two weeks is often enough. Longer is not always better.
4. Skipping onboarding
Sending participants a link and written instructions produces wildly inconsistent entry quality. Some will write paragraphs, others will submit two words. A 15-30 minute onboarding session — showing examples of good and poor entries, walking through the platform, and answering questions — standardizes expectations and prevents most quality problems before they start.
5. Analyzing entries in isolation instead of as arcs
Reading all Day 3 entries together, then all Day 5 entries, misses the longitudinal story that makes diary studies valuable. First read each participant’s complete journey from day 1 to the final day. Then look for cross-participant patterns. The within-participant arc is the unit of analysis, not the individual entry.