
Data Gathering Playbook

When prioritization data is missing, use these methods to gather or estimate the inputs you need.

Estimating Reach

Reach = how many users or customers this will affect in a given time period (usually per quarter or month).

If you have product analytics: Query for users who hit the relevant flow or feature area, and use last quarter's data for quarterly reach. Example: "Users who opened the settings page" = 800/month, or roughly 2,400/quarter.

If you have funnel data: Filter to users at the relevant stage. A checkout improvement only reaches users who reach checkout, not all visitors.

If you don't have analytics: Survey a sample ("Would you use X?") and multiply the positive-response rate by your total user count. Ask customer success how many tickets or requests mention this problem. Use industry benchmarks as a fallback, e.g., "X% of SaaS users expect dark mode."

Conservative default: When uncertain, use a low estimate and note low confidence. It's better to underestimate reach than to inflate scores with guesses.
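
A minimal sketch of the survey extrapolation with the conservative default applied. All numbers are hypothetical, and the 50% haircut is an illustrative choice, not a fixed rule:

```python
# Extrapolate a "Would you use X?" survey to the full user base.
def estimate_reach(positive_responses: int, sample_size: int, total_users: int) -> int:
    rate = positive_responses / sample_size
    return round(rate * total_users)

# Hypothetical: 12 of 40 respondents said yes; 2,000 total users.
point_estimate = estimate_reach(12, 40, 2000)  # 0.30 x 2,000 = 600 users
conservative = round(point_estimate * 0.5)     # low estimate; note low confidence
print(point_estimate, conservative)            # 600 300
```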

Estimating Impact

Impact = how much this will improve things for each affected user.

User research method: Ask users, "On a scale of 1-5, how much does this problem affect you?" Then map responses to impact scores: 5 = Massive (3), 4 = High (2), 3 = Medium (1), 2 = Low (0.5), 1 = Minimal (0.25).
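
A sketch of that translation as a lookup table. Rounding the average response to the nearest whole score is an assumption; the text doesn't say how to aggregate multiple responses:

```python
# RICE-style impact scale from the survey translation above.
IMPACT_SCALE = {5: 3.0, 4: 2.0, 3: 1.0, 2: 0.5, 1: 0.25}

def survey_to_impact(responses: list[int]) -> float:
    # Average the 1-5 responses, round to the nearest point on the scale.
    avg = sum(responses) / len(responses)
    return IMPACT_SCALE[round(avg)]

print(survey_to_impact([5, 4, 4, 3, 5]))  # avg 4.2 -> 4 -> High (2.0)
```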

Proxy method: Find a similar feature you've shipped and check its adoption rate or satisfaction improvement. If "advanced search" increased task completion by 15%, similar features might have similar impact.

Fake door method: Add a button or link for a non-existent feature. Click-through rate indicates interest: >10% clicks = High (2), 5-10% = Medium (1), <5% = Low (0.5).
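
The same thresholds as code. Treating exactly 10% as Medium is an arbitrary call on the boundary, and the click and view counts are made up:

```python
def fake_door_impact(clicks: int, views: int) -> float:
    # Convert click-through rate on the fake door to an impact score.
    ctr = clicks / views
    if ctr > 0.10:
        return 2.0  # High
    if ctr >= 0.05:
        return 1.0  # Medium
    return 0.5      # Low

print(fake_door_impact(34, 400))  # 8.5% CTR -> Medium (1.0)
```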

Time/money calculation: Estimate time saved per user per use, then multiply by frequency and user count. Example: "Saves 5 minutes per report × 4 reports/week × 200 users = 4,000 minutes, roughly 67 hours/week saved."
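
The arithmetic from that example, written out as a check:

```python
minutes_per_use = 5   # minutes saved per report
uses_per_week = 4     # reports per user per week
affected_users = 200

minutes_saved = minutes_per_use * uses_per_week * affected_users
print(minutes_saved, round(minutes_saved / 60, 1))  # 4000 minutes, 66.7 hours/week
```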

Estimating Effort

Effort = person-months of work across all disciplines.

Multi-engineer estimates: Ask multiple engineers "How long would this take YOU?" (not the team). Average their estimates. Engineers estimate for themselves more accurately than for others.

Historical comparison: Find a similar past feature. Check actual time spent (not the original estimate). Adjust for complexity differences.

Include everything: Engineering is often <50% of total effort. Include design, QA, documentation, launch support, and ongoing maintenance. A "2 week" feature often becomes 6 weeks when you count everything.

Add buffer: Add 30% for unknowns on well-understood work, 50%+ for novel work. Features always take longer than the happy path estimate.
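
A sketch combining these heuristics. The 2x multiplier for non-engineering work (from "engineering is often <50% of total effort") and the sample estimates are illustrative assumptions:

```python
def estimate_effort(engineer_estimates_months: list[float],
                    non_eng_multiplier: float = 2.0,
                    buffer: float = 0.3) -> float:
    # Average the per-engineer estimates, scale for design/QA/docs/launch,
    # then add the buffer for unknowns (0.3 well-understood, 0.5+ novel).
    avg = sum(engineer_estimates_months) / len(engineer_estimates_months)
    return avg * non_eng_multiplier * (1 + buffer)

# Three engineers estimate 1, 1.5, and 2 months for themselves.
print(round(estimate_effort([1.0, 1.5, 2.0]), 1))  # 3.9 person-months
```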

Increasing Confidence

Confidence = how sure we are that these estimates are correct.

From 50% (guess) to 80% (evidence):

  • Run a user survey with n>20 responses
  • Build a quick prototype and test with n>5 users
  • Analyze how competitors solved the same problem
  • Validate technical feasibility with an engineering spike
  • Check support tickets for frequency and severity data

From 80% (evidence) to 100% (data):

  • Ship to beta users and measure actual usage
  • Run an A/B test in production
  • Measure actual impact on target metrics
  • These require shipping something, so they are only worth it for high-stakes decisions

When to stop: 80% confidence is often good enough. The cost of gathering more data may exceed the value of better precision. Default to 50% if you're guessing, then note what would increase confidence.
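
If these inputs feed a RICE-style score, the usual convention (assumed here, since this section doesn't state the formula) is Reach × Impact × Confidence ÷ Effort. A sketch reusing the hypothetical numbers from the earlier examples:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    return reach * impact * confidence / effort

# 300 users (conservative reach), Medium impact, 50% confidence, 3.9 person-months.
print(round(rice_score(300, 1.0, 0.5, 3.9), 1))  # 38.5
```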

Quick Reference

| Input | Fast Method | Better Method |
| --- | --- | --- |
| Reach | % of users × total users | Funnel analytics for that flow |
| Impact | Comparison to similar feature | User survey or fake door test |
| Effort | Single engineer estimate + 50% | Multi-engineer average + 30% |
| Confidence | Default 50%, honest assessment | User research or prototype test |