
Feature Context Template

Use these templates when preparing features for prioritization. Better context leads to more accurate scores.

Full Context Template

Use this when you have time to gather thorough information:

Feature: [Clear, descriptive name]

Problem
What user pain does this solve? Be specific about who feels the pain and when.

Users
- Who benefits? [Role, segment, or persona]
- How many? [Number per month/quarter]
- How do you know? [Analytics source, survey, estimate]

Evidence
- Support tickets: [Count, examples]
- User requests: [Count, source]
- Usage data: [Relevant metrics]
- Competitive pressure: [What competitors offer]

Impact
- What changes for users if we build this?
- How will we measure success? [Metric + target]

Effort
- Engineering estimate: [Person-weeks/months]
- Who estimated: [Names]
- Dependencies: [Other teams, features, infrastructure]
- Hidden work: [Design, QA, docs, launch]

Confidence
- Level: [High/Medium/Low]
- Why: [What makes you confident or uncertain]
- What would increase confidence: [Experiment, research, spike]

Risks
- Technical: [Unknowns, complexity]
- Product: [User adoption, edge cases]
- Business: [Timing, competition, resources]

Quick Prioritization Template

Use this for rapid initial prioritization when time is limited:

Feature: [Name]
Problem: [One sentence]
Users: [Who and roughly how many]
Impact: [High/Medium/Low] — [Why]
Effort: [Days/Weeks/Months] — [Gut estimate]
Confidence: [High/Medium/Low] — [Data or guess?]
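If you track features in a script or spreadsheet export, the quick template above maps naturally onto a small record type. A minimal sketch in Python — the class name, field names, and `Level` alias are illustrative assumptions, not part of any specific tool:

```python
from dataclasses import dataclass
from typing import Literal

Level = Literal["High", "Medium", "Low"]

@dataclass
class QuickFeatureContext:
    """One entry of the quick prioritization template."""
    name: str
    problem: str            # one sentence
    users: str              # who and roughly how many
    impact: Level
    impact_why: str
    effort: str             # "Days" / "Weeks" / "Months"
    confidence: Level
    confidence_basis: str   # data or guess?

# Hypothetical example entry
dark_mode = QuickFeatureContext(
    name="Dark mode",
    problem="Evening users report eye strain.",
    users="~800 monthly active users, per in-app survey",
    impact="Medium", impact_why="Comfort plus retention for power users",
    effort="Weeks",
    confidence="High", confidence_basis="Survey data",
)
```

Keeping each field to one line enforces the brevity the quick template is designed for.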

Minimum Viable Context

At minimum, prioritization needs:

  1. Who — Which users are affected
  2. How many — Rough count (hundreds? thousands?)
  3. How much — Impact level (game-changer or nice-to-have?)
  4. How hard — Effort ballpark (days, weeks, months?)

Without these four, scores will be unreliable. If you can't answer these, gather more information before prioritizing. See the Data Gathering Playbook for methods.
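The four-question gate above can be enforced mechanically before any scoring happens. A minimal sketch, assuming each feature is a dict whose keys match the four questions (the key names and the helper are hypothetical, not from any real tool):

```python
# Minimum-viable-context gate: refuse to score a feature until
# the four minimum questions are answered. Key names are
# illustrative assumptions.
REQUIRED = {
    "who": "Which users are affected",
    "how_many": "Rough count",
    "how_much": "Impact level",
    "how_hard": "Effort ballpark",
}

def missing_context(feature: dict) -> list[str]:
    """Return the minimum-context questions still unanswered."""
    return [question for key, question in REQUIRED.items()
            if not feature.get(key)]

# "Better search" from the example below would fail the gate:
better_search = {"who": None, "how_many": None}
unanswered = missing_context(better_search)
```

A feature is ready to score only when `missing_context` returns an empty list.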

Example: Well-Documented Feature

Feature: Dark mode

Problem
Users working in low-light environments report eye strain. Power users who work late
request this frequently.

Users
- Who: All users, but especially power users (>2 hrs/day usage)
- How many: ~800 monthly active users would use this based on survey
- Source: In-app survey, 47% selected "very interested"

Evidence
- Support tickets: 23 in last quarter mentioning dark mode or eye strain
- User requests: #2 most requested feature in feedback portal (89 votes)
- Competitive pressure: 4 of 5 competitors offer dark mode

Impact
- Reduces eye strain for evening/night users
- Metric: User satisfaction score, target +5 points
- Secondary: May improve retention for power users

Effort
- Engineering estimate: 1.5 person-months
- Estimated by: Sarah (frontend), Marcus (design system)
- Dependencies: Design system color tokens need refactoring first
- Hidden work: QA across all screens, update screenshots in docs

Confidence
- Level: High (80%)
- Why: Clear user demand, well-understood technical scope
- Risk: Color token refactoring might surface edge cases

Risks
- Technical: Some third-party components may not support theming
- Product: Users may want more customization (contrast levels)

Example: Minimal Context (Needs More)

Feature: Better search
Problem: Users complain search doesn't work well
Users: Unknown
Impact: Unknown
Effort: Unknown

This isn't enough to prioritize. Before scoring, gather: How many users search? What does "doesn't work well" mean — no results, wrong results, slow? What would "better" look like? How complex is the fix?