# Analyze Adoption Patterns

## Requirements
CSV file with usage data. Expected columns: user/customer identifier, usage metrics (logins, sessions, feature usage), dates, optionally segment info.

To run this task you must have the following required information:

> CSV file with usage data. Expected columns: user/customer identifier, usage metrics (logins, sessions, feature usage), dates, optionally segment info.

If you don't have all of this information, exit here and respond asking for the missing information, along with instructions to run this task again with ALL required information.

---

You MUST use a todo list to complete these steps in order. Never move on to a step until you have completed the previous one. If you have multiple read steps in a row, read them all at once (in parallel).

Add all steps to your todo list now and begin executing.

## Steps

1. Ask the user to upload their usage data CSV file.

Expected data:
- User/customer identifier (required)
- Usage metrics (required) — logins, sessions, feature flags, action counts
- Date/period (required) — for trend analysis
- Segment info (optional) — plan tier, company size, user role

Check what they want to understand:
- Overall adoption trends?
- Specific feature adoption?
- Churn risk identification?
- Segment comparisons?


2. [Gather Arguments: Parse CSV] The next step requires the following arguments; do not proceed until you have all of the required information:
- `inputPath`: path to the uploaded CSV from the user
- `outputPath`: output path from ui:session.product.data
- `hasHeaders` (default: "true"): whether the first row is headers (true or false)
- `delimiter`: field delimiter (auto-detected if empty)
- Packages: papaparse

3. [Run Code: Parse CSV]: Call `run_script` with:

```json
{
  "file": {
    "path": https://sk.ills.app/code/stdlib.csv.parse/preview,
    "args": [
      "inputPath",
      "outputPath",
      "hasHeaders",
      "delimiter"
    ]
  },
  "packages": ["papaparse"]
}
```
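
The hosted script above is referenced by URL only, so its exact implementation is not visible here. For orientation, here is a minimal sketch of what a papaparse-based parse step with these four arguments might look like; the argument handling and output shape are assumptions, not the actual script:

```typescript
// Minimal sketch of a papaparse-based CSV parse step; the real hosted script may differ.
import * as fs from "fs";
import Papa from "papaparse";

function parseCsv(inputPath: string, outputPath: string, hasHeaders = "true", delimiter = ""): void {
  const raw = fs.readFileSync(inputPath, "utf8");
  const result = Papa.parse<Record<string, string>>(raw, {
    header: hasHeaders === "true",     // first row becomes object keys when true
    delimiter: delimiter || undefined, // leave undefined so papaparse auto-detects
    skipEmptyLines: true,
  });
  // Persist rows plus metadata so later steps can validate and interpret the columns.
  fs.writeFileSync(
    outputPath,
    JSON.stringify(
      {
        fields: result.meta.fields ?? [],
        rowCount: result.data.length,
        errors: result.errors,
        rows: result.data,
      },
      null,
      2,
    ),
  );
}
```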

4. [Gather Arguments: Validate Product Data] The next step requires the following arguments; do not proceed until you have all of the required information:
- `inputPath`: output path from ui:session.product.data
- `requiredColumns`: user,date,usage
- `minRows` (default: "20"): 20
- `analysisType` (default: "general"): adoption

5. [Run Code: Validate Product Data]: Call `run_script` with:

```json
{
  "file": {
    "path": https://sk.ills.app/code/product.data.validate/preview,
    "args": [
      "inputPath",
      "requiredColumns",
      "minRows",
      "analysisType"
    ]
  },
  "packages": null
}
```
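
As with the parse step, the validation script is hosted externally. The sketch below shows the kind of checks implied by `requiredColumns` and `minRows`; the matching rules and output format are assumptions:

```typescript
// Illustrative validation checks; the hosted product.data.validate script may behave differently.
import * as fs from "fs";

interface ParsedData {
  fields: string[];
  rows: Record<string, string>[];
}

function validateProductData(inputPath: string, requiredColumns: string, minRows = 20): string[] {
  const data: ParsedData = JSON.parse(fs.readFileSync(inputPath, "utf8"));
  const warnings: string[] = [];

  // Each required column name should loosely match at least one parsed field.
  for (const required of requiredColumns.split(",")) {
    const needle = required.trim().toLowerCase();
    if (!data.fields.some((f) => f.toLowerCase().includes(needle))) {
      warnings.push(`No column resembling "${required.trim()}" was found.`);
    }
  }
  if (data.rows.length < minRows) {
    warnings.push(`Only ${data.rows.length} rows present; at least ${minRows} are expected for trend analysis.`);
  }
  return warnings;
}
```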

6. [Read CSV Column Interpretation Guide]: Read the documentation in: `./skills/sauna/[skill_id]/references/stdlib.csv.interpretation.md` (Semantic column interpretation guidance)

7. [Read Parsed Product Data]: Read the file at `./documents/tmp/product-data.json` and analyze its contents (Load the parsed data)

8. Review the validation output. Note any warnings about data quality or missing columns.

Interpret the parsed CSV columns semantically using the interpretation guide.
Identify user/customer columns, usage metric columns, date columns, and segment columns.
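
The authoritative rules are in the interpretation guide from step 6. As a loose illustration, column classification often reduces to keyword heuristics like the following sketch, where the patterns and role names are assumptions rather than the guide's actual rules:

```typescript
// Loose keyword heuristic for assigning a semantic role to each column.
// The interpretation guide from step 6 takes precedence over these placeholder patterns.
type ColumnRole = "identifier" | "date" | "usage_metric" | "segment" | "unknown";

function classifyColumn(name: string): ColumnRole {
  const n = name.toLowerCase();
  if (/date|month|week|period|timestamp/.test(n)) return "date";
  if (/(user|customer|account)[\s_-]*(id|name|email)/.test(n)) return "identifier";
  if (/login|session|usage|count|event|action|feature/.test(n)) return "usage_metric";
  if (/plan|tier|segment|size|role|industry/.test(n)) return "segment";
  return "unknown";
}
```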


9. [Read Adoption Analysis Guide]: Read the documentation in: `./skills/sauna/[skill_id]/references/product.adoption.guide.md` (Get analysis framework and output template)

10. Analyze the usage data:

1. **Calculate trends** — Track metrics over time
   - Overall active user trajectory
   - Feature-specific adoption rates
   - Engagement depth changes

2. **Segment analysis** — If segment columns exist:
   - Compare adoption across segments
   - Identify high/low performing groups
   - Note significant differences

3. **Risk detection** — Identify warning signs:
   - Users with declining engagement
   - Patterns that precede churn (if historical data)
   - Segments with concerning trends

4. **Success patterns** — What healthy users look like:
   - Behaviors correlated with retention
   - Feature combinations that indicate value

Present findings using the Adoption Analysis template.
Quantify everything—percentages, counts, trends.
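
Two of the calculations above, sketched in TypeScript against the parsed rows. Column names such as `userCol` and `dateCol`, and the 50% decline threshold, are illustrative assumptions; use the roles identified in step 8 and thresholds appropriate to the data:

```typescript
// Sketch of an active-user trend by period and a simple per-user decline flag.
type Row = Record<string, string>;

// Count distinct users per calendar month, assuming ISO-style dates (e.g. "2024-03-15").
function activeUsersByPeriod(rows: Row[], userCol: string, dateCol: string): Map<string, number> {
  const byPeriod = new Map<string, Set<string>>();
  for (const row of rows) {
    const period = row[dateCol]?.slice(0, 7); // "YYYY-MM"
    if (!period || !row[userCol]) continue;
    if (!byPeriod.has(period)) byPeriod.set(period, new Set());
    byPeriod.get(period)!.add(row[userCol]);
  }
  const counts = new Map<string, number>();
  for (const [period, users] of [...byPeriod.entries()].sort(([a], [b]) => a.localeCompare(b))) {
    counts.set(period, users.size);
  }
  return counts;
}

// Flag a user whose latest period is well below their earlier average (threshold is an assumption).
function isDeclining(perPeriodUsage: number[]): boolean {
  if (perPeriodUsage.length < 3) return false;
  const recent = perPeriodUsage[perPeriodUsage.length - 1];
  const earlier = perPeriodUsage.slice(0, -1);
  const average = earlier.reduce((sum, v) => sum + v, 0) / earlier.length;
  return recent < 0.5 * average;
}
```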


11. After presenting analysis:
- Ask if they want to explore specific segments deeper
- Offer to identify specific at-risk users/accounts
- Ask if findings should inform any immediate actions

Session file `./documents/tmp/product-data.json` will be cleaned up automatically.