Customer Success pack
Claude Skill

Ticket Trend Analyzer

Analyzes a batch of support tickets for themes, root causes, and product/process recommendations.

What it does

Given a batch of support tickets (a Zendesk, Intercom, or Freshdesk export), the skill identifies themes by feature area and root cause, separates symptom-level patterns from underlying causes, and recommends specific product, doc, or process changes. It avoids the "tickets are up 20%" report that doesn't drive action.

When to use

  • Monthly / quarterly review of support tickets to find product or process improvements
  • Specific account where ticket volume spiked and you want to understand why
  • Building the case to product for a fix — you need to show pattern, not anecdote

When not to use

  • Real-time triage of a single P1 ticket — that's incident management
  • You only have ticket counts and no body content — analysis needs the actual ticket text
  • Sample size under 30 tickets — patterns will be unreliable

Install

Download the .zip, then unzip it into your Claude skills folder:

```bash
mkdir -p ~/.claude/skills
unzip ~/Downloads/ticket-trend-analyzer.zip -d ~/.claude/skills/
```

Restart your Claude Code session. The skill is now available — Claude will use it when relevant.

SKILL.md

---
name: ticket-trend-analyzer
description: Use when analyzing a batch of support tickets for themes, root causes, and product/process recommendations. Triggers on "ticket trends", "analyze these tickets", "support ticket themes", "Zendesk analysis".
---

# Ticket Trend Analyzer

Most ticket reports stop at "volume by feature area." That doesn't drive action. The real value is identifying root causes — the same underlying issue often shows up as 6 different ticket types — and the specific product, doc, or process changes that would prevent the next 100 of these.

## Required inputs

1. **Ticket data** — subject + body (or first 200 chars of body), category if tagged, account, date
2. **Time window**
3. **Total tickets in window** + your sample size (note any sampling bias)
4. **Tool of origin** (Zendesk / Intercom / Freshdesk / Salesforce Service Cloud)
5. **Stratification needed** — by tier? by product area? by account?

If the user provides only counts and no ticket bodies, push back: "Ticket counts give you trends. Ticket bodies give you root causes. We need the bodies to do real analysis."
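For illustration, a minimal sketch of loading a ticket export into analyzable records. The column names are assumptions, not a real Zendesk or Intercom schema — rename them to match your export:

```python
# Sketch: load a ticket export into plain dicts for analysis.
# Column names ("id", "subject", "body", "category", "account",
# "created_at") are hypothetical -- map them to your tool's export.
import csv

def load_tickets(path: str) -> list[dict]:
    with open(path, newline="", encoding="utf-8") as f:
        return [
            {
                "id": row["id"],
                "subject": row["subject"],
                "body": row["body"][:200],  # first 200 chars is enough for tagging
                "category": row.get("category", ""),
                "account": row["account"],
                "date": row["created_at"],
            }
            for row in csv.DictReader(f)
        ]
```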

## Analysis framework

### Step 1: Tag every ticket

For each ticket, extract:
- **Feature area** (which part of the product)
- **Symptom** (what the user reported — "report failed to load")
- **Root cause** (when known — "data freshness lag from upstream API")
- **Type**:
  - **[BUG]** — product malfunction
  - **[GAP]** — feature missing or limited
  - **[USABILITY]** — feature exists but is confusing
  - **[DOCS]** — answered by documentation
  - **[ONBOARDING]** — first-30-day pattern
  - **[INTEGRATION]** — third-party connection
  - **[ENVIRONMENTAL]** — customer-side issue (their data, their setup)

### Step 2: Cluster by symptom AND root cause

Symptoms cluster easily. Root causes are the harder, higher-value signal.
- 12 tickets ALL about "report failed to load" with 3 different root causes = 3 distinct problems
- 8 different ticket symptoms ALL caused by the same upstream lag = 1 problem causing 8 symptoms
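The two clustering passes can be sketched with one hypothetical helper, assuming tickets are plain dicts with `symptom` and `root_cause` keys. The same data clustered both ways surfaces both failure modes above:

```python
# Sketch: cluster tagged tickets by symptom, then by root cause.
# One symptom with 3 root causes = 3 problems; 8 symptoms with
# 1 root cause = 1 problem. (Hypothetical helper, not the skill's code.)
from collections import defaultdict

def cluster(tickets: list[dict], key: str) -> dict[str, list[dict]]:
    groups = defaultdict(list)
    for t in tickets:
        groups[t[key] or "unknown"].append(t)  # bucket missing values together
    return dict(groups)

# by_symptom = cluster(tagged, "symptom")     # easy, symptom-level view
# by_cause   = cluster(tagged, "root_cause")  # harder, higher-value view
```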

### Step 3: Quantify and stratify

For each cluster:
- N tickets / % of total
- N unique accounts (10 tickets from 1 account is different from 10 tickets from 10 accounts)
- Time-to-resolution distribution
- Account tier breakdown
- Trend (up / flat / down vs prior period)
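A minimal sketch of the per-cluster quantification, assuming each ticket dict carries an `account` field (the other stats, resolution time and trend, follow the same pattern):

```python
# Sketch: size, share of total, and account spread for one cluster.
def cluster_stats(cluster_tickets: list[dict], total_tickets: int) -> dict:
    n = len(cluster_tickets)
    accounts = {t["account"] for t in cluster_tickets}
    return {
        "n_tickets": n,
        "pct_of_total": round(100 * n / total_tickets, 1),
        "n_accounts": len(accounts),  # 10 tickets / 1 account != 10 / 10 accounts
    }
```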

### Step 4: Action recommendation per cluster

For each cluster, recommend ONE owner + action:

#### Product fix
- Cluster has clear bug or gap
- Recommendation: file a ticket for the product team with severity, affected accounts, and estimated ticket reduction
- Example: "12% of tickets are duplicate report exports — fix is a 1-day product change. Estimated 200+ tickets/quarter eliminated."

#### Doc fix
- Cluster is a "how do I X" pattern resolvable by docs
- Recommendation: doc page to write or update, expected deflection rate
- Note: tickets don't go to zero from a doc fix — usually 30-60% deflection

#### Process fix on YOUR side
- Cluster reveals a process gap (escalation routing, SLA, scoping)
- Recommendation: process change, owner

#### Onboarding fix
- Cluster is concentrated in customers' first 30-60 days
- Recommendation: change in onboarding flow, kickoff template, or champion checklist

#### Customer-specific fix
- Cluster is concentrated in 1-3 accounts
- Recommendation: targeted intervention, not a product change

### Step 5: What this analysis CANNOT tell you
- Selection bias — silent customers don't ticket
- Tagging quality — if Zendesk tags are noisy, your clusters are noisy
- Sampling bias — if you sampled top-tier accounts, results don't generalize to SMB
- Recency bias — last week's incident skews the sample

Be explicit about these limits.

## Output

### Top 5-7 clusters in priority order
Per cluster: name, N tickets, N accounts, % of total, type, root cause (if known), recommended action with owner, estimated ticket reduction.

### Themes (cross-cluster patterns)
3-5 cross-cluster observations.
- "Onboarding-related tickets are 35% of all tickets in a customer's first 30 days, then drop to 8%. The onboarding gap is X."
- "Tier-1 accounts have 3x the tickets-per-seat of tier-3 — they're using you more deeply, not failing more."

### Top 3 product asks
The 3 product / engineering changes with the highest leverage, each with a quantified estimated ticket reduction.

### Top 3 process asks
On the CS / support side.

### Top 3 doc asks
For the docs team.

## Anti-patterns

- "Tickets are up 20%" with no root cause — useless to product
- One ticket from an angry customer treated as a theme
- Aggregate report that hides per-account patterns
- "Improve documentation" as a recommendation (which doc, what does it answer?)
- Recommendations with no owner or estimated impact

## Tone

Engineering- and product-friendly. Tickets are noise to product teams unless you do this work. Your job is to pre-package the signal so they can act.

## Output format

Markdown. Tables for cluster quantification. Specific ticket IDs cited as evidence per cluster.

Example prompts

Once installed, try these prompts in Claude:

  • Analyze 200 tickets from Q1. [paste export]. Find themes by feature area, root cause, and what's actionable.
  • Tickets from one specific account (Hooli) spiked 4x in March. [paste 40 tickets]. What's really going on?