AI for customer success: spotting churn before it happens
Health scoring from call recordings, QBR prep, knowledge-base search, ticket triage. Less reactive, more strategic.
Customer success is the most reactive job in SaaS. By the time a customer is on a save call, the churn signal fired three weeks ago: in a snippy support ticket, in the way their champion stopped saying "we" on the last call, in the Slack reply that used to be a paragraph and now reads "ok thanks." You felt it. You didn't have time to act on it. That's the gap AI closes. Not by replacing the relationship work, but by surfacing the signal early enough that you can actually do the relationship work before the renewal becomes a fire drill.
Here's what to actually run.
1. Health scoring from call recordings
Fathom, Gong, Avoma, and Read.AI already record and transcribe your calls. Most CSMs use 20% of what they pay for. The piece almost everyone misses: the model can read across calls for a single account and tell you when the tone shifted.
Stand up a weekly job (or just a recurring prompt with the last three transcripts pasted in):
You're analyzing customer health for a CSM. Below are the last
three call transcripts with [Account]. Date order, oldest first.
For each call extract:
- Sentiment (positive / neutral / cautious / negative),
with one direct quote as evidence
- Who spoke most on the customer side, and did that change
- Any new objections, blockers, or competitor mentions
- Roadmap or feature requests they repeated
Then compare across the three calls. Flag:
- Tone shifts (specific lines, not vibes)
- Topics they dropped (interest fading)
- New stakeholders showing up (good or bad sign?)
End with a one-line risk read: green, yellow, or red, and why.
[paste call 1]
[paste call 2]
[paste call 3]
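If you want the weekly job rather than the manual paste, the assembly step is trivial to script. A minimal sketch, stdlib only: it builds the prompt above from the last three transcripts, oldest first. Sending the result to your model of choice is left out on purpose; the client, model name, and transcript loader are all yours to pick.

```python
# Sketch: assemble the weekly health-scoring prompt from the last three
# transcripts. The template is a condensed version of the prompt above;
# the API call to your model is deliberately omitted.
TEMPLATE = """You're analyzing customer health for a CSM. Below are the last
three call transcripts with {account}. Date order, oldest first.

For each call extract sentiment (with a direct quote as evidence), who
spoke most on the customer side, new objections or competitor mentions,
and repeated feature requests. Then compare across the calls, flag tone
shifts and dropped topics, and end with a green/yellow/red risk read.
"""

def build_health_prompt(account: str, transcripts: list[tuple[str, str]]) -> str:
    """transcripts: [(date, transcript_text), ...], oldest first."""
    if len(transcripts) != 3:
        raise ValueError("expected exactly three transcripts")
    calls = "\n\n".join(
        f"[call {i}, {date}]\n{text}"
        for i, (date, text) in enumerate(transcripts, start=1)
    )
    return TEMPLATE.format(account=account) + "\n" + calls
```

Wire that into whatever scheduler you already have (a cron job, a Zap, a weekly reminder) and the analysis happens whether or not you remembered to ask for it.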
Catch yellow flags before they become red ones. That's the whole game.
2. QBR prep that's actually customized
Most QBR decks are recycled. The customer feels it. They sit through your slide of generic best practices while wondering when you'll get to the bug their team has been complaining about for six weeks.
Paste their last 90 days of tickets, call summaries, and product usage data into Claude. Ask for a brief, not a deck.
You're prepping me for a QBR with [Account]. I'm their CSM.
The meeting is 60 minutes, four stakeholders on their side
including the economic buyer.
Inputs below:
1. Ticket summaries, last 90 days [paste]
2. Call summaries from Fathom, last 90 days [paste]
3. Product usage report (logins, feature adoption, MAU) [paste]
4. Their original goals from the kickoff doc [paste]
Give me:
- 3 wins to lead with (specific, with the metric)
- 2 risks that could surface, and how I'd respond
- 3 likely objections from the economic buyer specifically
- 2 expansion angles based on usage gaps
- One question I should ask that I'm probably not asking
No filler. This is for me, not the deck.
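The failure mode with four separate pastes is dropping one, or letting the model confuse which chunk is which. A small sketch, assuming you can export each source as text: it stitches the four inputs into one labeled block, with section labels mirroring the prompt above.

```python
# Sketch: stitch the four QBR inputs into one labeled paste so the model
# (and you) can tell which source each chunk came from. Labels mirror
# the prompt above; how you export each chunk is up to you.
SECTIONS = [
    "Ticket summaries, last 90 days",
    "Call summaries from Fathom, last 90 days",
    "Product usage report (logins, feature adoption, MAU)",
    "Their original goals from the kickoff doc",
]

def assemble_qbr_inputs(chunks: list[str]) -> str:
    if len(chunks) != len(SECTIONS):
        raise ValueError("one chunk per section, in order")
    return "\n\n".join(
        f"{i}. {label}\n{chunk}"
        for i, (label, chunk) in enumerate(zip(SECTIONS, chunks), start=1)
    )
```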
That brief is the difference between a QBR they tolerate and one where they ask you to come back next quarter.
3. Knowledge base search that works
Internal KBs are graveyards. You wrote the answer to "what's our SSO policy for sub-50 seat customers" eighteen months ago, in a Notion doc nobody can find, and now you're answering it from memory on a call. Wrong, sometimes.
Glean and Guru sit on top of your existing docs (Notion, Confluence, Google Drive, Slack history) and give you an actual search layer. Notion AI works inside Notion if your KB lives there entirely. The win isn't the AI part. It's that "where did we tell that customer X policy" stops being a 20-minute archaeology dig.
Set this up once. Thank yourself every week.
4. Ticket triage so you can do your actual job
Intercom Fin and Zendesk's AI agents handle the tier-1 tickets that shouldn't be eating your CSMs' time: password resets, billing questions, "where do I find the export button." Configure them carefully, route anything ambiguous to a human, and watch your team's ticket queue stop dictating their calendar. The CSM job is accounts, not tickets. Reclaim the hours.
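Fin and Zendesk each have their own configuration surface, but the routing principle is the same everywhere and worth stating precisely: automate only the single confident tier-1 match, send everything else to a human. A toy illustration (keywords and canned actions here are made up, not anyone's real config):

```python
# Toy triage: one confident tier-1 keyword match gets automated;
# zero matches or multiple matches means ambiguous, so route to a human.
# Keywords and actions are illustrative, not a real Fin/Zendesk config.
TIER1 = {
    "password": "send password-reset article",
    "billing": "send billing FAQ",
    "export": "send export how-to",
}

def triage(ticket_text: str) -> str:
    text = ticket_text.lower()
    hits = [action for keyword, action in TIER1.items() if keyword in text]
    return hits[0] if len(hits) == 1 else "route to human"
```

The point of the `len(hits) == 1` check is the bias: when in doubt, a human reads it. An over-eager auto-responder costs you more trust than the minutes it saves.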
Granola is worth a mention here too: it captures notes from calls without a bot joining, which matters when a customer is sensitive about recording.
The trap
AI hallucinates sentiment. A customer who joked sarcastically on a call ("oh great, another integration that breaks") can get flagged as a churn risk by a model with no sense of humor. A long pause from a customer who is just thinking can read as disengagement. The model doesn't know your champion's deadpan.
Never act on a flag without verifying. Read the actual quote the model cited. Listen to the 30 seconds around it. Check whether the "tone shift" is one bad day or a pattern. The cost of a false-alarm save call is real: you've signaled to the customer that something is wrong when nothing was, and now they're wondering what you saw that they don't. Use AI to point at where to look, not to decide what's true.
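One cheap guardrail before any save call: confirm the quote the model cited actually appears in the transcript it claims to quote. Models fabricate evidence more often than they admit. A minimal sketch, matching loosely on whitespace and case (exact punctuation still has to match, so a near-miss is worth a manual look rather than an automatic fail):

```python
import re

def normalize(s: str) -> str:
    # Collapse whitespace and lowercase so line breaks and casing
    # differences don't cause false mismatches.
    return re.sub(r"\s+", " ", s.strip().lower())

def quote_in_transcript(quote: str, transcript: str) -> bool:
    """True only if the cited quote appears verbatim (modulo
    whitespace/case) in the transcript."""
    return normalize(quote) in normalize(transcript)
```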
Up next
You've got the CS playbook. The next guide in pillar 2 flips the seat from customer-facing to product-facing: "AI for product managers: PRDs, research, roadmaps." Same kind of stack, same trust-but-verify loop, very different daily inputs.
Next in this pillar
AI for product managers: PRDs, research, roadmaps