Founder pack
Claude Skill
Post-mortem from Launch
Honest post-mortem on a product or business decision — expected vs actual, why we got it wrong, what we'd do differently.
What it does
Takes a launched product, campaign, or strategic bet and walks through an honest post-mortem: what we expected, what actually happened, where the prediction broke, and what we'd do differently. Resists the "we learned a lot" non-answer. Designed to surface real lessons, not protect the team's ego.
When to use
- ✓ Major launch is 30+ days in and you can read the actual data
- ✓ A bet (new market, new pricing, new channel) underperformed expectations
- ✓ You're running a quarterly leadership review and want to debrief recent decisions
When not to use
- ✗ Technical incident — that's a different post-mortem (use an SRE-style template)
- ✗ Less than 30 days post-launch — too early to read the signal
- ✗ Right after the loss while the team is still raw — wait a week
Install
Download the .zip, then unzip into your Claude skills folder.
```shell
mkdir -p ~/.claude/skills
unzip ~/Downloads/post-mortem-from-launch.zip -d ~/.claude/skills/
# Restart Claude Code session.
# Skill is now available — Claude will use it when relevant.
```
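To confirm the install landed, you can check for SKILL.md under the skills folder. This is a minimal sketch, assuming the zip unpacks to a folder named `post-mortem-from-launch` (if yours differs, adjust the path):

```shell
# Check that the skill unpacked where Claude Code looks for it;
# prints OK if SKILL.md is present, MISSING otherwise.
SKILL_DIR="$HOME/.claude/skills/post-mortem-from-launch"
if [ -f "$SKILL_DIR/SKILL.md" ]; then
  echo "OK: skill installed at $SKILL_DIR"
else
  echo "MISSING: expected $SKILL_DIR/SKILL.md" >&2
fi
```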
SKILL.md
---
name: post-mortem-from-launch
description: Use when running a post-mortem on a launched product, campaign, or strategic decision. Triggers on "post-mortem", "retrospective", "what went wrong with [launch]", "review the [bet]".
---
# Post-mortem from Launch
Most product and business post-mortems are theater. The team walks away with "we learned a lot," nothing changes, and the same mistake recurs six months later. A real post-mortem surfaces the prediction that broke, why it broke, and the specific change to how you make decisions next time.
This skill is NOT for technical incidents. It's for product launches, marketing bets, pricing changes, market entries, and other strategic moves.
## Required inputs
1. **What was launched / decided** — specific
2. **What we expected** — the actual number/outcome predicted, ideally written down before launch
3. **What actually happened** — current data
4. **Time elapsed** — minimum 30 days for most launches; longer for slow signals
5. **What the team is currently saying** — the convenient narrative
6. **The decision-makers and contributors** — who owned this
If "what we expected" wasn't written down before launch, push back. The prediction needs to be the BEFORE prediction, not a retroactive one. If there was no prediction, that itself is the lesson.
## Structure
```
# Post-mortem: [Launch / decision / bet]
## TL;DR (3 bullets)
- Headline outcome (what we expected vs got)
- Where the prediction actually broke
- The 1 specific change for next time
## Context
- What we launched and when
- Why we did it (the original thesis)
- What we predicted (concrete numbers / outcomes)
## What actually happened
- The actual data, in the same units as the prediction
- Side-by-side: predicted vs actual
- Time-series if relevant
## Where the prediction broke
This is the heart of the post-mortem. Not "things didn't go well" — specifically:
- We assumed X. Reality was Y.
- We expected behavior A. Customers did B.
- We thought driver C would matter most. Driver D mattered more.
For each break: what evidence would have told us BEFORE launch?
## Convenient story vs real story
The narrative the team is currently telling itself:
- "Market wasn't ready" (often = positioning was wrong)
- "Marketing didn't get the word out" (often = product didn't earn word-of-mouth)
- "We were too early" (often = we didn't validate the actual job)
- "Sales didn't sell it well" (often = the value prop wasn't crisp)
For each, ask: is this the real reason or the convenient one?
## What was controllable
Most outcomes have a controllable component. Honest categories:
- **Discovery / validation gap** — didn't validate the right thing pre-launch
- **Positioning gap** — built the right thing, framed it wrong
- **Distribution gap** — built and positioned right, didn't reach the audience
- **Sequencing gap** — right move, wrong time
- **Execution gap** — the team didn't execute the plan we had
Be honest about which.
## Counter-factual
What's the realistic alternate scenario where this went differently? Identify the specific decision point that, if changed, would have changed the outcome. If you can't find one, the bet was unwinnable — say so.
## What we'd do differently
- 2-3 specific changes to how we'd make a similar decision next time
- NOT generic ("be more disciplined") — specific ("validate pricing with 5 customer interviews before locking it in")
- For each: how we'd know we were doing it right next time
## Pattern check
Is this part of a recurring pattern?
- 3rd launch in a row that under-delivered → strategic issue, not tactical
- Similar misses in similar product areas → reconsider that area
- Same decision-maker overcalling repeatedly → calibration issue
## Decisions to make now
- What are we doing with the launched thing? (kill, fix, double-down, hold)
- What's the new prediction for next 30/60 days?
- What's the trigger to revisit?
```
## Bias controls
- **Hindsight bias**: don't claim "we should have known" if the data wasn't available pre-launch. Distinguish hindsight from foresight.
- **Outcome bias**: a good decision can have a bad outcome. Judge the decision quality given what was knowable at the time, not just the result.
- **Team-protection bias**: don't write a post-mortem that absolves everyone. Equally, don't make it about blame — the goal is the next decision, not justice for the last.
- **Founder-protection bias**: most founder bets that fail had a founder over-ruling skeptics. Name it if true.
## Anti-patterns
- "We learned a lot" with no specifics — meaningless
- "Mistakes were made" passive voice — own the decision-makers
- Listing 10 issues — usually only 1-2 are load-bearing
- Skipping the "what would we do differently" — without it, this is just a recap
- Treating non-controllables as controllables ("we should have predicted COVID")
- Making it a roast — kills the next post-mortem before it happens
## Tone
- Direct, not punitive
- Specific over general — "expected $500k pipeline, got $80k" beats "missed expectations"
- Honest about the convenient story vs the real one
- Forward-looking — the point is the next decision, not the last one
## Output format
Markdown. 2-3 pages. End with the "Decisions to make now" section in bold. Without explicit decisions, the post-mortem is a journaling exercise.
Example prompts
Once installed, try these prompts in Claude:
- Post-mortem on our enterprise tier launch. We expected $500k pipeline in 60 days. Got $80k. What went wrong?
- We changed pricing 3 months ago. Conversion dropped 22%, ARR is flat. Post-mortem this honestly.
Related prompts
Don't want to install a skill? These prompts in /prompts cover similar ground for one-shot use: