Test Generator
Generates unit/integration tests for a function or module — happy path, edge cases, failures.
What it does
Given a function or module, generates a test file covering the happy path, edge cases, and failure modes. Uses the project's existing test framework and fixtures. Names tests so failures explain themselves. Avoids tautological tests that just mirror the implementation.
When to use
- ✓ You wrote new code and need real tests, not snapshots
- ✓ Existing code has no tests and you're about to refactor it
- ✓ You want a checklist of edge cases before writing tests yourself
When not to use
- ✗ The code is going to be deleted next week — write a smoke test, not a suite
- ✗ Pure UI / visual concerns — use Playwright or visual regression
Install
Download the .zip, then unzip into your Claude skills folder.
mkdir -p ~/.claude/skills
unzip ~/Downloads/test-generator.zip -d ~/.claude/skills/
# Restart Claude Code session.
# Skill is now available — Claude will use it when relevant.
SKILL.md
---
name: test-generator
description: Use when generating unit or integration tests for a function, module, or endpoint. Triggers on "write tests for", "generate tests", "test this function", or pasted code with a request for coverage.
---
# Test Generator
Generate tests that catch real regressions. Avoid tests that mirror the implementation — those break when you refactor and tell you nothing.
## Required inputs
1. **The code under test** (function, module, or file)
2. **Test framework** — Vitest, Jest, Pytest, Go testing, JUnit, etc. If unknown, ask before generating; idiom matters.
3. **Mocking conventions** — `vi.mock`, msw, sinon, monkeypatch, `gomock`. Match what the project already does.
4. **Existing test file location** if there is one (so the new file fits the convention)
If you can't tell which framework, ask. Don't write Jest tests in a Vitest project.
## Coverage plan
For ANY function, walk this list and generate a test for each that applies:
### Happy path
- The single most common, expected input + expected output
### Boundary conditions
- Empty input (`""`, `[]`, `{}`, `null`, `undefined`)
- Single-element input
- Maximum-size input (if a limit applies)
- Zero, negative numbers, very large numbers
- Unicode / emoji / non-ASCII strings
- Whitespace-only or trimmed inputs
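To show what walking this checklist produces, here is a sketch in pytest style (plain asserts, so it runs without pytest installed). The `normalize_tags` helper and its contract are invented for illustration:

```python
# Hypothetical function under test -- invented for illustration.
def normalize_tags(raw):
    """Lowercase, strip, and de-duplicate a list of tag strings."""
    if raw is None:
        return []
    seen = []
    for tag in raw:
        cleaned = tag.strip().lower()
        if cleaned and cleaned not in seen:
            seen.append(cleaned)
    return seen

# One test per checklist item that applies to this function.
def test_returns_empty_list_for_none_input():
    assert normalize_tags(None) == []

def test_returns_empty_list_for_empty_list():
    assert normalize_tags([]) == []

def test_single_element_is_stripped_and_lowercased():
    assert normalize_tags(["  Rust "]) == ["rust"]

def test_whitespace_only_tags_are_dropped():
    assert normalize_tags(["   ", "\t"]) == []

def test_unicode_tags_survive_normalization():
    assert normalize_tags(["Café", "café"]) == ["café"]
```

Notice that each boundary case gets its own small test, so a red test names the exact input shape that broke.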
### Failure modes
- Network call fails / times out
- Dependency throws
- Invalid input is rejected with the RIGHT error (not a generic 500)
- Resource exhaustion (full disk, too many open files) if relevant
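A failure-mode test should pin down *which* error comes back, not just that something failed. A minimal sketch, using an invented `parse_port` function and a plain try/except so it runs without pytest (in a real pytest file you would use `pytest.raises`):

```python
# Hypothetical validator -- invented for illustration.
class ValidationError(ValueError):
    pass

def parse_port(value):
    """Parse a TCP port from a string; reject anything outside 1-65535."""
    try:
        port = int(value)
    except (TypeError, ValueError):
        raise ValidationError(f"not an integer: {value!r}")
    if not 1 <= port <= 65535:
        raise ValidationError(f"port out of range: {port}")
    return port

def test_rejects_non_numeric_input_with_validation_error():
    # Assert the RIGHT error type, not just "it raised".
    try:
        parse_port("eighty")
    except ValidationError:
        pass
    else:
        raise AssertionError("expected ValidationError")

def test_rejects_port_zero():
    try:
        parse_port("0")
    except ValidationError:
        pass
    else:
        raise AssertionError("expected ValidationError")
```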
### Concurrency / ordering
- Two callers at the same time (if shared state)
- Retry causing duplicate side effects
- Out-of-order events
### Idempotency
- Calling twice with the same input — same result?
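An idempotency test is usually two calls and one assertion. A sketch against an invented in-memory store:

```python
# Hypothetical in-memory store -- invented for illustration.
class SubscriberStore:
    def __init__(self):
        self.emails = []

    def subscribe(self, email):
        """Subscribing is idempotent: repeats must not create duplicates."""
        if email not in self.emails:
            self.emails.append(email)
        return email

def test_subscribing_twice_stores_the_email_once():
    store = SubscriberStore()
    store.subscribe("a@example.com")
    store.subscribe("a@example.com")  # same input, second call
    assert store.emails == ["a@example.com"]
```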
## Naming
Each test name should make the failure explain itself when red:
- BAD: `test('it works')`
- BAD: `test('returns a value')`
- GOOD: `test('returns 0 when the cart is empty')`
- GOOD: `test('throws ValidationError when email is missing the @')`
When a CI log shows just the test name, the reader should know what broke.
## Anti-patterns to avoid
- **Mirror tests** — ``expect(formatName(u)).toEqual(`${u.first} ${u.last}`)`` is just re-implementing the function in the test. Test the OBSERVABLE behavior.
- **Snapshot-only tests** — fine for stable structure, useless for logic
- **Tests with no assertion** — `await fn()` without `expect`
- **Mocks for the system under test** — if you mock the function you're testing, you're testing nothing
- **One mega-test that covers 12 cases** — split them so a failure points to the case
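The mirror-test trap, side by side with the behavioral version, using a hypothetical `format_name` (names invented for illustration):

```python
def format_name(user):
    # Hypothetical implementation -- invented for illustration.
    return f"{user['first']} {user['last']}"

# BAD: mirror test. It re-implements the function, so it still passes
# even if the agreed-on format ("First Last") is wrong in both places.
def test_mirror_antipattern():
    user = {"first": "Ada", "last": "Lovelace"}
    assert format_name(user) == f"{user['first']} {user['last']}"

# GOOD: asserts the concrete output a caller actually depends on.
def test_formats_first_then_last_with_single_space():
    assert format_name({"first": "Ada", "last": "Lovelace"}) == "Ada Lovelace"
```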
## Mocking discipline
- Mock at the boundary (HTTP, DB, clock), not internal helpers
- Prefer real instances over mocks for in-memory things (a real array beats a `MockArray`)
- Reset / restore mocks between tests — leaking state is the #1 source of flaky tests
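"Mock at the boundary" often means injecting the clock or the HTTP client rather than patching internals. A sketch with Python's stdlib `unittest.mock`, where the function and its `now` parameter are invented for illustration:

```python
import time
from unittest import mock

def seconds_until(deadline, now=time.time):
    """Seconds remaining until `deadline` (a Unix timestamp), floored at 0.

    The clock is an injectable boundary, so tests can control time
    without patching any internal helper.
    """
    return max(0, deadline - now())

def test_counts_down_to_a_future_deadline():
    fake_now = mock.Mock(return_value=1_000.0)  # frozen clock
    assert seconds_until(1_060.0, now=fake_now) == 60.0
    fake_now.assert_called_once()

def test_returns_zero_for_past_deadlines():
    # Fresh mock per test -- no shared state leaking between tests.
    fake_now = mock.Mock(return_value=1_000.0)
    assert seconds_until(900.0, now=fake_now) == 0
```

Because each test builds its own mock, there is nothing to forget to reset; when you do use `mock.patch` or framework-level mocks, restore them in teardown.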
## Output format
Generate a complete test file that:
1. Uses the project's framework, imports, and idioms
2. Has a sensible `describe` / `context` block structure
3. Includes setup / teardown if needed
4. Lists test cases in priority order (happy path first, then edges, then failures)
End with a short note: "Cases I did NOT cover and you may want to add: [list]" — be honest about what the generation skipped (load tests, browser-only behavior, etc.).
Example prompts
Once installed, try these prompts in Claude:
- Generate tests for this function. We use Vitest + msw for HTTP mocking. [paste function]
- Generate integration tests for the /api/checkout endpoint. Existing tests live in tests/api/. [paste handler]