
AI for developers

AI is the biggest productivity unlock for developers since the IDE. Not because it writes code FOR you (that's when things go sideways) but because it removes the tax on the boring parts: tests, boilerplate, syntax-jogging, debugging dumb mistakes. Your judgment still drives. AI just types faster.

8 use cases · 7 tools · 30-min starter
Get the Developer skill pack (13): hand-written workflows you install in ~/.claude/skills/

What AI handles well

Debug with reasoning, not just fixes

The problem. Code is broken. You want to understand WHY before you accept a fix. Generic AI suggestions fix the symptom but leave the deeper architectural concern unsaid.

What AI does. Use the debug-with-reasoning prompt. Get a root-cause walkthrough, the minimal fix, and whether the bug hints at a deeper issue worth refactoring.

Tools: Claude is best for nuanced debugging; Cursor for in-editor work.

Three-level code review (PR feedback)

The problem. Reviewing a PR. The default is to leave nits. The valuable feedback is structural — but you don't always have the bandwidth to think about it.

What AI does. Use the three-level review prompt: must-fix, code quality, architectural. The structure forces you to separate "block merge" from "nice to have."

Refactor without changing behavior

The problem. Working code that's hard to read. You want it cleaner without breaking anything subtle.

What AI does. Have AI refactor with strict constraints: same I/O, same external API, no new deps. Then have it list every change so you can review what changed and decide what to manually re-test.
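
One way to make "same I/O" checkable rather than hoped-for: pin the current behavior with a quick characterization test before the refactor. A minimal pytest sketch, where `slugify` and its module are hypothetical stand-ins for the function you're actually refactoring:

```python
# Characterization test: pin current behavior BEFORE refactoring,
# so "no behavior change" is verified by the suite, not by eyeballing.
import pytest

from myapp.text import slugify  # hypothetical module under refactor

@pytest.mark.parametrize("raw, expected", [
    # Expected values are captured from the CURRENT implementation's
    # output, not from a spec -- the point is to freeze today's behavior.
    ("Hello World", "hello-world"),
    ("  spaces  ", "spaces"),
    ("Crème brûlée", "creme-brulee"),
    ("", ""),
])
def test_slugify_behavior_is_unchanged(raw, expected):
    assert slugify(raw) == expected
```

Run it green against the old code, then refactor; any red after that is a behavior change AI's change list should have mentioned.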

Write tests with edge cases you missed

The problem. Happy path tests are easy. The bugs you ship are in the cases you didn't think to test.

What AI does. Use the test-generation prompt that explicitly asks for boundary cases, failure modes, and "sneaky cases that look like edge cases but are actually correct behavior worth pinning down."
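
Concretely, here's the shape of what that prompt should hand back, sketched for a hypothetical `paginate(items, page, per_page)` helper: the happy path is one line, and the value is in the boundaries and failure modes.

```python
# Edge-case tests for a hypothetical paginate(items, page, per_page)
# that returns the 1-indexed page as a list slice.
import pytest

from myapp.pagination import paginate  # hypothetical module

def test_last_partial_page():
    # Boundary: 10 items at 3 per page leaves a final page of one.
    assert paginate(list(range(10)), page=4, per_page=3) == [9]

def test_empty_input_returns_empty_page():
    assert paginate([], page=1, per_page=10) == []

def test_page_past_the_end_is_empty_not_an_error():
    # "Sneaky case": looks like a bug, but empty-not-error is the
    # intended behavior, worth pinning down explicitly.
    assert paginate([1, 2, 3], page=5, per_page=10) == []

def test_zero_per_page_is_rejected():
    # Failure mode: nonsense input should fail loudly.
    with pytest.raises(ValueError):
        paginate([1, 2, 3], page=1, per_page=0)
```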

Generate CRUD boilerplate (and skip ahead to the interesting code)

The problem. New endpoint. New form. New CRUD pattern. The first hour of every feature is the same boring scaffolding.

What AI does. Generate the boilerplate, then have AI explain where it diverged from defaults and what's stub-quality. You start at "the interesting part" instead of typing route handlers.
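
For a sense of scale, the scaffold AI hands you looks something like this minimal FastAPI sketch; the framework and the `Note` model are illustrative, not something the prompt prescribes:

```python
# Typical AI-generated CRUD scaffold: correct shape, stub-quality storage.
# Your work starts where this ends: auth, validation rules, real persistence.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Note(BaseModel):
    id: int | None = None
    title: str
    body: str = ""

_db: dict[int, Note] = {}  # stub-quality: in-memory, not production storage
_next_id = 1

@app.post("/notes", status_code=201)
def create_note(note: Note) -> Note:
    global _next_id
    note.id = _next_id
    _db[note.id] = note
    _next_id += 1
    return note

@app.get("/notes/{note_id}")
def read_note(note_id: int) -> Note:
    if note_id not in _db:
        raise HTTPException(status_code=404, detail="note not found")
    return _db[note_id]
```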

Migrations that won't lock your prod DB

The problem. You need to add a NOT NULL column on a 50M-row table. Or rename. Or split. The migration is technically correct but locks the table for 10 minutes.

What AI does. Use the migration prompt that explicitly considers locking, deploy ordering, and rollback. AI surfaces the concrete failure modes you should be ready for.
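
As a concrete instance of "considers locking": on Postgres, the lock-safe version of adding a NOT NULL column is an expand/backfill/validate sequence rather than a single ALTER. A sketch using psycopg2, with the table and column names as placeholder assumptions:

```python
# Lock-safe NOT NULL on a big Postgres table: several short-lock steps
# instead of one long, table-blocking ALTER. Names are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=app")  # assumption: local dev DSN
conn.autocommit = True
cur = conn.cursor()

# 1. Add the column nullable: a metadata-only change, near-instant.
cur.execute("ALTER TABLE events ADD COLUMN tenant_id bigint")

# 2. Backfill in small batches so no statement holds row locks for long.
while True:
    cur.execute("""
        UPDATE events SET tenant_id = 0
        WHERE id IN (SELECT id FROM events
                     WHERE tenant_id IS NULL LIMIT 10000)
    """)
    if cur.rowcount == 0:
        break

# 3. Enforce non-nullness via a NOT VALID check constraint, then
#    VALIDATE, which scans the table without blocking writes.
cur.execute(
    "ALTER TABLE events ADD CONSTRAINT tenant_id_not_null "
    "CHECK (tenant_id IS NOT NULL) NOT VALID"
)
cur.execute("ALTER TABLE events VALIDATE CONSTRAINT tenant_id_not_null")

# 4. On Postgres 12+, SET NOT NULL reuses the validated constraint
#    instead of rescanning the whole table.
cur.execute("ALTER TABLE events ALTER COLUMN tenant_id SET NOT NULL")
```

Deploy ordering still matters: application code must be writing tenant_id before step 3, or the backfill never converges.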

Architecture decisions, structured

The problem. Build vs buy. Monolith vs services. Framework A vs B. You're going in circles on a non-trivial decision.

What AI does. Use the architecture decision prompt. It surfaces the actual question (often you're framing it wrong), the option you're missing, and what you'd need to know in 6 months that would make either choice obvious.

Documentation that someone might actually read

The problem. You shipped something. The docs are stub-quality. You hate writing docs.

What AI does. AI generates docs from working code, but with constraints: skip what's obvious from the type signature, skip marketing words, focus on "when to use" and "common pitfalls."
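
Applied to a docstring, those constraints produce something like this; the `cached` helper is hypothetical and shown as a stub, since the docstring is the point:

```python
def cached(ttl_seconds: int = 60):
    """Memoize a function's results for ttl_seconds.

    When to use: pure-ish functions with expensive calls (DB lookups,
    HTTP fetches) where results going stale for up to a minute is fine.

    Common pitfalls:
    - Arguments must be hashable; passing a dict raises TypeError.
    - The cache is per-process. Behind multiple workers, each holds its
      own copy, so don't use it for anything that must stay consistent.
    """
    ...  # implementation elided; note the docstring never restates the signature
```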

Your AI stack

Start with the foundation. Add specialized tools as the work calls for them.

Foundation LLM

Claude
Best LLM for code as of 2026. Notably stronger reasoning on debugging, refactoring, and architecture decisions than alternatives.
ChatGPT
Strong on quick boilerplate, syntax lookups, and Code Interpreter for analysis. Worth keeping alongside Claude.

Specialized add-ons

Cursor
AI-native IDE. Tab autocomplete, agentic edits, codebase-aware chat. The default for most professional devs in 2026.
Claude Code
Anthropic's terminal-based agentic coding tool. Strong for multi-step refactors and codebase navigation.
Aider
Open-source pair-programming in the terminal. Good for git-aware edits and bringing your own LLM.
Copilot
Still solid for autocomplete in editors that aren't Cursor. Cheaper if you're cost-sensitive.
Perplexity
Better than searching docs. Cites sources, finds recent changes the LLMs don't know about.
Shipping AI-built code? See the deploy stack — hosting + database picks that pair with what AI builders generate.


Get started in 30 minutes

1. Pick an AI editor (Cursor or Claude Code) (10 min)

Cursor if you're used to a VS Code workflow. Claude Code if you live in the terminal. Both are paid but worth it for daily use. Free tiers exist for trials.

2. Set up codebase context for your editor (10 min)

Add an AGENTS.md or .cursorrules file at the repo root with your stack, conventions, things to avoid, and your testing framework. AI now starts every session with your codebase context loaded.
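
A minimal sketch of what goes in that file (contents are illustrative; yours should name your real stack):

```markdown
# AGENTS.md — repo context for AI tools

- Stack: Python 3.12, FastAPI, Postgres
- Conventions: type hints everywhere; ruff for lint and format
- Testing: pytest; tests live next to the module as test_<name>.py
- Avoid: adding dependencies without asking; editing migrations/ by hand
```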

3. Run the three-level review prompt on your next PR (5 min)

Before submitting, paste your diff and run the three-level review. Notice the architectural-level feedback; that's where the real value is.

4. Pick one repetitive task (tests, boilerplate, docs) and have AI handle it on your next ticket (5 min)

Don't try to "use AI for everything" on day one. Pick one task type and offload it. Then add another. Habit-by-habit beats wholesale change.

Common mistakes

  • Accepting AI-generated code without reading it. AI invents functions that don't exist, hallucinates API methods, and writes plausible-looking nonsense. If you can't explain what the code does, don't merge it.

  • Letting AI define architecture. AI will happily generate a microservices-from-day-one stack for a side project. Architecture decisions need YOU and the constraints of YOUR system. Use AI to challenge your reasoning, not replace it.

  • Pasting proprietary code into public LLMs without checking your company's policy. If you're using OpenAI/Anthropic via the consumer apps, your inputs may train future models. Use enterprise/team plans where data doesn't train on your code.

  • Skipping tests because "AI generated it." AI-generated code needs MORE testing than your own, not less, because you didn't write it and your mental model of edge cases is missing.

  • Trusting AI on package versions, API signatures, or anything time-sensitive. The model has a training cutoff. For libraries that move fast, verify against current docs before shipping.
