AI for engineers: using it as a senior on an existing team
Not greenfield vibe coding. Code review, refactor planning, debugging, learning a new codebase. The patterns that actually work in a team.
Most engineering AI advice assumes you're starting a new project. Most engineers are not. You're inheriting a four-year-old codebase, a flaky test suite, a Slack thread arguing about whether PRs need two approvals, and a deadline that didn't move when the last person quit. The vibe coding guides do not apply to your Monday morning. This one does.
The shape of leverage is different on a real team. You can't ask the model to scaffold a new app; you have to make small, defensible changes inside conventions you didn't write. Four workflows pay rent in that environment.
Code review on your own PRs before anyone else sees them
Before you hit "request review," paste your diff into Claude or Cursor and ask for the kind of pass a senior engineer would do. The model is genuinely good at the boring stuff humans miss across files: a function signature you changed in one place but not its callers, a null check you added in the route handler that's still missing in the worker, a log line that prints a token.
A prompt that consistently pulls its weight:
You're a senior engineer reviewing this PR for production. Focus on:
1. Logic bugs, off-by-one, wrong operators, mutation during iteration
2. Edge cases the diff does not handle (null, empty, large input, concurrent calls)
3. Inconsistencies between this change and existing code in the repo
4. Anything touching auth, secrets, queries, or user input that needs a second look
5. Tests that exist but don't actually exercise the new branch
Quote the line. Don't summarize. If a category is clean, say "no issues."
[paste git diff]
This is the cheapest two minutes you'll spend all day. It also catches the thing you knew was sloppy and were hoping nobody would notice.
Refactor planning, before you touch a line
The mistake junior engineers make is starting the refactor. The mistake senior engineers make is starting the refactor without asking what else breaks. Cursor and Claude Code, with full repo context, will map the blast radius for you in thirty seconds. "What other files reference getCurrentUser? What's the upgrade path if I change its signature to accept a request context? Is anything calling it from a background worker where there is no request?"
Read the answer before you write anything. Half the time you discover the refactor is bigger than you thought, and you scope the PR down. The other half you discover one weird caller in a cron job nobody remembered, and you save yourself a 3am page.
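When the signature change does go ahead, it doesn't have to land in one pass. A sketch of one incremental path, with hypothetical names throughout:

type User = { id: string; email: string };
type RequestContext = { userId: string | null; traceId: string };
declare const lookupUser: (id: string) => Promise<User | null>;

// New signature: explicit context instead of ambient request state.
async function getCurrentUser(ctx: RequestContext): Promise<User | null> {
  return ctx.userId ? lookupUser(ctx.userId) : null;
}

// Bridge for the cron caller that has no request: synthesize a context
// so the job compiles today and shows up in a grep for stragglers later.
function contextFromJob(job: { userId: string }): RequestContext {
  return { userId: job.userId, traceId: `job:${job.userId}` };
}

The bridge is the point: the weird caller gets an explicit, searchable shim instead of a silent breakage, and the PR stays reviewable.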
Debugging the genuinely weird
The "passes locally, fails in CI" class of bug is what AI is best at. Paste the stack trace, the surrounding code, the CI config, and the last ten commits on the branch. Ask what changed between the green run and the red one. The model is patient with the boring forensic work, environment variables, Node versions, timezone defaults, the test that depends on file ordering. It's a much better rubber duck than a rubber duck because it can read.
Onboarding to a codebase you've never seen
Cody by Sourcegraph indexes the whole repo with semantic search, which means "where does the auth check happen on this endpoint" returns an answer in seconds instead of an afternoon of grep. Cursor with the codebase as context does the same job from the editor. GitHub Copilot's chat is fine for narrow questions. Tabnine is around if your security team blocks the others. The point isn't the tool, it's that "read the README, then click around for a week" is no longer how you orient. Ask the questions you'd ask a teammate, of the repo itself.
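The questions that work have a concrete anchor in the code, not "explain this repo." The endpoints and flows below are placeholders; swap in your own:

Where does the auth check happen for POST /api/orders, and what does a failure return?
Which module owns retry logic for outbound HTTP calls, and what calls it?
Trace a request from the router to the database for the checkout flow.
What conventions does this repo use for errors and logging? Show me examples.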
The trap nobody warns you about
The model does not know your team's conventions. It will produce idiomatic React, idiomatic Python, idiomatic Go, and your codebase has its own dialect of all three. The repo wraps every fetch in a custom client. The repo never throws, it returns Result types. The repo has a logging convention with structured fields, and the model is going to write console.log. Always read what it suggests against what your repo actually does, and prefer the repo. The next guide, "Working with AI on a codebase you didn't write," covers how to teach the model your conventions so this happens less.
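Until then, here's a sketch of the dialect gap, with the repo's client, Result type, and logger all stubbed as assumptions:

type Invoice = { id: string; total: number };
type Result<T> = { ok: true; value: T } | { ok: false; error: string };
declare const apiClient: { get<T>(path: string): Promise<Result<T>> };
declare const logger: { info(event: string, fields: Record<string, unknown>): void };

// What the model writes: perfectly idiomatic, and wrong for this repo.
async function fetchInvoice(id: string): Promise<Invoice> {
  const res = await fetch(`/api/invoices/${id}`);
  if (!res.ok) throw new Error(`fetch failed: ${res.status}`);
  console.log("fetched invoice", id);
  return res.json();
}

// What the repo's dialect expects: custom client, no throw, structured log.
async function fetchInvoiceInDialect(id: string): Promise<Result<Invoice>> {
  const result = await apiClient.get<Invoice>(`/invoices/${id}`);
  if (result.ok) logger.info("invoice.fetched", { invoiceId: id });
  return result;
}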
On PR descriptions
Use the AI to draft, then rewrite in your own voice. Reviewers can smell a generated PR description: the cadence is off, the bullets are too symmetric, the explanation is suspiciously thorough about the obvious parts and silent on the actual decision. Trust drops when they catch it. A two-sentence description in your own voice beats a polished six-bullet AI essay every time.
What it's still bad at
Architectural decisions that touch ownership boundaries. Knowing that the payments code is owned by another team and you should not refactor it without a heads up. Knowing that proposing a queue here will reopen a fight that was settled two quarters ago. Knowing the staff engineer who reviewed the original design will block any change that doesn't address his three pet concerns. None of that lives in the repo. It lives in your team's history, and the model has none of it.
Use AI for the parts that scale with reading and typing. Use your own judgment for the parts that scale with knowing people.
Next: AI for operations: docs, vendors, and processes that do not drift.