Loop Breaker

Diagnoses why an AI agent is stuck and produces a clean restart with the right context.

What it does

When an AI agent has tried the same fix three times and the test still fails, this skill diagnoses what's actually wrong (often: missing context, a wrong mental model, or chasing a symptom). It then produces a fresh-session prompt carrying the constraints and context that break the loop, instead of burning more tokens in the same conversation.

When to use

  • Your AI agent has tried 3+ times and the build/test still fails
  • The conversation is degrading — same suggestions, different wording
  • Token spend is climbing fast with no progress

When not to use

  • First or second iteration — sometimes the AI just needs the error it didn't see
  • You don't know what's actually failing yourself — diagnose the failure first, then bring it here

Install

Download the .zip, then unzip it into your Claude skills folder.

mkdir -p ~/.claude/skills
unzip ~/Downloads/loop-breaker.zip -d ~/.claude/skills/

# Restart Claude Code session.
# Skill is now available — Claude will use it when relevant.
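
To confirm the skill landed where Claude Code looks for it (assuming the archive unpacks to a loop-breaker/ folder), check that the file exists:

ls ~/.claude/skills/loop-breaker/SKILL.md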

SKILL.md
---
name: loop-breaker
description: Use when an AI coding agent has tried the same fix 3+ times without progress. Triggers on "stuck in a loop", "agent keeps trying the same thing", or pasted multi-message agent transcripts.
---

# Loop Breaker

Same failure twice means the model does not have the answer. More iterations of the same conversation rarely surface it. The fix is to leave the conversation, diagnose what's missing, and restart with that missing piece.

This skill diagnoses *why* the loop is happening and produces the restart prompt.

## Required inputs

1. **The original task** — what the agent was asked to do
2. **The actual failure** — the real error message, test output, or wrong behavior (not the user's paraphrase)
3. **What the agent has tried** — at least 3 attempts, with the diffs/edits each time
4. **The relevant code** — the files the agent has been editing

If the user pastes only their summary ("it keeps failing"), ask for the actual error and the actual attempts. The summary is usually where the real signal got lost.
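
A well-formed request looks something like this (file names, error, and counts are illustrative):

```
Task: make tests/test_auth.py pass after the session refactor
Failure: AttributeError: 'Session' object has no attribute 'renew' (full trace pasted below)
Attempts: 3 diffs pasted below; each one edits src/session.py lines 40-55
Files: src/session.py, src/middleware/auth.py (contents pasted below)
```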

## Diagnosis: pick the failure mode

Categorize the loop into one of these. The fix is different for each.

### Mode 1: Wrong target

The agent is fixing the wrong line. The actual cause is in a config file, an environment variable, a different module, or a dependency the agent never read.

**Tell:** Each attempt edits the same lines. The error doesn't change.
**Fix:** Force the agent to *read* the surrounding system before editing. Provide the related files (config, env, upstream module) explicitly.
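
A restart prompt for this mode forces the read before any edit. A sketch, with hypothetical file names:

```
Before proposing any edit, read config/auth.yaml, src/middleware/session.ts,
and .env.example end to end, then summarize how a login request flows through
them. Only after that summary, diagnose why tests/login.test.ts fails.
```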

### Mode 2: Missing context

The agent has the right line but wrong information about it. It's guessing the schema, the type, the library version.

**Tell:** Each attempt invents a slightly different API call.
**Fix:** Paste the actual schema, actual types, actual function signature. Don't make the agent infer.
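
Inline the ground truth instead of describing it. A sketch, with a hypothetical signature:

```
The function you are calling has exactly this signature, copied from source:
  async function createSession(userId: string, opts: { ttlSeconds: number }): Promise<Session>
Do not infer other parameters or overloads. Call it exactly as written.
```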

### Mode 3: Conflicting instructions

Earlier in the conversation, the user (or a prior message) gave an instruction that contradicts the current goal, and the agent is still trying to satisfy it.

**Tell:** The agent keeps adding back something the user just removed, or vice versa.
**Fix:** Fresh session. The accumulated context is poisoning the reasoning.
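
The restart prompt should pre-empt the stale requirement explicitly. A sketch, with a hypothetical file path:

```
The retry wrapper in src/http.ts was removed deliberately. Do not add it back.
Goal: make the timeout test pass without any retry logic.
```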

### Mode 4: Wrong tool

The model can't actually do this task in this environment. (E.g., asking for a fix that requires running a debugger when the tool only edits files.)

**Tell:** The agent describes what *would* work but can't actually do it.
**Fix:** Switch tools or do that step yourself.

## Output format

```
## Diagnosis
Mode [N]: [name]
Evidence: [the specific signal in the transcript that points here]

## What the agent is missing
[The concrete piece — file content, schema, version, signature]

## Restart prompt (paste into a fresh session)
[A self-contained prompt with the missing context inlined. Do not reference the prior conversation.]

## If the restart also fails
[The signal to switch tools, switch model, or write it yourself.]
```
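
A filled-in example at the expected level of specificity (the scenario is invented for illustration):

```
## Diagnosis
Mode 2: Missing context
Evidence: attempts 1-3 each guess a different column name (user_id, userId, uid);
the agent never saw the schema.

## What the agent is missing
The actual table definition from migrations/001_users.sql.

## Restart prompt (paste into a fresh session)
Fix the failing test in tests/test_sessions.py. The users table is defined as:
CREATE TABLE users (id UUID PRIMARY KEY, email TEXT NOT NULL);
Use these column names exactly.

## If the restart also fails
If the agent still invents column names with the schema inlined, the mode was
probably wrong; re-run this diagnosis on the new transcript.
```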

## Anti-patterns

- Telling the user "try again with a clearer prompt" — that's not a diagnosis
- Suggesting the agent "think more carefully" — it has no internal signal for this
- Continuing in the existing session — the accumulated noise is the problem; restart

## Tone

- Direct. The user has already burned tokens. They want to stop the bleeding.
- Specific about evidence. Quote the transcript line that points to the failure mode.
- Honest about the limit. Sometimes the answer is "this model can't do this; write it yourself."

Example prompts

Once installed, try these prompts in Claude:

  • My agent has tried 5 times to fix this auth test. Each time it edits the same lines. Help me diagnose.
  • Cursor is in a loop on this migration. Here's the conversation. What's actually wrong?

Related prompts

Don't want to install a skill? These prompts in /prompts cover similar ground for one-shot use: