
Few-shot / Zero-shot / N-shot

Also known as: k-shot, few-shot prompting, in-context examples, demonstrations

Prompting styles defined by how many examples you include before the actual task: zero examples (zero-shot), a few (few-shot), or N specific examples (N-shot).

What it means

These terms describe how many demonstrations you give the model before asking it to do the real task. Zero-shot means you just describe the task: "Classify this review as positive or negative: ..." No examples. One-shot is one demo. Few-shot is typically 2-10. N-shot is the general form. The terminology comes from older ML, where "shot" meant labeled training examples; in LLMs it means examples shown in the prompt itself, and they're not used for training — just for in-context learning.

When examples help: rare or unusual tasks the model wasn't heavily trained on; tasks where output format matters (you want JSON, a specific tone, a particular structure); tasks where the boundaries are fuzzy and the model needs to see what counts. Three good examples can take a prompt from flaky to reliable.

When examples don't help (or hurt): tasks the model already knows well — adding examples often doesn't move accuracy and just costs tokens. Tasks where reasoning matters more than format — examples can lock the model into mimicking the example's reasoning depth instead of adapting. With reasoning models in particular, heavy few-shot prompting can backfire: the model performs better when you describe the task clearly and let it think, instead of constraining it to copy a pattern.

By 2026, the practical default for most tasks is zero-shot with a clear, structured prompt — sometimes with one example for format, rarely more. The exception is highly specific output formats or domain-specific judgment calls where examples carry information you can't easily put into prose. If you find yourself adding a fifth example, the better move is usually clearer instructions or fine-tuning.

Example

Zero-shot: "Translate to French: 'Hello world.'" — model handles it. Few-shot for a custom JSON schema: show 3 input/output pairs with the exact field names and types, then paste the new input. The model produces matching JSON. Don't bother with few-shot for "summarize this article" — modern models do it fine zero-shot.
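The custom-JSON-schema case above is where few-shot earns its tokens: three demonstrations pin down the exact field names and types more reliably than prose. A sketch with a hypothetical hiring-extraction schema (all company names and fields are invented for illustration):

```python
import json

# Three (text, JSON) demonstrations fixing the schema: company, role, count, month.
# Entirely hypothetical data, chosen only to show consistent field names and types.
SHOTS = [
    ("Acme hired 12 engineers in March.",
     {"company": "Acme", "role": "engineer", "count": 12, "month": "March"}),
    ("Globex hired 3 designers in May.",
     {"company": "Globex", "role": "designer", "count": 3, "month": "May"}),
    ("Initech hired 7 analysts in July.",
     {"company": "Initech", "role": "analyst", "count": 7, "month": "July"}),
]

def few_shot_prompt(new_input: str) -> str:
    """Three demonstrations, then the new input with the JSON left open."""
    blocks = [f"Text: {t}\nJSON: {json.dumps(j)}" for t, j in SHOTS]
    blocks.append(f"Text: {new_input}\nJSON:")
    return "\n\n".join(blocks)

prompt = few_shot_prompt("Umbrella hired 5 chemists in June.")
```

Serializing the demonstrations with `json.dumps` keeps quoting and types consistent, so the model sees exactly the output format it should reproduce.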

Why it matters

Knowing when to add examples vs. when to write better instructions is the core skill of prompt engineering. New users add too many examples; experienced users use them surgically. With reasoning models, the trend is fewer examples and clearer descriptions — examples can constrain models that would do better with room to think.
