How to prompt: 5 patterns that work in any model
Give context, show examples, demand structure, iterate, assign a role. Before/after examples for each.
Guide one ended with "show, don't ask." Here are five concrete ways to do it. Each pattern works in Claude, ChatGPT, Gemini, and most other models. Pick the one that fits, or stack a few.
Paste the source, don't describe it
The model can only work with what's in front of it. If you describe a document instead of pasting it, you're asking it to imagine the content and answer about an imagined thing. That's where hallucinations come from.
The fix is almost always: copy, paste, ask.
before:
Summarize the latest OpenAI pricing page for me.
after:
Below is the current OpenAI pricing page. Summarize the changes vs GPT-4o pricing in a 5-bullet list.
[paste the full page]
Same goes for code, contracts, transcripts, and email threads. Don't paraphrase. Paste.
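If you're building the prompt in code rather than a chat window, the same rule is just string assembly: instruction up front, full source wrapped in clear delimiters below. This is a sketch; the helper name and the `<source>` tags are conventions I'm assuming, not anything a model requires.

```python
# A sketch of "paste, don't describe": put the instruction first, then the
# full source text inside clear delimiters so the model knows where it ends.
# paste_and_ask is a hypothetical helper name.
def paste_and_ask(instruction, source_text):
    return (
        f"{instruction}\n\n"
        "<source>\n"
        f"{source_text}\n"
        "</source>"
    )

prompt = paste_and_ask(
    "Summarize the changes vs GPT-4o pricing in a 5-bullet list.",
    "...full text of the pricing page goes here...",
)
```

The delimiters matter more than their exact spelling: triple dashes, XML-style tags, or fenced blocks all work, as long as the model can tell instruction from material.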
Show two or three examples
Models are pattern-matchers. If you give them two or three examples of what good output looks like, they'll match the pattern. This is called few-shot prompting, and it's the single biggest quality lever you have.
before:
Write a product update tweet for our new dashboard feature.
after:
Here are three product update tweets we've shipped before:
1. "Shipped: dark mode. It's been our #1 request for 8 months. Toggle it in settings."
2. "New: bulk export to CSV. Took us a weekend. Should have done it last year."
3. "You can now invite teammates with view-only access. Free on every plan."
Write a fourth one for our new dashboard feature, which lets users build custom charts from any field in their data.
Two examples beat ten adjectives every time.
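If you assemble prompts programmatically, few-shot is the same idea with a loop: number the examples, then append the actual request. A minimal sketch, assuming a plain-text prompt you could paste into any model; `build_few_shot_prompt` is a hypothetical helper.

```python
# Few-shot prompt assembly: the examples set the pattern, the final line
# asks for one more instance of it.
def build_few_shot_prompt(examples, task):
    numbered = "\n".join(f'{i}. "{e}"' for i, e in enumerate(examples, 1))
    return (
        "Here are three product update tweets we've shipped before:\n"
        f"{numbered}\n\n"
        f"{task}"
    )

tweets = [
    "Shipped: dark mode. It's been our #1 request for 8 months. Toggle it in settings.",
    "New: bulk export to CSV. Took us a weekend. Should have done it last year.",
    "You can now invite teammates with view-only access. Free on every plan.",
]
prompt = build_few_shot_prompt(
    tweets,
    "Write a fourth one for our new dashboard feature, which lets users "
    "build custom charts from any field in their data.",
)
```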
Demand a structured output
If you don't tell the model what shape you want, you'll get prose. Prose is hard to parse, hard to reuse, and hard to compare across runs. Specify the structure and the field names.
before:
Tell me about these three competitors.
after:
For each of the three companies below, return a JSON object with these fields:
- name
- pricing_model (one of: subscription, usage-based, freemium, one-time)
- target_customer (one short sentence)
- main_weakness (one short sentence)
Companies: Notion, Coda, Airtable.
Bullet lists, tables, JSON, XML tags, even "respond in exactly three sentences." All of these work. Pick one and be specific.
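The payoff of demanding JSON is that you can check the model's reply mechanically instead of eyeballing it. A sketch of that check, using the fields from the example above; the function name and error messages are mine, not any library's.

```python
import json

# The contract we asked the model to follow.
REQUIRED_FIELDS = {"name", "pricing_model", "target_customer", "main_weakness"}
ALLOWED_PRICING = {"subscription", "usage-based", "freemium", "one-time"}

def validate_competitor(raw):
    """Parse one JSON object from the model and verify it matches the contract."""
    obj = json.loads(raw)
    missing = REQUIRED_FIELDS - obj.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if obj["pricing_model"] not in ALLOWED_PRICING:
        raise ValueError(f"unexpected pricing_model: {obj['pricing_model']}")
    return obj
```

If the reply fails validation, you don't guess what went wrong: you send the error back to the model and ask it to fix the JSON, which is just the iterate pattern below.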
Iterate, don't restart
People treat prompts like a vending machine: type once, expect a finished answer, give up if it's wrong. The better mental model is editing. The first response is a draft. You react to it.
before (one giant prompt, then walks away):
Write a 600-word blog post about why most onboarding flows fail, with three case studies and a CTA at the end.
after (a conversation):
Turn 1: Outline a 600-word post on why most onboarding flows fail. Just the H2s and one sentence each.
Turn 2: Good. Replace section 2 with something about activation metrics instead of NPS.
Turn 3: Now write section 1 only. Match the voice of [paste sample].
Turn 4: Tighten this paragraph. It feels like a LinkedIn post.
Four turns, way better output. Each turn keeps the parts you liked and fixes one thing.
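Under the hood, that conversation is just a growing list: every API for Claude, ChatGPT, and Gemini takes the whole message history on each call, which is why turn 4 still remembers turn 1. A sketch of the bookkeeping, with `add_turn` as a hypothetical helper and the replies stubbed out:

```python
# Each turn appends a user message and the model's reply; the next request
# sends the entire list, so context accumulates instead of restarting.
def add_turn(history, user_msg, model_reply):
    history.append({"role": "user", "content": user_msg})
    history.append({"role": "assistant", "content": model_reply})
    return history

history = []
add_turn(history,
         "Outline a 600-word post on why most onboarding flows fail. "
         "Just the H2s and one sentence each.",
         "(draft outline)")
add_turn(history,
         "Good. Replace section 2 with something about activation metrics "
         "instead of NPS.",
         "(revised outline)")
```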
Assign a role and an audience
Telling the model who it is and who it's writing for changes the vocabulary, the tone, and the assumptions it makes. This isn't magic; it's just stuffing the context with relevant patterns.
before:
Explain how RAG works.
after:
You're a staff engineer explaining RAG to a product manager who has never written code but is technical enough to understand database concepts. Use one analogy, no code, and end with the one thing they should ask their engineering team.
The role tells the model what kind of language to pull from. The audience tells it what to leave out. Both matter.
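In code, role and audience usually live in the system message, which most chat APIs support. A sketch assuming that shape; `make_messages` is a hypothetical helper, not any SDK's function.

```python
# Role + audience in a system message: the role sets the vocabulary to
# pull from, the audience sets what to leave out.
def make_messages(role, audience, task):
    system = (
        f"You are {role}. You are writing for {audience}. "
        "Match your vocabulary and assumptions to that reader."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

messages = make_messages(
    "a staff engineer",
    "a product manager who has never written code but understands databases",
    "Explain how RAG works. Use one analogy, no code, and end with the one "
    "thing they should ask their engineering team.",
)
```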
Quick recap
Paste the material. Show examples. Demand structure. Iterate. Set a role.
You can stack these. A great prompt usually does three or four at once.
The next guide helps you pick your daily driver (Claude, ChatGPT, or Gemini) and covers when something else wins.
Next in this pillar
Picking your daily AI: Claude vs ChatGPT vs Gemini