GetBetterPrompts
AI Prompt Guides

How to Write Better AI Prompts (With Examples)

Writing a good prompt is the single highest-leverage skill you can build when working with AI. A clear, structured prompt turns a mediocre response into an accurate, useful one. This guide gives you a concrete framework you can apply to any model, any task, right now.

In this guide

  1. What Makes a Good Prompt
  2. The Role-Task-Format Framework
  3. One Task per Prompt
  4. Constraints and Avoid Lists
  5. Examples and Few-Shot Prompting
  6. Test and Iterate Your Prompts
1. What Makes a Good Prompt

A good prompt removes ambiguity. The model does not know your context, your audience, or your standards unless you spell them out. Vague inputs like "write me an email" force the model to guess, and it will guess wrong most of the time.

Strong prompts share three traits:

  • Specific -- they state exactly what output is wanted
  • Contextual -- they provide the background the model needs
  • Constrained -- they set limits on format or length

The OpenAI prompt engineering guide calls this "writing clear instructions," and it is the single most effective technique they recommend.


Bad: "Summarize this article."

Better: "Summarize this article in three bullet points for a product manager who hasn't read it. Keep each bullet under 20 words."

The second version tells the model what to produce, who it's for, and how long it should be. That specificity is what separates useful output from generic filler.

Key takeaway: every piece of information you leave out is a decision you're letting the model make for you. Sometimes that's fine. Usually it's not.


2. The Role-Task-Format Framework

The simplest structure that consistently produces good results has three parts:

  • Role -- assign the AI a persona ("You are a senior copywriter")
  • Task -- describe what you need ("Write a landing page headline for a budgeting app aimed at college students")
  • Format -- specify the output shape ("Give me five options, each under 10 words")

This framework works because it mirrors how you'd brief a human colleague. You wouldn't hand someone a project without explaining who they're acting as, what you need, and how you want it delivered. The Anthropic prompt engineering guide recommends assigning a role as one of the first techniques to try, because it anchors the model's tone and expertise level.

Here's a quick template you can copy:

Role: You are a [job title] with expertise in [domain].
Task: [Specific action verb] + [what] + [for whom/why].
Format: Output as [bullet points / table / JSON / paragraph]. Keep it under [length].

You don't need all three parts for every prompt, but starting here gives you a reliable baseline. Once the output is close, you can refine individual pieces.
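The template above is easy to automate. Here is a minimal sketch of a helper that assembles the three parts into a single prompt string; the function name and signature are illustrative, not from any particular library:

```python
def build_prompt(role, task, output_format, max_length=None):
    """Assemble a Role-Task-Format prompt from its three parts."""
    lines = [
        f"Role: You are {role}.",
        f"Task: {task}",
        f"Format: {output_format}",
    ]
    if max_length:
        # Length limits are optional but usually worth including.
        lines.append(f"Keep it under {max_length}.")
    return "\n".join(lines)

prompt = build_prompt(
    role="a senior copywriter",
    task="Write a landing page headline for a budgeting app aimed at college students.",
    output_format="Give me five options, each under 10 words.",
)
```

Keeping the parts as separate arguments makes it easy to swap the role or format later without rewriting the whole prompt.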


3. One Task per Prompt

Cramming multiple tasks into a single prompt is the fastest way to get mediocre results on all of them. When you ask the model to "research competitors, then write a positioning statement, then draft three ad headlines," each subtask gets less attention than it deserves.

Break complex work into a chain of focused prompts instead:

  • Step 1 -- ask for the research. Review it.
  • Step 2 -- feed the relevant findings into a second prompt that writes the positioning statement.
  • Step 3 -- use that statement in a third prompt for the headlines.

Each step builds on verified output from the previous one.
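The chain above can be sketched as three small prompt builders, each consuming the reviewed output of the previous step. The function names and wording are illustrative; `ask` stands in for whatever model call you use:

```python
def research_prompt(product):
    """Step 1: ask only for the competitor research."""
    return f"List the top competitors for {product} and summarize each in one sentence."

def positioning_prompt(research_notes):
    """Step 2: write the positioning statement from reviewed research."""
    return ("Using only the research below, write a one-sentence positioning statement.\n\n"
            f"Research:\n{research_notes}")

def headline_prompt(positioning):
    """Step 3: generate headlines from the finished positioning statement."""
    return ("Write three ad headlines, each under 8 words, "
            f"based on this positioning statement:\n{positioning}")

# Usage sketch, where `ask` is your model call:
#   research = ask(research_prompt("a budgeting app"))   # review before continuing
#   positioning = ask(positioning_prompt(research))
#   headlines = ask(headline_prompt(positioning))
```

Because each step is a separate function, you can re-run any one of them without repeating the others.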

This approach has a practical benefit beyond quality: it's easier to debug. If your headlines are off, you can trace the problem to the positioning statement or the research without re-running everything. The Google Gemini prompting guide also recommends splitting complex tasks, noting that simpler prompts produce more predictable results.

Rule of thumb: if your prompt contains the words "then" or "also" more than once, it's probably doing too much. Split it.


4. Constraints and Avoid Lists

Telling the model what not to do is just as important as telling it what to do. Without constraints, you'll get default behavior: long paragraphs, generic phrasing, and unnecessary qualifiers.

Constraints can cover:

  • Length -- "under 200 words"
  • Tone -- "no jargon, write at a sixth-grade reading level"
  • Structure -- "use numbered steps, not paragraphs"
  • Content -- "do not mention competitor names"

Avoid lists work well for recurring problems. If the model keeps adding disclaimers you don't want, add "Do not include disclaimers or caveats" to your prompt.


The Anthropic guide suggests using explicit constraints to reduce hallucination, especially when asking the model to cite sources or stick to provided data.

Example constraint: "Only use information from the document above. If the answer isn't in the document, say so."

This kind of constraint dramatically improves factual accuracy.

Build a personal library of constraints that solve your repeated frustrations. Over time, you'll have a toolkit of reusable prompt fragments that save you editing time on every generation.
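One way to keep such a library is a small dictionary of reusable constraint fragments that you append to any prompt. The keys and wording below are illustrative, drawn from the examples in this section:

```python
# Reusable constraint fragments; names and wording are illustrative.
CONSTRAINTS = {
    "length": "Keep the response under 200 words.",
    "tone": "No jargon; write at a sixth-grade reading level.",
    "structure": "Use numbered steps, not paragraphs.",
    "no_disclaimers": "Do not include disclaimers or caveats.",
    "grounded": ("Only use information from the document above. "
                 "If the answer isn't in the document, say so."),
}

def with_constraints(prompt, *keys):
    """Append the named constraints to the end of a prompt."""
    bullets = "\n".join(f"- {CONSTRAINTS[k]}" for k in keys)
    return f"{prompt}\n\nConstraints:\n{bullets}"
```

Calling `with_constraints("Summarize the report.", "length", "no_disclaimers")` yields the base prompt plus a bulleted constraints block you never have to retype.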


5. Examples and Few-Shot Prompting

Showing the model what you want is often more effective than describing it. This technique is called "few-shot prompting," and it works by including one or more examples of the desired input-output pair directly in your prompt.

Say you need the model to extract structured data from unstructured text. Instead of writing a long description of the output schema, paste in one example:

Input: "John Smith, CEO, joined March 2019."

Output: { "name": "John Smith", "title": "CEO", "start_date": "2019-03" }

The model picks up the pattern and applies it to new inputs with high consistency.

The OpenAI guide recommends few-shot examples as one of the most reliable ways to steer output format and style. Even a single example (one-shot) can dramatically improve consistency compared to a zero-shot prompt.


Tips for choosing examples:

  • Pick ones that represent edge cases, not just the easy path
  • If your data sometimes has missing fields, show an example with a missing field so the model knows how to handle it
  • Two or three well-chosen examples beat ten generic ones every time
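A few-shot prompt is just the instruction followed by labeled input-output pairs and the new input. Here is a minimal sketch that assembles one, including an edge-case example with a missing field as recommended above; the helper name and JSON schema are illustrative:

```python
def few_shot_prompt(instruction, examples, new_input):
    """Build a few-shot prompt from (input, output) example pairs."""
    parts = [instruction, ""]
    for inp, out in examples:
        parts += [f"Input: {inp}", f"Output: {out}", ""]
    # End with the new input and an open "Output:" for the model to complete.
    parts += [f"Input: {new_input}", "Output:"]
    return "\n".join(parts)

examples = [
    ('John Smith, CEO, joined March 2019.',
     '{ "name": "John Smith", "title": "CEO", "start_date": "2019-03" }'),
    # Edge case: missing title, so the model learns how to handle gaps.
    ('Jane Doe joined in 2021.',
     '{ "name": "Jane Doe", "title": null, "start_date": "2021" }'),
]
prompt = few_shot_prompt(
    "Extract structured data from each input as JSON.",
    examples,
    "Maria Garcia, CTO, joined July 2022.",
)
```

Ending the prompt with a bare `Output:` cues the model to continue the pattern rather than explain it.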

6. Test and Iterate Your Prompts

Your first prompt is a draft, not a finished product. Treat prompt writing like code: write it, test it on a few inputs, review the output, and revise. Most people stop after the first attempt and blame the model when the results aren't great.

A simple iteration loop:

  • Step 1 -- Run the prompt.
  • Step 2 -- Identify the biggest gap between what you got and what you wanted.
  • Step 3 -- Add a constraint, example, or clarification that addresses that specific gap.
  • Step 4 -- Run it again.

Two or three cycles usually get you to production-quality output.
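The stability check described later in this section can also be automated. Here is a rough sketch that runs several phrasings of the same prompt and flags fragile ones via word overlap between outputs; the function, threshold, and overlap metric are all illustrative assumptions, and `model` is any callable wrapping your model call:

```python
def stability_check(variants, model, min_overlap=0.6):
    """Run several phrasings of the same prompt and compare the outputs.

    `model` is any callable that takes a prompt string and returns text.
    Low pairwise word overlap suggests the prompt is too fragile.
    """
    outputs = [model(v) for v in variants]

    def overlap(a, b):
        # Jaccard similarity over lowercase word sets; a crude but cheap proxy.
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(len(wa | wb), 1)

    scores = [overlap(outputs[i], outputs[j])
              for i in range(len(outputs))
              for j in range(i + 1, len(outputs))]
    return min(scores, default=1.0) >= min_overlap
```

In practice you would eyeball the divergent outputs rather than trust the score alone, but even this crude check catches prompts that swing wildly with small wording changes.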


Keep a prompt log. When you find a prompt that works reliably, save it somewhere you can find it later. Label it with the task, the model you tested it on, and any notes about what made it work. This saves you from re-inventing prompts you've already solved.

The Google Gemini prompting guide recommends testing the same prompt with slightly different wording to check stability. If small phrasing changes produce wildly different output, your prompt is too fragile and needs more structure. Consistency across runs is the real test of a well-written prompt.

Sources

  • OpenAI Prompt Engineering Guide
  • Anthropic Prompt Engineering Guide
  • Google Gemini Prompting Guide

Related guides

  • AI Image Prompt Guide: Style, Lighting, Tips
  • How to Prompt Gemini (Text, Image, Video)
  • AI Video Prompt Guide: Sora, Veo 3, and Runway