Master Prompts — Prompt Quality Determines Output Quality
The anatomy of a great prompt, why specificity beats length, and 10 master prompts for the most common development tasks.
Every Claude output you will ever receive is a direct function of the prompt that produced it. This is not a metaphor — the quality of your instructions literally determines the quality of what you get back. This page gives you the framework, the anatomy, and 10 reusable master prompts for common development tasks.
The Core Principle
Here is the single most important thing to understand about prompting:
Claude is not guessing what you want. Claude is doing exactly what you described.
When the output is bad, the prompt was bad. When you need to re-explain, you under-explained. When Claude produces something that does not fit your project, you did not give it enough project context.
This is not criticism — it is good news. You have complete control over output quality through the quality of your instructions. A better prompt reliably produces better output. Every time.
The Excel analogy: When you send a spreadsheet to a colleague with the note "please fix this," what happens? They do not know what "fix" means. Which part? What is wrong? What should the result look like? They guess. They fix the wrong thing. You re-explain. They try again.
When you say: "In column D, the running total formula in rows 5 through 50 is using absolute reference when it should use relative reference. Please change all instances and verify the result matches the manual total in cell D51."
They get it right the first time.
Same principle. Same tool. The quality of your instruction is the bottleneck.
The Anatomy of a Great Prompt
Every strong prompt has five elements. Not all five are required for every task — a simple bug fix might only need three. But understanding all five makes you a more precise communicator.
1. Role Context — Where Are We?
Tell Claude what project you are in, what you are working on, and where in the codebase this lives.
If you have a well-maintained CLAUDE.md, Claude already knows most of this. You can skip it or keep it brief. But if you are working in a new area of the codebase, orient Claude first.
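For example (the stack, page, and file path here are placeholders for your own project):

```
I'm working in our Next.js + Supabase app. This task is in the admin area,
specifically the team settings page at src/app/admin/team/page.tsx.
```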
2. Current State — What Exists?
Describe what is already there. This prevents Claude from re-creating things that exist and helps it write code that integrates with what is already built.
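For example, assuming a hypothetical members feature:

```
We already have a useMembers hook in src/hooks/useMembers.ts that fetches from
the members table, and a MemberCard component that renders a single member.
Nothing renders the full list yet.
```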
3. Desired Outcome — What Do You Want?
Be specific. Name the component, the function, the behavior. Include edge cases you care about.
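Continuing the same hypothetical example:

```
Create a MemberList component that renders a MemberCard for each member, shows
a skeleton while loading, and shows an empty state ("No members yet") when the
list is empty.
```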
4. Constraints — What You Do NOT Want
This is often the most valuable part of a prompt. Telling Claude what NOT to do prevents the most common mistakes.
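For example:

```
Do not create a new data-fetching hook; use the existing useMembers hook. Do
not add any new dependencies. Use our existing Skeleton component for the
loading state.
```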
5. Output Format — How Should Claude Respond?
For complex tasks, tell Claude what a good response looks like. This prevents getting a wall of code with no explanation, or an explanation with no code.
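For example:

```
Reply with the full contents of the new file, then a short list of any other
files you touched and why. Do not explain the code line by line.
```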
The "Show Don't Tell" Principle
The fastest way to get Claude to match your existing style is to show it an example from your codebase, not describe the style in words.
Describing style (less effective):
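Something like this, where you describe your conventions in words:

```
Write the hook in our usual style: typed, using React Query, with errors thrown
from the query function and the data returned directly.
```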
Showing style (highly effective):
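Something like this, where you paste a real example and ask Claude to mirror it (the useProjects hook, the import paths, and the clients table below are stand-ins for your own code):

```
Here is an existing hook from my codebase:

// src/hooks/useProjects.ts — an existing hook, pasted as a style reference
import { useQuery } from '@tanstack/react-query';
import { supabase } from '@/lib/supabase';

export function useProjects() {
  return useQuery({
    queryKey: ['projects'],
    queryFn: async () => {
      const { data, error } = await supabase.from('projects').select('*');
      if (error) throw error;
      return data;
    },
  });
}

Write a useClients hook for the clients table that follows exactly the same
structure: same imports, same query key pattern, same error handling.
```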
When you give Claude a real example from your codebase, it reverse-engineers your style with near-perfect accuracy. This is faster than any style description you could write.
10 Master Prompts
These are reusable prompt templates for the tasks you will do most frequently. Adapt them to your specific situation — the structure is what matters.
1. New Component
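A template to adapt; the component, hook, and file paths are placeholders for your own:

```
In src/components/invoices/, create an InvoiceList component.
Current state: we have a useInvoices hook in src/hooks/useInvoices.ts and an
InvoiceCard component.
Desired outcome: render an InvoiceCard per invoice, a skeleton while loading,
and an empty state when there are no invoices.
Constraints: do not fetch data inside the component; use the existing hook.
Match the structure of src/components/projects/ProjectList.tsx.
```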
2. Bug Fix
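An example shape; the form and hook names are placeholders:

```
Bug: clicking "Save" on the profile form shows no feedback and the data does
not persist.
Expected: a success toast, and the new values still visible after a reload.
Where to look: src/components/profile/ProfileForm.tsx and the useUpdateProfile
hook.
Constraints: do not change the form's validation schema. Explain the root cause
before writing the fix.
```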
3. Database Schema
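A sketch; the table and columns are stand-ins for your own schema:

```
Create a migration for a new comments table: id (uuid, primary key), post_id
(references posts), author_id (references profiles), body (text, required),
created_at (timestamp, default now).
Constraints: follow the naming conventions used in the existing migrations. Do
not modify existing tables. Include indexes on post_id and author_id.
```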
4. RLS Policy
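An example, assuming a Supabase-style setup; the table names are placeholders:

```
Write row level security policies for the comments table: any authenticated
user can read comments, users can insert comments only as themselves
(author_id = auth.uid()), and users can update or delete only their own
comments.
Constraints: enable RLS on the table first. Follow the style of the existing
policies on the posts table.
```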
5. React Query Hook
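A template; the hook names and file paths are placeholders:

```
Create a useComments(postId) hook in src/hooks/useComments.ts that fetches
comments for a post, ordered by created_at.
Model it after src/hooks/usePosts.ts: same query key structure, same error
handling, same return shape.
Constraints: do not fetch inside components; all data access goes through hooks.
```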
6. Form with Validation
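An example shape; the component and mutation names are placeholders:

```
Create a CommentForm component with a single body field.
Validation: required, max 2000 characters, with an inline error message.
On submit: call the existing useAddComment mutation, clear the field, and show
a success toast.
Constraints: use the same form and validation libraries as ContactForm.tsx; do
not introduce a new form library.
```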
7. Error Handling
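A sketch; the checkout flow is a stand-in for whatever area you are hardening:

```
Add error handling to the checkout flow in src/components/checkout/.
Current state: failed network requests fail silently.
Desired outcome: show a toast with a human-readable message, log the error to
the console, and leave the form state intact so the user can retry.
Constraints: do not swallow errors inside hooks; surface them to the component.
```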
8. API Route / Edge Function
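An example, assuming an edge function setup; the function name and payload are placeholders:

```
Create an edge function that accepts a POST with { email } and adds the address
to our newsletter table.
Validate the email, return 400 on invalid input, 200 on success, and 500 with a
logged error otherwise.
Constraints: follow the structure of the existing send-receipt function,
including its CORS handling and environment variable usage.
```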
9. Code Review
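A template to adapt:

```
Review the following diff before I commit it. Check for: violations of the
conventions in CLAUDE.md, missing error handling, type safety issues, and
anything that would break existing behavior.
Output format: a list of issues ordered by severity, each with the file, the
problem, and a suggested fix. If something is fine, do not comment on it.
```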
10. Refactor
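A sketch; the file, hook, and size are illustrative:

```
Refactor src/components/dashboard/StatsPanel.tsx: it has grown to 400 lines and
mixes data fetching, formatting, and rendering.
Desired outcome: move data fetching into a useDashboardStats hook, extract the
formatting helpers into src/lib/format.ts, and keep the rendered output
identical.
Constraints: no behavior changes, no new dependencies, and the existing tests
must still pass.
```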
Common Prompt Mistakes
Mistake 1: Vague verbs
"Fix the bug" → "The form does not show a success toast after submission. The onSubmit handler is not calling toast.success. Fix the onSubmit function in the ContactForm component."
Mistake 2: Missing constraints
Telling Claude what you want without telling it what you do not want consistently produces output that technically satisfies the request but violates your conventions. Always include "do not" instructions for your most important rules.
Mistake 3: No file context
"Write a hook for fetching users" → Claude has no idea where users are stored, how your hooks are structured, or what query patterns you use. Name the specific file to model after and the specific table to query from.
Mistake 4: Asking for too much at once
"Build the entire admin dashboard" is not a prompt — it is a project. Break it into specific components: the stat cards, then the user table, then the filter sidebar. Smaller, focused prompts produce better, more reviewable output.
Mistake 5: Not verifying the output
Prompting and then blindly committing is the cause of most AI-introduced bugs. Always read what Claude wrote. Run the TypeScript checker. Test the behavior. Claude checks its own work, but your review is the final gate.
Never commit Claude's output without reading it first. Claude can write code that compiles without errors but behaves incorrectly — TypeScript catches type mismatches, not logic errors. Always test the actual behavior in the browser before marking a task done.
- I can name all five elements of a strong prompt: role context, current state, desired outcome, constraints, and output format
- I understand that when output is wrong, the prompt was wrong — not Claude
- I use the "show don't tell" principle — paste a real example from my codebase to match my style
- My prompts always include explicit constraints ("do not use X, always use Y") for my most important rules
- I never commit Claude's output without reading it and testing the actual behavior in the browser