
7 Common Prompting Mistakes (and How to Fix Them)

Posted on June 3, 2024

Why Your Prompts Aren't Working

Getting a great result from an AI model is a skill. If you're consistently getting bland, incorrect, or unhelpful responses, chances are you're falling into a few common traps. Here are seven of the most frequent prompting mistakes and how to correct them.

1. Being Too Vague

The Mistake: "Write about real estate."

The Problem: The AI has no idea where to start. Should it cover commercial or residential property? Market trends? A career as an agent? The result will be a generic, high-level summary that isn't useful to anyone.

The Fix: Be hyper-specific. "Act as a real estate agent. Write a short blog post (around 300 words) for first-time homebuyers explaining the importance of getting a pre-approval for a mortgage before they start looking at houses."
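
If you write this kind of detailed prompt often, it helps to turn the specifics into explicit slots so none of them get forgotten. Here's a minimal sketch in Python; the template and variable names are just for illustration, not any particular library:

# Every detail the model would otherwise have to guess becomes a named slot.
PROMPT_TEMPLATE = (
    "Act as a {role}. Write a short blog post (around {word_count} words) "
    "for {audience} explaining {topic}."
)

prompt = PROMPT_TEMPLATE.format(
    role="real estate agent",
    word_count=300,
    audience="first-time homebuyers",
    topic=(
        "the importance of getting a pre-approval for a mortgage "
        "before they start looking at houses"
    ),
)
print(prompt)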

2. Forgetting to Assign a Role

The Mistake: "Explain the concept of photosynthesis."

The Problem: The AI will give a standard, encyclopedia-like definition. It's correct, but may not be suitable for your audience.

The Fix: Assign a role to set the tone and expertise. "Act as a middle school science teacher. Explain the concept of photosynthesis to a class of 12-year-olds using a simple analogy involving a factory."
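
If you're calling a model through an API rather than a chat window, the role typically goes in a dedicated system message. Here's a minimal sketch using the OpenAI Python SDK; the model name is only an example, and any chat-style client works the same way:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; use whichever model you have access to
    messages=[
        # The system message sets the persona once for the whole conversation.
        {
            "role": "system",
            "content": "You are a middle school science teacher who explains ideas with simple analogies.",
        },
        {
            "role": "user",
            "content": "Explain photosynthesis to a class of 12-year-olds using an analogy involving a factory.",
        },
    ],
)
print(response.choices[0].message.content)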

3. Asking Multiple Questions in One Prompt

The Mistake: "What is Python, why is it popular, and can you give me some code to get started?"

The Problem: The model might focus on one part of the question and give a brief answer to the others, or it might blend them together awkwardly. You lose control over the output.

The Fix: Break it down into separate, focused prompts. Or, better yet, use a structured prompt. "Create a document about Python for beginners. It should have three sections with the following headings: 'What is Python?', 'Why is Python Popular?', and 'Your First Python Code'."
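
In code, "breaking it down" usually means one focused request per call. A rough sketch, again using the OpenAI Python SDK; the ask helper is hypothetical, just a thin wrapper for the example:

from openai import OpenAI

client = OpenAI()

def ask(question: str) -> str:
    """Hypothetical helper: one focused question per request."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Three focused prompts instead of one tangled question.
what_is = ask("In two short paragraphs, explain what Python is to a complete beginner.")
why_popular = ask("List five reasons Python is popular, one sentence each.")
first_code = ask("Write a commented 'Hello, world' program in Python for a first-time programmer.")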

4. Not Specifying the Format

The Mistake: "List the pros and cons of remote work."

The Problem: You'll probably get two dense paragraphs of prose, which are hard to scan and awkward to drop into a document or presentation.

The Fix: Explicitly define the output format. "Create a markdown table comparing the pros and cons of remote work. The table should have two columns: 'Advantages' and 'Disadvantages'."
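
The same principle applies when another program, not a person, will consume the answer: ask for a machine-readable format and validate it. A small sketch; reply_text is a placeholder for whatever your model actually returns:

import json

prompt = (
    "List the pros and cons of remote work as JSON with exactly two keys, "
    "'advantages' and 'disadvantages', each mapping to a list of short strings. "
    "Return only the JSON, with no extra commentary."
)

# Placeholder: in practice this string comes back from your model call using the prompt above.
reply_text = '{"advantages": ["No commute"], "disadvantages": ["Isolation"]}'

data = json.loads(reply_text)  # fails loudly if the model ignored the format instruction
assert set(data) == {"advantages", "disadvantages"}
print(data["advantages"])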

5. Using Ambiguous Language

The Mistake: "Make this text better."

The Problem: "Better" is subjective. Does it mean shorter? More professional? More exciting? The AI has to guess.

The Fix: Define what "better" means. "Rewrite the following paragraph to be more persuasive and professional. It should target an audience of potential investors."

6. Trusting the Output Blindly

The Mistake: Copying and pasting AI-generated statistics, facts, or code directly into your work.

The Problem: LLMs can "hallucinate" – they can invent facts, cite non-existent sources, and write buggy code. They are optimized to be plausible, not always truthful.

The Fix: Always act as the human expert. Verify any factual claims with a quick search. Test all code snippets thoroughly. You are responsible for the final output.
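
For code in particular, "test it thoroughly" can start with a handful of assertions before the snippet goes anywhere near real work. A minimal sketch; is_leap_year stands in for whatever function the model handed you:

# Pretend this function came straight out of the model's response.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# A few edge cases catch the most common kinds of subtly wrong logic.
assert is_leap_year(2024) is True
assert is_leap_year(1900) is False  # divisible by 100 but not by 400
assert is_leap_year(2000) is True   # divisible by 400
assert is_leap_year(2023) is False
print("All checks passed.")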

7. Not Iterating

The Mistake: Giving up when the first prompt doesn't work perfectly.

The Problem: Prompting is a conversation, and the first attempt is usually just a starting point. Walking away after one try means settling for the weakest version of the output.

The Fix: Treat it like a process of refinement. Look at the output, identify what's wrong, and adjust your prompt. Add more context, clarify an instruction, or provide an example. The best results often come from the third or fourth iteration.
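
If you're working through an API, iterating just means keeping the draft in the message history and adding your feedback before asking again. A rough sketch with the OpenAI Python SDK; the model name and the feedback text are only examples:

from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "Write a product description for a reusable water bottle."}]

# First attempt.
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
draft = reply.choices[0].message.content

# Keep the draft in the history, then refine it with a specific correction.
messages.append({"role": "assistant", "content": draft})
messages.append({
    "role": "user",
    "content": "Make it 80 words max, give it a more playful tone, and mention it keeps drinks cold for 24 hours.",
})

reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)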