Most people underuse LLMs, not because the models are weak, but because our prompts are.
If you read my last article, which you probably didn’t 😏, you’ll know that I’ve been playing around with prompt engineering recently and exploring a lot of prompting techniques. I can’t squeeze every insight into one article, but this is my attempt at sharing the ones that matter most. Honestly, I felt a little ashamed for not knowing some of these earlier.
We tend to prompt LLMs like we’re chatting with a friend, when in fact we’re interacting with a machine (buzzkill, I know) that encodes your text into tokens, pushes them through billions of neural network parameters, aka does a lot of taekwondo with your input, and then decodes the output back into human-readable language. Sometimes I almost forget it’s a machine, credit to the engineers who made the technology feel so human.
Getting the best output from an LLM boils down to one thing: accepting that AI is incredibly good at following directions. The implication is that it will follow your directions exactly as you write them, and what you write may not always communicate what you actually need. To get better results, you need to understand a little about how the LLM works so you can account for its quirks (e.g., possible hallucinations) and prompt in a way that leaves little room for ambiguity. A good prompt communicates clearly and uses the right techniques, which raises your chances of getting the best result.
Starting from the basics?
For starters, a prompt is simply what you tell an AI model to do. But the way you phrase it changes everything.
Example 1: Weak prompt
Write about climate change.
Result: A generic paragraph that sounds like a Wikipedia summary.
Example 2: Strong prompt
Write a 150-word LinkedIn post about climate change. Make it sound hopeful and end with a question to engage readers.
Result: A concise, engaging post ready to publish.
The Prompt Formula: Context + Task + Format + Tone + Examples
Most effective prompts follow a simple structure:
- Context: Who is the AI supposed to be, or what situation is it in? “You’re an HR manager welcoming a new hire.”
- Task: What should it do? “Write a welcome email.”
- Format: What should the result look like? “Three short paragraphs.”
- Tone: How should it sound? “Warm and professional.”
- Examples (optional): Provide a model to imitate.
Full prompt:
You’re an HR manager welcoming a new employee joining next week. Write a warm and professional welcome email in three short paragraphs.
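If you assemble prompts in code, the formula maps neatly onto a small helper. Here’s a minimal Python sketch; the `build_prompt` function and its parameter names are just illustrative, not from any particular library:

```python
def build_prompt(context, task, fmt, tone, examples=None):
    """Assemble a prompt from the Context + Task + Format + Tone (+ Examples) formula."""
    parts = [
        f"Context: {context}",
        f"Task: {task}",
        f"Format: {fmt}",
        f"Tone: {tone}",
    ]
    if examples:
        parts.append("Examples to imitate:\n" + "\n".join(f"- {e}" for e in examples))
    return "\n".join(parts)

print(build_prompt(
    context="You're an HR manager welcoming a new employee joining next week.",
    task="Write a welcome email.",
    fmt="Three short paragraphs.",
    tone="Warm and professional.",
))
```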
Tip #1: Give It a Role
AI responds better when you give it a role to play.
Compare these:
Explain quantum computing.
vs.
You’re a Harvard professor explaining quantum computing to a 10-year-old. Use analogies and simple language.
The second produces a clearer, more engaging answer because the model understands who it’s supposed to “be.”
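In chat-style APIs, the natural home for the role is the system message. A minimal sketch, assuming the common role/content message format (the model name is a placeholder):

```python
# The role lives in the system message; the actual question goes in the user message.
messages = [
    {
        "role": "system",
        "content": (
            "You're a Harvard professor explaining quantum computing to a 10-year-old. "
            "Use analogies and simple language."
        ),
    },
    {"role": "user", "content": "Explain quantum computing."},
]

# Pass `messages` to whichever chat-completion client you use, e.g. with the OpenAI SDK (v1+):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(model="gpt-4o-mini", messages=messages)
# print(reply.choices[0].message.content)
```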
Tip #2: Encourage Step-by-Step Reasoning
For logic-heavy or analytical questions, guide the model to think carefully.
Let’s solve this step by step.
A train travels 100 km in 2 hours and 200 km in 4 hours. What’s the average speed?
Adding “step by step” encourages reasoning instead of guessing.
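For reference, a step-by-step answer should show the intermediate arithmetic, (100 km + 200 km) / (2 h + 4 h) = 300 km / 6 h = 50 km/h, rather than jumping straight to the final number.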
Tip #3: Show, Don’t Just Tell
If you want a certain style or structure, give examples first.
Here are two examples of short tweet summaries:
- “AI isn’t replacing humans; it’s replacing tasks.”
- “Every startup is now an AI startup — but few are real AI companies.”
Now write three tweet-style summaries for these articles: [insert links].
This approach, called few-shot prompting, helps AI learn your tone and structure before generating new content.
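Programmatically, few-shot prompting is just your examples concatenated ahead of the real request. A minimal sketch, with placeholder links and illustrative variable names:

```python
# Few-shot prompting: show the model the style you want before asking for new output.
examples = [
    "AI isn't replacing humans; it's replacing tasks.",
    "Every startup is now an AI startup — but few are real AI companies.",
]
article_links = ["<link 1>", "<link 2>", "<link 3>"]  # placeholders for your own links

prompt = (
    "Here are two examples of short tweet summaries:\n"
    + "\n".join(f'- "{e}"' for e in examples)
    + "\n\nNow write three tweet-style summaries for these articles:\n"
    + "\n".join(f"- {link}" for link in article_links)
)
print(prompt)
```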
Tip #4: Be Specific About the Output
Clarity beats creativity when you need structured results.
Examples:
- “Summarize this text in five bullet points.”
- “Output your answer in a three-column table: Idea, Evidence, Application.”
- “Return results in JSON format.”
Being explicit about output format saves time and improves consistency.
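Asking for JSON pays off the moment you want to process the answer in code. A minimal sketch, assuming the OpenAI Python SDK (v1+); any chat API works similarly, and the model name is a placeholder:

```python
import json

from openai import OpenAI  # assumes the OpenAI Python SDK, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": (
            "Summarize the benefits of remote work. Return results in JSON format with "
            'exactly two keys: "summary" (a string) and "bullets" (a list of five strings). '
            "Return only the JSON, with no extra text."
        ),
    }],
)

data = json.loads(resp.choices[0].message.content)  # fails loudly if the model strays from JSON
print(data["bullets"])
```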
Tip #5: Iterate, Don’t Expect Perfection
Prompting is not a one-shot task; it’s a conversation. Start broad, inspect what you get, then refine.
Example workflow:
- “Write a blog introduction about remote work.”
- “Make it sound more conversational.”
- “Shorten it to under 100 words.”
Each iteration improves the output and clarifies your own expectations.
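In code, iteration is simply keeping the conversation history and appending each refinement. A minimal sketch of the same workflow (message format only; `call_your_chat_model` is a hypothetical helper for whatever client you use):

```python
# Iterative refinement: keep the full history so each follow-up builds on the last reply.
history = [{"role": "user", "content": "Write a blog introduction about remote work."}]

refinements = [
    "Make it sound more conversational.",
    "Shorten it to under 100 words.",
]

for follow_up in refinements:
    # reply = call_your_chat_model(history)  # hypothetical helper wrapping your chat client
    reply = "<model reply goes here>"        # placeholder so the sketch runs standalone
    history.append({"role": "assistant", "content": reply})
    history.append({"role": "user", "content": follow_up})
```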
Tip #6: Use XML-Style Tags for Structure
For complex tasks, XML-style tags help organize information clearly.
Without tags:
Summarize this transcript and list three insights. The transcript starts below.
With tags:
<task>
Summarize the following transcript and list three key insights.
</task>
<transcript>
Hey, thanks for joining today. We discussed how prompt engineering works...
</transcript>
Tags define clear sections, reducing confusion and improving precision — especially useful when prompts are long or contain multiple parts.
You can even use custom tags:
<instruction>Write a 2-line summary</instruction>
<content>[Paste text here]</content>
<format>Bullet points only</format>
This structure helps both you and the model stay organized.
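If you build these tagged prompts in code, a tiny helper keeps the sections tidy. A minimal sketch; the `tag` function is just illustrative:

```python
def tag(name, body):
    """Wrap one section of a prompt in XML-style tags."""
    return f"<{name}>\n{body}\n</{name}>"

prompt = "\n".join([
    tag("instruction", "Write a 2-line summary"),
    tag("content", "[Paste text here]"),
    tag("format", "Bullet points only"),
])
print(prompt)
```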
Tip #7: The Key Mindset Is to Collaborate, Not Command
The best prompt engineers treat AI like a capable intern, not a servant. Give it context, examples, and feedback. Iterate together.
When you approach AI as a creative partner, prompting becomes less about giving orders, and more about co-creating ideas, insights, and solutions.
Concluding words…
If you didn’t get anything from all I shared above, just get this:
Prompt engineering is about communicating clearly. Once you learn how to guide models with structure, tone, and intentionality, you’ll often see better results.
