I’ve been using AI a lot lately for academic stuff (papers, lit reviews, random sections I’m stuck on, etc.), and one thing I noticed is that the quality changes wildly depending on how specific you are.
Most of the bad outputs I got were honestly my fault — I just didn’t tell the model enough.
Here are a few things I now always include in my prompts, plus what usually goes wrong if I don’t (there’s a rough template sketch right after the list):
✅ My quick checklist
- Field / discipline: If I skip this, the output slides into “generic internet article” mode.
- Clear topic + subtopics: Without this, it starts guessing the structure… and it’s usually not great.
- Who the text is for (experts, classmates, general readers): If you don’t say this, the tone jumps all over the place.
- Expected length: Otherwise you get either 5 sentences or a small novella.
- Where it should “pull” ideas from (Google Scholar, PubMed, JSTOR, etc.): Skipping this gets you random citations or confident references to sources that don’t exist.
- Citation style: If not specified, it mixes citation styles however it feels like.
- Research approach (quantitative, qualitative, review, etc.): Leave this out and the methods part becomes super vague.
- Format (essay, abstract, outline, slides, etc.): AI loves improvising structure unless you lock it down.
- Tone (analytical, academic, neutral…): Missing this sometimes leads to awkwardly friendly academic writing.
- Source date range: If I don’t set this, it pulls 1990s papers next to 2023 ones.
- Extras (examples, tables, more detail, etc.): Without these, the output feels kind of hollow.
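To make the checklist concrete, here’s a rough Python sketch of how it can be turned into a reusable prompt template. The field names, the template wording, and the example values below are all made up for illustration; this isn’t tied to any particular tool or API.

```python
# Rough sketch: turn the checklist into a reusable prompt builder.
# Field names and template wording are illustrative choices, not from any tool.

CHECKLIST_TEMPLATE = """\
You are assisting with academic writing.

Field / discipline: {field}
Topic and subtopics: {topic}
Audience: {audience}
Expected length: {length}
Preferred sources: {sources}
Citation style: {citation_style}
Research approach: {approach}
Output format: {output_format}
Tone: {tone}
Source date range: {date_range}
Extras: {extras}

Task: {task}
"""

def build_prompt(**fields) -> str:
    """Fill the template, complaining loudly if a checklist item is missing."""
    required = {
        "field", "topic", "audience", "length", "sources", "citation_style",
        "approach", "output_format", "tone", "date_range", "extras", "task",
    }
    missing = required - fields.keys()
    if missing:
        raise ValueError(f"Checklist items missing from prompt: {sorted(missing)}")
    return CHECKLIST_TEMPLATE.format(**fields)

if __name__ == "__main__":
    # Example values are invented purely to show the shape of a filled-in prompt.
    print(build_prompt(
        field="Public health / epidemiology",
        topic="Urban air pollution; subtopics: exposure measurement, cardiovascular outcomes",
        audience="Graduate-level classmates, not specialists",
        length="Roughly 1,500 words",
        sources="Peer-reviewed articles indexed in PubMed",
        citation_style="APA 7",
        approach="Narrative literature review",
        output_format="Structured essay with headings",
        tone="Academic, neutral",
        date_range="2015 to present",
        extras="Include one summary table of key studies",
        task="Draft a literature review section I can revise.",
    ))
```

Raising an error on missing items (instead of silently skipping them) is deliberate: it forces you to actually decide every checklist point before sending the prompt.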
If you prefer just clicking filters instead of typing all this out every time, the tool I've been using is Academic Prompts — pretty convenient for quickly setting everything up.