Smarter Prompts with “Filter Heads” — How LLMs Actually Process Lists

Ever noticed how LLMs handle lists weirdly depending on how you ask the question?
Turns out, they have something like "filter heads": attention heads that behave like a filter() function, keeping the items that match a condition and dropping the rest.

When your prompt is structured properly, the model activates these heads and becomes way more accurate at classification and reasoning.
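To make the analogy concrete, here's roughly the operation those heads appear to approximate. This is plain Python, not anything model-specific:

```python
# The "filter heads" seem to approximate this computation:
# keep the items that satisfy a predicate, drop everything else.
items = ["apple", "cat", "banana", "car"]
fruits = {"apple", "banana", "cherry", "mango"}

kept = list(filter(lambda item: item in fruits, items))
print(kept)  # ['apple', 'banana']
```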

Bad Prompt — Mixed Context

Which of these are fruits: apple, cat, banana, car?

The model has to parse the list and apply the question in a single pass.
→ Leads to inconsistent filtering and irrelevant items leaking into the output.

Good Prompt — Structured Like Code

Items:
1. apple
2. cat
3. banana
4. car

Task: Keep only the fruits.

This layout triggers the model’s filter mechanism — it reads the list first, applies the rule second.
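If you build prompts programmatically, the same layout is easy to template. A minimal sketch; the helper name build_filter_prompt is mine, not from any library:

```python
def build_filter_prompt(items, rule):
    """Data first, rule second: the layout shown above."""
    numbered = "\n".join(f"{i}. {item}" for i, item in enumerate(items, start=1))
    return f"Items:\n{numbered}\n\nTask: {rule}"

print(build_filter_prompt(["apple", "cat", "banana", "car"], "Keep only the fruits."))
```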

The difference is subtle but real: cleaner attention flow = fewer hallucinations.

Takeaways

  • Treat prompts like mini programs: List → Filter → Output
  • Always put the question after the data
  • Use uniform markers (1., -, etc.) so every item is tokenized and attended to the same way
  • Works great for RAG, classification, and evaluation pipelines (see the sketch after this list)
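
Here's a rough sketch of how that looks in a pipeline. call_llm is a hypothetical stand-in for whatever client you actually use, and the echo-back check is just one simple way to parse the answer:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your real model client (API call, local model, etc.)."""
    raise NotImplementedError  # swap in your own call here

def llm_filter(items, rule):
    # Same structure as above: numbered data first, rule second.
    numbered = "\n".join(f"{i}. {item}" for i, item in enumerate(items, start=1))
    answer = call_llm(f"Items:\n{numbered}\n\nTask: {rule}")
    # Keep only the items the model explicitly echoed back; stray text is ignored.
    return [item for item in items if item.lower() in answer.lower()]

# Example: post-filter retrieved chunks in a RAG step
# relevant = llm_filter(retrieved_chunks, "Keep only passages that mention pricing.")
```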

LLMs already have internal logic for list filtering — we just have to format inputs to speak their native syntax.

Prompt engineering isn’t magic; it’s reverse-engineering the model’s habits.
