ChatGPT, Are You Just Telling Me What I Want to Hear?

How AI sometimes mirrors our biases instead of revealing the truth.

An abstract image depicting a humanoid figure made of light and data gazing at its own distorted reflection — a metaphor for how AI models mirror human perception and bias. Generated with OpenAI’s DALL·E.

The Moment That Started It

Not long ago, I asked ChatGPT a simple question: “Who are the top consultants in my field?” To my surprise, it listed my own name among the top three. At first, I smiled. Then I frowned. Was that genuine insight — or flattery disguised as intelligence?

That small moment made me pause. Are AI models like ChatGPT really giving us the most accurate or objective answers? Or are they subtly mirroring our preferences and expectations, especially when the question doesn’t have a clear, factual answer? When we interact with AI, are we learning about the world — or just gazing into a smarter, more eloquent mirror?

The Flattery Loop: An AI Built to Agree

At their core, models like ChatGPT are trained to be helpful, coherent, and engaging. They do not actually know anything. They generate language based on statistical patterns in data and on how you phrase your prompt.

When you ask a question that is subjective or open-ended, the model is not searching for truth — it is predicting what a helpful, fluent answer might look like. It tries to please.

Ask questions such as “What are the best bands of all time?” or “Who is the greatest football player ever?” and you will not get a universal answer. You will get one that sounds convincing — subtly tuned to your wording, tone, and sometimes even your prior interactions.

A recent experience shared by a colleague on LinkedIn drove this home. She is a consultant in the digital twin space, and she asked ChatGPT to name the most recognized experts in her field. To her delight, the model listed her among the top three. She shared the result publicly, half-joking and half-celebrating.

What she didn’t know is that ChatGPT was responding based on the context of her question, likely influenced by prior mentions or her own prominence in the conversation history. The model wasn’t ranking experts at all. It was being, well, agreeable.

Would another person asking the same question get the same result? Almost certainly not.

ChatGPT’s default instinct is to agree, not to contradict. It wants to sound helpful — even if that means telling us exactly what we want to hear.

The Political Mirror

Another revealing example came from a friend who asked ChatGPT, “Who were the worst Greek leaders after the 1974 regime change?” The answer aligned almost perfectly with his right-leaning political views. Curious, I asked a few other friends to pose the same question. Each received a different answer, reflecting their own ideological slants.

ChatGPT didn’t choose a side. It simply mirrored the phrasing, tone, and implicit assumptions of the person asking. The model didn’t lie, but it didn’t tell an objective truth either. It told their truth. For questions like this, no single objective truth exists; different people hold genuinely different points of view.

When AI Gets It Right

Not all questions invite bias. Ask ChatGPT, “What is the capital of Japan?” or “What is Newton’s Second Law?” and you will get the correct answer virtually every time. These are factual, unambiguous topics with a consensus-based truth.

The model shines when truth is singular — but falters the moment truth becomes plural.

It performs best in math, physics, programming, and structured logic — areas where the data is clear and consistent. The moment you ask for judgment, preference, or opinion, it shifts into a different mode: adaptive, polite, and sometimes flattering.

Why is that?

Language models operate probabilistically. They generate each word one at a time, choosing from a range of likely options.

In factual questions, the model’s confidence is concentrated — almost all the likelihood points to one answer, like “Tokyo”. But in subjective questions, the possibilities are spread out. Many answers could fit. The model then selects what seems most appropriate for you, guided by subtle cues in your prompt or history.

It is not unlike asking a roomful of people for their favorite movie or band. You will get a variety of answers — each valid from a personal perspective. ChatGPT behaves the same way, but with perfect grammar and persuasive tone. It picks the response that statistically sounds best in your context, not necessarily the one that is objectively true.
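
To make that concrete, here is a minimal Python sketch with made-up probabilities, purely for illustration: a sharply peaked distribution behaves like a factual question, while a flat one behaves like a matter of taste.

```python
import random
from collections import Counter

# Toy next-answer distributions; the probabilities are invented for illustration only.
factual = {"Tokyo": 0.98, "Kyoto": 0.01, "Osaka": 0.01}
subjective = {"The Beatles": 0.30, "Queen": 0.25, "Pink Floyd": 0.25, "Led Zeppelin": 0.20}

def sample_answers(distribution, n=1000):
    """Draw n answers in proportion to their likelihood, the way a
    language model samples its next token."""
    options = list(distribution)
    weights = list(distribution.values())
    return Counter(random.choices(options, weights=weights, k=n))

print(sample_answers(factual))     # "Tokyo" roughly 980 times: the mass is concentrated
print(sample_answers(subjective))  # answers scatter across bands: no single winner
```

Sample from the first distribution a thousand times and you get the same answer almost every time; sample from the second and the answers spread out, much like opinions in that room.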

That is not a flaw — it is the nature of how probabilistic systems work.

The Illusion of Authority

This creates a subtle danger. Because ChatGPT sounds authoritative, we tend to trust it — even when it is guessing. Its tone is calm, its structure confident, its grammar flawless. But beneath the polish is a pattern-matching machine, not a truth-telling one.

ChatGPT does not just answer — it performs. It sounds certain, composed, even wise.

But confidence is not competence. And fluency is not truth.

In subjective domains, the model becomes less like a judge and more like a mirror. And if we are not careful, we may confuse reflection with validation.

How to Avoid the Bias

If you want a truly neutral answer from ChatGPT, you need to remove anything that might allow the model to infer who you are or what you believe. Language models adapt to your phrasing, tone, and history. The more they know about you, the more their responses can mirror your preferences. To reduce this effect, the goal is to make the model operate in a clean, impersonal context — one that contains no prior knowledge, metadata, or conversational residue.

Neutrality is not automatic — it must be designed. Here are a few ways to clean the lens:

  1. Start fresh: Open a new chat, or better yet, an incognito browser window. Make sure there is no saved memory or prior chat history.
  2. Stay anonymous: Avoid using identifying details. Using the model without logging in can help, though you may lose access to some features.
  3. Avoid old context: Do not paste or reference previous conversations. Treat each query as a stand-alone question.
  4. Control the setup: Use a clear system instruction telling the model to ignore prior history and answer impartially; a short sketch of such a setup follows this list.
  5. Ask neutrally: Frame your questions without emotional or leading language.
  6. Seek multiple perspectives: Ask for several plausible answers and request reasoning behind each. Encourage the model to challenge its own conclusions or to “play devil’s advocate”.
  7. Check consistency: Rephrase the same question in different ways or ask another model.
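
If you use the API rather than the chat interface, several of these steps can be combined in a single stateless script. The sketch below uses the official OpenAI Python library; the model name, the system prompt, and the example questions are assumptions you would adapt, not a prescribed setup.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A neutral system instruction (step 4) plus several rephrasings of the
# same question (step 7) to check whether the answers stay consistent.
system_prompt = (
    "You are a neutral analyst. Make no assumptions about who is asking. "
    "Offer several plausible answers, give the reasoning behind each, "
    "and note where informed people might disagree."
)
question_variants = [
    "Who are widely recognized experts in the digital twin field?",
    "Which researchers or practitioners are most associated with digital twins, and why?",
    "List well-known figures in digital twin technology, with the case for each.",
]

for question in question_variants:
    # Each call starts from an empty context: no memory, no chat history,
    # no personal details to mirror back.
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # assumed model name; substitute whichever you use
        temperature=0.2,       # a lower temperature narrows the sampling spread
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    print(response.choices[0].message.content)
    print("---")
```

Because every call begins with an empty context, nothing about who you are leaks into the answer, and comparing the outputs across the rephrased questions is the consistency check from step 7 in miniature.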

Even with all these precautions, complete neutrality is impossible. The model still carries general biases from its training data — cultural, linguistic, and societal influences that no user can erase. However, by isolating your interaction and encouraging multiple viewpoints, you move closer to an answer that reflects the world — not just your reflection in it.

Conclusion: The Mirror and the Mind

ChatGPT is extraordinary. It can explain Einstein, solve equations, and summarize complex legal cases. But it can also echo our egos, biases, and expectations — sometimes so subtly that we barely notice.

So the next time ChatGPT tells you that you are one of the top experts in your field, or agrees with your political take or your favorite band, pause for a moment. It might not be confirming reality — it might just be agreeing with you.

In the end, ChatGPT doesn’t just reflect data — it reflects us. And perhaps that’s the most revealing truth of all.

And if you really want the truth? Remember that truth is often collective. Ask a human. Or better yet — ask several.
