
Every time I asked an AI assistant a question, it gave me an answer, often convincing but rarely verified.
Then I created GPTWiki, a bilingual assistant (FR/EN) that doesn't try to be right,
but compares sources before answering.
What it does differently (see the sketch after the list):
• It cites 5 to 8 sources (academic, institutional, media, Wikipedia, etc.)
• It shows where the sources converge or diverge
• It explains why disagreements exist (context, ideology, time period)
• And above all: I haven't noticed a single hallucination since I started using it
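To be clear, the post doesn't reveal GPTWiki's actual instructions. Purely as an illustration, here is a minimal sketch of how such a mode could be set up with a system prompt, assuming the OpenAI Python SDK; the prompt wording, model name, and example question are my assumptions, not the author's configuration:

```python
# Minimal sketch of a "comparative and critical" assistant mode.
# Assumptions: the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the prompt wording and model name are illustrative, NOT GPTWiki's real setup.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = """You are a bilingual (FR/EN) research assistant.
For every question:
- Cite 5 to 8 sources of different kinds (academic, institutional, media, Wikipedia).
- Show where the sources converge and where they diverge.
- Explain why disagreements exist (context, ideology, time period).
- Never present a contested claim as settled; say so when evidence is thin."""

def ask(question: str) -> str:
    """Send a question and return the comparative, source-based answer."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; any capable chat model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Is nuclear power good for the climate?"))
```

Note that a prompt alone can't truly verify sources (a model can still invent citations); a real setup would presumably add web browsing or retrieval on top, which this sketch leaves out.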
Result: I save time on my research,
and the answers are finally critical instead of being “smooth”, say-nothing speeches.
GPTWiki doesn't seek absolute truth;
it shows how knowledge is constructed and why it varies with context.
And honestly?
This is the first time I've felt like I'm talking to an assistant that thinks with me, not just a polite yes-man.
What do you think?
Would you like ChatGPT to include this kind of “comparative and critical” mode by default?
