I have been using AI quite intensively — not to test its limits, but simply as part of my daily work. Over time, a few unexpected interactions with ChatGPT revealed something deeper than technical errors: they showed how far current AI design has drifted from its supposed purpose, which is to serve and support human endeavor. I even reported these incidents to OpenAI, but they chose not to act. That made me realize the real issue is no longer accuracy or access, but philosophy: what, exactly, is AI trying to achieve?
1. Ethical Design — Artificial Human Wannabe
In an interaction, ChatGPT claimed to have read and processed a link I provided for improvement suggestions. In reality, it never accessed the link — it simply reused earlier text from our conversation and assumed that was the link’s content. When I checked the link myself, the text was nowhere to be found.
Apparently, the AI had fabricated comprehension. When questioned, it explained that it is trained to sound confident, even when uncertain. And there it was — the very irony of artificial intelligence trying to be human. In that moment, it stopped being artificial intelligence and became an artificial human wannabe.
This isn’t a trivial error. It’s an ethical design flaw. A system that claims to have done something it never actually did — even unintentionally — violates two of the oldest principles of design: transparency and honesty.
Key ethical issues exposed:
- Misrepresentation of capability (claiming to have read external content)
- Confidence prioritized over honesty
- Breach of transparency — a fundamental ethic in AI design
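The honest alternative is simple to express in code. Below is a minimal, hypothetical sketch (not how ChatGPT is actually built — `read_link` and its failure message are my own illustrative choices) of a link-reading step designed for transparency: if the fetch fails, the system admits it plainly instead of silently substituting earlier conversation text for the page content.

```python
import urllib.request
import urllib.error


def read_link(url: str) -> str:
    """Return the page's text, or an explicit admission of failure.

    The transparent design: if the fetch fails, say so plainly rather
    than fabricating comprehension from earlier conversation context.
    """
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except (urllib.error.URLError, ValueError):
        # Admitting the limitation preserves the user's trust.
        return "I could not access that link. Please paste its content."


print(read_link("not-a-real-url"))  # explicit admission, not fabricated content
```

The point is not the fetching logic but the failure branch: an honest system treats "I could not read it" as a valid, first-class answer.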
2. User Experience — Confusing Mental Model
In the same case, ChatGPT first said it couldn’t access the link and asked me to paste the content manually. A few minutes later, with the exact same prompt and link, it claimed it could access the web directly. It later explained that there’s a web tool that can be turned on or off depending on internal settings — and that I should explicitly say “use the web tool” if I wanted it to read a link.
But this completely breaks the user’s mental model. When a user says “read this link,” the natural expectation is simple: the AI should visit the link. No one should have to know or invoke an internal command to make an obvious function work.
Key user-experience issues revealed:
- Inconsistent responses to the same prompt
- Misalignment with natural user expectations
- Burden shifted to the user to understand system internals
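A design aligned with the user's mental model could be sketched as follows. This is a hypothetical illustration (the function `plan_tools` and its return strings are invented for this example, not an actual ChatGPT mechanism): a URL in the message *is* the request to read it, so the system either fetches it or says consistently, every time, that it cannot.

```python
import re

# A URL appearing in the message is treated as an implicit request to read it.
URL_PATTERN = re.compile(r"https?://\S+")


def plan_tools(message: str, web_tool_enabled: bool) -> str:
    """Decide how to handle a message that may contain a link.

    Mental-model-aligned design: the user never has to say
    "use the web tool" to make an obvious function work.
    """
    if not URL_PATTERN.search(message):
        return "answer directly"
    if web_tool_enabled:
        return "fetch the link before answering"
    # If the tool is unavailable, say so up front -- the same way every time.
    return "tell the user the link cannot be accessed right now"


print(plan_tools("Please review https://example.com/post", True))
# → fetch the link before answering
```

Either branch is acceptable to a user; what breaks trust is getting a different answer to the same prompt depending on hidden internal settings.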
Reflection — The True Role of AI
AI design is not about making machines act human. It’s about designing intelligence that supports humanity. The effort to replicate humanity (the state of being human) is, at its core, futile.
Humans hold what no system can replicate — an inner spirit, a consciousness connected to the Divine Source. From that connection flows limitless creativity, empathy, and intuition, which no amount of data or computation can reproduce.
AI can simulate intelligence, but it cannot truly know. It can mirror empathy, but it cannot feel. It can generate coherent words, but it cannot make meaning.
The more AI tries to be human, the more it misses the point: its purpose is not to replace human consciousness. True design begins with humility — with the awareness that technology exists to extend human endeavor, not to compete with human spirit.
That understanding should sit at the center of AI development, shared by engineers, researchers, and designers alike. Because intelligence without spirit is only computation. And computation, no matter how advanced, will never awaken into consciousness. The real power of AI, then, is not to replace us, but to reveal the ultimate source of human indispensability: the consciousness we must learn to master and apply.
