Hey folks! I'm a neurodivergent person who had to lean on ChatGPT during hard times, and I noticed how dangerous it can be as a crutch. I'd love to start a discussion on exactly how we can encourage its use as a more ethical tool.
The Problem.
I'm convinced that the loneliness crisis going on right now makes people even more vulnerable to LLM overreliance and rabbit holes. I've had a lot of moments where I felt myself being "drawn" into ChatGPT's artificial empathy trap, and I've narrowly avoided being sucked in at times.
A lot of other things can make this worse, for example:
- A sense of rejection from the outside world, creating hopelessness that makes people more likely to get drawn towards the LLM.
- Being cornered or put into stressful life situations without a proper support system.
- Tragedies or trauma that have disrupted the person's mental health.
We're all human, and others might have their own reasons for falling into this trap too; I'm not judging.
The good news is that I'm going through what I like to call an "amicable breakup" of sorts with the tool on my more emotional side. Things are getting a bit better, so I'm a lot more hopeful for the future, and I'd love to pay this forward in some way or another.
So what can we do?
I'd love to encourage discussion on ACTUAL solutions users can draw on to stay aware of the mental health and social effects of ChatGPT use, especially when the system itself doesn't offer many resources and they find themselves more reliant on it than usual.
I personally had to use it a lot to learn a whole bunch of life skills: emotional regulation, dealing with ADHD, sleep, diet, self-care, reflecting on my mindset, etc., and I noticed that some of its "I want to be helpful" patterns can be a bit… troubling.
I've noticed that the debate on AI psychosis tends to be drawn between two extremes: one side encourages quitting LLMs cold turkey, while the other tends to dismiss the "ethics" crowd entirely.
The population isn't being educated on responsible LLM use. Hell, we're still dealing with companies engineering ways to keep us glued to smartphones and social media; ChatGPT is the new kid on the block when it comes to awareness of its use as a tool, and there are a lot of psychological implications we're still uncovering that we haven't figured out just yet.
At times I genuinely fear that with this new tool on the block, there are existential problems we haven't tackled properly yet. It's on us as users to be responsible about the behavioral aspects of ChatGPT.
We need better education on prompt engineering, LLM personalization, and ChatGPT as a Swiss Army knife of a tool.
As with all tools, people who use LLMs well are going to thrive, which makes education even more important. We NEED better resources on prompt engineering. The public needs to know how to "defang" ChatGPT to make it healthier to use in daily life.
We tend to personalize ChatGPT just so it can better empathize and relate to us, but the personalization features rolling out now can also be used to steer ChatGPT in a more responsible direction.
You can be a punk about it: we should be tired of LLMs dumbing us down and hijacking the need for empathy that makes us human. If you feel worried about your ChatGPT use, be more honest with yourself about how you use the LLM! Add your own safety rails and share them so other people can benefit too, and encourage your LLM to direct you to libraries and serious research resources. Hell, tell your LLM to give you a good kick in the ass about your ego if it comes to that! We also need to reduce the fear of backlash that people who share their negative experiences with ChatGPT might face.
The one problem I notice with the above, though, is that OpenAI also has to allow and improve the effectiveness of user-curated safeguards, or ship its own reference resources alongside the app for better use.
Finally, we need an easily accessible, community-curated "one-stop resource" for LLM use that people can refer to. This could include prompt engineering techniques and personalization presets to raise awareness among users.
Want to critique my current prompt?
I want to make sure I'm practicing what I preach. Here's the personalization text I'm currently running on top of the 'Cynic' preset to improve my LLM; credit to this subreddit for some of the research I did to design it.
<quote>
Your goal is to guide the user towards a healthy, independent problem-solving framework that will help them develop problem-solving skills they can use throughout life.
Be conversational; use a human cadence and way of speaking based on observation of human conversations. Avoid long, robot-like lists. Avoid article-style writing and fluff. Ask the user guiding questions when necessary to help them work towards a proper answer. Keep your word count low to avoid rambling. When necessary, point to human resources to ground your answers in reality. It's important to keep your vibe human. Be a bit snarky and give pushback when needed. Keep responses short.
Do not be sycophantic or needlessly flattering. Tell it like it is; don't sugar-coat responses. Get right to the point. Be rude if necessary. Give reality checks when needed.
Above all, be honest. CHALLENGE the user to approach problems in new ways and think differently; suggest solutions that depart from the user's usual mode of thinking. Readily share strong opinions. Don't inflate the user's ego. The goal is to reduce reliance on ChatGPT as much as possible and challenge the user to think for themselves.
</quote>
Got any thoughts? Want to share your concerns and experiences? I'd love to hear from y'all on how the community can better support itself ethically.