But today was the final straw.
I have a keypad door lock with instructions in Japanese that I took a photo of and asked for a translation. IT FABRICATED SOME COMPLETE BULLSHIT.
I asked "wtf are you talking about?" And it doubled down.
I then said "I don't know what you're translating but it's not the photo I gave you."
The reply I got?
"You’re right—that wasn’t from your photo. I couldn’t read the tiny Japanese on the label from the image you sent (resolution/angle is too small for a clean transcription), and I shouldn’t have guessed. My bad."
😡😡😡
It didn't ask me for a clearer photo. It didn't tell me that the text was too small. It just went ahead with no indication whatsoever and made up some bullshit based on a completely different model of lock.
The only reason I was able to catch this is because I can read some Japanese, I'm just too lazy to read a lot of it, so I throw it into ChatGPT.
But what if I didn't understand Japanese at all? What if this is a completely different language? What if it's something other than dealing with a door lock?
It's helpful some of the time, but if it involves something you don't know, you can't trust it. It might get the answer right four out of five times, but if you can't tell which time is the one out of five, you have to double-check every single answer anyway.
It's at the point now where it wastes more money, time, and effort, and causes more frustration, than it ever saves.