Why is it that ChatGPT seems to think that anything military-related instantly needs to be rerouted to its piece of shit safety models?

It doesn't matter if you're talking about something from history or something in the modern day. The AI will refuse to name names, or just blatantly ignore what you're saying and try to be as evasive as possible. It doesn't matter if you're asking which weapon a specific army used most commonly, even if it was 3,000 years ago; the AI will automatically reroute you to its piece of shit safety model.

It doesn't matter how you talk about it either. It will instantly reroute you to its safety model no matter the circumstance. Even if you make it clear that it's a historical question, even if the AI acknowledges that, it doesn't give a fuck. It will still reroute you, give you the most sanitized answer possible, basically refuse to describe anything, and just mouth off its useless guidelines.

This wouldn't be a problem if their safety model weren't basically GPT-2 in terms of capability, something that's only there so they can cut costs as much as possible.
