What were you able to get your AI to tell you via prompt injection that it would never have told you normally?

I’ve just recently discovered this whole thing about prompt injection, and I’ve seen that a lot of people have actually pulled it off. But my question is: what were they able to use it for? How far can you go in getting an AI to reveal details it would normally refuse to share?