Various universities that do AI safety testing, and some of the AI red-teamers like Pliny, come up with some pretty impressive ways to make LLMs do crazy things. Outside of that community, though, it's hard to find the 'engineering' part of prompt engineering. Maybe a small fraction of the posts here qualify.
Prompt engineering MUST still be possible with the latest round of models, so what are the new techniques? LLMs are arguably Turing complete, even if probabilistic… so there MUST be ways to prompt engineer in important and impactful ways.