Anyone else feel like half their time is spent just rephrasing prompts to get better results?

I’ve been using LLMs (ChatGPT, Claude mainly) pretty heavily across projects like copywriting, code generation, idea generation, and research & analysis.

I have been getting satisfactory results with my prompts, but I'm wondering whether prompt engineering could meaningfully improve them.

Is prompt engineering still worth it in 2025? Or are the models good enough at handling context now?

Curious how people deal with this.
Do y'all still bother optimizing prompts, or is it not important anymore?
Do you have a go-to prompt template?