Permanently changing the reading level improved responses

I’m not a prompt engineer by any stretch of the imagination, but I do use AI professionally for reports, troubleshooting, general curiosities, and day-to-day tasks.

At one point, I got curious about the reading level of the outputs. With my existing custom output instructions already in place, 5.0 was producing text at roughly an 8th–10th grade reading level.

By setting the output text to a PhD reading level, responses to general queries appear to be deeper and more nuanced.
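If you’d rather apply this through the API than through the custom-instructions UI, a minimal sketch might look like the following (the instruction wording, model string, and example question are all placeholders, not my exact settings):

```python
# Sketch: steer reading level with a system message via the OpenAI Python SDK.
# The instruction text and model name below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

READING_LEVEL_INSTRUCTION = (
    "Write all responses at roughly a graduate reading level: greater word "
    "diversity, more complex sentence structure, and more nuance, while "
    "staying concise enough to get the point across."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute whichever model you actually use
    messages=[
        {"role": "system", "content": READING_LEVEL_INSTRUCTION},
        {"role": "user", "content": "Explain how DNS resolution works."},
    ],
)
print(response.choices[0].message.content)
```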

I experimented by increasing word diversity and complexity as a percentage, e.g., “increase reading level, word diversity, and complexity by 30%.” I would then ask it to evaluate its reading level and trial it with test questions. Eventually, I settled somewhere just under a PhD level: efficient enough to get the point across, but nuanced enough to look for deeper answers when needed.
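Instead of relying on the model to self-assess its reading level, you can also score its outputs locally. Here is a minimal sketch using the textstat package’s Flesch–Kincaid grade score; the library choice and sample text are just for illustration:

```python
# Sketch: score a response's reading grade level locally with textstat
# (pip install textstat). A Flesch-Kincaid grade around 8-10 is roughly
# middle/high school; 13-16 is roughly undergraduate/graduate-level prose.
import textstat

def reading_grade(text: str) -> float:
    """Return the Flesch-Kincaid grade level of the given text."""
    return textstat.flesch_kincaid_grade(text)

sample = (
    "DNS resolution proceeds recursively: a stub resolver delegates the query "
    "to a recursive resolver, which consults the root, TLD, and authoritative "
    "name servers in turn, caching each referral it receives."
)
print(f"Flesch-Kincaid grade: {reading_grade(sample):.1f}")
```

Comparing scores before and after the custom instruction is a quick way to confirm the change actually stuck.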
