Practical guide: How do LLMs work?


When I saw Ahrefs’ report showing that AI Overviews take away 34.5% of clicks, I wasn’t surprised. AI Overviews save people from scrolling.

I’ve been optimizing content for algorithms for years. But when I tested our articles inside ChatGPT, they weren’t there. Never quoted or referenced.

It was time to stop writing for Google and focus on large language models. They now decide what information gets surfaced.

This transformed how I think about SEO, and the results followed: our traffic from ChatGPT alone doubled. Surprisingly, those visits convert better than our organic ones.

How LLMs work: Shifting from keywords to conversations

Kevin Indig put it beautifully:

“With AIOs and AI Mode, Google has not only eaten into its own search product, but it has also taken a big bite out of publishers, too.”

Keywords are no longer the way to get discovered. LLMs weigh different signals:

  • Context
  • Credibility
  • Your content’s overall usefulness

It doesn’t matter how many times you repeat “best productivity app” in your article. LLMs care about how confidently you explain why something works.

The AI discovery model is different from the linear “Google, click, article” path. Now, people find answers through multiple touchpoints. Someone may read a Reddit post, ask ChatGPT for a summary, and then land on your page if the AI cites you as a trusted source.

That’s the new SEO loop:

  • Conversational
  • Contextual
  • Continuous

When I started creating content that fits into that loop, I noticed something interesting: AI tools didn’t just “summarize” my work. They started recommending it.

My process for LLM-friendly content optimization

1. Research platforms, not just keywords

Instead of asking “what’s trending?”, I ask, “how do LLMs respond to this prompt?”

I test prompts in ChatGPT and Perplexity to see:

  • Which pages do they mention?
  • What tone do they use?
  • How deep does the content go?

Most of the time, they cite pieces that aren’t optimized for SEO at all. They are clear, trustworthy, and most of all — grounded in expertise.
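
I do this by hand inside the apps, but when I want to spot-check a larger batch of prompts, a small script helps. Below is a minimal sketch, assuming the official openai Python package and an OPENAI_API_KEY in your environment; the prompts, model name, and domain are placeholders, and a plain API call (without browsing) is only a rough proxy for what ChatGPT shows when it answers with web results.

```python
# Minimal sketch: run a batch of prompts through an LLM API and check
# whether your site comes up. Assumes the official `openai` package and an
# OPENAI_API_KEY environment variable; prompts, model, and domain are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = [
    "What is the best productivity app for macOS?",
    "Is it worth paying for an app subscription bundle?",
]
YOUR_DOMAIN = "example.com"  # replace with the site you are tracking

for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    mentioned = YOUR_DOMAIN in answer.lower()
    print(f"{prompt!r}: domain mentioned = {mentioned}")
    # Read the full answer too: tone and depth matter as much as the mention.
```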

To cover this step, I use Eney. It’s my AI partner that analyzes drafts and flags whether my explanations align with user intent. It doesn’t write for me. It just helps me shape the content so it answers users’ needs.

2. Map questions, not queries

Real people don’t talk like content optimization tools suggest. They ask things like “Is it worth paying for Setapp if I only use 2 apps?”

This is how I start:

  • Collect questions from Reddit threads, support tickets, and feedback forms.
  • Map them out visually. I use MindNode Classic for this. It’s perfect for spotting patterns and gaps.

This part of my work is fun! It helps me find topics that no keyword planner would ever suggest, but that users genuinely care about.
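
If you collect a lot of questions, a quick frequency count can hint at the main branches before you map anything visually. The snippet below is a minimal sketch of that idea; it assumes the questions sit one per line in a hypothetical questions.txt, and the stop-word list is only illustrative.

```python
# Minimal sketch: spot recurring themes in a pile of collected questions.
# Assumes one question per line in a hypothetical questions.txt file;
# the stop-word list is illustrative, tune it for your own material.
import re
from collections import Counter

STOP_WORDS = {"the", "and", "for", "you", "your", "with", "that", "this",
              "what", "how", "can", "does", "are", "have", "worth"}

with open("questions.txt", encoding="utf-8") as f:
    questions = [line.strip() for line in f if line.strip()]

terms = Counter()
for question in questions:
    for token in re.findall(r"[a-z]+", question.lower()):
        if len(token) > 2 and token not in STOP_WORDS:
            terms[token] += 1

# The most frequent terms are candidates for top-level branches in the mind map.
for term, count in terms.most_common(15):
    print(f"{term:<15} {count}")
```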

Tip: This ChatGPT assistant suggests productivity tools that make your work more effective. I found MindNode Classic thanks to it. You can talk to it to find the exact app that fits your needs.

3. Write like a human and prove like a researcher

This is the step that made a big difference. I write like I’m explaining something to a friend. But I also back every claim with a source or my own test results.

LLMs favor this mix: conversational, but credible.

Ulysses helps me write well-structured content in a focused environment. It’s another app from the Setapp kit, which you can try with a 7-day free trial.

4. Measure and iterate

To measure results, I track referrals from AI platforms. The surprising thing is that this traffic converts better than traditional organic clicks.

Do this and thank me later: analyze your 404 pages to see what type of content ChatGPT users expect to find. Yes, ChatGPT sometimes generates made-up links, but you can use that insight for your content plan. I found a bunch of topics missing from our blog this way.
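
If you have access to raw server logs, one script can pull both numbers at once: which pages AI-referred visitors reach, and which URLs they request that don’t exist. The sketch below is one way to do it, assuming a combined-format access log at a hypothetical access.log path and a hand-picked list of AI referrer domains; your analytics stack may expose the same data in a friendlier way.

```python
# Minimal sketch: find which URLs AI-referred visitors hit, and which of them 404.
# Assumes a combined-format access log at a hypothetical access.log path;
# the referrer domain list is a starting point, not an exhaustive one.
import re
from collections import Counter

AI_REFERRERS = ("chatgpt.com", "chat.openai.com", "perplexity.ai", "gemini.google.com")

# Combined log format: ... "GET /path HTTP/1.1" status size "referrer" "user-agent"
LINE_RE = re.compile(r'"(?:GET|POST) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+ "(?P<ref>[^"]*)"')

ai_hits = Counter()
missing = Counter()

with open("access.log", encoding="utf-8", errors="replace") as f:
    for line in f:
        match = LINE_RE.search(line)
        if not match:
            continue
        referrer = match.group("ref").lower()
        if not any(domain in referrer for domain in AI_REFERRERS):
            continue
        ai_hits[match.group("path")] += 1
        if match.group("status") == "404":
            missing[match.group("path")] += 1

print("Top pages reached from AI referrers:")
for path, count in ai_hits.most_common(10):
    print(f"  {count:4}  {path}")

print("\n404s from AI referrers (content people expected but did not find):")
for path, count in missing.most_common(10):
    print(f"  {count:4}  {path}")
```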

People coming from AI citations already trust you. They are not window shoppers. They are decision-makers.

Every month, I feed my analytics into Eney to find what worked best semantically. That insight helps me refine future topics and plan the next article.

What I learned

The biggest shift wasn’t technical. It was mental.

I never wrote purely for algorithms, but I did sprinkle in keywords to get into Google’s results. Now, I write strictly for understanding. Once I made that switch, the metrics followed naturally: my publications got higher engagement, better conversions, and double the traffic from ChatGPT.

This is the most important lesson I want to share: LLM optimization is not about tricking AI. It’s about being too helpful to ignore.

Google will keep changing. Search will keep fragmenting. But useful content grounded in experience will always find its reader.
