How to use AI response patterns to build better content
Over the last year, many of us have been trying to figure out how to report on AI visibility and what it takes to be seen and cited by AI.
But Rand Fishkin’s latest study on AI response variability has emphasized that LLM outputs aren’t as stable and predictable as search rankings, making this KPI an inconsistent piece of the puzzle.
The study analyzed thousands of prompts across multiple LLMs to highlight just how varied their outputs are, finding there’s less than a 1 in 100 chance that ChatGPT or Google AI will return the same list of brands across two responses.
This has left some of the SEO community questioning the value of rank tracking at scale. But rank tracking is far from useless. It’s just misapplied.
AI response tracking is an unstable performance KPI in its current state, but it becomes extremely powerful when used as an analysis tool to inform content strategy.
Let’s take a look at why you should still be investing in prompt tracking and how it can be used to inform your content strategy.
Why AI visibility tracking is unstable (for now)
LLMs aren’t deterministic ranking engines. They’re probabilistic language models that gather and synthesize information from their training data or live searches, using context windows and intent interpretation to serve different answers at any given moment.
We’ve seen that responses change based on the prompt, and the same question can be phrased in many different ways. That opens the door for your CMO to ask why you’re not showing up for a specific prompt when they just saw your brand mentioned or cited.
Tracking visibility remains an area of uncertainty until there’s greater clarity on user prompting. But it’s still valuable.
If prompt response tracking isn’t a stable KPI, then what is it? It’s pattern analysis, something SEOs are very familiar with.
Instead of only focusing on whether or not you are cited or listed, you should be trying to understand:
- How is the prompt response structured?
- What concepts repeatedly appear?
- What key phrases or terms are showing up?
- What level of nuance is typically included?
This requires a mental shift.
Dig deeper: 7 hard truths about measuring AI visibility and GEO performance
Traditional SEO vs. AI pattern analysis
In traditional SEO, we reverse engineer what’s already ranking. With AI search, we can apply the same thinking by reverse engineering the patterns we see in results.
| Traditional SEO | AI pattern analysis |
| --- | --- |
| Measures rankings | Analyzes concept synthesis |
| Content gap analysis | Topic associations |
| Fixed results (SERPs) | Dynamic responses |
| Deterministic signals | Probability-based responses |
Analyzing prompt response patterns can help us understand how models synthesize concepts, not just at the technical level, but at the content level.
To define a pattern, you’re not looking for exact response consistency. You’re understanding the structure, themes, and recurring topics.
Each LLM formats its outputs differently, but patterns still emerge in the structures, despite differences in retrieval methods and how each model functions.
I define a pattern by three criteria:

- It appears in 75% or more of outputs.
- It appears in at least two different AI models (like GPT vs. Gemini).
- It shows similarities across multiple iterations of the same prompt.
The 75% threshold felt consistent enough for my sample sizes to distinguish a strong pattern from randomness. There’s no statistical significance to this number; how you define it is truly up to you. You can adjust it based on your content and space, but for me, it has been the best way to spot consistency over noise.
So, say the theme of “pricing transparency” appears in 9 out of 12 responses and across two AI models. That’s not randomness. That’s semantic relevance, and that’s insight.
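The pattern criteria above can be sketched as a small script. This is a minimal illustration, not a tool from the study: the sample responses, theme tags, and model names are hypothetical, and in practice the theme tagging itself would come from your manual (or LLM-assisted) review.

```python
from collections import defaultdict

# Hypothetical sample: each tracked response is tagged with the themes it
# contains and the model that produced it.
responses = [
    {"model": "gpt", "themes": {"pricing transparency", "security"}},
    {"model": "gpt", "themes": {"pricing transparency"}},
    {"model": "gemini", "themes": {"pricing transparency", "transfers"}},
    {"model": "gemini", "themes": {"security"}},
]

def find_patterns(responses, threshold=0.75, min_models=2):
    """Flag themes that meet both the frequency and cross-model criteria."""
    counts = defaultdict(int)   # how many responses contain each theme
    models = defaultdict(set)   # which models surfaced each theme
    for r in responses:
        for theme in r["themes"]:
            counts[theme] += 1
            models[theme].add(r["model"])
    total = len(responses)
    return {
        theme: counts[theme] / total
        for theme in counts
        if counts[theme] / total >= threshold and len(models[theme]) >= min_models
    }

print(find_patterns(responses))  # only "pricing transparency" clears both bars
```

Raising `threshold` or `min_models` is how you’d implement a stricter (or, per the adjustment advice above, looser) definition of a pattern.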
The framework
To test this out for yourself, you need a framework that breaks down what you’re looking for.
You can break it out into three types of patterns:
- Structural patterns.
- Conceptual patterns.
- Entity patterns.
Structural patterns
This is where you focus on how the response is organized. You’re looking for:
- Header/section frequency.
- List formatting consistency.
- Order of steps.
- Pro/con framing.
- Comparison tables.
- Decision frameworks.
These signals can help show how models organize topics.
For example, if the outputs for your prompt show:
- Definition > Criteria > Tools > Implementation.
That’s a structural pattern. You can leverage this to understand what might be helpful to your user, but AI isn’t always right. This is just another tool to identify patterns and decide how it applies to your content.
Conceptual patterns
These will vary based on your topic focus, but think about the concepts you are targeting. These can be harder to plan for and sometimes take a bit of analysis to start seeing the patterns.
For me, I’m focused on “Best domain registrars” as an example, and I’m looking for:
- Pricing transparency (renewal and purchase).
- Customer service mentions.
- Add-on inclusions (WHOIS privacy, free emails, free anything).
- Security features.
- Bundling options.
- Transfers.
So if I start seeing that renewal prices are commonly discussed across models and variations of this prompt, that signals to me that I need to pay attention to how I frame and discuss it in my articles and product pages.
These conceptual patterns help you understand which concepts the models associate with decision-making.
Entity patterns
This is where you can view the tools, brands, and other mentions that appear in responses, regardless of their order.
This might look like:
- Brand mentions.
- Tool mentions.
- Feature to brand association.
- Category positioning.
- Cited sources.
In practice, you’d pay attention to how certain features appear with specific brands, or which sites are commonly cited. This helps you evaluate your positioning and identify opportunities with affiliate partners or third-party sites, including which sites you work with and how your brand is positioned on them.
Dig deeper: LLM consistency and recommendation share: The new SEO KPI
Building your system
You don’t have to invest in prompt-tracking tools to do this, though they make it easier. I handle it manually. It’s not perfect, but it works.
If you can’t involve multiple team members, adapt the structure to fit your resources. You may need to track over a longer period or lower your pattern threshold. Instead of 75% consistency, you might set it at 60%.
Step 1: Select and cluster your prompts
Identify three priority topics you want to track. For each of those topics, come up with 3-5 versions of prompts that would align with that topic.
For example, one of my priority topics is finding a domain registrar, so this cluster for me includes:
- How do I register a domain name?
- How can I get a domain name?
- Where can I buy a domain?
Step 2: Set up your tracking sheet
You’ll need a place to track the responses, like an old-fashioned spreadsheet with the following columns:
| Prompt | LLM | Web Search? Y/N | Date | Response | Sources (If Applicable) | Is My Brand Mentioned? |
In the LLM column, note the platform and model to help control for when new versions are released.
This is just to start gathering your data. When you know what patterns to look for, add those to the sheet. Consider using Claude or ChatGPT to help with the analysis, so you don’t have to do everything manually.
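If you want to move beyond a hand-edited spreadsheet, the same columns can be logged to a CSV file. This is a minimal sketch, assuming you paste responses in manually; the file name and example values are placeholders.

```python
import csv
import os
from datetime import date

# Column names mirror the tracking sheet described above.
COLUMNS = ["Prompt", "LLM", "Web Search? Y/N", "Date", "Response",
           "Sources (If Applicable)", "Is My Brand Mentioned?"]

def log_response(path, prompt, llm, web_search, response, sources, brand_mentioned):
    """Append one tracked response as a row, writing the header on first use."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(COLUMNS)
        writer.writerow([prompt, llm, web_search, date.today().isoformat(),
                         response, sources, brand_mentioned])

# Hypothetical example row.
log_response("prompt_tracking.csv", "Where can I buy a domain?",
             "ChatGPT (note model version)", "Y",
             "You can register a domain through ...", "example.com", "N")
```

Recording the model version in the LLM column, as suggested above, lets you filter rows later when a new model release shifts the patterns.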
Step 3: Create a tracking plan and start tracking
To do this effectively, you need to define:
- Which models you want to track.
- Whether search mode is on or off, or left to the model to decide.
- How many times you want to run each prompt on each model.
- What frequency you want to track.
It’s also helpful to involve other team members, if possible, and use private modes to minimize context influence.
Once a week, a handful of my team members run each prompt through ChatGPT, AI Overviews, AI Mode, and Perplexity. Each person tests every prompt across each model, giving me 3-5 responses per prompt, per model, per week.
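The tracking plan above can be captured as a simple config so everyone on the team runs the same routine. All names and counts here are illustrative, not prescribed values.

```python
# Hypothetical tracking plan; models, counts, and frequency are examples.
tracking_plan = {
    "models": ["ChatGPT", "AI Overviews", "AI Mode", "Perplexity"],
    "search_mode": "model_decides",   # or "on" / "off"
    "runs_per_prompt_per_model": 4,   # within the 3-5 range described above
    "frequency": "weekly",
}

# Total responses collected per prompt, per cycle:
per_prompt = (len(tracking_plan["models"])
              * tracking_plan["runs_per_prompt_per_model"])
print(per_prompt)  # 16
```

A quick calculation like this also tells you how fast you’ll reach the 20-30 responses per prompt suggested for analysis.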
Step 4: Analyze
Once you’ve gathered 20–30 responses per prompt, start analyzing. You can use the tool of your choice to streamline this process.
From there, identify recurring patterns and map them to relevant pages on your site. Where can you address these themes? Are you answering the right questions, and does your content reflect the patterns you’ve uncovered?
This is ongoing work. Track consistently and review patterns quarterly to identify shifts. Over time, this becomes your optimization framework.
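A quarterly review can be as simple as comparing the pattern sets from two periods to see what emerged, faded, or held steady. The themes below are hypothetical examples in the domain-registrar space.

```python
# Hypothetical pattern sets from two consecutive quarters.
q1_patterns = {"pricing transparency", "customer service", "transfers"}
q2_patterns = {"pricing transparency", "security features", "transfers"}

emerged = q2_patterns - q1_patterns   # new themes to address in content
faded = q1_patterns - q2_patterns     # themes losing weight in responses
stable = q1_patterns & q2_patterns    # the core of your optimization framework

print(sorted(emerged))  # ['security features']
print(sorted(faded))    # ['customer service']
```

Emerged themes become candidates for new or updated content; stable themes confirm what to keep reinforcing.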
Dig deeper: How to create answer-first content that AI models actually cite
Where AI pattern analysis can mislead you
AI is based on probability, and it won’t always be right. This isn’t the only way of optimizing for AI, but it can be part of your playbook.
You still run the risk of bias in the training data, inconsistency in whether search results or training data informed a response, and variation as new model versions launch across the different LLMs.
You shouldn’t blindly align with AI outputs. Use your best judgment and your understanding of your target audience to decide whether a pattern is context you want to build your optimization around.
How to connect this to performance
Now this is the tricky part. We’ve learned just how random AI responses can be, but there are still a few signals you can measure to see how this impacts your content.
- “Traditional” metrics: Are you seeing more clicks? Better positions in GSC or keyword tracking tools? What about conversions?
- AI traffic: If you’re able to pull your AI traffic data from Adobe, GA4, or any other analytics tools, you can track to see if there’s any movement on the pages you update.
- AI tracking tools: Yes, there’s a lot of variability in this as a KPI, but if you’re using AI visibility tools, they’ll give you an indication of whether your methods are working. You can also use the manual tracking outlined here to see if your brand starts emerging as a pattern.
Start studying AI outputs
There are still many unknowns with LLMs, and it feels like they’re changing every day.
But one thing remains consistent: these tools provide answers. Any level of understanding you can gain about those answers is something you can put to use.
The patterns in the responses can reveal how topics are understood and how brands are discussed, and give you an idea of how to adapt your content strategy.