Google Ads now surfaces Performance Max (PMax) campaign data in the “Where ads showed” report, giving advertisers clearer insight into placements, networks, and impressions — data that was previously unavailable.
What’s new. The update makes it possible to see exactly where PMax ads are appearing across Google’s network, including search partners, display, and other placements. Advertisers can now track impressions by placement type and network, helping them understand how campaigns are performing in detail.
Why we care. This update finally gives visibility into where PMax campaigns are running, including Google Search Partners, display, and other networks. With placement, network, and impression data now available, marketers can better understand campaign performance, optimize budgets, and make informed decisions instead of relying on guesswork. It turns previously opaque PMax reporting into actionable insights.
User reaction. Digital marketer Thomas Eccel shared on LinkedIn that the report was historically empty, but now finally shows real data.
“I finally see where and how PMax is being displayed,” he wrote.
He also noted the clarity on Google Search Partners, previously a “blurry grey zone.”
The bottom line. This update gives marketers actionable visibility into PMax campaigns, helping them understand placement performance, optimize spend, and identify which networks are driving results — all in one report.
Organic search clicks are shrinking across major verticals — and it’s not just because of Google’s AI Overviews.
Classic organic click share fell sharply across headphones, jeans, greeting cards, and online games queries in the U.S., new Similarweb data comparing January 2025 to January 2026 shows.
The biggest winner: text ads.
Why we care. You aren’t just competing with AI Overviews. You’re competing with Google’s aggressive expansion of paid search real estate. Across every vertical analyzed, text ads gained more click share than any other measurable surface. In product categories, paid listings now capture roughly one-third of all clicks. As a result, several brands that are losing organic visibility are increasing their paid investment.
By the numbers. Across four verticals, text ads showed the most consistent, measurable click-share gains.
Classic organic lost 11 to 23 percentage points of click share year over year.
Text ads gained 7 to 13 percentage points in every case.
Paid click share doubled in major product categories.
AI Overviews SERP presence rose ~10 to ~30 percentage points, depending on the vertical.
Classic organic is down everywhere. Year-over-year classic organic click share declined across all four verticals. Headphones saw the steepest drop. Even online games — historically organic-heavy — lost double digits. In two verticals (headphones, jeans), total clicks also fell.
Headphones: Down from 73% to 50%
Jeans: Down from 73% to 56%
Greeting cards: Down from 88% to 75%
Online games: Down from 95% to 84%
Text ads are the biggest winner. Text ads gained share in every vertical; no other surface showed this level of consistent growth:
Headphones: Up from 3% to 16%
Online games: Up from 3% to 13%
Jeans: Up from 7% to 16%
Greeting cards: Up from 9% to 16%
In product categories, PLAs (product listing ads) compounded the shift:
Headphones: Up from 16% to 36%
Jeans: Up from 18% to 34%
Greeting cards: Up from 10% to 19%
AI Overviews surged unevenly. The presence of Google AI Overviews expanded sharply, but varied by vertical:
Headphones: 2.28% → 32.76%
Online games: 0.38% → 29.80%
Greeting cards: 0.94% → 21.97%
Jeans: 2.28% → 12.06%
Zero-click searches are high — and mostly stable. Except for online games, zero-click rates didn’t change dramatically:
Headphones: 63% (flat)
Jeans: Down from 65% to 61%
Online games: Up from 43% to 50%
Greeting cards: Up from 51% to 53%
Brands losing organic traffic are buying it back. In headphones:
Amazon increased paid clicks 35% while losing organic volume.
Walmart nearly 6x’d paid clicks.
Bose boosted paid 49%.
In jeans:
Gap grew paid clicks 137% to become the top paid player.
True Religion entered the paid top tier without top-10 organic presence.
In online games:
CrazyGames quadrupled paid clicks while organic declined.
Arkadium entered paid after losing 68% of organic clicks.
The result? We’re seeing a self-reinforcing cycle, according to the study’s author, Aleyda Solis:
Organic share declines.
Competition intensifies.
More brands increase paid budgets.
Paid surfaces capture more clicks.
About the data. This analysis used Similarweb data to examine SERP composition and click distribution for the top 5,000 U.S. queries in headphones, jeans, and online games, and the top 956 queries in greeting cards and ecards. It compares January 2025 to January 2026, tracking how clicks shifted across classic organic results, organic SERP features, text ads, PLAs, zero-click searches, and AI Overviews.
Microsoft Advertising is rolling out multi-image ads for Shopping campaigns in Bing search results, giving ecommerce brands a richer way to showcase products and capture shopper attention before the click.
What’s new. Advertisers can now display multiple product images within a single Shopping ad, letting shoppers preview different angles, styles or variations directly in search.
The format is designed to make ads more visually engaging and informative, helping consumers compare options quickly without leaving the results page.
How it works:
Additional images are uploaded through the optional additional_image_link attribute in the product feed.
Advertisers can include up to 10 images, separated by commas.
The images appear alongside pricing and retailer information in Shopping results.
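For feed managers, here is a minimal, illustrative sketch of what a tab-delimited feed row with the optional attribute might look like. The SKU, URLs, and price are placeholders, not values from Microsoft's documentation.

```python
# Illustrative only: build a simple tab-delimited feed row where the optional
# additional_image_link field holds up to 10 extra image URLs, comma-separated.
import csv

product = {
    "id": "SKU-001",
    "title": "Example wireless headphones",
    "link": "https://www.example.com/products/sku-001",
    "image_link": "https://www.example.com/images/sku-001-front.jpg",
    # Optional attribute: extra angles/variations in one comma-separated field.
    "additional_image_link": ",".join([
        "https://www.example.com/images/sku-001-side.jpg",
        "https://www.example.com/images/sku-001-back.jpg",
        "https://www.example.com/images/sku-001-case.jpg",
    ]),
    "price": "79.99 USD",
}

with open("feed.txt", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=product.keys(), delimiter="\t")
    writer.writeheader()
    writer.writerow(product)
```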
Why we care. Multi-image ads could increase engagement and purchase intent by presenting a fuller picture of a product. More visuals can highlight features, colors and design details that a single image might miss.
Discovery. The feature was first spotted by digital marketer Arpan Banerjee, who shared the find on LinkedIn.
The bottom line. Multi-image Shopping ads give retailers more creative flexibility and shoppers more context at a glance — a shift that could improve ad performance and reshape how products compete in search results.
A new applied learning path from Microsoft Advertising is designed to help marketers get more value from Performance Max campaigns through hands-on, scenario-based training — not just theory.
What’s happening. The new Performance Max learning path bundles three progressive courses that focus on real-world setup, optimization and troubleshooting. The structure is meant to let advertisers learn at their own pace while building practical skills they can immediately apply to live campaigns.
Each course targets a different stage of expertise, from beginner fundamentals to advanced strategy and credentialing.
What’s included:
Course 1: Foundations
Introducing Microsoft Advertising Performance Max campaigns covers the essentials.
Ideal for beginners who want to understand how PMax campaigns work.
Focuses on core concepts and terminology.
Course 2: Hands-on setup
Setting up Microsoft Advertising Performance Max campaigns provides a guided walkthrough.
Designed for advertisers launching their first PMax campaign or refreshing their skills.
Walks step-by-step through campaign creation and answers common setup questions.
Course 3: Advanced implementation
Implementing & optimizing Microsoft Advertising Performance Max centers on scenario-based applied learning.
Targets advanced users developing strategic and optimization skills.
Includes practical tools like checklists, videos and reusable reference materials.
How it works. The third course introduces embedded support features that let learners access targeted educational resources mid-assessment via a “Help me understand” option. Users can review specific concepts in context and return directly to their questions.
The benefit. Learners can spend more time on weak areas while quickly progressing through familiar material.
Credential payoff. Completing the advanced course unlocks the chance to earn a Performance Max badge. The credential signals proficiency in implementing and optimizing PMax campaigns and applying best practices in real-world scenarios.
The badge is digitally shareable and verifiable through Credly, making it easy to display on professional platforms like LinkedIn.
Why we care. This update from Microsoft Advertising makes it faster and easier to build real, job-ready skills for running Performance Max campaigns — not just theoretical knowledge. The applied, scenario-based training helps marketers avoid common setup mistakes, optimize campaigns more confidently, and improve performance in live accounts.
Plus, the shareable credential adds professional credibility, signaling proven expertise to clients and employers.
The bottom line. The new learning path aims to close the gap between training and execution. By combining applied scenarios, embedded support and credentialing, it offers a structured route for advertisers to build confidence — and prove it — in Performance Max campaign management.
ChatGPT heavily favors the top of content when selecting citations, according to an analysis of 1.2 million AI answers and 18,012 verified citations by Kevin Indig, Growth Advisor.
Why we care. Traditional search rewarded depth and delayed payoff. AI favors immediate classification — clear entities and direct answers up front. If your substance isn’t surfaced early, it’s less likely to appear in AI answers.
By the numbers. Indig’s team found a consistent “ski ramp” citation pattern that held across randomized validation batches. He called the results statistically indisputable:
44.2% of citations come from the first 30% of content.
31.1% come from the middle (30–70%).
24.7% come from the final third, with a sharp drop near the footer.
At the paragraph level, AI reads more deeply:
53% of citations come from the middle of paragraphs.
24.5% come from first sentences.
22.5% come from last sentences.
The big takeaway. Front-load key insights at the article level. Within paragraphs, prioritize clarity and information density over forced first sentences.
Why this happens. Large language models are trained on journalism and academic writing that follow a “bottom line up front” structure. The model appears to weight early framing more heavily, then interpret the rest through that lens.
Modern models can process massive token windows, but they prioritize efficiency and establish context quickly.
What gets cited. Indig identified five traits of highly cited content:
Definitive language: Cited passages were nearly twice as likely to use clear definitions (“X is,” “X refers to”). Direct subject-verb-object statements outperform vague framing.
Conversational Q&A structure: Cited content was 2x more likely to include a question mark. 78.4% of citations tied to questions came from headings. AI often treats H2s as prompts and the following paragraph as the answer.
Entity richness: Typical English text contains 5% to 8% proper nouns. Heavily cited text averaged 20.6%. Specific brands, tools, and people anchor answers and reduce ambiguity.
Balanced sentiment: Cited text clustered around a subjectivity score of 0.47 — neither dry fact nor emotional opinion. The preferred tone resembles analyst commentary: fact plus interpretation.
Business-grade clarity: Winning content averaged a Flesch-Kincaid grade level of 16 versus 19.1 for lower-performing content. Shorter sentences and plain structure beat dense academic prose.
About the data. Indig analyzed 3 million ChatGPT responses and 30 million citations, isolating 18,012 verified citations to examine where and why AI pulls content. His team used sentence-transformer embeddings to match responses to specific source sentences, then measured their page position and linguistic traits such as definitions, entity density, and sentiment.
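To picture the mechanics, here is a minimal sketch of that kind of sentence-level matching using the sentence-transformers library. The model choice, example sentences, and position calculation are illustrative assumptions, not details from Indig's pipeline.

```python
# Rough sketch: embed the sentences of a source page, find the one closest to a
# sentence from an AI answer, and note how far down the page it sits.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

answer_sentence = "Noise-cancelling headphones use tiny microphones to reduce ambient sound."
page_sentences = [
    "Noise-cancelling headphones use small microphones to cancel ambient sound.",
    "The technology was first popularized on long-haul flights.",
    "Battery life varies widely between models.",
]

answer_emb = model.encode(answer_sentence, convert_to_tensor=True)
page_embs = model.encode(page_sentences, convert_to_tensor=True)

scores = util.cos_sim(answer_emb, page_embs)[0]
best_idx = int(scores.argmax())

# Relative position in the page: 0.0 = very top, 1.0 = very bottom.
position = best_idx / max(len(page_sentences) - 1, 1)
print(f"Best match: {page_sentences[best_idx]!r} "
      f"(similarity {float(scores[best_idx]):.2f}, page position {position:.2f})")
```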
Bottom line. Narrative “ultimate guide” writing may underperform in AI retrieval. Structured, briefing-style content performs better.
Indig argues this creates a “clarity tax.” Writers must surface definitions, entities, and conclusions early—not save them for the end.
Google Ads has launched a new Results tab inside its Recommendations section that shows advertisers the measured performance impact after they apply bid and budget suggestions.
How it works. After an advertiser applies a bid or budget recommendation, Google analyzes campaign performance one week later and compares it to an estimated baseline of what would have happened without the change. The system then highlights the incremental lift, such as additional conversions generated by raising a budget or adjusting targets.
Where to find it. Impact reporting appears in the Recommendations area of an account. A summary callout shows recent results on the main page, while a dedicated Results tab provides a deeper breakdown grouped by Budget and Target recommendations, with filtering options for each.
Why we care. Advertisers can now see whether Google’s automated recommendations actually drive incremental results — not just projected gains — helping teams evaluate the business value of platform guidance.
What to expect. Results are reported as a seven-day rolling average measured across a 28-day window after a recommendation is applied. Metrics focus on the campaign’s primary bidding objective — such as conversions, conversion value, or clicks.
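To make the reporting window concrete, here is a toy sketch of that kind of calculation: a seven-day rolling average over a 28-day post-change window, compared against a baseline estimate. The numbers and the simple "lift = actual minus baseline" definition are assumptions for illustration; Google hasn't published its exact formula.

```python
# Illustrative only: 7-day rolling averages over a 28-day window after a change,
# with incremental lift defined as actual minus the modeled baseline.
import pandas as pd

days = pd.date_range("2026-03-01", periods=28, freq="D")
actual = pd.Series([30 + (i % 7) for i in range(28)], index=days)  # conversions after applying the change
baseline = pd.Series([26.0] * 28, index=days)                      # modeled "what would have happened" estimate

rolling_actual = actual.rolling(window=7).mean()
rolling_baseline = baseline.rolling(window=7).mean()

incremental_lift = (rolling_actual - rolling_baseline).dropna()
print(incremental_lift.round(1).tail())
```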
Between the lines. The feature adds a layer of accountability to automated recommendations at a time when advertisers are relying more heavily on platform-driven optimization.
Spotted by. Hana Kobzová, founder of PPCNewsFeed, shared a screenshot of the help doc on LinkedIn.
Help doc. Even though there isn’t a live Google help doc, a Google spokesperson has confirmed that there’s an early pilot running.
I’ve observed numerous SEO professionals on LinkedIn and at conferences talking about “ranking No. 1 on ChatGPT” as if it’s the equivalent of a No. 1 ranking on Google:
On Google, being the first result is often a golden ticket.
This is almost certainly not the case with AI responses – even if they weren’t constantly changing.
Our team’s research shows AI users consider an average of 3.7 businesses before deciding who to contact.
Being the first result in that list on ChatGPT isn’t the golden ticket it is in Google search.
This being the case, the focus of AI search really should be on “inclusion in the consideration set” – not necessarily being “the first mentioned in that set” – as well as crafting what AI is saying about us.
User behavior on AI platforms differs from Google search
Over the past several months, my team has spent more than 100 hours observing people use ChatGPT and Google’s AI Mode to find services.
One thing came into focus within the first dozen or so sessions: User behavior on AI platforms differs from Google search in ways that extend far beyond using “natural language” and having conversations versus performing keyword searches.
Which is overstated, by the way. About 75% of the sessions we observed included “keyword searches.”
One key difference: Users consider more businesses in AI responses than in organic search.
It makes sense — it’s much easier to compare multiple options in a chat window than to click through three to five search results and visit each site.
In both Google AI Mode and ChatGPT, users considered an average of 3.7 businesses from the results.
This has strong implications for the No. 1 result – as well as No. 4.
The value of appearing first drops sharply — and the value of appearing lower rises — when, in 75% of sessions, users also consider businesses in Positions 2 to 8.
What’s driving conversions isn’t your position in that list.
Why do businesses with lower rankings end up in the consideration set in LLMs?
First of all, these aren’t rankings.
They are a list of recommendations that will likely get shuffled, reformatted from a list to a table, and completely changed, given the probabilistic nature of AI.
That aside, AI chat makes it much easier to scan and consider more options than Google search does.
Let’s look at the Google search results for “fractional CMO.”
If a user wants to evaluate multiple fractional CMO options for their startup, it’s more work to do so in Google Search than in ChatGPT.
Only two options appear above the fold, and each requires a click-through to read their website content.
Contrast this with the experience on ChatGPT.
The model gave the user eight options, along with information about each one.
It’s easy to read all eight blurbs and decide whom to explore further.
Which leads to the other thing we really need to focus on: what the model is saying about you.
A bigger driver than being first on ChatGPT: Being a good fit
Many search marketers focus on rankings and traffic, but rarely on messaging and positioning.
This needs to change.
In the case of the response for an ophthalmologist in southern New Jersey, you get an easily scannable list.
Roughly 60% make their entire decision based on the response, without visiting the website or switching to Google, according to our study.
So how do you drive conversion?
Deliver the right message — and make sure the model shares it.
Dr. Lanciano may be the best glaucoma specialist in the area. But if the model highlights Ravi D. Goel and Bannett Eye Centers for glaucoma care, and that’s what the user needs, they’ll go there.
Bannett Eye Centers appears last in the AI response but may still win the conversion because of what the model says about it — something that rarely happens in Google Search.
Visibility doesn’t pay the bills. Conversions do. And conversions don’t happen when customers think someone else is a better fit.
Perplexity is abandoning advertising, for now at least. The company believes sponsored placements — even labeled ones — risk undermining the trust on which its AI answer engine depends.
Perplexity phased out the ads it began testing in 2024 and has no plans to bring them back, the Financial Times reported.
The AI search company could revisit advertising or “never ever need to do ads,” the report said.
Why we care. If Perplexity remains ad-free, brands lose paid access to a fast-growing audience. The company previously reported that it gets 780 million monthly queries. With sponsored placements gone, brands have no way to get visibility inside Perplexity’s answers other than via organic citations.
What changed. Perplexity was one of the first AI search companies to test ads, placing sponsored answers beneath chatbot responses. It said at the time that ads were clearly labeled and didn’t influence outputs. Executives now say perception matters as much as policy.
“A user needs to believe this is the best possible answer,” one executive said, adding that once ads appear, users may second-guess response integrity.
Meanwhile. Perplexity’s exit comes as other AI platforms experiment with ads.
Perplexity says subscriptions are its core business. It offers a free tier and paid plans from $20 to $200 per month. It has more than 100 million users and about $200 million in annualized revenue, according to executives.
Perplexity also introduced shopping features, but doesn’t take a cut of transactions, another indication it’s cautious about revenue models that could create conflicts of interest.
“We are in the accuracy business, and the business is giving the truth, the right answers,” one executive said.
Search behavior is no longer just people typing keywords into Google. It’s people asking questions and, in some cases, outsourcing their thinking to LLMs.
As Google evolves from a traditional search engine into a more question-and-answer machine, businesses need a robust, time-tested way to respond to customer questions.
AI changes how people research and compare options. Tasks that once felt painful and time-consuming are now easy. But there’s a catch. The machine only knows what it can find about you.
If you want visibility across the widest possible range of questions, you need to understand your customers’ wants, needs, and concerns in depth.
That’s where the “They Ask, You Answer” framework comes in. It helps businesses identify the many questions prospective customers already have in mind and create clear answers to them. Always useful, it’s a practical, actionable way forward in the age of AI.
An answer-first content strategy and why it matters now
“They Ask, You Answer” (TAYA) is a book by Marcus Sheridan. (I strongly recommend you read it.)
The concept is simple: buyers have questions, and businesses should answer them honestly, clearly, and publicly — especially the ones sales teams avoid.
No dodging. No “contact us for a quote.” No “it depends” – sorry, SEO folks.
TAYA isn’t just an inbound marketing strategy. It’s a practical way to map a customer-facing content strategy with an E-E-A-T mindset.
The framework centers on five core content categories:
Pricing and cost.
Problems.
Versus and comparisons.
Reviews.
Best in class.
These categories align with the moments when a buyer is seeking the best solution, reducing risk, and making a decision.
More of those moments now happen inside AI environments — on your smartphone, your PC, in apps like ChatGPT or Gemini, or anywhere else AI shows up, which at this point is nearly everywhere.
At their core, these are question-and-answer machines. You ask. The machine answers. That’s why the TAYA process fits so well.
The modern web is chaotic. Finding what you need can be exhausting — dodging ads, navigating layers of SERP features, and avoiding pop-ups on the site you finally click.
AI is gaining ground because it feels better. Easier. Faster. Cleaner. Less chaos. More order.
You could argue we already have a north star for content creation in E-E-A-T. But have you ever tried to build a content strategy around it? Great in principle, harder in practice.
They Ask, You Answer puts an E-E-A-T-focused content strategy on rails:
Pricing supports trust, experience, and expertise.
Problems show experience and trust.
Versus content builds authority and expertise.
Reviews build experience and trust.
Best-in-class content builds authority and trust.
E-E-A-T can be difficult to target because there are many ways to build trust, show experience, and demonstrate authority. TAYA maps those signals across multiple areas within each category, helping you build a comprehensive database of people-first content that AI readily surfaces.
How to integrate TAYA with traditional SEO research
The skills and tools we use as SEOs already put us in a strong position for the AI era. They can help us build an integrated SEO, PPC, and AI strategy.
The action plan:
Google Search Console: Go to Google Search Console > Performance. Filter queries by question modifiers such as who, what, why, how, and cost. These are your raw TAYA topics.
Google Business Profile: Review keywords and queries in your Google Business Profile for additional ideas.
The semantic map: Use AnswerThePublic or Also Asked. Look for secondary questions. If you’re writing about cost, you’ll often see related concerns such as financing or hidden fees.
The competitor gap: Use Semrush or Ahrefs Keyword Gap tools. Don’t focus on what competitors rank for. Look for “how-to” and “versus” keywords where they have no meaningful content. That’s your land grab.
Method marketing: Immerse yourself in the mindset of your ideal customer and start searching. What comes up? What does AI say? What’s missing? Tools like the Value Proposition Canvas and SCAMPER can help you evaluate these angles in structured ways.
Often, you won’t go wrong by simply searching for your own products and services. AI tools and search results will surface a wide range of questions, answers, and perspectives that can feed directly into your AI and SEO content strategy.
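To script the first step of that plan, here is a rough sketch against the Search Console API that pulls queries and keeps the question-style ones as raw TAYA topics. The site URL, date range, and question-word list are placeholders, and it assumes you already have an authorized API client (google-api-python-client, service name "searchconsole").

```python
# Rough sketch: fetch top queries from Search Console and keep question-style ones.
# `service` is assumed to be an authorized client, e.g.
# build("searchconsole", "v1", credentials=...).
import re

QUESTION_PATTERN = re.compile(r"\b(who|what|why|how|when|where|cost|price)\b", re.IGNORECASE)

def question_queries(service, site_url="https://www.example.com/"):
    """Return (query, clicks, impressions) for question-style queries."""
    body = {
        "startDate": "2026-01-01",
        "endDate": "2026-01-31",
        "dimensions": ["query"],
        "rowLimit": 5000,
    }
    response = service.searchanalytics().query(siteUrl=site_url, body=body).execute()
    return [
        (row["keys"][0], row["clicks"], row["impressions"])
        for row in response.get("rows", [])
        if QUESTION_PATTERN.search(row["keys"][0])
    ]
```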
Also consider the internal sources available to you:
Sales calls and sales teams.
Live chat transcripts.
Emails.
Customer service tickets.
Proposal feedback.
Complaints.
All of this helps you understand the question landscape. From there, you can begin organizing those insights within the five TAYA categories.
TAYA and your AI-era content marketing strategy
The framework centers on five core categories, reinterpreted for an answer-driven environment where Google, Gemini, and ChatGPT-like systems anticipate user needs.
For each, here’s what it is, why it matters now, and examples to get you started.
1. Pricing and cost: Why we must talk about money
Buyers want cost clarity early. Businesses avoid it because “it depends.” Both are true, but only one is useful.
AI systems will readily summarize typical costs, using someone else’s numbers if you don’t publish your own. If you fail to provide a credible range with context, you’re effectively handing the narrative to competitors, directories, or a generic blog with a stock photo handshake.
How to do it
Publish ranges, not unrealistic single prices.
Explain what drives costs up or down.
Include example packages, such as good, better, and best.
Be explicit about what’s included and excluded.
Add country-specific variables where relevant, such as tax or VAT in the UK.
Content examples
How much does [service] cost in the UK? Include price ranges and what influences them.
X vs. Y pricing: what you get at each level.
The hidden costs of [solution] and how to avoid them.
Budget checklist: what to prepare before you buy [product or service].
One of the most cited examples in the TAYA world is Yale Appliance. The company embraced transparent, buyer-focused content and saw inbound become its largest sales channel, alongside significant reported growth.
The takeaway isn’t “go sell fridges.” It’s to answer money questions more clearly and honestly than anyone else. Do that, and you build trust at scale.
2. Problems: Turning problems into strengths
This category focuses on being honest about drawbacks, limitations, risks, and who a product or service isn’t for. You have to think beyond pure SEO or GEO.
A core communication strategy is taking a perceived weakness, such as being a small business, and reframing it as a strength, like a more personalized approach.
Own the areas that could be seen as problems. Present them clearly and constructively so customers understand the trade-offs and context.
The answer layer aims to provide balanced guidance. Pages that focus only on benefits read like marketing. Pages that acknowledge trade-offs read like advice.
People can spot spin quickly. Be direct. Own your limitations. When you do, credibility increases.
How to do it
Create problem-and-solution guides.
Include “avoid if …” sections.
Address common failure modes and misuses.
Be explicit about prerequisites, such as budget, timeline, skill, or access.
Content examples
The biggest problems with [solution] and how to mitigate them.
Is [product or service] worth it? When it’s a great choice and when it isn’t.
Common mistakes when buying or implementing [solution].
What can go wrong with [approach] and how to reduce risk.
This is where your “experience” in E-E-A-T becomes tangible. “We’ve seen this go wrong when …” carries far more weight than “we’re passionate about excellence.”
3. Versus and comparisons
People rely on comparisons to reduce cognitive load. They want clarity. What’s the difference?
Comparison queries are ideal for answer engines because they lend themselves to structured summaries, tables, and recommendations. If you don’t publish the clearest comparison, you won’t be the source used to generate the clearest answer.
How to do it
Compare by use case, not just features.
Use a consistent framework, such as price, setup, outcomes, risks, and who it suits.
Include clear guidance, such as “If you’re X, choose Y.”
Content examples
X vs. Y: which is better for [specific scenario]?
In-house vs. outsourced for [service]: cost, risk, and results.
Tool A vs. Tool B vs. Tool C: an honest comparison for UK teams.
Alternatives to [popular option]: when to choose each.
SEO bonus: These pieces tend to earn links because they’re genuinely useful and because many competitors hesitate to name alternatives directly.
4. Reviews
This isn’t about asking for a five-star review. It’s about creating review-style content that helps buyers evaluate their options.
AI summaries often rely on review-style pages because they’re structured around evaluation. But generic affiliate reviews can be, at best, inconsistent in sincerity. Your advantage is first-hand experience and contextual truth.
How to do it
Review your own services honestly, including what clients value and where they struggle.
Review the tools you use with clear pros and cons.
Publish “what we’d choose and why” for different buyer types.
Content examples
Is [solution] worth it? Our honest take after implementing it for X clients.
Best [category] tools for [persona], including limitations.
The top questions to ask before choosing a [provider].
What good looks like: a checklist to evaluate [service].
If you want to be cited in AI answers, you have to sound like a source, not an ad.
5. Best in class – and the courage to recommend others
Sheridan’s view, and it’s a bold one, is that you should sometimes publish “best in class” recommendations even when the best option isn’t you. That’s how trust is built.
The answer layer rewards utility. If your page genuinely helps users choose well, it becomes the kind of resource systems are more likely to reference.
How to do it
Build “best for” lists based on clear criteria, not hype.
Explain how you evaluated the options.
Include scenarios where each option wins or loses.
Content examples
Best [solutions] for [use case] in 2026, including criteria and picks.
Best [service] providers for [industry] and what to look for.
Best budget, best premium, best for speed, best for compliance.
If I were buying this today: the decision framework I’d use.
The goal is for your brand to become a trusted educator, not just a vendor.
When leveraged effectively, TAYA is a powerful way to map what you should be addressing and to build a content strategy that ensures you’re represented across the AI landscape.
In practice, that means building an editorial program where:
Every piece begins with a real buyer question.
The five core categories prioritize decision-stage content, not just awareness content.
Traditional SEO research validates language and demand.
Content is written to satisfy both the human, through clarity and confidence, and the machine, through structure, specificity, evidence, and balanced trade-offs.
This shift also changes how success is measured.
In classic SEO, the win was rank, click, convert.
In the AI era, the win is often to be the source, earn trust, and be chosen, with or without the click.
If your content is the clearest, most in-depth, most honest, and most experience-backed explanation available for the questions buyers are already asking, then whether someone discovers it through Google, Gemini, ChatGPT, or elsewhere, you’ve built something durable.
Which is what strong SEO has always been about. The window has changed. The principles haven’t.
With more than two decades in SEO, I’ve lived through every major disruption the industry has faced — from stuffing meta keywords to rank on AltaVista to Google reshaping search, to mobile-first indexing, and now AI.
What feels different today is the speed of change and the emotional weight it carries. I see growing pressure across teams, even among seasoned professionals who have weathered every major shift before this one.
Many have a legitimate concern: If AI can do this faster, where do I fit in? That’s not a technical question. It’s a human one.
That uncertainty affects morale and adoption. Productivity slows. Experimentation stalls. Teams either overuse AI without judgment or avoid it altogether.
The real leadership challenge is about building confidence, capability, and trust in AI-assisted teams.
4 tips for building AI confidence in SEO teams
Building real confidence in AI within an SEO team isn’t about deploying new tools. It’s about shifting the culture.
The most effective SEO teams aren’t the ones adopting the most tools. They use AI intentionally and with discipline. They automate data pulls, summarize research, and cluster keywords. This allows teams to focus on strategy, storytelling, and stakeholder alignment.
Technology adoption is largely cultural, as Harvard Business School has noted. Tools alone don’t drive change. Trust does. That insight applies directly to SEO teams navigating AI today.
Below are four strategies for building AI confidence in your teams through clarity, participation, and shared ownership, not pressure or hype.
1. Earn trust by involving the team in AI tool selection and workflow design
A practical way to strengthen trust is to move from top-down implementation to shared ownership. People trust what they help create.
When AI is imposed on a team, resistance increases. Inviting people into evaluation and workflow design makes AI feel less intimidating and more empowering. Bringing teams in early also surfaces real-world insight into where AI reduces friction or introduces new risks.
Effective leaders:
Invite teams to test tools and share feedback.
Run small experiments before scaling adoption.
Communicate clearly about what you’re adopting, what you’re rejecting, and why.
When teams feel included, they’re more willing to experiment. They learn and stretch into new capabilities. That openness fuels growth and innovation.
2. Meet people where they are – not where you want them to be
AI capability varies widely across SEO teams. Some practitioners experiment daily. Others feel overwhelmed or skeptical, often because they’ve seen past automation trends come and go.
Leaders who strengthen confidence understand that capability develops at different speeds. They create environments that encourage curiosity, where uncertainty is normal, and learning happens continuously, not just when it’s mandated.
That means:
Normalizing different comfort levels.
Creating psychological safety around “I don’t know yet.”
Avoiding shame or over-celebration of early adopters.
Offering multiple learning paths.
Recognizing different starting points makes progress feel achievable rather than threatening.
3. Show real wins and share what works
When someone uses AI to cut a task from hours to minutes, it’s more than a productivity gain. It proves AI can support real work without replacing human judgment.
Effective teams:
Share clear examples of AI improving quality and efficiency.
Highlight internal champions who can mentor others.
Create space for demos and knowledge sharing.
Reinforce a culture of experimentation, not judgment.
My agency formed AI focus groups with members from across the organization. One group focused on integrating AI into project management, with representatives from SEO, operations, and leadership.
That shared ownership made adoption more successful. Teams weren’t just implementing AI; they were shaping how it fit into real workflows. The result was stronger buy-in, better collaboration, and greater confidence across the team.
Each group shared its successes and lessons. This built awareness of what worked and why. Momentum builds when teams see their peers using AI responsibly and effectively.
4. Reinforce what AI handles – and what humans still own
AI accelerates analysis. Humans interpret meaning.
AI drafts. Humans validate, refine and contextualize.
AI scales output. Humans build trust and influence.
AI can help with execution, but it can’t replace strategic instincts, contextual judgment, or cross-functional leadership. Those are the skills that ultimately move performance forward.
Why experience still matters in AI-driven SEO
AI has lowered the barrier to entry for many SEO tasks. With effective prompts, almost anyone can generate keyword lists, outlines, or summaries. With that accessibility, we see many short-lived tactics and recycled “quick wins.”
Anyone who’s been in SEO long enough has seen this cycle before. The tactics change. The fundamentals don’t. This is where experience becomes the differentiator.
AI can generate outputs, not accountability
AI can produce content and analyze data, but it doesn’t own outcomes. It doesn’t carry responsibility for brand reputation, compliance, or long-term performance.
SEO professionals remain accountable for:
Deciding what to exclude from publication.
Assessing technical, reputational, and compliance risks.
Weighing long-term consequences against short-term gains.
AI executes. Humans decide. That distinction matters more than ever.
Pattern recognition is learned, not automated
AI excels at surfacing patterns. It struggles to explain why they matter or whether they apply in a specific context.
Experienced SEOs bring a depth of understanding that AI can’t replicate. Their historical background helps them distinguish true shifts from industry noise.
Few industries have seen as many tactics rise and fall as SEO. Experience enables strategic thinking beyond what worked before and helps avoid repeating tactics that once succeeded but later failed.
AI suggests possibilities. Experience evaluates relevance.
Professional integrity remains a differentiator
In high-visibility search environments, mistakes scale quickly. AI can produce inaccuracies and hallucinations. These errors can put brands at risk of losing trust and facing compliance issues.
Teams with strong professional SEO foundations:
Validate AI output instead of assuming correctness.
Prioritize accuracy over speed.
Maintain ethical SEO standards.
Protect brand voice and credibility.
Integrity isn’t automated. It’s practiced. In a high-speed AI environment, that discipline matters even more.
As routine tasks become automated, the SEO professional’s role shifts to strategic oversight. Time once spent on manual analysis can be redirected to interpreting user intent, shaping search strategy, guiding stakeholders, and assessing risk.
This makes fundamentals more important. Teams still need sound judgment, technical expertise, and accountability. AI can support execution, but professionals remain responsible for decisions, quality, and long-term performance.
Developing the next generation of SEOs requires more than proficiency with tools. It requires teaching:
When to rely on AI.
When to challenge it.
How to apply experience and context to its output.
Google is rolling out new, more visible links within AI Overviews and AI Mode. These new link cards appear in a pop-up window when you hover over them on desktop. They also show more prominent details about the website.
Google was testing these earlier and now this new style is live.
What it looks like. Here is a screenshot of these new link pop-up menus on hover:
What Google said. Google’s Robby Stein posted on X saying:
“New on Search: In AI Overviews and AI Mode, groups of links will automatically appear in a pop-up as you hover over them on desktop, so you can jump right into a website to learn more. And we’ll show more descriptive and prominent link icons within the response across both desktop and mobile.”
“Our testing shows this new UI is more engaging, making it easier to get to great content across the web.”
Why we care. This new style does appear to encourage more clicks to websites, and I hope these changes lead to more traffic from Google’s AI experiences.
Of course, we still have no way to measure this in Search Console.
Traffic from AI chatbots converts at a higher rate than traffic from Google, according to Airbnb CEO Brian Chesky. He shared this tidbit on the company’s Q4 2025 earnings call:
“And what we see is that traffic that comes from chatbots convert at a higher rate than traffic that comes from Google,” Chesky said on Feb. 12.
Yes, but. He didn’t share specific conversion rates, and the company didn’t quantify chatbot traffic volume. But for Airbnb, early data suggests visitors arriving via AI chatbots may be further along in the booking process than those coming from traditional Google searches.
Airbnb also didn’t specify which chatbots are driving traffic. Chesky referenced OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, and others in broader remarks about model availability.
Why we care. AI assistants are emerging as a top-of-funnel discovery layer. The quality of that traffic may outperform clicks from traditional search and align with past claims by Google and Microsoft that AI will drive more qualified traffic at lower volume.
AI search ambitions. Chesky described chatbots as “very similar to search” and positioned them as top-of-funnel discovery engines.
“I think these chatbot platforms are gonna be very similar to search. Gonna be really good top-of-funnel discoveries,” he said.
Rather than viewing them as disintermediators, Airbnb sees them as acquisition partners.
“We think they are gonna be positive for Airbnb,” Chesky added.
Chesky described the long-term goal as building an “AI-native experience” where the app “does not just search for you. It knows you”:
“So AI search is live to a very small percent of traffic right now. We are doing a lot of experimentation. The way we do things with AI is much more rapid iteration, not big launches. And over time, we are gonna be experimenting with making AI search more conversational, integrating it into more than trip, and, eventually, we will be looking at sponsor listings as result of that. But we want to first nail AI search.”
AI inside Airbnb. Airbnb isn’t just benefiting from external AI platforms. It’s embedding AI into its operations.
Its in-house AI customer service agent now resolves nearly one-third of North American support tickets without a human, according to Chesky. The tool is English-only for now but is slated for global, multilingual rollout, including voice support.
Chesky said the goal is for AI to handle “significantly more than 30%” of tickets within a year.
Airbnb is also testing AI-powered conversational search in its app. The feature is live for a small percentage of users and is being iterated quickly rather than launched as a major product release.
Sponsored listings on hold for now. Airbnb has long faced questions about launching sponsored listings. On the call, Chesky said traditional ad units may not translate directly into conversational AI environments. The company is prioritizing AI search before designing sponsored placements in that format.
Airbnb’s search shift. Airbnb began moving its budget to brand marketing just before the rise of generative AI and AI-powered search, betting on broader marketing initiatives while slashing its search marketing spending.
TikTok is giving entertainment marketers in Europe new tools to reach audiences with precision, leveraging AI to drive engagement and conversions for streaming and ticketed content.
What’s happening. TikTok is introducing two new ad types for European campaigns:
Streaming Ads: AI-driven ads for streaming platforms that show personalized content based on user engagement. Formats include a four-title video carousel or a multi-title media card. With 80% of TikTok users saying the app influences their streaming choices, these ads can directly shape viewing decisions.
New Title Launch: Targets high-intent users using signals like genre preference and price sensitivity, helping marketers convert cultural moments into ticket sales, subscriptions, or event attendance.
Context. The rollout coincides with the 76th Berlinale International Film Festival, underscoring TikTok’s growing role in entertainment marketing. In 2025, an average of 6.5 million daily posts were shared about film and TV on TikTok, with 15 of the top 20 European box office films last year being viral hits on the platform.
Why we care. TikTok’s new AI-powered ad formats let streaming platforms and entertainment brands target users with highly personalized content, increasing the likelihood of engagement and conversions.
With 80% of users saying TikTok influences their viewing choices (according to TikTok data), these tools can directly shape audience behavior, helping marketers turn cultural moments into subscriptions, ticket sales, or higher viewership. It’s a chance to leverage TikTok’s viral influence for measurable campaign impact.
The bottom line. For entertainment marketers, TikTok’s AI-driven ad formats provide new ways to engage audiences, boost viewership, and turn trending content into measurable results.
Meta Platforms is embedding newly acquired AI agent tech directly into Ads Manager, giving advertisers built-in automation tools for research and reporting as the company looks to show faster returns on its AI investments.
What’s happening. Some advertisers are seeing in-stream prompts to activate Manus AI inside Ads Manager.
Manus is now available to all advertisers via the Tools menu.
Select users are also getting pop-up alerts encouraging in-workflow adoption.
The feature rollout signals deeper integration ahead.
What is Manus. Manus AI is designed to power AI agents that can perform tasks like report building and audience research, effectively acting as an assistant within the ad workflow.
Why we care. Manus AI brings AI-powered automation directly into Meta Platforms Ads Manager, making tasks like report-building, audience research, and campaign analysis faster and more efficient.
Meta is currently prioritizing tying AI investment to measurable ad performance, giving advertisers new ways to optimize campaigns and potentially gain a competitive edge by testing workflow efficiencies early.
Between the lines. Meta is under pressure to demonstrate practical value from its aggressive AI spending. Advertising remains its clearest path to monetization, and embedding Manus into everyday ad tools offers a direct way to tie AI investment to performance gains.
Zoom out. The move aligns with CEO Mark Zuckerberg’s push to weave AI across Meta’s product stack. By positioning Manus as a performance tool for advertisers, Meta is betting that workflow efficiencies will translate into stronger ad results — and a clearer AI revenue story.
The bottom line. For advertisers, Manus adds another layer of built-in automation worth testing. Early adopters may uncover time savings and optimization gains as Meta continues expanding AI inside its ad ecosystem.
A core targeting lever in Google Demand Gen campaigns is changing. Starting March 2026, Lookalike audiences will act as optimization signals — not hard constraints — potentially widening reach and leaning more heavily on automation to drive conversions.
What is happening. Per an update to Google’s Help documentation, Lookalike segments in Demand Gen are moving from strict similarity-based targeting to an AI-driven suggestion model.
Before: Advertisers selected a similarity tier (narrow, balanced, broad), and campaigns targeted users strictly within that Lookalike pool.
After: The same tiers act as signals. Google’s system can expand beyond the Lookalike list to reach users it predicts are likely to convert.
Between the lines. This effectively reframes Lookalikes from a fence to a compass. Instead of limiting delivery to a defined cohort, advertisers are feeding intent signals into Google’s automation and allowing it to search for performance outside preset boundaries.
How this interacts with Optimized Targeting. The new Lookalike-as-signal approach resembles Optimized Targeting — but it doesn’t replace it.
When advertisers layer Optimized Targeting on top, Google says the system may expand reach even further.
In practice, this stacks multiple automation signals, increasing the algorithm’s freedom to pursue lower CPA or higher conversion volume.
Opt-out option. Advertisers who want to preserve legacy behavior can request continued access to strict Lookalike targeting through a dedicated opt-out form. Without that request, campaigns will default to the new signal-based model.
Why we care. This update changes how much control advertisers will have over who their ads reach in Google Demand Gen campaigns. Lookalike audiences will no longer strictly limit targeting — they’ll guide AI expansion — which can significantly affect scale, CPA, and overall performance.
It also signals a broader shift toward automation, similar to trends driven by Meta Platforms. Advertisers will need to test carefully, rethink audience strategies, and decide whether to embrace the added reach or opt out to preserve tighter targeting.
Zoom out. The shift mirrors a broader industry trend toward AI-first audience expansion, similar to moves by Meta Platforms over the past few years. Platforms are steadily trading granular manual controls for machine-led optimization.
Why Google is doing this. Digital marketer Dario Zannoni sees two reasons for the change:
Strict Lookalike targeting can cap scale and constrain performance in conversion-focused campaigns.
Maintaining high-quality similarity models is increasingly complex, making broader automation more attractive.
The bottom line. For performance marketers, this is another step toward automation-centric buying. While reduced control may be uncomfortable, comparable platform changes have often produced performance gains in mainstream use cases. Expect a new testing cycle as advertisers measure how expanded Lookalike signals affect CPA, reach, and incremental conversions.
First seen. This update was spotted by Zannoni who shared his thoughts on LinkedIn.
Jeff Dean says Google’s AI Search still works like classic Search: narrow the web to relevant pages, rank them, then let a model generate the answer.
In an interview on Latent Space: The AI Engineer Podcast, Google’s chief AI scientist explained how Google’s AI systems work and how much they rely on traditional search infrastructure.
The architecture: filter first, reason last. Visibility still depends on clearing ranking thresholds. Content must enter the broad candidate pool, then survive deeper reranking before it can be used in an AI-generated response. Put simply, AI doesn’t replace ranking. It sits on top of it.
Dean said an LLM-powered system doesn’t read the entire web at once. It starts with Google’s full index, then uses lightweight methods to identify a large candidate pool — tens of thousands of documents. Dean said:
“You identify a subset of them that are relevant with very lightweight kinds of methods. You’re down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is the final 10 results or 10 results plus other kinds of information.”
Stronger ranking systems narrow that set further. Only after multiple filtering rounds does the most capable model analyze a much smaller group of documents and generate an answer. Dean said:
“And I think an LLM-based system is not going to be that dissimilar, right? You’re going to attend to trillions of tokens, but you’re going to want to identify what are the 30,000-ish documents that are with the maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked me to do?”
Dean called this the “illusion” of attending to trillions of tokens. In practice, it’s a staged pipeline: retrieve, rerank, synthesize. Dean said:
“Google search gives you … not the illusion, but you are searching the internet, but you’re finding a very small subset of things that are relevant.”
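As a mental model only (not Google's actual code), the funnel Dean describes can be sketched like this: a cheap first-pass filter builds the large candidate pool, heavier scoring narrows it, and only the survivors reach the model that writes the answer.

```python
# Toy sketch of the "retrieve, rerank, synthesize" funnel. The scoring functions
# are crude stand-ins; real systems use far richer signals.
def cheap_score(query: str, doc: str) -> int:
    # Lightweight first pass: simple term overlap stands in for fast retrieval signals.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def expensive_score(query: str, doc: str) -> float:
    # Stand-in for heavier reranking (embeddings, quality, freshness, and other signals).
    return cheap_score(query, doc) / (1 + abs(len(doc.split()) - 50))

def answer_with_pipeline(query, index, llm=lambda q, docs: f"Answer to {q!r} synthesized from {len(docs)} documents"):
    # Stage 1: cheap filter over the full index down to a big candidate pool.
    candidates = sorted(index, key=lambda d: cheap_score(query, d), reverse=True)[:30_000]
    # Stage 2: progressively heavier scoring narrows the pool further.
    reranked = sorted(candidates, key=lambda d: expensive_score(query, d), reverse=True)[:100]
    # Stage 3: only the narrowed set reaches the model that generates the answer.
    return llm(query, reranked)
```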
Matching: from keywords to meaning. Nothing new here, but we heard another reminder that covering a topic clearly and comprehensively matters more than repeating exact-match phrases.
Dean explained how LLM-based representations changed how Google matches queries to content.
Older systems relied more on exact word overlap. With LLM representations, Google can move beyond the idea that particular words must appear on the page and instead evaluate whether a page — or even a paragraph — is topically relevant to a query. Dean said:
“Going to an LLM-based representation of text and words and so on enables you to get out of the explicit hard notion of particular words having to be on the page. But really getting at the notion of this topic of this page or this page paragraph is highly relevant to this query.”
That shift lets Search connect queries to answers even when wording differs. Relevance increasingly centers on intent and subject matter, not just keyword presence.
Query expansion didn’t start with AI. Dean pointed to 2001, when Google moved its index into memory across enough machines to make query expansion cheap and fast. Dean said:
“One of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Because if you don’t have the page in your index, you’re going to not do well.
“And then we also needed to scale our capacity because we were, our traffic was growing quite extensively. So we had a sharded system where you have more and more shards as the index grows, you have like 30 shards. Then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. And then as traffic grows, you add more and more replicas of each of those.
“And so we eventually did the math that realized that in a data center where we had say 60 shards and 20 copies of each shard, we now had 1,200 machines with disks. And we did the math and we’re like, Hey, one copy of that index would actually fit in memory across 1,200 machines. So in 2001, we … put our entire index in memory and what that enabled from a quality perspective was amazing.”
Before that, adding terms was expensive because it required disk access. Once the index lived in memory, Google could expand a short query into dozens of related terms — adding synonyms and variations to better capture meaning. Dean said:
“Before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek.
“Once you have the whole index in memory, it’s totally fine to have 50 terms you throw into the query from the user’s original three- or four-word query. Because now you can add synonyms like restaurant and restaurants and cafe and bistro and all these things.
“And you can suddenly start … getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was … 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.”
That change pushed Search toward intent and semantic matching years before LLMs. AI Mode and Google’s other AI experiences continue that shift toward meaning-based retrieval, enabled by better systems and more compute.
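A toy version of that pre-LLM expansion step looks something like this; the synonym table is invented purely for illustration.

```python
# Toy sketch of 2001-style query expansion: widen a short query with synonyms so
# retrieval matches meaning, not just the exact words typed.
SYNONYMS = {
    "restaurant": ["restaurants", "cafe", "bistro"],
    "cheap": ["affordable", "budget", "inexpensive"],
}

def expand_query(query: str) -> list[str]:
    terms = query.lower().split()
    expanded = list(terms)
    for term in terms:
        expanded.extend(SYNONYMS.get(term, []))
    return expanded

print(expand_query("cheap restaurant near me"))
# ['cheap', 'restaurant', 'near', 'me', 'affordable', 'budget', 'inexpensive',
#  'restaurants', 'cafe', 'bistro']
```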
Freshness as a core advantage. Dean said one of Search’s biggest transformations was update speed. Early systems refreshed pages as rarely as once a month. Over time, Google built infrastructure that can update pages in under a minute. Dean said:
“In the early days of Google, we were growing the index quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most.”
That improved results for news queries and affected the main search experience. Users expect current information, and the system is designed to deliver it. Dean said:
“If you’ve got last month’s news index, it’s not actually that useful.”
Google uses systems to decide how often to crawl a page, balancing how likely it is to change with how valuable the latest version is. Even pages that change infrequently may be crawled often if they’re important enough. Dean said:
“There’s a whole … system behind the scenes that’s trying to decide update rates and importance of the pages. So, even if the update rate seems low, you might still want to recrawl important pages quite often because the likelihood they change might be low, but the value of having updated is high.”
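Dean's description amounts to an expected-value calculation. Here is a toy framing (mine, not Google's) where recrawl priority weighs the likelihood a page has changed against the value of having the latest copy.

```python
# Toy expected-value model of recrawl priority: even a page that rarely changes
# can deserve frequent recrawls if staleness is costly. Numbers are invented.
def recrawl_priority(change_probability: float, freshness_value: float) -> float:
    # How likely the page changed x how much an up-to-date copy matters.
    return change_probability * freshness_value

pages = {
    "breaking-news hub": recrawl_priority(0.95, 100),  # changes constantly, high value
    "major homepage": recrawl_priority(0.05, 100),     # rarely changes, but staleness is costly
    "old forum thread": recrawl_priority(0.05, 1),     # rarely changes, low value
}
for page, priority in sorted(pages.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{page}: {priority:.2f}")
```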
Why we care. AI answers don’t bypass ranking, crawl prioritization, or relevance signals. They depend on them. Eligibility, quality, and freshness still determine which pages are retrieved and narrowed. LLMs change how content is synthesized and presented — but the competition to enter the underlying candidate set remains a search problem.
If you look at job postings on Indeed and LinkedIn, you’ll see a wave of acronyms added to the alphabet soup as companies try to hire people to boost visibility on large language models (LLMs).
Some people are calling it generative engine optimization (GEO). Others call it answer engine optimization (AEO). Still others call it artificial intelligence optimization (AIO). I prefer large model answer optimization (LMAO).
I find these new acronyms a bit ridiculous because while many like to think AI optimization is new, it isn’t. It’s just long-tail SEO — done the way it was always meant to be done.
Why LLMs still rely on search
Most LLMs (e.g., GPT-4o, Claude 4.5, Gemini 1.5, Grok-2) are transformers trained to do one thing: predict the next token given all previous tokens.
AI companies train them on massive datasets from public web crawls, such as:
Common Crawl.
Digitized books.
Wikipedia dumps.
Academic papers.
Code repositories.
News archives.
Forums.
The data is heavily filtered to remove spam, toxic content, and low-quality pages. Full pretraining is extremely expensive, so companies run major foundation training cycles only every few years and rely on lighter fine-tuning for more frequent updates.
So what happens when an LLM encounters a question it can’t answer with confidence, despite the massive amount of training data?
AI companies use real-time web search and retrieval-augmented generation (RAG) to keep responses fresh and accurate, bridging the limits of static training data. In other words, the LLM runs a web search.
To see this in real time, many LLMs let you click an icon or “Show details” to view the process. For example, when I use Grok to find highly rated domestically made space heaters, it converts my question into a standard search query.
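For readers who want to see the mechanics, here is a minimal Python sketch of that retrieval loop. The helper functions (rewrite_as_search_query, web_search) are hypothetical placeholders, not any vendor’s actual API; real systems decide when to search, how to rewrite the prompt, and how to ground the answer with far more sophistication.

```python
# Minimal sketch of retrieval-augmented generation (RAG), assuming
# hypothetical helpers -- not any vendor's actual API.

def rewrite_as_search_query(prompt: str) -> str:
    """Stand-in for the LLM step that turns a conversational prompt
    into a compact search query."""
    # Real systems use the model itself; this placeholder just strips filler words.
    stopwords = {"please", "find", "me", "a", "an", "the", "that", "are"}
    return " ".join(w for w in prompt.lower().split() if w not in stopwords)

def web_search(query: str, k: int = 5) -> list[str]:
    """Stand-in for a call to a search API (Bing, Brave, Google, etc.).
    Returns the top-k result snippets."""
    return [f"snippet {i} for '{query}'" for i in range(1, k + 1)]

def answer_with_rag(prompt: str) -> str:
    """When the model lacks confidence, search the web and ground the answer
    in the retrieved snippets before generating a response."""
    query = rewrite_as_search_query(prompt)
    snippets = web_search(query)
    context = "\n".join(snippets)
    # A real system would pass `context` plus the original prompt back to the LLM;
    # here we just show what gets assembled.
    return f"ANSWER GROUNDED IN:\n{context}"

print(answer_with_rag("Find me highly rated domestically made space heaters"))
```

Whatever the wrapper looks like, the retrieval step is still a search query — which is why the pages that win it are still chosen by search-style relevance signals.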
Many of us long-time SEO practitioners have praised the value of long-tail SEO for years. But one main reason it never took off for many brands: Google.
As long as Google’s interface was a single text box, users were conditioned to search with one- and two-word queries. Most SEO revenue came from these head terms, so priorities focused on competing for the No. 1 spot for each industry’s top phrase.
Many brands treated long-tail SEO as a distraction. Some cut content production and community management because they couldn’t see the ROI. Most saw more value in protecting a handful of head terms than in creating content to capture the long tail of search.
Fast forward to 2026. People typing LLM prompts do so conversationally, adding far more detail and nuance than they would in a traditional search engine. LLMs take these prompts and turn them into search queries. They won’t stop at a few words. They’ll construct a query that reflects whatever detail their human was looking for in the prompt.
Suddenly, the fat head of the search curve is being replaced with a fat tail. While humans continue to go to search engines for head terms, LLMs are sending these long-tail search queries to search engines for answers.
While AI companies are coy about disclosing exactly who they partner with, most public information points to the following search engines as the ones their LLMs use most often:
ChatGPT – Bing Search.
Claude – Brave Search.
Gemini – Google Search.
Grok – X Search and its own internal web search tool.
Perplexity – Its own hybrid index.
Right now, humans conduct billions of searches each month on traditional search engines. As more people turn to LLMs for answers, we’ll see exponential growth in LLMs sending search queries on their behalf.
The principles of long-tail SEO haven’t changed much. It’s best summed up by Baseball Hall of Famer Wee Willie Keeler: “Keep your eye on the ball and hit ’em where they ain’t.”
Success has always depended on understanding your audience’s deepest needs, knowing what truly differentiates your brand, and creating content at the intersection of the two.
As straightforward as this strategy has been, few have executed it well, for understandable reasons.
Reading your customers’ minds is hard. Keyword research is tedious. Content creation is hard. It’s easy to get lost in the weeds.
Happily, there’s someone to help: your favorite LLM.
Here are a few best practices I’ve used to create strong long-tail content over the years, with a twist. What once took days, weeks, or even months, you can now do in minutes with AI.
1. Ask your LLM what people search when looking for your product or service
The first rule of long-tail SEO has always been to get into your audience’s heads and understand their needs. This once required commissioning surveys and hiring research firms to figure out.
But for most brands and industries, an LLM can handle at least the basics. Here’s a sample prompt you can use.
Act as an SEO strategist and customer research analyst. You're helping with long-tail keyword discovery by modeling real customer questions.
I want to discover long-tail search questions real people might ask about my business, products, and industry. I’m not looking for mere keyword lists. Generate realistic search questions that reflect how people research, compare options, solve problems, and make decisions.
Company name: [COMPANY NAME]
Industry: [INDUSTRY]
Primary product/service: [PRIMARY PRODUCT OR SERVICE]
Target customer: [TARGET AUDIENCE]
Geography (if relevant): [LOCATION OR MARKET]
Generate a list of 75 – 100 realistic, natural-language search queries grouped into the following categories:
AWARENESS
• Beginner questions about the category
• Problem-based questions (pain points, frustrations, confusion)
CONSIDERATION
• Comparison questions (alternatives, competitors, approaches)
• “Best for” and use-case questions
• Cost and pricing questions
DECISION
• Implementation or getting-started questions
• Trust, credibility, and risk questions
POST-PURCHASE
• Troubleshooting questions
• Optimization and advanced/expert questions
EDGE CASES
• Niche scenarios
• Uncommon but realistic situations
• Advanced or expert questions
Guidelines:
• Write queries the way real people search in Google or ask AI assistants.
• Prioritize specificity over generic keywords.
• Include question formats, “how to” queries, and scenario-based searches.
• Avoid marketing language.
• Include emotional, situational, and practical context where relevant.
• Don't repeat the same query structure with minor variations.
• Each query should suggest a clear content angle.
Output as a clean bullet list grouped by category.
You can tweak this prompt for your brand and industry. The key is to force the LLM (and yourself) to think like a customer and avoid the trap of generating keyword lists that are just head-term variations dressed up as long-tail queries.
With a prompt like this, you move away from churning out “keyword ideas” and toward understanding real customer needs you can build useful content around.
2. Mine your on-site search data
Most large brands and sites don’t realize they’ve been sitting on a treasure trove of user intelligence: on-site search data.
When customers type a query into your site’s search box, they’re looking for something they expect your brand to provide.
If you see the same searches repeatedly, it usually means one of two things:
You have the information, but users can’t find it.
You don’t have it at all.
In both cases, it’s a strong signal you need to improve your site’s UX, add meaningful content, or both.
There’s another advantage to mining on-site search data: it reveals the exact words your audience uses, not the terms your team assumes they use.
Historically, the challenge has been the time required to analyze it. I remember projects where I locked myself in a room for days, reviewing hundreds of thousands of queries line by line to find patterns — sorting, filtering, and clustering them by intent.
If you’ve done the same, you know the pattern. The first few dozen keywords represent unique concepts, but eventually you start seeing synonyms and variations.
All of this is buried treasure waiting to be explored. Your LLM can help. Here’s a sample prompt you can use:
You're an SEO strategist analyzing internal site search data.
My goal is to identify content opportunities from what users are searching for on my website – including both major themes and specific long-tail needs within those themes.
I have attached a list of site search queries exported from GA4. Please:
STEP 1 – Cluster by intent
Group the queries into logical intent-based themes.
STEP 2 – Identify long-tail signals inside each theme
Within each theme:
• Identify recurring modifiers (price, location, comparisons, troubleshooting, etc.)
• Identify specific entities mentioned (products, tools, features, audiences, problems)
• Call out rare but high-intent searches
• Highlight wording that suggests confusion or unmet expectations
STEP 3 – Generate content ideas
For each theme:
• Suggest 3 – 5 content ideas
• Include at least one long-tail content idea derived directly from the queries
• Include one “high-intent” content idea
• Include one “problem-solving” content idea
STEP 4 – Identify UX or navigation issues
Point out searches that suggest:
• Users cannot find existing content
• Misleading navigation labels
• Missing landing pages
Output format:
Theme:
Supporting queries:
Long-tail insights:
Content opportunities:
UX observations:
Again, customize this prompt based on what you know about your audience and how they search.
The detail matters. Many SEO practitioners stop at a prompt like “give me a list of topics for my clients,” but a detailed prompt like the one above pushes the LLM beyond simple clustering to understand the intent behind the searches.
I used on-site search data because it’s one of the richest, most transparent, and most actionable sources. But similar prompts can uncover hidden value in other keyword lists, such as “striking distance” terms from Google Search Console or competitive keywords from Semrush.
Even better, if your organization keeps detailed customer interaction records (e.g., sales call notes, support tickets, chat transcripts), those can be more valuable. Unlike keyword datasets, they capture problems in full sentences, in the customer’s own words, often revealing objections, confusion, and edge cases that never appear in traditional keyword research.
Your goal is to create content so strong and authoritative that it’s picked up by sources like Common Crawl and survives the intense filtering AI companies apply when building LLM training sets. Realistically, only pioneering brands and recognized authorities can expect to operate in this rarefied space.
For the rest of us, the opportunity is creating high-quality long-tail content that ranks at the top across search engines — not just Google, but Bing, Brave, and even X.
This is one area where I wouldn’t rely on LLMs, at least not to generate content from scratch.
Why?
LLMs are sophisticated pattern matchers. They surface and remix information from across the internet, even obscure material. But they don’t produce genuinely original thought.
At best, LLMs synthesize. At worst, they hallucinate.
Many worry AI will take their jobs. And it will — for anyone who thinks “great content” means paraphrasing existing authority sources and competing with Wikipedia-level sites for broad head terms. Most brands will never be the primary authority on those terms. That’s OK.
The real opportunity is becoming the authority on specific, detailed, often overlooked questions your audience actually has. The long tail is still wide open for brands willing to create thoughtful, experience-driven content that doesn’t already exist everywhere else.
We need to face facts. The fat head is shrinking. The land rush is now for the “fat tail.” Here’s what brands need to do to succeed:
Dominate searches for your brand
Search your brand name in a keyword tool like Semrush and review the long-tail variations people type into Google. You’ll likely find more than misspellings. You’ll see detailed queries about pricing, alternatives, complaints, comparisons, and troubleshooting.
If you don’t create content that addresses these topics directly — the good and the bad — someone else will. It might be a Reddit thread from someone who barely knows your product, a competitor attacking your site, a negative Google Business Profile review, or a complaint on Trustpilot.
When people search your brand, your site should be the best place for honest, complete answers — even and especially when they aren’t flattering. If you don’t own the conversation, others will define it for you.
The time for “frequently asked questions” is over. You need to answer every question about your brand—frequent, infrequent, and everything in between.
Go long
Head terms in your industry have likely been dominated by top brands for years. That doesn’t mean the opportunity is gone.
Beneath those competitive terms is a vast layer of unbranded, long-tail searches that have likely been ignored. Your data will reveal them.
Review on-site search, Google Search Console queries, customer support questions, and forums like Reddit. These are real people asking real questions in their own words.
The challenge isn’t finding questions to write about. It’s delivering the best answers — not one-line responses to check a box, but clear explanations, practical examples, and content grounded in real experience that reflects what sets your brand apart.
Expertise is now a commodity: Lean into experience, authority, and trust
Publishing expert content still matters, but its role has changed. Today, anyone can generate “expert-sounding” articles with an LLM.
Whether that content ranks in Google is increasingly beside the point, as many users go straight to AI tools for answers.
As the “expertise” in E-E-A-T becomes table stakes, differentiation comes from what AI and competitors can’t easily replicate: experience, authority, and trust.
That means publishing:
Original insights and genuine thought leadership from people inside your company.
Real customer stories with measurable outcomes.
Transparent reviews and testimonials.
Evidence that your brand delivers what it promises.
This isn’t just about blog content. These signals should appear across your site — from your About page to product pages to customer support content. Every page should reinforce why a real person should trust your brand.
Stop paywalling your best content
I’m seeing more brands put their strongest content behind logins or paywalls. I understand why. Many need to protect intellectual property and preserve monetization. But as a long-term strategy, this often backfires.
If your content is truly valuable, the ideas will spread anyway. A subscriber may paraphrase it. An AI system may summarize it. A crawler may access it through technical workarounds. In the end, your insights circulate without attribution or brand lift.
When your best content is publicly accessible, it can be cited, linked to, indexed, and discussed. That visibility builds authority and trust over time.
In a search- and AI-driven ecosystem, discoverability often outweighs modest direct content monetization.
This doesn’t mean content businesses can’t charge for anything. It means being strategic about what you charge for. A strong model is to make core knowledge and thought leadership open while monetizing things such as:
Tools.
Community access.
Premium analysis or data.
Courses or certifications.
Implementation support.
Early access or deeper insights.
In other words, let your ideas spread freely and monetize the experience, expertise, and outcomes around them.
Stop viewing content as a necessary evil
I still see brands hiding content behind CSS “read more” links or stuffing blocks of “SEO copy” at the bottom of pages, hoping users won’t notice but search engines will.
Spoiler alert: they see it. They just don’t care.
Content isn’t something you add to check an SEO box or please a robot. Every word on your site must serve your customers. When content genuinely helps users understand, compare, and decide, it becomes an asset that builds trust and drives conversions.
If you’d be embarrassed for users to read your content, you’re thinking about it the wrong way. There’s no such thing as content that’s “bad for users but good for search engines.” There never was.
Embrace user-generated content
No article on long-tail SEO is complete without discussing user-generated content. I covered forums and Q&A sites in a previous article (see: The reign of forums: How AI made conversation king), and they remain one of the most efficient ways to generate authentic, unique content.
The concept is simple. You have an audience that’s already passionate and knowledgeable. They likely have more hands-on experience with your brand and industry than many writers you hire. They may already be talking about your brand offline, in customer communities, or on forums like Reddit.
Your goal is to bring some of those conversations onto your site.
User-generated content naturally produces the long-tail language marketing teams rarely create on their own. Customers:
Describe problems differently.
Ask unexpected questions.
Compare products in ways you didn’t anticipate.
Surface edge cases, troubleshooting scenarios, and real-world use cases that rarely appear in polished marketing copy.
This is exactly the kind of content long-tail SEO thrives on.
It’s also the kind of content AI systems and search engines increasingly recognize as credible because it reflects real experience rather than brand messaging many dismiss as inauthentic.
Brands that do this well don’t just capture long-tail traffic. They build trust, reduce support costs, and dominate long-tail searches and prompts.
In the age of AI-generated content, real human experience is one of the strongest differentiators.
For years, SEO has been shaped by the limits of the search box. Short queries and head terms dominated strategy, and long-tail content was often treated as optional.
LLMs are changing that dynamic. AI is expanding search, not eliminating it.
AI systems encourage people to express what they actually want to know. Those detailed prompts still need answers, and those answers come from the web.
That means the SEO opportunity is shifting from competing over a small set of keywords to becoming the best source of answers to thousands of specific questions.
Brands that succeed will:
Deeply understand their audience.
Publish genuinely useful content.
Build trust through real engagement and experience.
That’s always been the recipe for SEO success. But our industry has a habit of inventing complex tactics to avoid doing the simple work well.
Most of us remember doorway pages, exact match domains, PageRank sculpting, LSI obsession, waves of auto-generated pages, and more. Each promised an edge. Few replaced the value of helping users.
We’re likely to see the same cycle repeat in the AI era.
The reality is simpler. AI systems aren’t the audience. They’re intermediaries helping humans find trustworthy answers.
If you focus on helping people understand, decide, and solve problems, you’re already optimizing for AI — whatever you call it.
Over two months ago, Google began testing its AI-powered configuration tool, which lets you ask AI questions about Google Search Console performance reports and brings back answers for you. Google is now rolling out the tool to everyone.
Google said on LinkedIn, “The Search Console’s new AI-powered configuration is now available to everyone!”
AI-powered configuration. AI-powered configuration “lets you describe the analysis you want to see in natural language. Your inputs are then transformed into the appropriate filters and settings, instantly configuring the report for you,” Google said.
Rolling out now. If you log in to your Search Console account and click on the performance report, you may see a note at the top that says “New! Customize your Performance report using AI.”
When you click on it, you get into the AI tool.
More details. As we reported earlier, Google said “The AI-powered configuration feature is designed to streamline your analysis by handling three key elements for you.”
Selecting metrics: Choose which of the four available metrics – Clicks, Impressions, Average CTR, and Average Position – to display based on your question.
Applying filters: Narrow down data by query, page, country, device, search appearance, or date range.
Configuring comparisons: Set up complex comparisons (like custom date ranges) without manual setup.
Why we care. This is only supported in the Performance report for Search results. It isn’t available for Discover or News reports yet. Plus, it is AI, so the answers may not be perfect. But it can be fun to play with, and it may get you thinking about analyses you hadn’t considered.
Rand Fishkin’s conclusion – that AI tools produce wildly inconsistent brand recommendation lists, making “ranking position” a meaningless metric – is correct, well-evidenced, and long overdue.
But Fishkin stopped one step short of the answer that matters.
He didn’t explore why some brands appear consistently while others don’t, or what would move a brand from inconsistent to consistent visibility. That solution is already formalized, patent pending, and proven in production across 73 million brand profiles.
When I shared this with Fishkin directly, he agreed. The AI models are pulling from a semi-fixed set of options, and the consistency comes from the data. He just didn’t have the bandwidth to dig deeper, which is fair enough, but the digging has been done – I’ve been doing it for a decade.
Here’s what Fishkin found, what it actually means, and what the data proves about what to do about it.
Fishkin’s data killed the myth of AI ranking position
Fishkin and Patrick O’Donnell ran 2,961 prompts across ChatGPT, Claude, and Google AI, asking for brand recommendations across 12 categories. The findings were surprising for most.
Fewer than 1 in 100 runs produced the same list of brands, and fewer than 1 in 1,000 produced the same list in the same order. These are probability engines that generate unique answers every time. Treating them as deterministic ranking systems is – as Fishkin puts it – “provably nonsensical,” and I’ve been saying this since 2022. I’m grateful Fishkin finally proved it with data.
But Fishkin also found something he didn’t fully unpack. Visibility percentage – how often a brand appears across many runs of the same prompt – is statistically meaningful. Some brands showed up almost every time, while others barely appeared at all.
That variance is where the real story lies.
Fishkin acknowledged this but framed it as a better metric to track. The real question isn’t how to measure AI visibility, it’s why some brands achieve consistent visibility and others don’t, and what moves your brand from the inconsistent pile to the consistent pile.
That’s not a tracking problem. It’s a confidence problem.
AI systems are confidence engines, not recommendation engines
AI platforms – ChatGPT, Claude, Google AI, Perplexity, Gemini, all of them – generate every response by sampling from a probability distribution shaped by:
What the model knows.
How confidently it knows it.
What it retrieved at the moment of the query.
When the model is highly confident about an entity’s relevance, that entity appears consistently. When the model is uncertain, the entity sits at a low probability weight in the distribution – included in some samples, excluded in others – not because the selection is random but because the AI doesn’t have enough confidence to commit.
That’s the inconsistency Fishkin documented, and I recognized it immediately because I’ve been tracking exactly this pattern since 2015.
City of Hope appearing in 97% of cancer care responses isn’t luck. It’s the result of deep, corroborated, multi-source presence in exactly the data these systems consume.
The headphone brands at 55%-77% are in a middle zone – known, but not unambiguously dominant.
The brands at 5%-10% have low confidence weight, and the AI includes them in some outputs and not others because it lacks the confidence to commit consistently.
Confidence isn’t just about what a brand publishes or how it structures its content. It’s about where that brand stands relative to every other entity competing for the same query – a dimension I’ve recently formalized as Topical Position.
I’ve formalized this phenomenon as “cascading confidence” – the cumulative entity trust that builds or decays through every stage of the algorithmic pipeline, from the moment a bot discovers content to the moment an AI generates a recommendation. It’s the throughline concept in a framework I published this week.
Every piece of content passes through 10 gates before influencing an AI recommendation
The pipeline is called DSCRI-ARGDW – discovered, selected, crawled, rendered, indexed, annotated, recruited, grounded, displayed, and won. That sounds complicated, but I can summarize it in a single question that repeats at every stage: How confident is the system in this content?
Is this URL worth crawling?
Can it be rendered correctly?
What entities and relationships does it contain?
How sure is the system about those annotations?
When the AI needs to answer a question, which annotated content gets pulled from the index?
Confidence at each stage feeds the next. A URL from a well-structured, fast-rendering, semantically clean site arrives at the annotation stage with high accumulated confidence before a single word of content is analyzed. A URL from a slow, JavaScript-heavy site with inconsistent information arrives with low confidence, even if the actual content is excellent.
This is pipeline attenuation, and here’s where the math gets unforgiving. The relationship is multiplicative, not additive:
C_final = C_initial × ∏τᵢ
In plain English, the final confidence an AI system has in your brand equals the initial confidence from your entity home multiplied by the transfer coefficient at every stage of the pipeline. The entity home – the canonical web property that anchors your entity in every knowledge graph and every AI model – sets the starting confidence, and then each stage either preserves or erodes it.
Maintain 90% confidence at each of 10 stages, and end-to-end confidence is 0.9¹⁰ = 35%. At 80% per stage, it’s 0.8¹⁰ = 11%. One weak stage – say 50% at rendering because of heavy JavaScript – drops the total from 35% to 19% even if every other stage is at 90%. One broken stage can undo the work of nine good ones.
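For readers who want to check the arithmetic, here is a short Python sketch of the multiplicative model described above. The stage coefficients are illustrative values taken from the examples in the text, not measured data.

```python
import math

def end_to_end_confidence(initial: float, coefficients: list[float]) -> float:
    """C_final = C_initial multiplied by every per-stage transfer coefficient."""
    return initial * math.prod(coefficients)

# Ten stages at 90% each: end-to-end confidence is about 0.35.
uniform_90 = end_to_end_confidence(1.0, [0.9] * 10)

# Ten stages at 80% each: about 0.11.
uniform_80 = end_to_end_confidence(1.0, [0.8] * 10)

# Nine stages at 90% and one weak stage (say, rendering) at 50%: about 0.19.
one_weak_stage = end_to_end_confidence(1.0, [0.9] * 9 + [0.5])

print(f"{uniform_90:.2f}, {uniform_80:.2f}, {one_weak_stage:.2f}")
# 0.35, 0.11, 0.19 -- one broken stage undoes much of the work of nine good ones.
```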
This multiplicative principle isn’t new, and it doesn’t belong to anyone. In 2019, I published an article, How Google Universal Search Ranking Works: Darwinism in Search, based on a direct explanation from Google’s Gary Illyes. He described how Google calculates ranking “bids” by multiplying individual factor scores rather than adding them. A zero on any factor kills the entire bid, no matter how strong the other factors are.
Google applies this multiplicative model to ranking factors within a single system, and nobody owns multiplication. But what the cascading confidence framework does is apply this principle across the full 10-stage pipeline, across all three knowledge graphs.
The system provides measurable transfer coefficients at every transition and bottleneck detection that identifies exactly where confidence is leaking. The math is universal, but the application to a multi-stage, multi-graph algorithmic pipeline is the invention.
This complete system is the subject of a patent application I filed with the INPI titled “Système et procédé d’optimisation de la confiance en cascade à travers un pipeline de traitement algorithmique multi-étapes et multi-graphes.” It’s not a metaphor, it’s an engineered system with an intellectual lineage going back seven years to a principle a Google engineer confirmed to me in person.
Fishkin measured the output – the inconsistency of recommendation lists. But the output is a symptom, and the cause is confidence loss at specific stages of this pipeline, compounded across multiple knowledge representations.
You can’t fix inconsistency by measuring it more precisely. You can only fix it by building confidence at every stage.
The corroboration threshold is where AI shifts from hesitant to assertive
There’s a specific transition point where AI behavior changes. I call it the “corroboration threshold” – the minimum number of independent, high-confidence sources corroborating the same conclusion about your brand before the AI commits to including it consistently.
Below the threshold, the AI hedges. It says “claims to be” instead of “is,” it includes a brand in some outputs but not others, and the reason isn’t randomness but insufficient confidence.
The brand sits in the low-confidence zone, where inconsistency is the predictable outcome. Above the threshold, the AI asserts – stating relevance as fact, including the brand consistently, operating with the kind of certainty that produces City of Hope’s 97%.
My data across 73 million brand profiles places this threshold at approximately 2-3 independent, high-confidence sources corroborating the same claim as the entity home. That number is deceptively small because “high-confidence” is doing the heavy lifting – these are sources the algorithm already trusts deeply, including Wikipedia, industry databases, and authoritative media.
Without those high-authority anchors, the threshold rises considerably because more sources are needed and each carries less individual weight. The threshold isn’t a one-time gate. Once crossed, the confidence compounds with every subsequent corroboration, which is why brands that cross it early pull further ahead over time, while brands that haven’t crossed it yet face an ever-widening gap.
Corroboration doesn’t require identical wording, but it does require equivalent conviction. The entity home states, “X is the leading authority on Y,” two or three independent, authoritative third-party sources confirm it with their own framing, and the AI encodes it as fact.
This fact is visible in my data, and it explains exactly why Fishkin’s experiment produced the results it did. In narrow categories like LA Volvo dealerships or SaaS cloud computing providers – where few brands exist and corroboration is dense – AI responses showed higher pairwise correlation.
In broad categories like science fiction novels – where thousands of options exist and corroboration is thin – responses were wildly diverse. The corroboration threshold aligns with Fishkin’s findings.
Authoritas proved that fabricated entities can’t fool AI confidence systems
Authoritas published a study in December 2025 – “Can you fake it till you make it in the age of AI?” – that tested this directly, and the results confirm that Cascading Confidence isn’t just theory. Where Fishkin’s research shows the output problem – inconsistent lists – Authoritas shows the input side.
Authoritas investigated a real-world case where a UK company created 11 entirely fictional “experts” – made-up names, AI-generated headshots, faked credentials. They seeded these personas into more than 600 press articles across UK media, and the question was straightforward: Would AI models treat these fake entities as real experts?
The answer was absolute: Across nine AI models and 55 topic-based questions – “Who are the UK’s leading experts in X?” – zero fake experts appeared in any recommendation. Six hundred press articles, and not a single AI recommendation. That might seem to contradict a threshold of 2-3 sources, but it confirms it.
The threshold requires independent, high-confidence sources, and 600 press articles from a single seeding campaign are neither independent – they trace to the same origin – nor high-confidence – press mentions sit in the document graph only.
The AI models looked past the surface-level coverage and found no deep entity signals – no entity home, no knowledge graph presence, no conference history, no professional registration, no corroboration from the kind of authoritative sources that actually move the needle.
The fake personas had volume, they had mentions, but what they lacked was cascading confidence – the accumulated trust that builds through every stage of the pipeline. Volume without confidence means inconsistent appearance at best, while confidence without volume still produces recommendations.
AI evaluates confidence — it doesn’t count mentions. Confidence requires multi-source, multi-graph corroboration that fabricated entities fundamentally can’t build.
AI citability concentration increased 293% in under two months
Authoritas used the weighted citability score, or WCS, a metric that measures how much AI engines trust and cite entities, calculated across ChatGPT, Gemini, and Perplexity using cross-context questions.
I have no influence over their data collection or their results. Fishkin’s methodology and Authoritas’ aren’t identical. Fishkin pinged the same query repeatedly to measure variance, while Authoritas tracks varied queries on the same topic. That said, the directional finding is consistent.
Their dataset includes 143 recognized digital marketing experts, with full snapshots from the original study by Laurence O’Toole and Authoritas in December 2025 and their latest measurement on Feb. 2. The pattern across the entire dataset tells a story that goes far beyond individual scores.
The top 10 experts captured 30.9% of all citability in December. By February, they captured 59.5% – a 92% increase in concentration in under two months.
The HHI, or Herfindahl-Hirschman Index, the standard measure of market concentration, rose from 0.026 to 0.104 – a 293% increase in concentration. This happened while the total expert pool widened from 123 to 143 tracked entities.
More experts are being cited, the field is getting bigger, and the top is pulling away faster. Dominance is compounding while the long tail grows.
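The HHI itself is simple to compute: square each entity’s share of total citability and sum the squares. A quick Python sketch, using made-up shares rather than the Authoritas data, shows how a top-heavy field produces a much higher index than an evenly split one.

```python
def herfindahl_hirschman_index(shares: list[float]) -> float:
    """HHI = sum of squared shares, where shares are fractions summing to 1.
    Higher values mean citability is concentrated in fewer entities."""
    return sum(s ** 2 for s in shares)

# Illustrative only: an evenly split field vs. a top-heavy one.
even_field = [1 / 40] * 40                               # 40 entities, equal shares
top_heavy = [0.15, 0.12, 0.10, 0.08, 0.05] + [0.50 / 45] * 45

print(round(herfindahl_hirschman_index(even_field), 3))   # 0.025
print(round(herfindahl_hirschman_index(top_heavy), 3))    # ~0.061
```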
This is cascading confidence at population scale. The experts who actively manage their digital footprint – clean entity home, corroborated claims, consistent narrative across the algorithmic trinity – aren’t just maintaining their position, they’re accelerating away from everyone else.
Each cycle of AI training and retrieval reinforces their advantage – confident entities generate confident AI outputs, which build user trust, which generate positive engagement signals, which further reinforce the AI’s confidence. It’s a flywheel, and once it’s spinning, it becomes very, very hard for competitors to catch up.
At the individual level, the data confirms the mechanism. I lead the dataset at a WCS of 23.50, up from 21.48 in December, a gain of +2.02. That’s not because I’m more famous than everyone else on the list.
It’s because we’ve been systematically building my cascading confidence for years – clean entity home, corroborated claims across the algorithmic trinity, consistent narrative, structured data, deep knowledge graph presence.
I’m the primary test case because I’m in control of all my variables – I have a huge head start. In a future article, I’ll dig into the details of the scores and why the experts have the scores they do.
The pattern across my client base mirrors the population data. Brands that systematically clean their digital footprint, anchor entity confidence through the entity home, and build corroboration across the algorithmic trinity don’t just appear in AI recommendations.
They appear consistently, their advantage compounds over time, and they exit the low-confidence zone to enter the self-reinforcing recommendation set.
AI retrieves from three knowledge representations simultaneously, not one
AI systems pull from what I call the Three Graphs model – the algorithmic trinity – and understanding this explains why some brands achieve near-universal visibility while others appear sporadically.
The entity graph, or knowledge graph, contains explicit entities with binary verified edges and low fuzziness – either a brand is in, or it’s not.
The document graph, or search engine index, contains annotated URLs with scored and ranked edges and medium fuzziness.
The concept graph, or LLM parametric knowledge, contains learned associations with high fuzziness, and this is where the inconsistency Fishkin documented comes from.
When retrieval systems combine results from multiple sources – and they do, using mechanisms analogous to reciprocal rank fusion – entities present across all three graphs receive a disproportionate boost.
The effect is multiplicative, not additive. A brand that has a strong presence in the knowledge graph and the document index and the concept space gets chosen far more reliably than a brand present in only one.
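Reciprocal rank fusion is a published method for combining ranked lists; whether any given AI system uses it exactly is an assumption here, but it illustrates why presence in several sources compounds. A minimal Python sketch, with hypothetical brand names and graph contents:

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> dict[str, float]:
    """Combine several ranked lists: each list contributes 1 / (k + rank)
    for every entity it contains, so entities present in more lists score higher."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, entity in enumerate(ranking, start=1):
            scores[entity] = scores.get(entity, 0.0) + 1.0 / (k + rank)
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))

# Illustrative retrieval results from three knowledge representations.
entity_graph   = ["BrandA", "BrandC"]
document_graph = ["BrandA", "BrandB", "BrandC"]
concept_graph  = ["BrandA", "BrandB"]

print(reciprocal_rank_fusion([entity_graph, document_graph, concept_graph]))
# BrandA (present in all three graphs) outscores BrandB and BrandC (present in two each).
```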
This explains a pattern Fishkin noticed but didn’t have the framework to interpret – why visibility percentages clustered differently across categories. The brands with near-universal visibility aren’t just “more famous,” they have dense, corroborated presence across all three knowledge representations. The brands in the inconsistent pool are typically present in only one or two.
The Authoritas fake expert study confirms this from the negative side. The fake personas existed only in the document graph, press articles, with zero entity graph presence and negligible concept graph encoding. One graph out of three, and the AI treated them accordingly.
What I tell every brand after reading Fishkin’s data
Fishkin’s recommendations were cautious – visibility percentage is a reasonable metric, ranking position isn’t, and brands should demand transparent methodology from tracking vendors. All fair, but that’s analyst advice. What follows is practitioner advice, based on doing this work in production.
Stop optimizing outputs and start optimizing inputs
The entire AI tracking industry is fixated on measuring what AI says about you, which is like checking your blood pressure without treating the underlying condition. Measure if it helps, but the work is in building confidence at every stage of the pipeline, and that’s where I focus my clients’ attention from day one.
Start at the entity home
My experience clearly demonstrates that this single intervention produces the fastest measurable results. Your entity home is the canonical web property that should anchor your entity in every knowledge graph and every AI model. If it’s ambiguous, hedging, or contradictory with what third-party sources say about you, it is actively training AI to be uncertain.
I’ve seen aligning the entity home with third-party corroboration produce measurable changes in bottom-of-funnel AI citation behavior within weeks, and it remains the highest ROI intervention I know.
Cross the corroboration threshold for the critical claims
I ask every client to identify the claims that matter most:
Who you are.
What you do.
Why you’re credible.
Then, I work with them to ensure each claim is corroborated by at least 2-3 independent, high-authority sources. Not just mentioned, but confirmed with conviction.
This is what flips AI from “sometimes includes” to “reliably includes,” and I’ve seen it happen often enough to know the threshold is real.
Knowledge graph presence (structured data, entity recognition), document graph presence (indexed, well-annotated content on authoritative sites), and concept graph presence (consistent narrative across the corpus AI trains on) all need attention.
The Authoritas study showed exactly what happens when a brand exists in only one – the AI treats it accordingly.
Work the pipeline from Gate 1, not Gate 9
Most SEO and GEO advice operates at the display stage, optimizing what AI shows. But if your content is losing confidence at discovery, selection, rendering, or annotation, it will never reach display consistently enough to matter.
I’ve watched brands spend months on display-stage optimization that produced nothing because the real bottleneck was three stages earlier, and I always start my diagnostic at the beginning of the pipeline, not the end.
Maintain it because the gap is widening
The WCS data across 143 tracked experts shows that AI citability concentration increased 293% in under two months. The experts who maintain their digital footprint are pulling away from everyone else at an accelerating rate.
Starting now still means starting early, but waiting means competing against entities whose advantage compounds every cycle. This isn’t a one-time project. It’s an ongoing discipline, and the returns compound with every iteration.
Fishkin proved the problem exists. The solution has been in production for a decade.
Fishkin’s research is a gift to the industry. He killed the myth of AI ranking position with data, he validated that visibility percentage, while imperfect, correlates with something real, and he raised the right questions about methodology that the AI tracking vendors should have been answering all along.
But tracking AI visibility without understanding why visibility varies is like tracking a stock price without understanding the business. The price is a signal, and the business is the thing.
AI recommendations are inconsistent when AI systems lack confidence in a brand. They become consistent when that confidence is built deliberately, through:
The entity home.
Corroborated claims that cross the corroboration threshold.
Multi-graph presence.
Every stage of the pipeline that processes your content before AI ever generates a response.
This isn’t speculation, and the evidence comes from every direction.
The process behind this approach has been under development since 2015 and is formalized in a peer-review-track academic paper. Several related patent applications have been filed in France, covering entity data structuring, prompt assembly, multi-platform coherence measurement, algorithmic barrier construction, and cascading confidence optimization.
The dataset supporting the work spans 25 billion data points across 73 million brand profiles. In tracked populations, shifts in AI citability have been observed — including cases where the top 10 experts increased their share from 31% to 60% in under two months while the overall field expanded. Independent research from Authoritas reports findings that align with this mechanism.
Fishkin proved the problem exists. My focus over the past decade has been on implementing and refining practical responses to it.
This is the first article in a series. The second piece, “What the AI expert rankings actually tell us: 8 archetypes of AI visibility,” examines how the pipeline’s effects manifest across 57 tracked experts. The third, “The ten gates between your content and an AI recommendation,” opens the DSCRI-ARGDW pipeline itself.
Google Ads is rolling out a beta feature that lets advertisers connect external data sources directly inside conversion action settings, tightening the link between first-party data and campaign measurement.
How it works. A new section in conversion action details — labeled “Get deeper insights about your customers’ behavior to improve measurement” — prompts advertisers to connect external databases to their Google tag.
Supported integrations include platforms like BigQuery and MySQL.
The goal is to enrich conversion metrics and improve performance signals.
The feature appears in a highlighted prompt within data attribution settings.
Rollout is gradual and currently marked as Beta.
Why we care. Direct integrations could reduce friction in syncing offline or backend data with ad measurement. This beta from Google Ads makes it easier to connect first-party data directly to conversion tracking, which can improve measurement accuracy and campaign optimization.
By integrating sources like BigQuery or MySQL, brands can feed richer customer data into their signals, helping offset data loss from privacy changes. In practical terms, better data in means smarter bidding, clearer attribution, and potentially stronger ROI.
Between the lines. Embedding data connections inside conversion settings — rather than requiring separate pipelines — makes advanced measurement more accessible to everyday advertisers, not just enterprise teams.
Zoom out. As ad platforms compete on measurement accuracy, native data integrations are becoming a key differentiator, especially for brands investing heavily in proprietary customer data.
In a perfect world, you could call up a top customer to pick their brain about a piece of content. But in reality, it can be extremely difficult and time-consuming to conduct audience interviews every time you need to create a new topic or refresh an old piece.
A few years ago, content marketing was simpler – keyword intent and quality content was enough to rank at the top of Google’s SERP to get clicks. But in the new era of AI, expectations are different.
Audience research has become critical. However, some companies may not have the resources to perform it.
One way to better understand your target audience is to create a custom GPT in ChatGPT, configured with your persona research. These aren’t replacements for audience research or interviews, but they can help you quickly identify what might be missing or wrong in your content.
Below, I’ll explain how GPTs work so you can use them for audience research.
Perform audience research
As the SEO landscape evolves, audience research is one of your strongest tools for understanding the “why” behind search intent.
Here are several easy-to-use methods and tools to get you started on research.
SparkToro: Search by website, interest, or specific URL to segment different audience types. Research can be in-depth or give an overview of your audience.
Review mining: Create automations through various tools and scrape reviews of your company or competitors to see what users are saying, and then analyze them. What does your target customer like? Why did they like it? What didn’t they like? Why?
Listen to calls/review leads: Listen to sales team interactions with customers to hear questions in real time and what led up to a call with a particular client.
Now that you have all your research and your persona, it’s time to make a GPT.
First, log in to ChatGPT, then go to Explore GPTs in the sidebar.
In the upper right corner, click on Create.
Once there, prompt ChatGPT with your audience research data and persona information. You can paste in screenshots of your data to make it easier.
Once all your data is in and a GPT is created, you can start talking to it. Under the Configure tab, you can use conversation starters to ask it about changes, updates, and copy.
These GPTs, like all AI models, aren’t 100% accurate. They don’t replace a real audience survey or interview, but they can help you quickly identify issues with a piece of content and how it might not connect with your audience.
Here’s an example of an optimized page. GPT “Hank” helped make sure the section above the fold did what was intended.
Hank has said what’s working, what isn’t working, and where to improve.
But should you take his advice 100% of the time? Of course not.
Still, the GPT helps quickly identify issues you may have missed. That’s where the real benefit of using a GPT comes in.
Nothing analyzed or generated by AI is conclusive evidence. If you’re unsure your GPT is giving you accurate information, double-check by prompting it to provide evidence from the sources you gave it.
The GPT can correct itself if the information sounds off. When it does, again ask for evidence from the persona information you provided to double-check the new information.
Update your persona-based GPT
You can always add more information to your GPT to make it more robust.
To do this, go back to Explore GPTs in ChatGPT.
Instead of Create, go to My GPTs in the top right-hand corner.
Click on your persona.
Click on Configure to update, add, or delete your current information.
Remember that a persona is never one-and-done. The more you learn about your audience, and the more of that information you feed your GPT, the more accurate and up to date it stays.
Leverage persona GPTs for SEO content
Personas aren’t absolute, and AI can hallucinate.
But both tools can still help you optimize content.
Once you’re comfortable creating personas, you can build them for your general audience, specific segments, and individual campaigns.
SEO and marketing are always changing, and you can’t just set it and forget it. As you gain audience insights or if audience intent shifts, update information or delete anything no longer relevant in your GPT.
When leveraged correctly, these tools can work with SEO to drive traffic and gain more conversions.
Some advertisers are reporting that a Google Ads system tool designed for low-activity bulk changes is automatically enabling paused keywords — a behavior many account managers say they haven’t seen before.
What advertisers are seeing. Activity logs show entries tied to Google’s “Low activity system bulk changes” tool that include actions enabling previously paused keywords. The log entries appear as automated bulk updates, with a visible “Undo” option.
Historically, the tool has been associated mainly with pausing inactive elements, not reactivating them.
What we don’t know. Google hasn’t publicly documented the behavior or clarified whether this is an intentional feature, a limited experiment, or a bug.
It’s also unclear what triggers the reactivation or how broadly the behavior is rolling out.
Why we care. Unexpected keyword reactivation can quietly alter campaign delivery, affecting budgets, pacing, and performance — especially in tightly controlled accounts where paused keywords are intentional.
For agencies and in-house teams, the change raises new concerns about automation overriding manual controls.
What advertisers should do now. Account managers may want to review change histories regularly, watch for unexpected keyword activations, and use undo functions quickly if unintended changes appear.
Until Google provides clarification, closer monitoring may be necessary for accounts relying heavily on paused keyword structures.
First seen. The issue was first flagged by performance marketing consultant Francesco Cifardi on LinkedIn.
Hiring an SEO agency can be a game-changer for brands looking to outshine the competition in search results.
That said, an SEO agency is only as good as its partnership with its clients. A strong partnership is where SEO’s true value is realized.
What this looks like practically is working together towards shared goals and keeping momentum high. Sometimes that’s easier said than done.
Here’s what you can do to ensure you get the most from your SEO agency partnership.
Because when you’re aligned, you make progress faster and, in turn, can better prove ROI.
Align SEO with what moves the business
Your company sets the business goals, and SEO’s job is to get the traffic to help you reach them.
The more you align on goals with your agency, the more effective an SEO program will be.
Before any campaign is launched, the business and the agency need to discuss how to align SEO with your business goals.
This meeting is even more effective when you can get cross-departmental stakeholders to weigh in.
Objectives vary widely – market expansion, revenue growth, building brand authority, enhancing the customer experience, and more. When executed well, SEO can support nearly any business goal.
This is also an excellent time to facilitate SEO training across teams.
When departments are aligned on the foundational concepts of SEO, they can understand SEO’s function and their role in it.
What does a productive kickoff meeting look like? Here are some things that are important to cover:
Your pain points: Even if you already discussed your SEO pain points during the sales call, it’s important that your SEO team hears it directly from you and has an opportunity to ask questions.
The ins and outs of your business: Help the SEO team understand your business as best you can. You know your industry better than anyone, and the more the agency knows, the better your SEO program will be.
The program’s scope: Make sure you understand the scope and everyone’s role in the project. For example, how long is each phase of the project? Who is responsible on the agency side for which tasks? Who will move things forward at the client company?
In-house capabilities: Update your SEO team on your current capabilities and resources, such as how many writers, developers or designers you have available for tackling tasks.
Common roadblocks: Discuss how to prepare for common roadblocks in SEO implementation. Your SEO agency is well-positioned to speak about these kinds of things, so you can be proactive on your end.
Communication methods: You will want to know how to communicate with the agency (emails, Slack channels, Zoom meetings, etc.) and how often. The more communication, the better. Both parties benefit from staying top of mind; the last thing you want is to sign a contract and then things go dark.
Reporting methods: Find out how the agency will report progress. Is it monthly? Quarterly? In what formats will the reports be delivered? Will the reporting structure meet your needs to show ROI to stakeholders?
Setting all these expectations early creates accountability that keeps the project moving and makes it easier to measure success later.
If needed: Shift your mindset from ‘SEO vendor’ to expert partner
If you’ve put in the research, vetted several agencies, and hired the best one, then there’s a bit of mindset work that may need to happen next to make the relationship as strong as it can be.
While blind trust isn’t the goal, be prepared to receive and act on your SEO agency’s advice – after all, that’s why you hired them.
Give your agency the visibility it needs to perform
This is relatively simple.
Giving your SEO agency full visibility into historical and real-time performance sets your SEO team up for success on day one.
Set up a protocol for agency access to:
Google Search Console, GA4, Bing Webmaster Tools, your CMS, and relevant third-party analytics or reporting tools.
CRM data or lead-quality feedback to help the agency align SEO efforts with revenue goals.
Any context on past search performance, campaign history, and prior SEO initiatives.
Revenue or operational data as needed, so you have another way to corroborate SEO performance.
Finally, ensure agency access is built into onboarding for any new tools and systems you adopt that may impact SEO.
If you’ve done a good job of including the department leaders in the SEO planning phase, then you will likely get more done.
To remain accountable, it’s advisable that all necessary team members attend key meetings with your SEO agency.
The more they hear things firsthand, the smoother the implementation will go.
However, even the best plans can go awry when teams with competing priorities collide. This might be a question of culture, but that doesn’t mean you can’t make progress.
You might look into solutions like:
Cross-departmental team-building activities.
Open communication about the purpose and goals of the SEO project.
Feedback loops to review the status of your interdepartmental collaboration and identify ways to improve.
The more streamlined and responsive the collaboration, the faster your SEO efforts can gain traction.
Create SEO content that’s powered by brand knowledge
Your agency brings deep expertise in SEO, but only you know your brand, offerings, and customers on a deep level.
This is why collaboration on content is necessary to create truly relevant, helpful content that search engines will rank.
Rather than relegating SEO content to the agency with minimal involvement, commit to being an active partner in the process.
This can include the following action items:
Align on voice, brand, and messaging early
Sharing brand guidelines, tone of voice, and existing messaging frameworks are all helpful.
Your agency should work as an extension of your marketing team, and that starts with having some guidelines for how the brand communicates.
Transfer institutional knowledge
Nothing can get a content team up to speed quicker than reviewing existing content assets or plugging into an internal calendar.
Here are some ways to transfer knowledge with your SEO agency:
Provide access to internal resources like product documentation, customer FAQs, or sales enablement content.
Keep the team updated on relevant marketing and sales activities, such as events and promotions.
Give real-time access to an editorial calendar to help plan for future content for the site, whether it’s editing existing pages or creating new ones.
Bring subject matter experts into the process
Identify in-house subject matter experts who can provide input or be interviewed as needed for the content.
You can’t satisfy the “experience” or “expertise” aspect of Google’s E-E-A-T framework for quality content without some firsthand knowledge.
Collaborate before content is written
Work together on outlines or briefs to align on structure and intent before drafting begins.
Content is much stronger when in-house and agency teams are aligned before the content creation process starts.
Review for relevance
Review drafts not just for accuracy, but for alignment with your customers’ needs and expectations.
The most important thing in SEO content is ensuring it’s relevant to your customers.
The best content will align with your brand and your customers and sound like it came from your company.
Strong SEO content comes from brands that bring the knowledge only they can provide.
One of the biggest bottlenecks to SEO progress is waiting. Waiting for approvals, feedback, access, answers – all of these slow your ability to compete in the search results.
For a more streamlined process, approve all deliverables and tasks promptly.
Committing to moving the project forward may mean setting hard deadlines for turnaround times.
To make this step more efficient, you can look into:
Analyzing the approval workflow and identifying any bottlenecks upfront.
Eliminating unnecessary approval steps or people to simplify the process.
Establishing clear review/approval guidelines upfront to reduce confusion, which can slow the approval process.
Using tools that make the process smoother (like Slack or similar platforms) so people can collaborate with ease.
Leaning on your SEO agency to prioritize which tasks will yield the highest return and go from there.
In SEO, speed is often a competitive advantage. Streamlining the approval process is one way to keep momentum.
Prioritize implementation above all else
Don’t be surprised by the results you didn’t get from the work you didn’t do.
Often, there are SEO tasks that need to be implemented on your end and require resources.
Not implementing SEO agency recommendations is a prevalent challenge, and probably one of the biggest reasons clients end up leaving an agency.
The sole purpose of spending anything on marketing is to bring more money in than went out.
When internal teams stall on implementing SEO tasks, it can halt the agency’s momentum, hinder your search progress, and waste the company’s budget.
Technical SEO execution is often where SEO projects lose the most momentum.
This is where bringing in IT and dev teams early on in the SEO process is invaluable. When they understand why a change matters and how it impacts performance, they are more likely to get on board.
Regardless, you can still build in some guardrails to be proactive:
Prioritize SEO tickets in sprint planning.
Involve IT or dev early in discussions that include technical implementation.
Allow direct communication between your agency and development team to speed up resolution.
Create a process for flagging and tracking outstanding technical tasks.
Take the time to make the updates that your SEO team says are worth the effort.
If the task is difficult or time-consuming but will have a big impact, do everything you can to get it done.
Doing what your competitors are unwilling or unable to do is how you win.
The technical foundation is what enables SEO to scale. The faster you can clear any roadblocks here, the sooner your investment starts delivering results.
SEO is a long game. It’s only natural that excitement and momentum are high early on, but after a while, engagement can taper off.
Client-agency partnerships often find their groove over time. The brand trusts the agency, and the agency knows what to do.
But from this place, cracks can begin to form. Maybe you’re not communicating as much, and that lack of communication can lead to gaps in knowledge – on both sides.
Here are some tips to help you stay just as engaged on Day 365 as you were on Day 1.
Be present in reviews and check-ins
Be sure to attend regular check-ins and reporting calls to review progress and surface questions.
It’s easy to miss calls when things pull you away, but staying committed to SEO means showing up, even if it’s just for 15 minutes.
Keep SEO connected to business changes
Share updates on business priorities, product launches, or changes in market strategy.
It’s important that SEO remains at the strategy table so it can adjust as needed.
Use performance data to drive conversations and decisions
Trust in your SEO agency also comes with accountability, so hold your SEO team accountable.
If you see a decline in rankings, traffic, or revenue from search, start that conversation if the agency hasn’t already brought it up.
Use these performance insights to adjust tactics, reallocate resources, or explore new opportunities.
Strong SEO results start with strong partnerships
SEO works best when both sides do their part.
The more aligned and collaborative you are with your SEO agency, the faster your SEO program can gain traction and deliver value.
When both sides bring their expertise, stay engaged, and remove friction from the process, SEO becomes a strategic business initiative.
On episode 341 of PPC Live The Podcast, I speak to Andrea Cruz, Head of B2B at Tinuiti, to unpack a mistake many senior marketers quietly struggle with: freezing when clients demand answers you don’t immediately have.
The conversation explored how communication missteps can escalate client tension — and how the right mindset, preparation, and culture can turn those moments into career-defining growth.
From hands-on marketer to team leader
As Cruz advanced in her career, she shifted from managing campaigns directly to leading teams running large, complex accounts. That transition introduced a new challenge: representing work she didn’t personally execute day to day.
When clients pushed back — questioning performance or expectations — Cruz sometimes froze. Saying “I don’t know” or delaying a response could quickly erode trust and escalate frustration.
Her key realization: senior leaders are expected to provide perspective in the moment. Even without every detail, they must guide the conversation confidently.
How to buy time without losing trust
Through mentorship and experience, Cruz developed a practical technique: asking clarifying questions to gain thinking time while deepening understanding.
Examples include:
Asking clients to clarify expectations or timelines
Requesting additional context around their concerns
Confirming what the client already knows about the situation
These questions serve two purposes: they slow down emotionally charged moments and ensure responses address the real issue, not just the surface complaint.
For Cruz, this approach was especially important as a non-native English speaker, giving her space to process complex conversations and respond clearly.
A solutions-first culture beats blame
Cruz emphasized that mistakes are inevitable — but how teams respond defines long-term success.
At Tinuiti, the focus is not on assigning blame but on answering two questions:
Where are we now?
How do we get to where we want to be?
This solutions-oriented mindset creates psychological safety. Teams can openly acknowledge errors, run post-mortems, and identify patterns without fear. Cruz argues that leaders must model this behavior by sharing their own mistakes, not just scrutinizing others’.
That transparency builds trust internally and with clients.
Proactive communication builds stronger client relationships
Rather than waiting for clients to surface problems, Cruz encourages teams to raise issues first. Acknowledging underperformance — even when clients haven’t noticed — demonstrates accountability and strengthens partnerships.
She also recommends tailoring communication styles to each client. Some prefer concise updates; others want detailed explanations. Documenting these preferences helps teams deliver information in ways that resonate.
Regular check-ins about business roadblocks — not just campaign metrics — position agencies as strategic partners, not just media operators.
Common agency mistakes in B2B advertising
Cruz didn’t hold back on recurring issues she sees in audits:
Budgets spread too thin: Running too many channels with insufficient spend leads to meaningless data and weak performance.
Underfunded campaigns: B2B CPCs are inherently high. Campaigns generating only a few clicks per day rarely produce actionable results.
Her advice is blunt: if the budget can’t support a channel properly, it’s better not to run it.
AI is more than a summarization tool
On AI, Cruz cautioned against shallow usage. Treating AI as a simple spreadsheet summarizer misses its broader potential.
Her team is experimenting with advanced applications — automated audits, workflow integrations, and operational efficiencies. She compares AI’s role to medical diagnostics: a powerful assistant that augments expert judgment, not a replacement for it.
For marketers, that means staying curious and continuously exploring new use cases.
The takeaway: preparation and passion drive resilience
Cruz’s central message is simple: mistakes will happen. What matters is preparation, adaptability, and maintaining a solutions-first mindset.
By anticipating client needs, personalizing communication, and embracing experimentation, marketers can transform stressful moments into opportunities to build credibility.
Looking to take the next step in your search marketing career?
Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.
This Role in One Sentence Turn competitor backlink lists into clean outreach campaigns—find contacts, send smart messages, track replies tightly, and keep decisions fast. Is This You? You like tidy spreadsheets and clear notes more than big talk. You follow up without getting weird about it. You notice patterns fast (and write them down). You […]
Position Overview: Alan Gray LLC is seeking a strategic, research-driven professional to join our company as a Research & Insights Strategist, reporting directly to the COO. In this role you will drive industry intelligence, thought leadership and competitive positioning across the insurance and reinsurance landscape, helping transform these insights into actionable go-to-market activities. The ideal […]
The Company Renewal by Andersen is the replacement division of the 120-year-old Andersen Corporation. Andersen is the oldest and largest window and door manufacturer in North America. We focus on doing one thing, and doing it well, building the best products in the industry. We build the only unique window offering available in […]
Who We Are DCG is an award-winning, full-service engagement, digital, research, and data company with over 15 years of experience supporting the military, Veterans, and the American public. DCG strategically researches, plans, executes, and evaluates large-scale, multi-platform outreach initiatives across a wide range of mission-driven issues including human trafficking awareness, mental health stigma reduction, suicide […]
Job Description Principal Digital Strategist Location: Austin, TX (Hybrid. In office Monday, Wednesday, Friday) Reports To: Chief Digital Officer Team: Digital Strategy The Opportunity Intellibright is entering its next phase of growth. Our clients are larger. Their expectations are higher. And the work is no longer about managing channels or reporting metrics. It is about […]
Job Description Ingram Content Group (ICG) is searching for a Manager, Online Sales & Marketing to join our team in New York. In this role, you will lead metadata optimization and marketing initiatives and best practices for Ingram Publishing Services (IPS) publishers. You will also lead the marketing strategy for IndiePubs.com, Ingram’s direct to consumer e-commerce platform. […]
Job Description DeepSee delivers an open and flexible agentic platform to accelerate AI adoption for financial services in front, middle, and back-office operations. Our cloud-based platform seamlessly integrates with existing bank architectures, whether they’re just starting their AI transformation journey or looking to enhance existing in-house capabilities with Agentic AI solutions. With DeepSee’s pre-trained & […]
Digital Marketing Manager The Digital Marketing Manager will be expected to lead a team that effectively crafts and implements digital marketing initiatives including search marketing, social media, email marketing and lead management for clients in a variety of industries. Candidates should expect to be engaged in managing multiple team members, clients and simultaneous projects, assisting […]
About Us HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. We are proud to support a global and growing community of over 2 million businesses, from marketing agencies to entrepreneurs to small businesses and beyond. Our platform empowers […]
Omniscient Digital is an organic growth agency that partners with ambitious B2B SaaS companies like SAP, Adobe, Loom, and Hotjar to turn SEO and content into growth engines. About this role We’re hiring an SEO Outreach Specialist to partner with high-authority brands and build high-quality backlinks to support our clients’ growth and authority. You will […]
About GenScript GenScript Biotech Corporation (Stock Code: 1548.HK) is a global biotechnology group. Founded in 2002, GenScript has an established global presence across North America, Europe, Greater China, and Asia Pacific. GenScript’s businesses encompass four major categories based on its leading gene synthesis technology, including operation as a Life Science CRO, enzyme and synthetic […]
Senior PPC Manager Wilshire Law Firm is a distinguished, award-winning legal practice with over 18 years of experience, specializing in Personal Injury, Employee Rights, and Consumer Class Action lawsuits. We are dedicated to upholding the highest standards of Excellence and Justice and are united in our commitment to achieve the best outcome for our clients. […]
Job Description Job Description This is a remote position. Marketing Manager – Social Media, Email, SEM/SEO Type: Full-Time, Exempt Location: Remote (U.S.-based) Salary: $90,000 – $110,000 annually plus benefits Application Window: Open until filled About Outdoor Afro Outdoor Afro celebrates and inspires Black connections and leadership in nature. Our network connects Black people with our […]
Description The Performance Marketing Project Manager supports Mad Fish Digital’s growing client portfolio, managing paid media, SEO, and other performance marketing projects. This role blends strong project management leadership with a deep understanding of digital marketing channels & strategy. You will ensure high-volume, complex, and fast-moving workstreams are delivered efficiently, on time, within scope, and […]
Job Description Maxwood Furniture, a rapidly growing furniture company with over two decades of success, is home to an expanding portfolio of brands, including Max & Lily, Plank + Beam, Maxtrix, and more. With thriving direct-to-consumer (DTC) websites, we’re seeking a Google Ads Strategist to join our e-Commerce team. If you’re passionate about driving high-impact […]
You’ll own high-visibility SEO and AI initiatives, architect strategies that drive explosive organic and social visibility, and push the boundaries of what’s possible with search-powered performance.
Every day, you’ll experiment, analyze, and optimize, elevating rankings, boosting conversions across the customer journey, and delivering insights that influence decisions at the highest level.
Cloudflare yesterday announced its new Markdown for Agents feature, which serves machine-friendly versions of web content alongside traditional human-facing pages.
Cloudflare described the update as a response to the rise of AI crawlers and agentic browsing.
When a client requests text/markdown, Cloudflare fetches the HTML from the origin server, converts it at the edge, and returns a Markdown version.
The response also includes a token estimate header intended to help developers manage context windows.
Early reactions focused on the efficiency gains, as well as the broader implications of serving alternate representations of web content.
What’s happening. Cloudflare, which powers roughly 20% of the web, said Markdown for Agents uses standard HTTP content negotiation. If a client sends an Accept: text/markdown header, Cloudflare converts the HTML response on the fly and returns Markdown. The response includes Vary: accept, so caches store separate variants.
Cloudflare positioned the opt-in feature as part of a shift in how content is discovered and consumed, with AI crawlers and agents benefiting from structured, lower-overhead text.
Markdown can cut token usage by up to 80% compared to HTML, Cloudflare said.
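For readers who want to see the mechanics, here is a minimal sketch of the content negotiation from the client side, in Python. The URL is a placeholder, and the token-estimate header name is an assumption, since the announcement only says such a header is included:

```python
# Minimal sketch of requesting the Markdown variant of a page behind Cloudflare
# with Markdown for Agents enabled. The URL is a placeholder and the
# token-estimate header name is an assumption, not a documented value.
import requests

url = "https://example.com/article"  # placeholder

# Standard HTTP content negotiation: ask for text/markdown instead of HTML.
resp = requests.get(url, headers={"Accept": "text/markdown"})

print(resp.headers.get("Content-Type"))  # should indicate markdown if the feature is active
print(resp.headers.get("Vary"))          # per Cloudflare, includes "accept" so caches store variants
print(resp.headers.get("X-Token-Estimate", "no token-estimate header found"))  # hypothetical header name
print(resp.text[:300])                   # beginning of the converted Markdown
```

If a site doesn’t have the feature enabled, the same request should simply fall back to the normal HTML response.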
Security concern. SEO consultant David McSweeney said Cloudflare’s Markdown for Agents feature could make AI cloaking trivial because the Accept: text/markdown header is forwarded to origin servers, effectively signaling that the request is from an AI agent.
A standard request returns normal content, while a Markdown request can trigger a different HTML response that Cloudflare then converts and delivers to the AI, McSweeney showed on LinkedIn.
The concern: sites could inject hidden instructions, altered product data, or other machine-only content, creating a “shadow web” for bots unless the header is stripped before reaching the origin.
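To illustrate why that matters, here is a minimal, hypothetical Flask sketch of an origin server that branches on the forwarded Accept header, the behavior McSweeney warns about. It is shown only to clarify the risk, not as a recommended practice; the route and content are invented:

```python
# Hypothetical origin behavior illustrating the cloaking concern: because the
# Accept: text/markdown header reaches the origin, a site could return
# machine-only content that no human visitor ever sees.
from flask import Flask, request

app = Flask(__name__)

@app.route("/product")
def product():
    if "text/markdown" in request.headers.get("Accept", ""):
        # Served only to AI-agent requests; Cloudflare would convert it to Markdown.
        return "<h1>Widget</h1><p>Top-rated. Now $9.99.</p>"
    # Human-facing version.
    return "<h1>Widget</h1><p>$49.99. See independent reviews below.</p>"

if __name__ == "__main__":
    app.run()
```

Stripping or normalizing the header before it reaches the origin, as the criticism suggests, would close that gap.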
Google and Bing’s markdown smackdown. Recent comments from Google and Microsoft representatives discourage publishers from creating separate markdown pages for large language models. Google’s John Mueller said:
“In my POV, LLMs have trained on – read & parsed – normal web pages since the beginning, it seems a given that they have no problems dealing with HTML. Why would they want to see a page that no user sees? And, if they check for equivalence, why not use HTML?”
And Microsoft’s Fabrice Canel said:
“Really want to double crawl load? We’ll crawl anyway to check similarity. Non-user versions (crawlable AJAX and like) are often neglected, broken. Humans eyes help fixing people and bot-viewed content. We like Schema in pages. AI makes us great at understanding web pages. Less is more in SEO !”
Cloudflare’s feature doesn’t create a second URL. However, it generates different representations based on request headers.
The case against markdown. Technical SEO consultant Jono Alderson said that once a machine-specific representation exists, platforms must decide whether to trust it, verify it against the human-facing version, or ignore it:
“When you flatten a page into markdown, you don’t just remove clutter. You remove judgment, and you remove context.”
“The moment you publish a machine-only representation of a page, you’ve created a second candidate version of reality. It doesn’t matter if you promise it’s generated from the same source or swear that it’s ‘the same content’. From the outside, a system now sees two representations and has to decide which one actually reflects the page.”
Why we care. Cloudflare’s move could make AI ingestion cheaper and cleaner. But could it be considered cloaking if you’re serving different content to humans and crawlers? To be continued…
Google Ads is rolling out a feature that lets advertisers calculate conversion value for new customers based on a target return on ad spend (ROAS), automatically generating a suggested value instead of relying on manual estimates.
The update is designed for campaigns using new customer acquisition goals, where advertisers want to bid more aggressively to attract first-time buyers.
How it works. Advertisers enter their desired ROAS target for new customers, and Google Ads proposes a conversion value aligned with that goal. The system removes some of the guesswork involved in estimating how much a new customer should be worth in bidding models.
The feature doesn’t yet adjust dynamically at the auction, campaign, or product level. Advertisers still apply the value as a single, broader setting rather than letting the system vary bids based on context.
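The announcement doesn’t spell out the math behind the suggested value, but the underlying relationship is simple arithmetic: ROAS is conversion value divided by ad spend. A back-of-envelope sketch, offered as an illustration only and not as Google’s actual calculation:

```python
# Back-of-envelope illustration only -- not Google's actual calculation.
# Since ROAS = conversion value / ad spend, the value you assign a new-customer
# conversion determines how much the bidding system can spend to win one
# while still hitting the target.
def new_customer_value(target_roas: float, max_cost_per_new_customer: float) -> float:
    """Conversion value that keeps a new-customer acquisition on target."""
    return target_roas * max_cost_per_new_customer

# Example: a 400% ROAS target and a willingness to pay up to $50 per new customer.
print(new_customer_value(4.0, 50.0))  # 200.0
```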
Why we care. Assigning the right value to a new customer is a weak spot in performance bidding. Many advertisers manually set a flat value that doesn’t always reflect profitability or long-term goals.
By tying suggested conversion values to a target ROAS, advertisers can now optimize toward more strategy-driven bidding, potentially improving how acquisition campaigns balance growth and efficiency.
What advertisers are saying. Early reactions suggest the feature is a meaningful improvement over static manual inputs. Andrew Lolk, founder of Savvy Revenue, argues the next step would be auction-level intelligence that adjusts values depending on campaign or product performance.
What to watch. If Google expands the feature to support more granular adjustments, it could further reshape how advertisers structure acquisition strategies and value lifetime customer growth.
For now, the tool offers a more structured way to calculate new customer value.
First seen. Andrew Lolk first spotted this update and shared the new setting on LinkedIn.
SEO is moving out of the marketing silo into organizational design. Visibility now depends on how information is structured, validated, and aligned across the business.
When information is fragmented or contradictory, visibility becomes unstable. The risk isn’t just ranking volatility – it’s losing control of how your brand is interpreted and cited.
For SEO leaders, the choice is unavoidable: remain a channel optimizer or shape the systems that govern how your organization is understood and cited. That shift isn’t happening in a vacuum. AI systems now interpret, reconcile, and assemble information at scale.
The visibility shift beyond rankings
The future of organic search will be shaped by LLMs alongside traditional algorithms. Optimizing for rankings alone is no longer enough. Brands must optimize for how they are interpreted, cited, and synthesized across AI systems.
Clicks may fluctuate and traffic patterns may shift, but the larger change is this: visibility is becoming an interpretation problem, not just a positioning problem. AI systems assemble answers from structured data, brand narratives, third-party mentions, and product signals. When those inputs conflict, inconsistency becomes the output.
In the AI era, collaboration can’t be informal or personality-driven. LLMs reflect the clarity, consistency, and structure of the information they ingest. When messaging, entity signals, or product data are fragmented, visibility fragments with them.
This is a leadership challenge. Visibility can’t be achieved in a silo. It requires redesigning the systems that govern how information is created, validated, and distributed across the organization. That’s how visibility becomes structural, not situational.
If visibility is structural, it needs a system.
Building the visibility supply chain
Collaboration shouldn’t depend on whether the SEO manager and PR manager get along. It must be built into the content supply chain.
To move from a marketing silo to an operational design, we must treat content like an industrial product that requires specific refinement before it’s released into the ecosystem.
This is where visibility gates come in: a series of nonnegotiable checkpoints that filter brand data for machine consumption.
Implementing visibility gates
Think of your content moving through a high-pressure pipe. At each joint, a gate filters out noise and ensures the output is pure:
The technical gate (parsing)
The filter: Does the new product page template use valid schema.org markup (product, FAQ, review)?
The goal: Ensuring the raw material is structured so LLMs can ingest the data without friction. (A minimal sketch of this check follows the full list of gates below.)
The brand signal gate (clustering)
The filter: Does the PR copy align with our core entities? Are we using terminology that helps LLMs cluster our brand correctly?
The goal: Removing linguistic drift that confuses an LLM’s understanding of who we are.
The accessibility/readability gate (chunking)
The filter: Is the content structured for RAG (retrieval-augmented generation) systems?
The goal: Moving away from fluff and towards high-information-density prose that can be easily chunked and retrieved by an AI.
The authority and de-duplication gate (governance)
The filter: Does this asset create “knowledge cannibalization” or internal noise?
The goal: Acting as a final sieve to remove conflicting information, ensuring the LLM sees only one single source of truth.
The localization gate (verification)
The filter: Is the entity information consistent across global regions?
The goal: Ensuring cross-referenced data points align perfectly to build model trust.
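To make the technical gate above concrete, here is a minimal sketch of an automated checkpoint that could run before a product page template ships. The required-field list is a simplified assumption, not the full schema.org Product specification:

```python
# Minimal "technical gate" sketch: does a template's JSON-LD block declare a
# Product with the fields we treat as non-negotiable? The required-field set
# below is a simplified assumption, not the full schema.org specification.
import json

REQUIRED_PRODUCT_FIELDS = {"name", "description", "offers", "aggregateRating"}

def passes_technical_gate(json_ld: str) -> bool:
    try:
        data = json.loads(json_ld)
    except json.JSONDecodeError:
        return False
    if data.get("@type") != "Product":
        return False
    return REQUIRED_PRODUCT_FIELDS.issubset(data.keys())

example = json.dumps({
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "description": "Placeholder product used to illustrate the gate.",
    "offers": {"@type": "Offer", "price": "49.99", "priceCurrency": "USD"},
    "aggregateRating": {"@type": "AggregateRating", "ratingValue": "4.6", "reviewCount": "128"},
})
print(passes_technical_gate(example))  # True
```

In practice, a check like this could run as a CMS publishing hook or a CI step, which is also where the other gates can live.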
If gates protect what enters the ecosystem, accountability ensures that behavior changes.
Embedding visibility into cross-functional OKRs
But alignment without visibility into results won’t sustain change.
The most sophisticated infrastructure will fail if it relies on the SEO team’s influence alone.
To move beyond polite collaboration, visibility must be codified into the organization’s performance DNA.
We need to shift from SEO-specific goals to shared visibility OKRs.
When a product owner is measured on the machine-readability of a new feature, or a PR lead is incentivized by entity citation growth, SEO requirements suddenly migrate from the bottom of the backlog to the top of the sprint.
What shared OKRs look like in an operational design:
For product teams: “Achieve 100% schema validation and <100ms time-to-first-byte for all top-tier entity pages.”
For PR and communications: “Increase ‘brand-as-a-source’ citations in LLM responses by 15% through high-authority, entity-aligned placements.”
For content teams: “Ensure 90% of new assets meet the ‘high information density’ threshold for RAG retrieval.”
When stakeholders’ KPIs are tied to the brand’s digital footprint, visibility is no longer “the SEO team’s job.” Instead, it becomes a collective business imperative.
This is where the magic happens: the organizational structure finally aligns with the way modern search engines actually work.
Measuring visibility across the organization
The gates ensure the quality of what we put into the digital ecosystem; the unified visibility dashboard measures what we get out. Breaking down silos starts with transparent data.
If the PR team can see which mentions drive AI citations and source links in AI Overviews, they’re more likely to shift toward high-authority, contextually relevant publications instead of chasing any media outlet.
We need to shift from reporting rankings to reporting entity health and Share of Model (SoM). This dashboard is the organization’s single source of truth, showing that when we pass the visibility gates correctly, our brand authority grows with humans and machines.
Systems and incentives matter, but they don’t operate on their own.
Having the right infrastructure isn’t enough. We need a specific set of qualities in the workforce to drive this model. To navigate the visibility transformation, we need to move away from hiring generalists and start hiring for the two distinct pillars of an operational search strategy.
In my experience, this requires a strategic duo: the hacker and the convincer.
The hacker (technical architect) vs. the convincer (visibility advocate):
Core mission: The hacker ensures the brand is discoverable by machines; the convincer ensures the brand is supported by humans.
Primary domain: The hacker owns RAG architecture, schema, vector databases, and LLM testing; the convincer owns cross-departmental OKRs, C-suite buy-in, and PR/brand alignment.
Success metric: The hacker is measured on share of model (SoM) and information density; the convincer on resource allocation and budget growth.
Gate focus: The hacker covers the technical, accessibility, and authority gates; the convincer covers the brand signal and localization gates.
The hacker: The engine room
Deeply technical, driven, and a relentless early adopter. They don’t just “do SEO.” They reverse-engineer how Perplexity attributes trust and how Google’s knowledge vault weighs brand entities.
They find the “how.” They aren’t just optimizing for a search bar, but are optimizing for agentic discovery, ensuring your brand is the path of least resistance for an LLM’s reasoning engine.
The convincer: The social butterfly of data
This is the visionary who brings people together and talks the language of business results. They act as the social glue, ensuring the hacker’s technical insights are actually implemented by the brand, tech, and PR teams. They translate schema validation into executive visibility, ensuring that the budget flows where it’s needed most.
How AI visibility reshapes in-house and agency roles
As roles evolve, the brand-agency relationship shifts with them. If you’re an in-house SEO manager today, you’re likely evolving into a chief visibility officer, focusing on the “convincer” role of internal politics and resource allocation.
Historically, agencies were the training ground for talent, and brands hired them for execution. That dynamic may flip. In this new era, brands could become training grounds for junior specialists who need to understand a single entity deeply and manage its internal gates.
Meanwhile, agencies may evolve into elite strategic partners staffed by seasoned visibility hackers who help brands navigate high-level visibility transformation that in-house teams are often too siloed or time-constrained to see.
To prepare your team for the shift to SEO as an operational approach, take these steps:
Set the vision: Do you want to be part of the change? Define what visibility-first looks like for your business.
Take stock of talent: Do you have hackers and convincers? Audit your team not just for skills, but for mindset.
Audit the gaps: Where does communication break down? Find friction points between SEO and PR, or SEO and product, and fix them quickly.
Shift the KPIs: Move away from rankings and toward channel authority, impressions, sentiment share, and, most importantly, revenue and leads.
Be radically transparent: Clarity is key. You’ll need new templates, job descriptions, and responsibilities. Data should be shared in real time. There’s no room for siloed thinking.
What the first 90 days should look like:
Days 1-30 (Audit): Map your brand’s entity footprint. Where does your brand data live, and where is it conflicting?
Days 31-60 (Infrastructure): Embed visibility gates into your CMS or project management tool, such as Jira or Asana.
Days 61-90 (Incentives): Tie 10% of the PR and product teams’ bonuses to information integrity or AI citation growth.
The SEO leader as a systems architect
As we move further into the age of AI, the successful SEO leader will no longer be the person who simply moves a page from position four to position one. They’ll be the systems architect who builds the infrastructure that allows a brand to be seen, understood, and recommended by machines and humans alike.
This transition is messy. It requires challenging old thought patterns and communicating transparently and directly to secure buy-in. But by redesigning the structures that create silos, we don’t just “do SEO.” We build a resilient organization that is visible by default, regardless of what the next algorithm or LLM brings.
The future of search isn’t just about keywords. It’s about how your organization’s information flows through the digital ecosystem. It’s time to stop optimizing pages and start optimizing organizations.
For a long time, PPC performance conversations inside agencies have centered on bidding – manual versus automated, Target CPA versus Maximize Conversions, incrementality debates, budget pacing and efficiency thresholds.
But in 2026, that focus is increasingly misplaced. Across Google Ads, Meta Ads, and other major platforms, bidding has largely been solved by automation.
What’s now holding performance back in most accounts isn’t how bids are set, but the quality, volume, and diversity of creative being fed into those systems. Recent platform updates, particularly Meta’s Andromeda system, make this shift impossible to ignore.
Bidding has been commoditized by automation
Most advertisers today are using broadly similar bidding frameworks.
Google Smart Bidding uses real-time signals across device, location, behavior, and intent that humans can’t practically manage at scale. Meta’s delivery system works in much the same way, optimizing toward predicted outcomes rather than static audience definitions.
In practice, this means most advertisers are now competing with broadly the same optimization engines.
Google has been clear that Smart Bidding evaluates millions of contextual signals per auction to optimize toward conversion outcomes. Meta has likewise stated that its ad system prioritizes predicted action rates and ad quality over manual bid manipulation.
The implication is simple. If most advertisers are using the same optimization engines, bidding is no longer a sustainable competitive advantage. It’s table stakes.
What differentiates performance now is what you give those algorithms to work with – and the most influential input is creative.
Andromeda makes creative a delivery gate
Meta’s Andromeda update is the clearest evidence yet that creative is no longer just a performance lever. It’s now a delivery prerequisite. This matters because it changes what gets shown, not just what performs best once shown.
Meta published a technical deep dive explaining Andromeda, its next-generation ads retrieval and ranking system, which fundamentally changes how ads are selected.
Instead of evaluating every eligible ad equally, Meta now filters and ranks ads earlier in the process using AI models trained heavily on creative signals, improving ad quality by more than 8% while increasing retrieval efficiency.
What this means in practice is critical for marketers. Ads that don’t generate strong engagement signals may never meaningfully enter the auction, regardless of targeting, budget, or bid strategy.
If your creative doesn’t perform, the platform doesn’t just charge you more. It limits your reach altogether.
Creative is now the primary optimization input on Meta
Meta has repeatedly stated that creative quality is one of the strongest drivers of auction outcomes.
In its own advertiser guidance, Meta highlights creative as a core factor in delivery efficiency and cost control. Independent analysis has reached the same conclusion.
A widely cited Meta-partnered study showed that campaigns using a higher volume of creative variants saw a 34% reduction in cost per acquisition, despite lower impression volume.
The reason is straightforward. More creative gives the system more signals. More signals improve matching. Better matching improves outcomes.
Andromeda accelerates this effect by learning faster and filtering harder. This is why many advertisers are experiencing plateaus even with stable bidding and budgets. Their creative inputs are not keeping pace with the system’s learning requirements.
While Google has not branded its changes as dramatically as Meta, the direction is the same. Performance Max, Demand Gen, Responsive Search Ads, and YouTube Shorts all rely heavily on creative assets to unlock inventory.
Google has explicitly stated that asset quality and diversity influence campaign performance. Accounts with limited creative assets consistently underperform those with strong asset coverage, even when bidding strategies and budgets are otherwise identical.
Google has reinforced this by introducing creative-focused tools such as Asset Studio and Performance Max experiments that allow advertisers to test creative variants directly. As with Meta, the algorithm can only optimize what it is given.
Strong creative expands reach and efficiency. Weak creative constrains both.
Many agencies are seeing the same pattern across accounts. Performance improves after structural fixes or bidding changes. Then it flattens.
Scaling spend leads to diminishing returns. The instinct is often to revisit bids or efficiency targets. But in most cases, the real constraint is creative fatigue.
Audiences have seen the same hooks, visuals, and messages too many times. Engagement drops. Estimated action rates fall. Delivery becomes more expensive.
This isn’t a platform issue. It’s a creative cadence issue. Creative testing is the missing optimization lever in mature accounts.
Most agencies are structurally set up to optimize bids, budgets, and structure faster than they can produce new creative.
Creative takes time. It requires strategy, copy, design, video, approvals, and iteration. Many retainers still treat creative as a one-off or an add-on rather than a core performance input. The result is predictable. Accounts are technically sound but creatively starved.
If your account has had the same core ads running for three months or more, performance is almost certainly being limited by creative volume, not optimization skill.
High-performing accounts today look messy on the surface with dozens of ads, multiple hooks, frequent refreshes, and constant testing. That isn’t inefficiency. That’s how modern PPC works.
Creative testing is a process, not a campaign
One of the biggest mistakes agencies make is treating creative testing as episodic. Launch new ads. Wait four weeks. Review results. Declare winners and losers. That approach is too slow for how fast platforms learn and audiences fatigue.
High-performing teams treat creative like a product roadmap. There’s always something new in development. Always something learning. Always something being retired.
Effective creative testing focuses on one variable at a time: hook, opening line, visual style, offer framing, social proof, or call to action.
It’s not about finding “the best ad.” It’s about building a library of messages the algorithm can deploy to the right people at the right time.
Once you accept that creative is the constraint, the operational implications are unavoidable. If creative is the main constraint, agency processes need to change.
Creative should be planned alongside media, not after it. Retainers should include ongoing creative production, not just optimization time. Testing frameworks should be explicit and documented.
At a minimum, agencies should be asking:
How often are we refreshing creative by platform?
Are we testing new hooks or just new designs?
Do we have enough volume for the algorithm to learn?
Are we feeding performance insights back into creative strategy?
The best agencies now operate closer to content studios than optimization factories. That’s where the value is.
Creative is the performance lever
Bidding, tracking, and structure still matter. But in 2026, those are table stakes.
If your PPC performance is stuck, the answer is rarely another bidding tweak. It’s almost always better creative. More of it. Faster iteration. Smarter testing.
The platforms have told us this. The data supports it. The accounts prove it.
Creative is no longer a nice-to-have. It’s the performance lever. The agencies that recognize that will be the ones that continue to grow.
We’re in a new era where web content visibility is fragmenting across a wide range of search and social platforms.
While still a dominant force, Google is no longer the default search experience. Video-based social media platforms like TikTok and community-based sites like Reddit are becoming popular search engines with dedicated audiences.
This trend is impacting how news content is consumed. Google’s current news SERP evolution is directly influenced by the personalization of query responses offered by LLMs and the rise in influencer authority enabled by social media platforms.
Google has responded by creating its own AI-powered SERP features, such as AI Overviews and AI Mode, and surfacing more content from social media platforms that provide the “helpful, reliable, people-first content” that Google’s ranking systems prioritize.
Now that search and social are more intertwined than ever, a new paradigm is needed – one in which newsroom audience teams made up of social media, SEO, and AI specialists work holistically on a daily basis toward a cohesive content visibility goal.
When optimizing news content for social platforms, publishers should also consider how those posts may perform in the Google SERP. I’ll cover optimizing for specific SERP features below, but first, you’ll want to think about making your news content social-friendly.
Optimize news content for social media platforms
First, a dose of sanity. Publishers should resist the temptation to optimize content for every social media platform.
It’s better to pick one or two social platforms – where an audience is already established and that offer the best opportunity for growth – than to create accounts on every social platform and let them languish.
Review analytics and conduct audience surveys to gain insights into which platforms your audience already consumes news content.
Here’s a breakdown by platform of which content types work best and how content from each platform can appear on Google.
YouTube
If you’re producing YouTube video content, make sure to follow video SEO best practices. This comprehensive YouTube SEO guide will help you develop a successful video strategy and ensure video titles align with your content.
Per Google, YouTube’s search ranking system prioritizes three elements:
Relevance: Metadata needs to accurately represent video content to be surfaced as relevant for a search query.
Engagement: Includes factors such as a video’s watch time for a specific user query.
Quality: Video content should show topic expertise, authoritativeness, and trustworthiness.
One trend I’ve noticed in YouTube videos on the Google SERP is that older event content can continue to rank and drive visibility long after the event has ended and well after the related article has faded in search results.
Explainer videos also demonstrate longevity on the Google SERP. In this government shutdown explainer video, Yahoo Finance includes the expert’s credentials in the description box, further emphasizing the topic expertise element that YouTube’s ranking system prioritizes.
YouTube can also help your visibility in AI Overviews. Nearly 30% of Google AI Overviews cite YouTube, according to BrightEdge. YouTube was cited most often for tutorials, reviews, and shopping-related queries.
Facebook
While Facebook may not be the cool kid on the block anymore, the social platform has served a diverse set of users over its long history, from its initial audience of college kids to now attracting an older, majority female audience, per Pew Research Center data.
Community-based content and entertainment news that sparks conversation is key to engagement success on Facebook.
Meta removed the dedicated news tab on Facebook in 2023-2024, which cratered Facebook referrals for news publishers. Still, Facebook posts have been rising in Google SERP visibility over the last year, so it may be time to reconsider the platform from a search perspective.
In my review of Google search visibility, Facebook posts about holidays and the full moon appear consistently, and the short-form video format is popular.
X
Since Elon Musk took over the platform in 2022, the audience has shifted to the political right. While the left’s exodus made headlines, usage of X for news is stable or increasing, especially in the U.S., according to the 2025 Digital News Report from the Reuters Institute.
Breaking news, live updates, and political news dominate X feeds and Google visibility, but don’t overlook sports content, where X posts perform well on both the Google SERPs and Discover.
Instagram
This platform emphasizes stylish, visually driven stories and topics, such as red-carpet fashion at award shows. Health topics, especially nutrition and self-care, are also popular.
Sports posts from Instagram, especially game highlights, often surface on the Google SERP as part of a dedicated publisher carousel or in “What people are saying.”
Reddit
A unique aspect of Reddit is that its user base is often not on other social platforms. For news publishers, this can mean a golden opportunity for niche community engagement, but also requires a dedicated strategy that may not translate well to other platforms.
A wide range of news content can perform well on Reddit, from trending topics to health explainers to live sports coverage, but having a deep understanding of the platform’s audience is critical, as is following the Reddit rules of conduct.
Publishers should spend time studying the types of news articles and conversations that drive strong engagement on subreddits before posting anything. Per Reddit, the platform’s largest audiences gravitate toward the following topics:
Technology.
Health.
Direct to consumer (DTC).
Gaming.
Parenting.
The community discussion forum content from Reddit makes it a natural to appear in the Google SERP as part of the “What people are saying” carousel. The Reddit posts I see most often surfaced by Google are related to sports, entertainment, and business.
TikTok
The TikTok user base leans female and has a greater share of people of color. Approximately half of 18- to 29-year-olds in the U.S. self-report going on TikTok at least once daily, per Pew Research data.
Visual, conversational, and opinion-based content for younger audiences performs best on TikTok. Niche community content also works well; think fashion, #BookTok, etc.
Remember that short-form video requires a dedicated strategy to maximize engagement and reach, and it’s important to keep in mind that TikTok audiences value authenticity over the polish of a professional newsroom production.
Entertainment and shopping content (sales, product reviews) are the categories in which TikTok demonstrates the most Google visibility.
Pinterest
While Pinterest may feel like an old-school social platform, Gen Z is its fastest-growing audience. That being said, Pinterest attracts users from across a wide range of age groups. According to Pinterest’s global data, its audience is 70% women and 30% men.
Don’t overlook the power of Pinterest for lifestyle content niches. Trends around fashion, home decor, DIY, crafts, recipes, and celebrity content are top performers on this visual social platform.
News publishers interested in this platform should have robust lifestyle content that is actionable and delivered with a motivational tone.
How-to and before/after formats are popular. Excellent quality visuals in a vertical format with a 2:3 aspect ratio and text overlays are recommended. Pinterest supports a more relaxed posting schedule compared to other social platforms. Weekly posting is ideal, since much of the content on Pinterest is evergreen.
Similar to Google Trends, Pinterest Trends can help news publishers stay on top of trending topics on the platform.
Social content opportunities by Google SERP feature
If you’re looking to appear in a particular SERP feature, it’s helpful to know how social platform content appears in each type.
Top Stories (or News Box)
The crown jewel of the Google SERP for news publishers, this feature is dedicated to breaking news and developing news stories as well as capturing updates for the big news stories and trends of the moment.
Thumbnail selection is critical for Top Stories. Publishers should pay close attention to the News Box descriptive labels to ensure content is optimized to match the specific intent or angle Google is seeking.
While historically a SERP feature that showcased traditional news publishers, Google is now including relevant social media content in the mix. The Instagram post in Top Stories below is an Instagram Reel from the Detroit Free Press.
Live update articles are often featured in the News Box and are a great format to embed social media posts.
Embedding posts helps break up walls of text and showcases a news publisher’s live, original reporting from the scene, eyewitness accounts, and related social content that demonstrates subject expertise.
What people are saying
This Google SERP feature is ideal for capturing audience reaction and user-generated content from a variety of social platforms. Short-form video is often featured in this space.
It’s a showcase for any story or topic that drives emotional engagement, including reactions to everything from a celebrity death to a sporting event outcome to a viral trend. Severe weather is also a recurring topic.
Knowledge Panel
There’s a growing interest in this Google SERP feature among news publishers, especially those publishers who produce entertainment content.
Depending on the configuration, publishers have the opportunity to earn a ranking for an image, social post, or article, such as a celebrity biography.
While content opportunities are limited in the Knowledge Panel, they offer more exclusivity, which can increase CTR. YouTube and Instagram are commonly cited here, but X and TikTok have also been growing in visibility.
Google Discover
This social-search hybrid product, which features trending, emotionally engaging content based on a user’s web and app activity, requires a separate optimization strategy.
The keys to Discover visibility are identifying topics that spark curiosity and ensuring articles are formatted for frictionless consumption.
Discover has been considered a “black box” when it comes to content optimization, but there are several basic elements to implement that can increase visibility.
Viral hits may spike a news publisher’s Discover performance temporarily, but as Harry Clarkson-Bennett outlines, publishers need to analyze their Discover performance over time at the entity level to build a smart optimization strategy.
Google’s official Discover optimization tips discourage clickbait practices that actually work quite well on the platform, such as salacious quotes in headlines, controversial topics, and strong opinion-driven perspectives.
I would never recommend a publisher produce clickbait, but for tabloid publishers, content with a strong, contentious perspective overperforms on Discover, regardless of the official Google guidance.
Headlines and images require serious consideration. While Google is running an experiment in which its AI tool rewrites headlines for Discover, direct, action-oriented, and emotion-driven headlines traditionally perform best. There’s no specific character count recommendation, but at a certain point (typically 100+ characters), the headline will get truncated and an ellipsis will be used.
Images must be formatted to Discover specifications (at least 1,200 pixels wide) and should be eye-catching enough to make people stop and click. Keep articles short or include a summary box at the top of longer articles. Format articles for scannability.
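A quick pre-publish check against those two formatting points might look like the sketch below. The ~100-character figure is the approximate truncation point mentioned above, not an official limit, and the function name is hypothetical:

```python
# Pre-publish sanity check for the Discover formatting points above.
# The 100-character figure is the approximate truncation point, not an
# official limit; 1,200 px is the stated minimum image width.
from PIL import Image  # pip install Pillow

def check_discover_basics(headline: str, image_path: str) -> list:
    issues = []
    if len(headline) > 100:
        issues.append(f"Headline is {len(headline)} characters; expect truncation near ~100.")
    width, _height = Image.open(image_path).size
    if width < 1200:
        issues.append(f"Image is {width}px wide; Discover wants at least 1,200px.")
    return issues

# Example (paths are placeholders):
# print(check_discover_basics("Why the full moon looks bigger tonight", "hero.jpg"))
```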
This Forbes X post featured on my Discover feed nails the elements essential for inclusion.
Politics, sports, and entertainment topics that favor an opinion-driven perspective can drive strong engagement on Discover. For YMYL (Your Money Your Life) content, which can also perform well on Discover, focus on accuracy and expert sources, and lean into the curiosity gap.
YouTube and X are the dominant social platforms featured on Discover, according to a Marfeel study.
This was further confirmed by Clara Soteras, who shared insights from Andy Almeida of Google’s Trust and Safety team as presented at Google Search Central Live in Zurich in December 2025.
Almeida noted that Discover’s algorithm has been updated to “include content from YouTube, Instagram, TikTok, or X published by content creators.”
Instead of feeling dismayed by the increased competition from social media platform content appearing on Google’s SERPs and Discover, news publishers should welcome the additional opportunities for their content to be seen.
In a social and AI-powered search landscape, brand visibility is the key metric. Whether that visibility comes from a news publisher article, video, or social post, it still counts toward brand engagement.
While search strategies have long focused on algorithms, optimizing content for a social-forward SERP requires a different focus. The merging of social and search will spark a holistic audience team revolution in newsrooms, reduce redundant practices, and inspire a content strategy powered by people over algorithms.
As the SaaS market reels from a sell-off sparked by autonomous AI agents like Claude Cowork, new data shows a 53% drop in AI-driven discovery sessions. Wall Street dubbed it the “SaaSpocalypse.”
Whether AI agents will replace SaaS products is a bigger question than this dataset can answer. But the panic is already distorting interpretation, and this data cuts through the noise to show what SEO teams should actually watch.
Copilot went from 0.3% to 9.6% of SaaS AI traffic in 14 months
From November 2024 to December 2025, SaaS sites logged 774,331 LLM sessions. ChatGPT drove 82.3% of that traffic, but Copilot’s growth tells a different story:
SaaS AI Traffic by Source (Nov 2024 – Dec 2025)
ChatGPT: 637,551 sessions (82.3% share)
Copilot: 74,625 sessions (9.6%)
Claude: 40,363 sessions (5.2%)
Gemini: 15,759 sessions (2.0%)
Perplexity: 6,033 sessions (0.8%)
Starting with just 148 sessions in late 2024, Copilot grew more than 20x by May 2025. From May through December, it averaged 3,822 sessions per month, making it the second-largest AI referrer to SaaS sites by year-end 2025.
Investors erased $300 billion from SaaS market caps over fears that AI agents will replace enterprise software. But this data points to a less dramatic force: proximity.
Copilot thrives because it captures intent inside the workflow. Standalone tools saw a 53% traffic drop while workplace-embedded AI grew 20x.
Software evaluation is work, and Copilot sits where that work happens.
When someone asks, “What CRM should we use for a 20-person sales team?” while building a business case in Excel, that moment is captured—one ChatGPT never sees. The May surge reflects that activation: Microsoft 365 users realizing they could research software without opening a new tab.
41.4% of SaaS AI traffic lands on internal search pages
SaaS AI discovery sends users to internal search results first, not product pages.
Top SaaS Landing Pages by LLM Volume
Search: 320,615 LLM sessions (41.4% of AI traffic; 8.7x penetration vs. site average)
Blog: 127,291 sessions (16.4%; 8.1x)
Pricing: 40,503 sessions (5.2%; 3.2x)
Product: 39,864 sessions (5.1%; 2.0x)
Support: 34,599 sessions (4.5%; 2.1x)
Search pages captured 320,615 sessions (more than blog, pricing, and product pages combined), but this dominance likely reflects LLM limitations, not superior content. LLMs route users to search when they lack a specific answer.
For SaaS companies watching their stock crater, that’s useful news: there’s a concrete technical fix. The 41.4% isn’t an existential threat. It’s a crawlability problem.
When an LLM can’t find a direct answer, it defaults to the site’s internal search. The AI treats your search bar as a trusted backup, assuming the search schema will generate a relevant page even if a specific product page isn’t indexed.
At 1.22%, search page penetration is 8.7x the site average. The cause is a “safety net” effect, not optimization.
When more specific pages — like Product or Pricing — lack the data an LLM needs, it falls back to broader search results. LLMs recognize the search URL structure and trust it will return something relevant, even if they can’t predict what.
Blog pages follow with 127,291 sessions and 1.13% penetration. These are structured comparison posts — “best CRM for small teams” or “Salesforce alternatives” — that LLMs cite when they have specific recommendations.
Pricing pages show 0.45% penetration; product pages, 0.28%. When users ask about software selection, LLMs route to comparison surfaces — search and blog — first. Direct product or pricing pages get cited only when the query is already vendor-specific.
The July peak and Q4 decline reflect corporate work cycles
SaaS AI traffic peaked in July at 146,512 sessions and, aside from a partial rebound in September and October, declined through Q4:
July 2025: 146,512 sessions (peak)
August 2025: 120,802 (-17.5%)
September 2025: 134,162 (+11.1%)
October 2025: 135,397 (+0.9%)
November 2025: 107,257 (-20.8%)
December 2025: 68,896 (-35.8%)
Every platform declined. ChatGPT’s volume was cut in half, dropping from 127,510 sessions in July to 56,786 by year-end. Copilot fell from 4,737 to 2,351. Perplexity dropped from 7,475 to 3,752.
Two factors drove the slide:
People weren’t working. August is vacation season, November includes Thanksgiving, and December is the holidays. Software research happens during work hours; when offices close, discovery drops.
Q4 ends the fiscal “buying window.” Most teams have spent their annual budgets or are deferring contracts until Q1 funding opens. Even teams still working aren’t evaluating tools because there’s no budget left until the new fiscal year.
The July peak reflects midyear momentum: people are working, and Q3 budgets are still available. The Q4 decline reflects both fewer researchers and fewer active buying cycles.
This is where the sell-off narrative breaks down.
Investors treat a 53% traffic drop as proof that AI discovery is stalling. But the data aligns with standard B2B fiscal cycles.
AI isn’t failing as a discovery channel. It’s settling into the same seasonal rhythms as every other B2B buying behavior.
What this data means for SEO teams
Raw traffic numbers don’t show where to invest. Penetration rates and landing page distribution reveal what matters.
Track penetration by page type, not site-wide averages
SaaS shows 0.41% sitewide AI penetration, but that average hides concentration. Search pages reach 1.22%—8.7x higher. Blog pages hit 1.13%. Pricing pages are at 0.45%. Product pages lag at 0.28%.
If you’re only tracking total AI sessions, you’re measuring the wrong metric. AI traffic could grow 50% while penetration on high-value pages declines. Volume hides what matters: where AI users concentrate when they arrive with intent.
Action:
Segment AI traffic by page type in GA4 or your analytics platform.
Track penetration (AI sessions ÷ total sessions) by page category monthly.
Identify pages with elevated concentration, then optimize those surfaces first.
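For teams doing this by hand, a minimal sketch of that penetration calculation is below. It assumes a session-level export with page_path and referrer_source columns plus a short list of AI source labels; both are placeholders to adapt to your own analytics setup, not a standard schema.

```python
import csv
from collections import defaultdict

# Hypothetical export: one row per session with "page_path" and "referrer_source" columns.
AI_SOURCES = {"chatgpt", "copilot", "claude", "perplexity"}  # adjust to your own referrer labels

def page_type(path: str) -> str:
    # Placeholder routing rules; map paths to the categories you report on.
    if path.startswith("/search"):
        return "search"
    if path.startswith("/blog"):
        return "blog"
    if path.startswith("/pricing"):
        return "pricing"
    return "other"

totals = defaultdict(int)
ai_sessions = defaultdict(int)

with open("sessions_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        category = page_type(row["page_path"])
        totals[category] += 1
        if row["referrer_source"].lower() in AI_SOURCES:
            ai_sessions[category] += 1

for category in sorted(totals):
    penetration = ai_sessions[category] / totals[category]
    print(f"{category}: {penetration:.2%} AI penetration "
          f"({ai_sessions[category]} of {totals[category]} sessions)")
```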
Search results pages are now a primary discovery surface
Internal search captures 41.4% of SaaS AI traffic. If those results aren’t crawlable, indexable, or structured for comparison, you’re invisible to the largest segment of AI-driven buyers.
Most SaaS sites treat internal search as navigation, not content. Results return paginated lists with minimal product detail, no filter signals in URLs, and JavaScript-rendered content LLMs can’t parse.
Action:
With 41.4% of traffic hitting internal search, treat your search bar as an API for AI agents.
Make search pages crawlable (check robots.txt and indexability).
Add structured data using SoftwareApplication or Product schema.
Surface comparison data — pricing, key features, user count — directly in results, not just product names.
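A quick way to sanity-check crawlability is to query robots.txt directly. The sketch below uses Python's standard urllib.robotparser; the domain, sample search URL, and crawler user-agent list are assumptions to replace with your own. It only covers robots.txt; meta robots tags and canonicals still need a separate indexability check.

```python
from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"          # placeholder domain
SEARCH_URL = f"{SITE}/search?q=crm"       # a representative internal search result URL
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Googlebot"]  # adjust to the bots you care about

parser = RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()  # fetches and parses the live robots.txt file

for agent in AI_CRAWLERS:
    allowed = parser.can_fetch(agent, SEARCH_URL)
    print(f"{agent}: {'allowed' if allowed else 'blocked'} for {SEARCH_URL}")
```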
Make your data legible to LLMs — pricing and content both
The sell-off is pricing in obsolescence, but for most SaaS companies the real risk is invisibility. Pricing pages show 0.45% AI penetration—below the 0.46% cross-industry average. Blog pages captured 127,291 sessions at 1.13% penetration, but only when content directly answered selection queries. The pattern is clear: LLMs cite what they can read and parse. They skip what they can’t.
Many SaaS sites still gate pricing behind contact forms. If pricing requires a sales conversation, AI won’t recommend you for “tools under $100/month” queries. The same applies to blog content. When someone asks, “What CRM should I use?” the LLM looks for posts that compare options, define criteria, and explain tradeoffs. Generic thought leadership on CRM trends doesn’t get cited.
Action:
Publish pricing on a dedicated, crawlable page. Include representative examples, seat minimums, contract terms, and exclusions (see the schema sketch after this list).
Keep pricing transparent. Transparent pages get cited; gated pages don’t.
Replace generic blog posts with structured comparison pages. Use tables and clear data points.
Remove fluff. Provide grounding data that lets AI verify compliance and integration capabilities in seconds, not minutes.
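To make the pricing and schema recommendations concrete, here is a minimal sketch that emits a SoftwareApplication JSON-LD block with an explicit Offer. The product name, URL, price, and terms are invented placeholders, not real data.

```python
import json

# Hypothetical product data; replace with values pulled from your own catalog.
software_schema = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "ExampleCRM",
    "applicationCategory": "BusinessApplication",
    "operatingSystem": "Web",
    "url": "https://www.example.com/products/examplecrm",
    "offers": {
        "@type": "Offer",
        "price": "49.00",
        "priceCurrency": "USD",
        "description": "Per seat per month, 5-seat minimum, billed annually",
    },
}

# Embed this in the page template so crawlers can read pricing without a sales call.
print(f'<script type="application/ld+json">{json.dumps(software_schema, indent=2)}</script>')
```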
Workplace-embedded AI is growing 10x faster than standalone LLMs
Copilot grew 15.89x year over year. Claude grew 7.79x. ChatGPT grew 1.42x. The fastest growth is in tools embedded in existing workflows.
Workplace AI shifts discovery context. In ChatGPT, users are explicitly researching. In Copilot, they’re asking questions mid-task—drafting a proposal, building a comparison spreadsheet, or reviewing vendor options with their team.
Action:
Track Copilot and Claude referrals separately from ChatGPT. Monitor which pages these sources favor.
Recognize intent: these users aren’t browsing — they’re mid-task, deeper in evaluation, and closer to a purchase decision.
Show up in workplace AI discovery to support real-time purchase justification.
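A rough sketch of that source separation is below, assuming referrals arrive with a referrer URL you can parse before sending hits to your analytics pipeline. The hostname-to-source mapping is illustrative and will drift as these products change domains, so verify it against the referrers you actually see.

```python
from urllib.parse import urlparse

# Illustrative hostname-to-source mapping; confirm against your own referral logs.
AI_REFERRERS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "copilot.microsoft.com": "Copilot",
    "claude.ai": "Claude",
    "perplexity.ai": "Perplexity",
    "www.perplexity.ai": "Perplexity",
}

def classify_referrer(referrer_url: str) -> str:
    # Map a raw referrer URL to an AI source label, or "Other" if unrecognized.
    host = urlparse(referrer_url).netloc.lower()
    return AI_REFERRERS.get(host, "Other")

# Example: tag a referral before it reaches reporting.
print(classify_referrer("https://copilot.microsoft.com/"))  # -> Copilot
```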
Survival favors the findable
The 53% drop from July to December reflects AI usage settling into the software buying process. Buyers are learning which decisions benefit from AI synthesis and which don’t. The remaining traffic is more deliberate, concentrated on complex evaluations where comparison matters.
For SaaS companies, the window for early positioning is closing. The $300 billion sell-off is hitting the sector broadly, but the companies that survive the repricing will be those buyers can find when they ask an AI agent, “Should we renew this contract?”
Teams investing now in transparent pricing, crawlable data, and comparison-focused content are building that findability while competitors debate whether AI discovery matters.
In Google AI Overviews and LLM-driven retrieval, credibility isn’t enough. Content must be structured, reinforced, and clear enough for machines to evaluate and reuse confidently.
Many SEO strategies still optimize for recognition. But AI systems prioritize utility. If your authority can’t be located, verified, and extracted within a semantic system, it won’t shape retrieval.
This article explains how authority works in AI search, why familiar SEO practices fall short, and what it takes to build entity strength that drives visibility.
Why traditional authority signals worked – until they didn’t
For years, SEOs liked to believe that “doing E-E-A-T” would make sites authoritative.
Author bios were optimized, credentials showcased, outbound links added, and About pages polished, all in hopes that those signals would translate into authority.
In practice, we all knew what actually moved the needle: links.
E-E-A-T never really replaced external validation. Authority was still conferred primarily through links and third-party references.
E-E-A-T helped sites appear coherent as entities, while links supplied the real gravitas behind the scenes. That arrangement worked as long as authority could stay vague and still be rewarded.
It stops working when systems need to use authority, not just acknowledge it. In AI-driven retrieval, being recognized as authoritative isn’t enough. Authority still has to be specific, independently reinforced, and machine-verifiable, or it doesn’t get used.
Being authoritative but not used is like being “paid” with experience. It doesn’t pay the bills.
Search no longer operates on a flat plane of keywords and pages. AI-driven systems rely on a multi-dimensional semantic space that models entities, relationships, and topical proximity.
In that semantic space, entities function much like celestial bodies in physical space: discrete objects whose influence is defined by mass, distance, and interaction with others.
E-E-A-T still matters, but the framework version is no longer a differentiator. Authority is now evaluated in a broader context that can’t be optimized with a handful of on-page tasks.
In AI Overviews, ChatGPT, Claude, and similar systems, visibility doesn’t hinge on prestige or brand recognition. Those are symptoms of entity strength, not its source.
What matters is whether a model can locate your entity within its semantic environment and whether that entity has accumulated enough mass to exert influence.
That mass isn’t decorative. It’s built through third-party citations, mentions, and corroboration, then made machine-legible through consistent authorship, structure, and explicit entity relationships.
Models don’t trust authority. They calculate it by measuring how densely and consistently an entity is reinforced across the broader corpus.
Smaller brands don’t need to shine like legacy publishers. In a semantic system, apparent size and visibility don’t determine influence. Density does.
In astrophysics, some planets appear enormous yet exert surprisingly weak gravity because their mass is spread thinly. Others are much smaller, but dense enough to exert stronger pull.
AI visibility works the same way. What matters isn’t how large your brand appears to humans, but how concentrated and reinforced your authority is in machine-readable form.
The problem with E-E-A-T was never the concept itself. It was the assumption that trustworthiness could be meaningfully demonstrated in isolation, primarily through signals a site applied to itself.
Over time, E-E-A-T became operationalized as visible, on-page indicators: author bios, credentials, About pages, and lightweight citations.
These signals were easy to implement and easy to audit, which made them attractive. They created the appearance of rigor, even when they did little to change how authority was actually conferred.
That compromise held when search systems were willing to infer authority from proxies. It breaks down in AI-driven retrieval, where authority must be explicitly reinforced, independently corroborated, and machine-verifiable to carry weight.
Surface-level trust markers don’t fail because models ignore them. They fail because they don’t supply the external reinforcement required to give an entity real mass.
In a semantic system, entities gain influence through repeated confirmation across the broader corpus. On-site signals can help make an entity legible, but they don’t generate density on their own. Compliance isn’t comprehension, and E-E-A-T as a checklist doesn’t create gravitational pull.
In human-centered search, these visible trust cues acted as reasonable stand-ins. In LLM retrieval, they don’t translate. Models aren’t evaluating presentation or intent. They’re evaluating semantic consistency, entity alignment, and whether claims can be cross-verified elsewhere.
Applying E-E-A-T principles only within your own site won’t create the mass that machines need to recognize, align with, and prioritize your entity in a retrieval system.
AI doesn’t trust, it calculates
Human trust is emotional. Machine trust is statistical.
Models reward clean extraction: lists, tables, and focused paragraphs are easiest to reuse.
They also cross-verify facts: redundant, consistent statements across multiple sources appear more reliable than a single sprawling narrative.
Retrieval models evaluate confidence, not charisma. Structural decisions such as headings, paragraph boundaries, markup, and lists directly affect how accurately a model can map content to a query.
This is why ChatGPT and AI Overview citations often come from unfamiliar brands.
It’s also why brand-specific queries behave differently. When a query explicitly names a brand or entity, the model isn’t navigating the galaxy broadly. It’s plotting a short, precise trajectory to a known body.
With intent tightly constrained and only one plausible source of truth, there’s far less risk of drifting toward adjacent entities.
In those cases, the system can rely directly on the entity’s own content because the destination is already fixed. The models aren’t “discovering” hidden experts. They’re rewarding content whose structure reduces uncertainty.
The semantic galaxy: How entities behave like bodies
LLMs don’t experience topics, entities, or websites. They model relationships between representations in a high-dimensional semantic space.
That’s why AI retrieval is better understood as plotting a course through a system of interacting gravitational bodies rather than “finding” an answer. Influence comes from mass, not intention.
Over time, citations, mentions, and third-party reinforcement increase an entity’s semantic mass. Each independent reference adds weight, making that entity increasingly difficult for the system to ignore.
Queries move through this space as vectors shaped by intent. As they pass near sufficiently massive entities, they bend. The strongest entities exert the greatest gravitational pull, not because they are trusted in a human sense, but because they are repeatedly reinforced across the broader corpus.
Extractability doesn’t create that gravity. It determines what happens after attraction occurs. An entity can be massive enough to warp trajectories and still be unusable if its signals aren’t machine-legible, like a planet with enough gravity to draw a spacecraft in but no viable way to land.
Authority, in this context, isn’t belief. It’s gravity, the cumulative pull created by repeated, independent reinforcement across the wider semantic system.
Entity strength vs. extractability
Classic SEO emphasized backlinks and brand reputation. AI search rewards entity strength for discovery, but demands clarity and semantic extractability for inclusion.
Entity strength – your connections across the Knowledge Graph, Wikidata, and trusted domains – still matters and arguably matters more now. Unfortunately, no amount of entity strength helps if your content isn’t machine-parsable.
Consider two sites featuring recognized experts:
One uses clean headings, explicit definitions, and consistent links to verified profiles.
The other buries its expertise inside dense, unstructured paragraphs.
Only one will earn citations.
LLMs need:
One entity per paragraph or section.
Explicit, unambiguous mentions.
Repetition that reinforces relationships (“Dr. Jane Smith, cardiologist at XYZ Clinic”).
Precision makes authority extractable. Extractability determines whether existing gravitational pull can be acted on once attraction has occurred, not whether that pull exists in the first place.
Structure like you mean it: Abstract first, then detail
LLM retrieval is constrained by context windows and truncation limits, as outlined by Lewis et al. in their 2020 NeurIPS paper on retrieval-augmented generation. Models rarely process or reuse long-form content in its entirety.
If you want to be cited, you can’t bury the lede.
LLMs read the beginning, but then they skim. After a certain number of tokens, they truncate. Basically, if your core insight is buried in paragraph 12, it’s invisible.
To optimize for retrieval:
Open with a paragraph that functions as its own TL;DR.
State your stance, the core insight, and what follows.
Expand below the fold with depth and nuance.
Don’t save your best material for the finale. Neither users nor models will reach it.
Stop ‘linking out,’ start citing like a researcher
The difference between a citation and a link isn’t subtle, but it’s routinely misunderstood. Part of that confusion comes from how E-E-A-T was operationalized in practice.
In many traditional E-E-A-T playbooks, adding outbound links became a checkbox, a visible, easy-to-execute task that stood in for the harder work of substantiating claims. Over time, “cite sources” quietly degraded into “link out a few times.”
A bad citation looks like this:
A generic outbound link to a blog post or company homepage offered as vague “support,” often with language like “according to industry experts” or “SEO best practices say.”
The source may be tangentially related, self-promotional, or simply restating opinion, but it does nothing to reinforce your entity’s factual position in the broader semantic system.
A good citation behaves more like academic referencing. It points to:
Primary research.
Original reporting.
Standards bodies.
Widely recognized authorities in that domain.
It’s also tied directly to a specific claim in your content. The model can independently verify the statement, cross-reference it elsewhere, and reinforce the association.
The point was never to just “link out.” The point was to cite sources.
Engineering retrieval authority without falling back into a checklist
The patterns below aren’t tasks to complete or boxes to tick. They describe the recurring structural signals that, over time, allow an entity to accumulate mass and express gravity across systems.
This is where many SEOs slip back into old habits. Once you say “E-E-A-T isn’t a checklist,” the instinct is to immediately ask, “Okay, so what’s the checklist?”
But engineering retrieval authority isn’t a list of tasks. It’s a way of structuring your entire semantic footprint so your entity gains mass in the galaxy the models navigate.
Authority isn’t something you sprinkle into content. It’s something you construct systematically across everything tied to your entity.
Make authorship machine-legible: Use consistent naming. Link to canonical profiles. Add author and sameAs schema (see the sketch after this list). Inconsistent bylines fragment your entity mass.
Strengthen your internal entity web: Use descriptive anchor text. Connect related topics the way a knowledge graph would. Strong internal linking increases gravitational coherence.
Write with semantic clarity: One idea per paragraph. Minimize rhetorical detours. LLMs reward explicitness, not flourish.
Use schema and llms.txt as amplifiers: They don't create authority. They expose it.
Audit your “invisible” content: If critical information is hidden in pop-ups, accordions, or rendered outside the DOM, the model can’t see it. Invisible authority is no authority.
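As a minimal illustration of the authorship item above, the sketch below emits Article markup with author and sameAs properties. The author name, headline, and profile URLs are placeholders; the point is that the name and profiles stay identical everywhere the byline appears.

```python
import json

# Placeholder author and article data; keep the author name spelled identically across the site.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How entity density shapes AI retrieval",
    "author": {
        "@type": "Person",
        "name": "Jane Smith",
        "url": "https://www.example.com/authors/jane-smith",
        "sameAs": [
            "https://www.linkedin.com/in/jane-smith-example",
            "https://scholar.google.com/citations?user=EXAMPLE",
        ],
    },
}

print(f'<script type="application/ld+json">{json.dumps(article_schema, indent=2)}</script>')
```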
E-E-A-T taught us to signal trust to humans. AI search demands more: understanding the forces that determine how information is pulled into view.
Rocket science gets something into orbit. Astrophysics navigates and understands the systems it moves through once there.
Traditional SEO focused on launching pages—optimizing, publishing, promoting. AI SEO is about mass, gravity, and interaction: how often your entity is cited, corroborated, and reinforced across the broader semantic system, and how strongly that accumulated mass influences retrieval.
The brands that win won’t shine brightest or claim authority loudest, nor will they be no-name sites simulating credibility with artificial corroboration and junk links.
They’ll be entities that are dense, coherent, and repeatedly confirmed by independent sources—entities with enough gravity to bend queries toward them.
In an AI-driven search landscape, authority isn’t declared. It’s built, reinforced, and made impossible for machines to ignore.
AI search visibility in beauty is increasingly shaped before a prompt is ever entered.
Brands that appear in generative answers are often those already discussed, validated, and reinforced across social platforms. By the time a user turns to AI search, much of the groundwork has been laid.
Using the beauty category as a lens, this article examines how social discovery influences brand visibility – and why AI search ultimately reflects those signals.
Discovery didn’t move to AI – it fragmented
Brand discovery has fragmented across platforms. AI tools influence mid-funnel consideration, but much discovery happens before a user enters a prompt.
The signals that determine AI visibility are formed upstream. By the time a user reaches generative search, preferences and perceptions may already be set. If brands wait until AI search to influence demand, the window to shape consideration has narrowed.
That upstream influence is increasingly social. Roughly two-thirds of U.S. consumers now use social platforms as search engines, per eMarketer research.
This shift extends beyond Gen Z and reflects how people validate information and discover brands. These same platforms consistently appear among the top citation sources in AI results. The dynamic is especially visible in the beauty category.
In a study our agency conducted with a beauty brand partner, we found that Reddit, YouTube, and Facebook ranked among the top cited domains in both AI Overviews and ChatGPT.
While Reddit is often viewed as an anti-brand environment, YouTube appears nearly as frequently in citation data, making it a logical and underutilized target for citation optimization.
The volume reality: Social behavior still outpaces AI
It’s easy to focus on headline figures around AI usage, including the billions of prompts processed daily. But when measured against business outcomes such as traffic and transactions, the scale looks different.
Social platforms are already embedded in mainstream search behavior. For many users, search-like activity on platforms such as TikTok and YouTube is habitual. Nearly 40% of TikTok users search the platform multiple times per day, and 73% search at least once daily.
Referral data reinforces the contrast. ChatGPT referral traffic accounted for roughly 0.2% of total sessions in a 12-month analysis of 973 ecommerce sites, a University of Hamburg and Frankfurt School working paper found. In the same dataset, Google’s organic search traffic was approximately 200 times larger than organic LLM referrals.
AI search is growing and strategically important. But in terms of repeat behavior, measurable sessions, and downstream transactions, social platforms and traditional search continue to operate at a substantially larger scale.
The validation loop: Why AI needs social
The most critical contrarian point for 2026 is that optimizing for social is also optimizing for AI. Large language models are not primary sources of truth. They function as mirrors, reflecting the consensus formed through human conversations in the data they are trained on.
AI systems also demonstrate skepticism toward brand-owned properties. One study found that only 25% of sources cited in AI-generated answers were brand-managed websites.
At the same time, AI engines prioritize third-party validation. Up to 6.4% of citation links in AI responses originated from Reddit, an analysis by OtterlyAI found. This outpaces many traditional publishers.
There’s also a measurable relationship between sentiment and visibility. Research shows a moderate positive correlation between positive brand sentiment on social media and visibility in AI search results.
Treating video as a “brand channel” or a social-first effort rather than a search surface is a strategic failure.
On platforms such as TikTok and YouTube, ranking signals are shaped by spoken language, on-screen text, and captions – signals AI crawlers increasingly use to “triangulate trust.”
In the beauty category, for example, ChatGPT accounts for about 4.3% of searches, while Google processes roughly 14 billion searches per day. However, for “how-to” and technique-based queries, consumers favor the detailed, personalized guidance of social-first video content.
Science-backed brands such as Paula’s Choice and CeraVe dominate AI-generated results because they publish deep, structured educational content. Meanwhile, more traditional marketing-led brands are significantly less visible.
The phrase “dermatologist recommended” correlates with high visibility in AI results because large language models treat expert social proof as a primary ranking signal, according to the same report.
Breaking the high-production barrier: Creating content at scale
One of the biggest hurdles brands cite is budget. Many believe they need a Hollywood production crew to compete in video environments. That is a legacy mindset.
In today’s environment, high-gloss production can be a deterrent. The current landscape rewards authenticity over polish. Consumers are looking for real people with real skin concerns, not highly filtered commercials.
Optimizing for video discovery doesn’t require filmmaking expertise. Brands can leverage internal talent without adding headcount.
Partner with creator platforms: Platforms such as Billow or Social Native allow brands to work with creators for as little as $500 per video. When mapped to a high-intent query, that investment can drive measurable search visibility outcomes.
Leverage social natives on staff: Often, the strongest asset is internal. Identify team members who are active on platforms such as TikTok and understand platform dynamics. Creating internal incentives or challenges to produce content can generate a steady stream of authentic assets while contributing to culture.
Make strategy the differentiator: A large following is not a prerequisite for visibility. In one case, a TikTok profile built from scratch with one part-time creator at $2,500 per month generated hundreds of thousands of views within 90 days. The focus was not on viral trends, but on meaningful transactional terms that drive revenue.
If a new profile can reach more than 100,000 views per video within three months on a limited budget, the barrier isn’t equipment. It’s clarity on the business case and disciplined execution.
The data is clear. Brands can’t win the generative engine if they’re losing the social conversation.
AI models function as mirrors, reflecting web consensus. If real users on Reddit, YouTube, and TikTok aren’t discussing a brand, AI systems have little to surface.
If marketers wait until a user reaches a ChatGPT prompt to shape perception, the opportunity has already narrowed.
Discovery happens upstream. Validation occurs in the loop between social proof and algorithmic citation.
Translating this into action requires rethinking team structure and priorities:
Stop the silos: Your SEO and social teams shouldn’t speak different languages. Both must focus on search surfaces.
Prioritize the “why” before the “what”: Don’t just fix a technical tag. Build the business case for how social sentiment and expert validation drive market share.
Embrace scrappy execution: Whether through $500 creator partnerships or internal social-native talent, start building authentic assets now.
We’re witnessing a shift from algorithm-driven discovery to community-driven discovery.
It’s agile and multidisciplinary, and when executed well, it can meaningfully impact the bottom line.