Walmart said conversion rates for purchases made directly inside ChatGPT were one-third of those for purchases where users clicked through to its website.
Why we care. This suggests agentic commerce isn’t ready to replace traditional shopping. Sending users to owned environments still drives higher conversion rates.
The details. Starting in November, Walmart offered about 200,000 products through OpenAI’s Instant Checkout. Users could complete purchases inside ChatGPT without visiting Walmart’s site.
Daniel Danker, Walmart’s EVP of product and design, said those in-chat purchases converted at one-third the rate of click-out transactions.
He called the experience “unsatisfying” and confirmed Walmart is moving away from it.
Goodbye, Instant Checkout. Instant Checkout was designed to let users complete purchases directly inside ChatGPT without visiting a retailer’s website. However, earlier this month, OpenAI confirmed it was phasing out Instant Checkout in favor of app-based checkout handled by merchants.
What’s changing. Walmart will embed its own chatbot, Sparky, inside ChatGPT. Users will log into Walmart, sync carts across platforms, and complete purchases within Walmart’s system.
A similar integration is coming to Google Gemini next month.
Perplexity’s new Comet browser for iOS defaults to Google Search. That’s because mobile queries often focus on navigation, local results, and transactions, where “Google does a much better job … than anyone else … including Perplexity,” according to Perplexity CEO Aravind Srinivas.
Comet for iOS. It includes Perplexity’s AI assistant directly in the browser. Comet for iOS also blends AI answers with standard search results. For many queries, you’ll still see a traditional results page.
You can ask questions by voice while browsing.
The assistant can summarize pages, answer questions, and take actions like drafting emails.
Deep Research features generate cited summaries and prep materials.
What Comet does. According to Perplexity, the assistant can act on your behalf. Examples include:
Summarizing articles and sharing outputs.
Researching people or topics across tabs.
Assisting with bookings or form fills.
What Perplexity is saying.
“The search experience in Comet iOS provides traditional search results pages for fast, local, and high-intent queries that are more common on mobile. Meanwhile, the Comet Assistant easily allows for more advanced knowledge and intelligence powered by the Perplexity answer engine. The intention is for users to have the smoothest browsing experience possible for the real use cases of iOS.”
Why we care. The near future of search increasingly looks hybrid, which means you’ll need to optimize for traditional Google results and AI-driven answers. This also reinforces Google’s dominance in commercial and local search while shifting competition to the AI layer.
Microsoft is changing how advertisers configure automated bidding, aiming to reduce complexity while keeping performance outcomes the same.
What’s happening. The platform is streamlining its bidding options by folding familiar targets like Target CPA and Target ROAS into broader automated strategies rather than standalone campaign settings.
Going forward, advertisers will choose between two core approaches: Maximize Conversions or Maximize Conversion Value, with optional targets layered on top.
How it works. For conversion-focused campaigns, advertisers select Maximize Conversions and can optionally set a target CPA. For value-focused campaigns, they select Maximize Conversion Value and can optionally set a target ROAS.
Microsoft says the underlying bidding behavior has not changed — only the way advertisers configure it has been simplified.
Why we care. This update makes automated bidding simpler and more standardized, which lowers the barrier to using Microsoft Advertising’s performance tools at scale. By consolidating Target CPA and Target ROAS into broader strategies, it reduces setup complexity while still keeping key performance controls available as optional targets.
In practice, this means faster campaign setup, more consistent optimization behavior across accounts, and fewer structural differences between how advertisers manage conversion and value-based bidding.
What’s staying the same. Existing campaigns using Target CPA or Target ROAS will continue to run normally without any required updates. Portfolio bid strategies also remain unchanged.
The bigger picture. The change is part of a broader push to make automated bidding more accessible, reducing setup decisions while maintaining control over performance goals.
Bottom line. Microsoft is consolidating bidding options into simpler frameworks, keeping familiar optimization controls available but moving them into a more streamlined setup experience.
Google is doubling down on the infrastructure behind “agentic commerce,” introducing new capabilities to its Universal Commerce Protocol (UCP) while making it easier for retailers to plug in.
Google says UCP — its open standard for connecting retailers to AI-powered shopping experiences — is getting new features designed to make online buying feel more like a traditional storefront, even when handled by automated agents.
What’s new. The latest updates focus on making shopping via AI agents more functional and flexible.
A new cart capability allows agents to add or save multiple products from a single retailer in one go, mirroring how a typical shopper builds a basket.
There’s also a catalog feature, giving agents access to real-time product data such as pricing, inventory and variants when needed. The goal is to make interactions more accurate and responsive.
Another addition is identity linking. This lets shoppers carry over logged-in benefits — like member pricing or free shipping — when using platforms connected through UCP, rather than losing those perks outside a retailer’s own site.
Why we care. This update accelerates the shift toward AI-driven, agent-led shopping, where platforms like Search and the Google Gemini app may choose, compare and even purchase products on users’ behalf. That makes product data quality — pricing, inventory and feeds — very important for visibility, while simplified onboarding and support from platforms like Salesforce and Stripe suggest rapid adoption, giving early movers a competitive edge.
Zoom out. UCP is designed as a modular system. Retailers and platforms can choose which capabilities to adopt, rather than implementing everything at once.
That flexibility is key as the industry experiments with how much control to hand over to AI-driven shopping experiences.
What Google is doing. Google plans to bring these capabilities into its own ecosystem, including AI-powered experiences in Search and the Google Gemini app.
The company is also working to expand adoption by lowering the barrier to entry. A simplified onboarding process inside Merchant Center is expected to roll out over the coming months.
Every time a new large language model (LLM) drops or Google tweaks an AI Overview, the SEO industry loses its mind. We develop this weird collective amnesia, scrambling to optimize for features that were actually mapped out in patent offices 10 years ago. We’re so obsessed with the now and the next that we’ve stopped looking at the blueprints.
If you want to survive 2026, stop trying to be a futurist. Instead, be an archaeologist.
To actually deliver for our clients, we need a research framework that isn’t just reactive. It has to be a balance: Look back at the foundational patents to understand the rules, and look ahead to see how AI is finally being given the muscle to enforce them.
The archaeology of SEO
There’s a massive misconception that to understand AI search, you need to be a prompt engineer or read every new research paper from OpenAI. You don’t. The logic governing today’s magic is often math that was written a decade ago.
We can’t talk about patent research without honoring the late, great Bill Slawski. For 20 years, he was the SEO industry’s archaeologist. While everyone else was arguing about keyword density, he was reading dry, technical filings to predict exactly where we’re standing right now.
History proves his method worked.
Agent Rank (2007): Slawski analyzed Google’s Agent Rank patent nearly 20 years ago. It described digital signatures connecting content to authors and assigning reputation scores. We ignored it then. Now? We call it E-E-A-T. Google finally got the computing power to actually run the numbers.
The algorithm isn’t magic. It’s math. When a new feature drops today, the engineering blueprints were likely filed between 2007 and 2016. If you want to win, go read the old stuff.
Strategy vs. mechanics: From ‘strings’ to ‘verified things’
Don’t get buried in buzzwords. Categorize your learning into two buckets: “strategy” or “mechanic.”
For years, the industry talked about moving from strings to things (entities). But in 2026, that’s just the baseline. We’ve moved from strings to verifiable things. An entity is worthless if the AI can’t prove it’s real.
Think of it like building a house:
Semantic SEO is the architecture: It’s the vision. It’s making sure the meaning of your site actually matches what the user is looking for.
Entity SEO is the bricklaying: It’s using distinct nouns to build that vision so a machine can parse it.
Verification is the mortgage: This is the part most people miss. It’s turning those entities into findable, provable facts connected to a verified human. If you aren’t connecting your content to a provable human expert, you’re just adding to the noise.
AEO vs. GEO: Let’s stop using these interchangeably
The industry often uses AEO and GEO synonymously, but they require different content structures and serve different objectives.
Answer engine optimization (AEO)
AEO is for the “direct answer.” Think Siri, Alexa, or that single snippet at the top of the page. It’s binary. It’s rooted in those 2006 fact repository patents.
You need “confidence anchors.” These are unnuanced, structured facts. The engine isn’t “thinking,” it’s fetching. If your fact isn’t provable and anchored to a verified source, the engine won’t risk a hallucination by citing you.
Generative engine optimization (GEO)
GEO is for the “synthesis.” This is Gemini or ChatGPT search explaining how something works. It was formally defined by researchers at Princeton and Georgia Tech in 2023.
You need information gain. These engines don’t just want a fact; they want to see how Concept A affects Concept B. They’re looking for relationships and unique perspectives.
In short, AEO is about being the fact. GEO is about being the authority that the AI trusts to explain those facts.
The trap of forward-projecting: Why the ‘basics’ are still the ‘floor’
There’s a danger in becoming an SEO time traveler. If you spend all your time in the patent archives or stress-testing GEO relationships, you might forget that the AI still has to reach your content.
You can have the most verified, E-E-A-T-heavy content in the world, but if your site’s technical health is a mess, the confidence anchors will never weigh in.
The persistence of technical debt
Basic SEO requirements haven’t changed. The tolerance for ignoring them has simply disappeared.
Crawl budget and efficiency: If your site is bloated with zombie pages or redirect loops, you’re wasting the crawler’s time. LLMs aren’t just looking for content. They’re looking for the cleanest path to a fact.
Core Web Vitals (CWV): More than a ranking factor, it’s a user-utility requirement. If your site doesn’t load instantly, the AI won’t recommend it as a source in a GEO overview.
The headless promise (and reality)
Many of the frustrating technical SEO issues we’ve fought for years — like bloated JavaScript and poor Largest Contentful Paint (LCP) — are finally being solved by headless/composable architectures. By decoupling the front end from the back end, we can deliver the raw, lightning-fast data that answer engines crave while maintaining a high-end experience for humans.
But headless isn’t a “get out of SEO jail free” card. It solves the speed problem, but it introduces new risks around dynamic rendering and metadata delivery.
Whether you’re on a 20-year-old CMS or a cutting-edge headless build, the today requirements are non-negotiable:
Clean URL structures: If the AI can’t deduce the hierarchy from the URL, you’ve already lost the semantic battle.
Internal linking (the nervous system): This is how you prove relationships between entities. If your internal linking is broken, your synthesis logic doesn’t exist.
Indexability: If the bot is blocked by a poorly configured robots.txt or a noindex tag left over from staging, the most brilliant “verified human” insights in the world are invisible.
You don’t get to play in the frontier of AEO and GEO until you’ve mastered the floor of technical SEO. Don’t let the shiny new objects make you forget the shovel work.
Phase 1: The archive
The Slawski deep dive: Stop reading the latest “AI is changing everything” blog posts for five minutes. Go back to the SEO by the Sea archives. Search for Slawski’s analysis of the Knowledge Graph or user context. You’ll see the 2026 roadmap hidden in plain sight.
The E-E-A-T math audit: Check your assets against Patent 2015/0331866. Are you actually providing the contribution metrics (such as verifiable reviews) that the patent specifically asks for?
Phase 2: The laboratory
The verification pivot: Audit your entities. Are they just names on a page? Link them to a verified LinkedIn profile or a Knowledge Panel. If it’s not verified, it’s not an entity, it’s just a string of text.
Schema stress testing: Don’t just use a plugin and walk away. Experiment with nesting. Try nesting a Person inside a Service as the provider. It works — I’ve seen it trigger rich results when nothing else did.
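Here’s a minimal sketch of that nesting, built in Python and printed as JSON-LD so you can validate the output before deploying. Every name and URL is a hypothetical placeholder; swap in your own verified entities, ideally ones linkable to a real profile or Knowledge Panel.

```python
import json

# Minimal sketch: nest a Person inside a Service as its provider.
# All names and URLs are hypothetical placeholders.
service_schema = {
    "@context": "https://schema.org",
    "@type": "Service",
    "name": "Technical SEO Audit",
    "serviceType": "Search engine optimization",
    "provider": {
        "@type": "Person",
        "name": "Jane Doe",
        "jobTitle": "SEO Consultant",
        "url": "https://example.com/about/jane-doe",
        # The verification link back to a provable human:
        "sameAs": ["https://www.linkedin.com/in/janedoe"],
    },
}

# Emit the payload for a <script type="application/ld+json"> tag.
print(json.dumps(service_schema, indent=2))
```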
Phase 3: The frontier
The confidence anchor audit: Look at your top pages. Does every topic have a clear definition? [Entity] is [attribute]. If you’re being vague, you’re invisible to AEO.
The synthesis test: This is a quick one. Paste your article into an LLM and ask it to explain the relationship between your two main topics using only your text. If it has to go to the web to find the answer, you haven’t built the relationship well enough for GEO.
The SEO time traveler isn’t looking back because they’re nostalgic. They’re looking back because they want the blueprint. When you realize AEO is just the modern enforcement of a 20-year-old patent and GEO is just the evolution of semantic relationships, the chaos of AI updates disappears.
Stop optimizing for strings. Start optimizing for verified facts. Give the engine a fact it can’t doubt, connected to a person it trusts, and a relationship it can’t ignore.
The future of search wasn’t written this morning — it was written years ago. You just have to be the one to actually build it.
On the evolution of fact-based search (AEO foundations)
The fact repository patent: Google LLC. (2006). Browseable Fact Repository. U.S. Patent 7,761,436. Analyzing the architecture of structured information retrieval.
Authoritative verification: Google Search Central. Fact Check Structured Data (ClaimReview). Official documentation on how engines verify claims through structured data.
On generative engine optimization (GEO foundations)
The GEO framework: Aggarwal, V., et al. (2023). GEO: Generative Engine Optimization. Princeton University, Georgia Institute of Technology, and the Allen Institute for AI. The definitive study on how LLMs cite and prioritize authoritative sources.
The Slawski legacy: Slawski, B. (Various). SEO by the Sea Archives. For historical context on Agent Rank, phrase-based indexing, and entity metrics.
Multi-location brands are investing heavily in content. But more content doesn’t automatically mean more growth.
I keep seeing the same issue. Each individual location has a blog, and they all cover the same topics. Same keywords. Same structure. Same search intent. The goal is local visibility, but the result is often internal competition and diluted authority.
Building an effective content strategy for multi-location brands requires clarity around roles. What should live at the corporate level to build authority, and what should stay local to drive relevance and conversions? Without that alignment, brands risk competing with themselves instead of winning in search.
Where the strategy breaks down
Most multi-location content issues aren’t intentional. They’re often the result of growth without a clear content framework, or simply too many cooks in the kitchen without overall governance.
Corporate teams are focused on building brand authority and scaling marketing efforts. At the same time, local teams or franchisees want content that answers their customers’ questions and lives on their own site, rather than sending users elsewhere. The assumption is simple: more content equals more visibility.
However, without clear ownership or strategic keyword targeting, overlap becomes inevitable. Similar topics are published across multiple URLs, and over time, this creates internal competition rather than building authority for the entire site.
What type of content belongs at corporate
In general, corporate should own the content that applies to the brand as a whole and build authority at scale. This includes blog content that targets broader informational queries and answers user questions, no matter where users are located.
Educational resources, industry insights, and evergreen topics perform best when consolidated in one place rather than duplicated across multiple URLs.
Core service, product, and line-of-business pages should also be centralized. These pages define what the brand offers and typically remain consistent across markets. While location pages can reference and support this foundational content, they often don’t need to be recreated at the local level unless they differ between locations.
Brand-level content, such as company history, leadership, mission, and differentiators, should also sit at the corporate level. These elements reinforce credibility and should be standardized across the organization.
What type of content belongs at the local level
Location pages should own the content that genuinely differs by market, including, in some cases, region-specific service variations.
On location pages specifically, there are additional opportunities to highlight uniqueness:
Location-specific testimonials and reviews.
Team bios.
Owner messages or stories.
Events or awards.
Community partnerships.
Descriptive content about the location or service area.
Location-specific imagery.
These elements can live on a single, well-built location page or expand into a microsite structure (pages living under a subfolder) when it makes sense for the business. Remember, the goal of these pages is to strengthen relevance, target geo-modified and local intent queries, and ultimately drive conversions.
One common concern with location pages is duplicate content. The question often becomes, how much duplicate content is acceptable? Instead of focusing on a percentage of unique versus shared content, teams should focus on what’s most useful for the user.
Typically, content that applies uniformly across locations, such as core service descriptions or brand messaging, doesn’t need to be rewritten for every page, as long as each location page adds local substance of its own.
When content production lacks clear governance, it can lead to a range of issues that affect organic visibility and crawl efficiency. Over time, this can cause inconsistent rankings, diluted authority, and missed opportunities to convert traffic into leads.
Keyword cannibalization
Keyword cannibalization occurs when multiple pages across a site target the same keywords and search intent. Instead of strengthening rankings, those pages end up competing against each other in search results, and, in some cases, may not get indexed at all.
For multi-location brands, this often happens when individual locations publish similar blog content. For example, a plumbing brand might have multiple location sites, each publishing a blog post titled “Tips to fix a leaky faucet,” creating several URLs that target the same informational query.
A more strategic approach is to consolidate that topic into a single, strong corporate-level post. This would allow the brand to serve as the authoritative source, build backlinks, answer users’ questions effectively, and strengthen the site’s overall credibility.
Google choosing the ‘wrong page’
When multiple pages on a website are targeting the same or overlapping keywords, search engines have to determine which one to rank, and sometimes it’s not the page you intended.
On a multi-location site, that may mean a local blog ranks nationally for a topic that would be better suited to live on the corporate site and build broader brand authority. While the page may be relevant to the query, it may not guide users clearly to the next step, leading to customer confusion or bounces.
It may also cause users who aren’t in-market to leave the site after absorbing the information because there’s no clear next step for them, or because they only see information about services in Austin, Texas, while they’re located in Cleveland, Ohio.
Instead, consolidating authority on a single, well-ranking page that clearly directs users to take action, whether that means finding their nearest location or submitting a form, would be more beneficial for the brand and users.
Crawl inefficiencies
Publishing multiple blog posts on the same topic, especially when the answer doesn’t vary by location, can result in duplicate or low-value content. While these pages may be regularly crawled due to internal linking, they often never make it into the index.
At scale, this can become a bigger issue, especially for sites with many locations that publish similar informational topics. For a site with dozens or hundreds of locations, having similar blog posts across those locations can create crawl bloat, where search engines may spend time and resources crawling repetitive or low-impact URLs rather than more high-impact pages.
Diluted link equity
When similar content exists across multiple URLs, backlinks and internal links are split among pages instead of consolidating authority on a single strong page. Rather than building momentum around a single piece of content, link equity is distributed across competing versions.
For multi-location brands, this can weaken overall ranking potential. Consolidating authoritative content at the corporate level allows links, authority, and trust signals to compound, strengthening the entire domain and supporting location pages more effectively.
Creating a plan: How corporate and local can work together
After defining roles, move to governance. Multi-location brands need a shared plan for ownership, keyword targeting, and team collaboration.
Before new content gets created, the right questions need to be asked, such as:
Is this topic location- or region-specific, or is it broader for any consumer?
Would publishing this for only one location add value to those specific customers?
Would publishing it across multiple locations make sense?
Who should own the keyword? The brand or a specific location?
Who should the information come from?
Clear keyword mapping and a centralized content calendar can prevent overlap before it starts. When teams understand their roles, content supports overall growth instead of competing internally.
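A shared keyword map doesn’t need to be fancy. Here’s a minimal sketch of overlap detection across team content plans; the teams and keywords are hypothetical stand-ins for a centralized calendar export.

```python
from itertools import combinations

# Minimal sketch: flag target-keyword overlap across team content plans
# before anything is published. Plans are hypothetical placeholders for
# a shared keyword map or centralized content calendar export.
plans = {
    "corporate": {"fix leaky faucet", "water heater maintenance"},
    "austin": {"plumber austin tx", "fix leaky faucet"},
    "cleveland": {"emergency plumber cleveland", "water heater maintenance"},
}

# Compare every pair of teams and report shared targets.
for (team_a, kws_a), (team_b, kws_b) in combinations(plans.items(), 2):
    overlap = kws_a & kws_b
    if overlap:
        print(f"{team_a} and {team_b} both target: {sorted(overlap)}")
```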
Content collaboration also creates opportunities to strengthen E-E-A-T signals for the site as a whole. Corporate can cover broader educational topics while drawing on real expertise and experience from local teams.
For example, a roofing company might want to write a post about how often homeowners should replace their roofs. The topic is universal. However, the answer could vary by region due to factors such as the material used in that area or the weather.
The blog could include quotes from franchise owners or team members across different regions to provide insights into regional factors, such as heat and humidity in the South versus harsh winter weather in the North.
This would allow corporate to own the topic and give locations the opportunity to provide their unique expertise and experiences. Plus, linking to relevant location pages can reinforce context and create stronger internal linking throughout the site.
Another option would be to create a local hub within the blog.
Search may be changing, but many of the fundamentals remain the same. High-quality, well-structured content that genuinely helps users is what earns visibility.
With Google’s AI Overviews and large language models pulling from authoritative sources, content that clearly answers questions and reflects real expertise is even more valuable. Pages created solely to scale across multiple locations — without adding unique value — are unlikely to perform consistently, and can even hurt a site in the long run.
Content shouldn’t be treated as a volume game. More pages alone won’t drive growth. What matters is planning, ownership, and alignment.
When corporate and local teams build a shared content strategy, it helps turn content into a growth driver rather than just more pages on a site.
The Visibility Governance Maturity Model (VGMM) is about something most SEO programs lack: clear ownership, documented processes, and decision rights that keep your work from being undone by teams who don’t understand it.
So how do you actually score that?
Each domain uses a bank of governance questions tailored to the business. They’re not about how SEO is executed. They’re not about tools. And they’re not an audit.
What VGMM questions are designed to reveal
VGMM questions go to managers and the C-suite — the people who should know about governance but often don’t. Meanwhile, you (the SEO practitioner) actually know whether standards are documented, whether QA is in place, and whether processes exist.
VGMM diagnoses organizations where SEO knowledge lives in practitioners’ heads, rather than in documented, governed processes. If VGMM surveyed only practitioners, it would measure whether you know what to do (you do). But governance maturity measures whether the organization can sustain capability when you’re on vacation, when you get promoted, or when you leave.
Questions go to managers because governance gaps show up as:
“I don’t know the answer to that.”
“I’d have to ask Sarah.”
“We used to have a process, but it’s not enforced anymore.”
“Each team does it differently.”
“That’s documented somewhere, I think?”
When managers can’t answer governance questions, that’s the signal. It means processes aren’t institutionalized.
Single point of failure (SPOF) questions can cap your organization at Level 2 maturity until they’re resolved.
Here are some examples of SPOF questions:
“If [key person] left tomorrow, could the organization maintain SEO standards without them?”
“Is SEO knowledge documented in a way that’s transferable to new team members?”
“Are there at least two people who understand how [critical system] works?”
Right now, you’re probably the SPOF. You’re the person who knows where all the bodies are buried, how the redirects work, why that weird canonical setup exists, and what breaks if someone changes X. That feels like job security. It’s actually a job prison.
When VGMM identifies you as an SPOF:
Leadership realizes your knowledge needs to be documented.
You get resources to create documentation.
You get approval to train other people.
You get your own tools, training, and conference budgets. (Yay!)
Your expertise becomes institutional, not personal.
You can take a vacation without disasters.
The organization can’t move past Level 2 until SPOF conditions are cleared. This forces leadership to address hero-dependency.
How domain scores become the VGMM score
Each domain model (SEOGMM, CGMM, WPMM, etc.) produces a maturity score based on its own question bank. Here’s how they roll up:
Step 1: Domain assessment
Each domain asks 30-60 governance questions tailored to that area. Questions are behavior-based, not opinion-based:
“Do you think SEO standards are important?” (opinion)
“Are SEO standards documented and approved by [role]?” (behavior)
Step 2: Weighted scoring
Answers are weighted based on impact. Not all governance failures are equal:
Missing documentation = lower weight.
No ownership for critical decisions = higher weight.
SPOF identified = can cap maturity level regardless of other scores.
Step 3: SPOF constraint
If SPOF conditions exist, the domain score maxes out at Level 2 (emerging) even if other governance is strong. You can’t be structured (Level 3) when capability depends on one person.
Step 4: Domain aggregation
Domain scores average into the overall VGMM score with adjusted weighting based on:
Your industry (ecommerce weights performance governance higher).
Your business model (SaaS weights content governance higher).
Your complexity (international operations weight workflow governance higher).
Step 5: Final maturity level
The overall VGMM score maps to maturity levels (a worked sketch of the full roll-up follows the list):
Level 1 (0-30%): Ad hoc/unmanaged
Level 2 (31-50%): Aware/emerging
Level 3 (51-70%): Structured/defined
Level 4 (71-90%): Integrated/coordinated
Level 5 (91-100%): Optimized/sustained
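To make the roll-up concrete, here is a minimal sketch of the scoring mechanics in Python. The domains, question weights, and answers are hypothetical placeholders, not VGMM’s published values; only the logic (weighted questions, the SPOF cap at Level 2, and weighted domain aggregation) follows the steps above.

```python
# Illustrative sketch of the VGMM roll-up (Steps 1-5). Domains, weights,
# and answers below are hypothetical; only the mechanics follow the model.

LEVELS = [(0.30, 1), (0.50, 2), (0.70, 3), (0.90, 4), (1.00, 5)]
SPOF_CAP = 0.50  # score ceiling equivalent to Level 2

def domain_score(answers, has_spof):
    """answers: list of (passed, weight) pairs from one question bank."""
    score = sum(w for ok, w in answers if ok) / sum(w for _, w in answers)
    return min(score, SPOF_CAP) if has_spof else score

def overall_score(domains, any_spof):
    """domains: {name: (score, org-specific weight)} -- see Step 4."""
    total = sum(w for _, w in domains.values())
    score = sum(s * w for s, w in domains.values()) / total
    return min(score, SPOF_CAP) if any_spof else score

def level(score):
    return next(lvl for cutoff, lvl in LEVELS if score <= cutoff)

# Strong SEO governance, but one person is a single point of failure:
seogmm = domain_score([(True, 3), (True, 2), (False, 1)], has_spof=True)
cgmm = domain_score([(True, 1), (False, 2), (True, 2)], has_spof=False)
overall = overall_score({"SEOGMM": (seogmm, 0.6), "CGMM": (cgmm, 0.4)},
                        any_spof=True)
print(f"Overall: {overall:.0%} -> Level {level(overall)}")
```

Run as-is, the example lands at 50%, Level 2: reasonable domain scores, but the unresolved SPOF holds the organization back, exactly as described above.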
Why questions change between models
Domain questions adapt to the maturity model being used.
SEOGMM questions focus on:
Technical SEO governance (schema, redirects, crawl management).
Content optimization standards.
Performance monitoring and alerts.
LVMM questions focus on:
Location data governance across distributed sites.
Google Business Profile management and ownership.
Review response workflows and accountability.
NAP (Name, Address, Phone) consistency.
IVMM questions focus on:
Market-specific SEO governance across countries.
Translation workflow and quality controls.
Local compliance and regulatory requirements.
Cross-market coordination and escalation.
Same governance principles, different operational contexts. An ecommerce company doesn’t need LVMM. A restaurant chain with 500 locations absolutely does.
VGMM scores are internal quality metrics, not competitive benchmarks. A 62% score doesn’t mean you’re ahead of another organization at 58%. Here’s why: domain weighting is tailored to each business, so the same raw score reflects different things. An organization focused on technical debt, for example, weights WPMM higher.
The only meaningful comparison is your organization against itself over time:
Q1 2025: 42% (Level 2)
Q3 2025: 58% (Level 3) ← Progress
Q1 2026: 61% (Level 3) ← Sustained improvement
Use VGMM to answer:
Are we improving quarter over quarter?
Which domains are holding us back?
Where should we invest in governance?
Are SPOF conditions getting resolved?
Don’t use VGMM to answer:
Are we better than Competitor X?
What’s the industry average score?
Should we publicize our score?
What VGMM scoring means for you
As an SEO practitioner, this scoring approach protects you.
You’re not being blamed
When governance assessment reveals gaps, managers are answering questions about organizational capability. They’re not evaluating your individual performance. The assessment asks, “Does the organization have documented standards?” not “Is the SEO person doing a good job?”
SPOF detection is your escape hatch
When SPOF questions flag that the organization depends entirely on you, leadership sees it as an organizational risk — not as proof you’re valuable. They can’t move to Level 3 until they fix it, which means resources for documentation, training, and knowledge transfer.
Weighted scoring highlights systemic issues
When content governance scores low, but SEO governance scores high, it shows other domains aren’t holding up their end. This redirects leadership attention to where governance actually needs strengthening.
Progress tracking shows your impact
When your organization moves from Level 2 to Level 3 over two quarters, you have concrete evidence that governance investments are working. This isn’t “traffic went up 15%,” it’s “organizational capability improved measurably.”
The difference between hero work and sustainable SEO
VGMM’s scoring approach is designed to:
Diagnose organizational capability gaps without blaming individuals.
Make your implicit knowledge visible as institutional risk.
Force leadership to address hero-dependency.
Track progress in ways that make governance investments defensible to finance.
The assessment focuses on whether the organization can sustain your work without you. That’s the difference between being an indispensable hero (exhausting) and being a strategic professional whose expertise is institutionalized (sustainable).
As conversational search gains traction, the bigger question isn’t who has more users, but who can monetize them.
Google enters this phase with a massive advantage: mature ad systems, deep advertiser adoption, and decades of optimization. Early AI Mode signals point to a measured rollout.
The panic phase is over
After a period of panic within the company, Google’s built-in advantages, coupled with massive capital expenditures, have helped it regain ground on category leader ChatGPT in LLM search.
The dust will continue to settle, and analysts have different takes. But one signal stands out: in a major validation, Apple has chosen Google to power its own AI.
It was perhaps premature to assume Google Search would simply lose to ChatGPT on product. That was the consensus at the start of 2025. Google shares fell about 30% from peak to trough before rallying 130%. Today, the company is valued at roughly $3.6 trillion, just behind Apple.
Why did Google’s recent progress in LLM conversational queries — in the form of AI Overviews and AI Mode — have such a large impact on the company’s valuation in such a short time?
Ultimately, it comes down to visibility of financial projections. In a company with so much to defend, Google’s CFO and leadership team needed to determine whether shifts in user behavior — in how search works and how it makes money — would weaken the business model or reinforce it.
Net-net: Google before the shift: huge. Google after the shift: ditto.
Google stock price. The market changed its mind.
Visibility — in the sense of financial planning, not in the SERP — means a great deal to Google’s advertisers, too.
A large proportion of your annual digital advertising budget is likely allocated to Google. You also still care about how you appear in organic results and, increasingly, how your company appears in AI Mode, ChatGPT, Claude, and similar environments.
“I’m fine with 30% less of my business coming in from Google, and figuring out lots of complicated ways to replace it,” … said no advertiser ever.
How monetization will play out in AI search
The competition between monetization models in LLM conversations — especially between the two leaders, ChatGPT and Google’s AI Mode — will play out differently from the broader race for overall user share. There are several moving parts to keep an eye on:
Overall assumptions about ad formats and “how to monetize.”
Pace of rollout.
Whether users and public opinion recoil at ads.
Advertiser success rates based on performance measurement.
Advertiser adoption, including adoption by the agency ecosystem.
Platform targeting options.
Advantages of fuller-funnel ad journeys and data collection.
Privacy, safety, policies, and enforcement.
An all-encompassing consumer brand vs. a better mousetrap.
And a few other factors.
Right now, OpenAI is at a critical moment because it’s still so early in its monetization. It’s still testing an inefficient auction model confined to a small group of large advertisers. (Some ads from the pilot have already been spotted in the wild.) It may be some time before more mature tools and reporting emerge.
Most recently, OpenAI brought ad platform Criteo (often used for retargeting) on as a partner. The Trade Desk, the world’s largest non-Google DSP for programmatic, is also in the mix. Some observers have speculated about deeper partnerships or even an acquisition of The Trade Desk, though that seems unlikely.
In any case, outsourcing inventory to programmatic partners is a pragmatic step in OpenAI’s monetization strategy. It also underscores how early the company is in building a scalable ads business.
Despite a broad rollout with partners, OpenAI is stepping back from “checkout in chat” integrations after limited adoption from both merchants and consumers. When your primary competitor has a 25-year head start, the learning curve is steep.
So does it make sense now for advertisers to lean into evolving Google user behavior and figure out how to ride the wave?
AI Mode considerations for Google advertisers
Expect the transition to more AI Mode sessions — and eventual monetization — to be smoother than initially anticipated. If you’re an advertiser, AI Mode need not equal panic mode.
How do these LLM sessions look to users? Obvious to you and me, but likely less so for many searchers.
Depending on how you search, AI Overviews may appear above other results on the SERP. That’s becoming a natural extension of Google Search sessions.
But that’s not the real conversational layer. The LLM workflow happens in AI Mode. How often users go there remains to be seen.
It’s improving quickly. Unlike ChatGPT, Google AI Mode downplays how it finds information, whether it is “reasoning,” and which model is being used. The experience feels relatively seamless.
It’s still early, but ads are already appearing in some cases. The key question is how this evolves, and what advertisers should be paying attention to.
The key areas to watch are:
Extent of monetization.
Different ways to monetize.
Advertiser control and campaign types.
Reporting.
Funnel stage.
1. Extent of monetization
AI Mode is in a popularity contest and a price war with ChatGPT. Google will likely try to grind down competitors in LLM conversations by monetizing lightly and gradually. Perplexity and Anthropic, for their part, are completely shunning ads.
An ad-free AI Mode results page. We’re going to see a lot of this.
The result will be less ad volume in this space than you might expect. It may also increase the commercial value of organic visibility in LLM-driven results, leading to renewed focus on content and reputation fundamentals.
Forget ad campaign FOMO, then. It will be interesting to place ads alongside AI-driven sessions, but don’t break the bank. Implement, watch, and learn at your own pace.
2. Different ways to monetize
Experienced advertisers know there are a few ad formats to consider in any situation like this. The main ones: text ads triggered by keywords or similar signals, rendered in a reasonably native format, and feed-based Shopping-type ads.
Another way to make money is to allow direct checkout — to take a cut of transactions. As noted above, OpenAI is backtracking on this approach, though not eliminating it entirely. How important it will be for Google merchants (and Google itself) remains to be seen.
Google’s experience likely allows it, again, to play the long game, study the data, and bring partners and advertisers along for the ride, on an impressive scale.
Recently, Loblaw inked an integration deal with OpenAI. A week later, it made a similar deal with Google.
3. Advertiser control and campaign types
In terms of execution, we’ll want to be on the lookout for which kinds of campaign types in Google Ads make your ads eligible to show in AI Mode.
You can learn everything you want about how ads show in AI Overviews in Google’s help files. Unsurprisingly, text and Shopping ads from Performance Max, standard Shopping, and keyword-based campaigns are eligible to show in AI Overviews.
Google says less about AI Mode in its documentation, for now.
Our agency recently received a Google deck outlining a “Shopping Expansion” beta. There’s little mention of AI Mode, though one table, in a subtle way, refers to both AI Overviews and AI Mode.
My expectation is that Google will gradually ease users into AI Mode and test ads sparingly. Even if ads appear in a small share of sessions — say 0.5% — that will still generate significant data and feedback.
Advertiser control will likely be even more limited than it is today. In the world of feed-based ads, you have some levers, but the massive machine learning that controls matching is held by Google and the real-world behavioral ecosystem.
To a lesser extent, that’s also how keyword matching works. Micromanagers won’t be too comfortable, but the impact of the ads could still be powerful, especially with data-driven attribution.
Here’s hoping new signals, new reporting breakouts, and new levers become available to advertisers. Namely: audiences including cool personas; demographics; novel larger buckets around life stages; novel characteristics we haven’t even dreamt of yet, such as their language ability level or aspects of how they interact with the LLM.
4. Reporting
The real question is: will reporting be transparent and insightful? We need to at least be able to look at all available metrics for ads that showed in AI Mode specifically. Time will tell.
Microsoft seems to be the first out of the gate with AI-conversation-specific reporting breakouts. We expect no less from Google and are impatiently awaiting further guidance on this front — primarily on what kind of reporting will be directly available in the Google Ads interface.
It would be easy for the casual observer to blindly believe that somehow, you’ll never be eligible to show up in AI Mode or AI Overviews unless you adopt certain Google Ads campaign types. There’s a lot of rhetoric around AI Max.
I’d advise advertisers to do their own research and run their campaigns to suit themselves. Hint: AI Max isn’t the only magical gateway to AI-using users and might not even be a good or appropriate one for many advertisers.
Once reporting is beefed up, you’ll want to know how well the AI-specific inventory is doing, however your campaigns wind up serving there.
5. Funnel stage
But that leads us to a wrinkle. Although ads appearing astride AI Mode conversations could certainly be low-funnel (think Shopping ads in high-intent situations), much of the opportunity here is thematic. Your company may now enjoy new opportunities to associate itself with higher-order thinking, new audience definitions, and new intent characteristics.
This opportunity probably comes to your door dressed up as “lower ROAS.” It may be tempting, therefore, to shy away.
That’s a mistake.
Why?
As with the shift to mobile phones, that’s where the consumer will be. Ugly early numbers shouldn’t blind us to the imperatives associated with scale.
Midsized to larger advertisers should step back and reimagine how they approach growth and market impact. There are meaningful opportunities for companies to align more closely with their audiences.
This has little to do with AI Max, and everything to do with how LLM-driven research works. Compare how publishers have traditionally assembled consumer personas — often from fragmented behavioral signals — with the much richer context that can emerge from ongoing interactions with an LLM.
A net shift up-funnel could follow. Imagine a world where a significant share of Google search sessions takes place within conversational experiences. Your ads will need to show up there, where appropriate. If that happens, your funnel — and your competitors’ — will move with it.
Google’s AI Overviews now appear on 14% of shopping queries, up more than sixfold from 2.1% in November 2025, according to a new Visibility Labs analysis.
Ecommerce brands have been mostly unaffected by AI-driven click loss in Search. That seems to be changing.
Why we care. As Google’s AI Overviews expand across product searches, ecommerce brands face a growing risk of losing visibility and clicks before shoppers reach standard organic or Shopping listings.
The details. The analysis targeted product-intent keywords tied to results with a Shopping box, paid or organic — terms like “weighted blanket,” “mushroom coffee,” “protein powder,” and “blue T-shirts.”
That produced 20,900,323 shopping keywords.
Of those, 2,919,229 triggered an AI Overview — 14.0% penetration.
What they’re saying. Report author Jeff Oxford, founder and CEO of Visibility Labs, concluded:
“Focusing on AI SEO is no longer a luxury, it’s becoming a necessity. Ecommerce sites need to think beyond traditional SEO and start incorporating AI SEO best practices into their search optimization strategy.”
Small publishers are seeing sharp traffic declines from AI search experiences, according to new data from thousands of global sites using Chartbeat analytics.
The details. Publishers with 1,000 to 10,000 daily pageviews lost 60% of search referral traffic over two years, Chartbeat found.
Mid-sized sites with 10,000 to 100,000 daily pageviews lost 47%.
Large publishers with more than 100,000 daily pageviews were down 22%.
Reality check. AI referrals aren’t replacing lost search traffic.
Google Search pageviews fell 34% year over year.
Google Discover dropped 15%.
ChatGPT referrals rose 200% but still account for less than 1% of total traffic.
Yes, but. Traffic is shifting, not disappearing. Total weekly pageviews across publishers fell just 6% from 2024 to 2025, a typical swing tied partly to the news cycle. Search is shrinking as a share of traffic, while direct, internal, and messaging channels are growing.
Why we care. SEO has long been the growth engine for smaller sites. That’s no longer true. If you don’t have a strong brand, direct audience relationships, repeat visitors, or differentiated value, you face the biggest risk as search referrals decline.
Google is cleaning up outdated requirements in Google Ads, reflecting how legacy ad formats have evolved into newer, more automated products.
What’s happening. As of March 17th, Google discontinued multiple ad format policies, including those related to form ads, image quality, responsive ads, and text ads.
What changed. These requirements are being removed because the original formats have transitioned into newer campaign types and ad experiences, making the old policy frameworks no longer relevant.
Why we care. This update simplifies the policy landscape in Google Ads, reducing confusion around outdated requirements tied to legacy formats.
What advertisers should do. Advertisers are now expected to rely on current Google Ads policies and ad format requirements, which govern newer formats like automated and AI-driven campaigns.
The bottom line. By removing legacy requirements, Google is streamlining policies in Google Ads — signaling a continued move toward fewer, more unified standards for modern ad formats.
Visibility is no longer just about ranking. It depends on whether your content is discovered, evaluated, and selected in AI-driven search experiences.
We’re kicking off our new monthly SMX Now webinar series on April 1 at 1 p.m. ET with iPullRank’s Zach Chahalis, Patrick Schofield, and Garrett Sussman on how you must adapt.
The session introduces iPullRank’s Relevance Engineering (r19g) framework for executing Generative Engine Optimization (GEO) through an omnichannel content strategy. You’ll learn how AI search uses query fan-outs to discover and select sources, and how to structure content so it’s retrieved, surfaced, and cited.
It also emphasizes that GEO success isn’t universal. It requires testing, tailored strategies, and a three-tier measurement model spanning discovery, selection, and citation impact.
Google is expanding how inventory appears in Google Ads Search campaigns, giving automotive advertisers a more visual, product-rich format directly in text ads.
What’s happening. Google Ads now supports vehicle feed integration on Search ads, allowing advertisers to pull inventory from Google Merchant Center and enhance existing text ads with details like make, model, price, and images.
How it works. Vehicle listings appear as clickable assets alongside standard Search ads, either below or beside the main text. Users can click through to a specific vehicle detail page or a broader landing page, depending on the interaction.
Why we care. This update lets automotive advertisers bring real inventory directly into Search ads, making them more engaging and useful for high-intent users. It also means richer visibility without extra campaign setup, while potentially driving more qualified leads by showing key details upfront within Google Search.
Why it’s notable. The update brings Shopping-style visual elements into Search campaigns, helping advertisers showcase real inventory without needing separate campaign types.
For advertisers. Key benefits include a more engaging ad experience, the potential for higher-intent leads, and the ability to use existing Merchant Center feeds without duplicating setup.
Measurement. Performance can be tracked using the “Click type” segment, allowing advertisers to understand how users interact with vehicle listings versus standard ad components.
Matching. Google’s automation determines which vehicles appear based on user intent and query context, continuing the shift toward less manual control and more AI-driven ad assembly.
The bottom line. Vehicle feeds in Search campaigns give automotive advertisers a way to blend inventory with intent-driven queries, turning standard text ads into more dynamic, product-led experiences within Google Search.
When technical issues hold your SEO program back, progress stalls. Yet technical SEO remains a top priority for leading SEOs and Google, and a key factor correlated with rankings in Backlinko’s 2026 Google ranking factors report.
One of the biggest hurdles for in-house SEO programs is the lack of resources to implement changes to the website.
Up to 67% of respondents cite non-SEO dev tasks as the biggest reason technical SEO changes can’t be made, according to Aira’s State of Technical SEO Report.
This is costing businesses an additional $35.9 million in potential revenue each year, seoClarity estimates.
When you can’t do everything, focus on the technical SEO tasks that drive the most impact. Here are the priorities to start with.
Where to focus first: Prioritization techniques
Most enterprise SEO teams want to fix issues that impact the most pages, revenue, and user journeys. Aira’s report ranks in-house technical SEO changes in this order:
Quick wins (big impact, little effort).
Expected impact on KPIs.
Impact on users.
Best practices based on Google guidelines.
Industry changes and algorithm updates.
Still, with millions of pages, it’s difficult to know where to focus. Here are some tips:
To limit what you work on, start with small groups of keywords or specific product areas.
Fix any barriers to ranking.
Ensure all major pages are indexed.
Consolidate, improve, or remove low-quality pages that don’t need to be indexed.
Starting with a technical SEO audit lets you identify the exact technical issues you need to resolve, hopefully with a prioritized list of tasks.
If asked for the top foundational technical SEO fixes, I’d point to the following:
1. Site architecture
A well-organized site creates the foundation for your SEO program to run more smoothly. Site structure impacts key SEO outcomes, including crawling, indexing, and user experience, and getting this piece right really sets the stage for a site primed for search.
Fundamentally, site architecture (what I call “SEO siloing”) helps you organize a site around how people search. The goal is to have your content and navigation hierarchy mirror the keyword themes/queries people use and to couple that with content that answers intent across the customer journey.
For example, a “power tools” section of a large ecommerce site might be siloed with a main power tools hub page, subcategory pages for drills, saws, and sanders beneath it, and product pages plus supporting buying guides under each subcategory, with internal links kept largely within the theme.
The internal linking piece of siloing reinforces topical authority and funnels strength toward your primary landing pages. This alignment between search behavior, content themes, and site structure turns your site into a ranking asset.
Here are common site architecture issues to look for (a crawl-depth sketch for catching the first two follows the list):
Important pages that are buried deep in the site (four-plus clicks from the homepage).
Orphaned or weakly linked high-value pages.
Any content topics that lack a clear thematic hub or silo.
Multiple pages competing for the same core query.
Lack of internal linking to connect and reinforce key content sections/silos.
Thin or fragmented supporting pages.
Taxonomy structures (like tags, archives, categories) that are competing with core pages.
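One way to catch buried and orphaned pages at scale is a breadth-first pass over your internal link graph. A minimal sketch, assuming you’ve already exported an internal-link adjacency map from a crawler; the URLs here are hypothetical.

```python
from collections import deque

# Minimal sketch: compute click depth from the homepage over an internal
# link graph, then flag buried (4+ clicks) and orphaned pages. The graph
# is a hypothetical stand-in for a crawl export (URL -> linked URLs).
links = {
    "/": ["/power-tools/", "/blog/"],
    "/power-tools/": ["/power-tools/drills/", "/power-tools/saws/"],
    "/power-tools/drills/": ["/power-tools/drills/buying-guide/"],
    "/power-tools/drills/buying-guide/": [],
    "/power-tools/saws/": [],
    "/blog/": [],
    "/old-landing-page/": [],  # nothing links here -> orphaned
}

depth = {"/": 0}
queue = deque(["/"])
while queue:  # breadth-first search from the homepage
    page = queue.popleft()
    for target in links.get(page, []):
        if target not in depth:
            depth[target] = depth[page] + 1
            queue.append(target)

buried = [p for p, d in depth.items() if d >= 4]
orphaned = [p for p in links if p not in depth]
print("buried:", buried)
print("orphaned:", orphaned)  # ['/old-landing-page/']
```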
A full site architecture overhaul is difficult in enterprise environments, so focus on the tasks you can reasonably get done. Consider these three action items to help make an impact with potentially the least resistance:
Strengthen internal linking to priority content
Internal linking can be deployed without changing the core site architecture/URL structure, so this is usually a faster win. Look to fix:
Revenue-driving pages that are not positioned as thematic hubs.
Topical pages that aren’t interlinked but support the customer journey.
Relevant blog content that doesn’t link back to specific topical hubs or service/product pages.
High-authority pages that are not linking to supporting pages.
Cross-linking between unrelated themes that may dilute topical focus.
Consolidate topics before rebuilding the structure
Instead of reorganizing the entire taxonomy, you can look for things like multiple pages that are targeting the same primary keyword/queries, thin variations of the same topic across different URLs and blog content that may be competing with key pages like products/services.
Here, you can merge overlapping content, choose and reposition one page as the thematic hub and redirect URLs as needed.
Elevate key pages closer to the top
When resources are tight or politics get in the way, you can reinforce the site architecture by ensuring that:
Priority pages are within two to three clicks of the homepage.
You add contextual links to reinforce thematic hubs/silos by implementing things like “related resources.”
2. Crawling and indexing
At the enterprise level, crawling and indexing issues are almost guaranteed. But which issues deserve immediate attention?
Fix indexing issues first
This step may feel obvious, but it’s often overlooked. When search engines aren’t indexing the pages that matter most, this step becomes a No. 1 priority on the “fix” list.
But with so many URLs on an enterprise site, it can be overwhelming to review the Google Search Console Page indexing report. So instead, you can start by filtering the Page indexing report by your XML sitemap. Compare the URLs listed in the sitemap with what Google has indexed.
Any sitemap URLs that are not indexed should be investigated first. Determine why they’re excluded and fix those issues before expanding your analysis.
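Here’s a minimal sketch of that comparison, assuming you’ve downloaded your sitemap and exported the indexed-URL list from the Page indexing report. The file names and the CSV column header are assumptions; adjust them to your export.

```python
import csv
import xml.etree.ElementTree as ET

# Minimal sketch: diff an XML sitemap against an export of indexed URLs
# (e.g. a CSV downloaded from the GSC Page indexing report).
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

tree = ET.parse("sitemap.xml")
sitemap_urls = {loc.text.strip() for loc in tree.findall(".//sm:loc", NS)}

with open("gsc_indexed_pages.csv", newline="") as f:
    indexed_urls = {row["URL"].strip() for row in csv.DictReader(f)}

missing = sorted(sitemap_urls - indexed_urls)
print(f"{len(missing)} of {len(sitemap_urls)} sitemap URLs not indexed:")
for url in missing[:25]:  # triage the first batch
    print(" ", url)
```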
During your page reviews, you can do a quick triage (the sketch after this list automates several of these checks) by checking:
Robots.txt rules that may be blocking critical sections.
Noindex tags that may have been accidentally deployed.
Canonical tags that might be pointing to the wrong versions.
Any rendering issues preventing search engines from seeing content.
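A minimal triage sketch for a single URL follows, assuming the third-party requests package. It checks the response status, the X-Robots-Tag header, the meta robots tag, and the canonical target. The regexes are a crude stand-in; a real audit should use an HTML parser and a crawler that renders JavaScript.

```python
import re
import requests

# Minimal sketch: indexability triage for one URL. Regex matching is a
# crude stand-in for a proper HTML parser and rendering crawler.
def triage(url):
    resp = requests.get(url, timeout=10)
    html = resp.text
    meta = re.search(r'<meta[^>]+name=["\']robots["\'][^>]*>', html, re.I)
    canonical = re.search(
        r'<link[^>]+rel=["\']canonical["\'][^>]*href=["\']([^"\']+)',
        html, re.I)
    return {
        "status": resp.status_code,
        "x-robots-tag": resp.headers.get("X-Robots-Tag", "absent"),
        "meta robots": meta.group(0) if meta else "absent",
        "canonical": canonical.group(1) if canonical else "absent",
    }

for field, value in triage("https://example.com/key-page/").items():
    print(f"{field}: {value}")
```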
Eliminate signal dilution
It’s not uncommon for pages across a large site to send mixed signals to search engines. In enterprise environments, this often happens at the template level, where one structural issue can weaken countless URLs. Common culprits include:
Canonical tags that conflict with internal links or XML sitemaps.
Near-duplicate pages targeting the same primary query.
Redirect chains that waste hops before resolving (a quick detection sketch follows this list).
Important pages rendering with more than one URL.
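Here’s a minimal redirect-chain check, again assuming the requests package; the URLs are hypothetical placeholders for a list pulled from your crawl.

```python
import requests

# Minimal sketch: count redirect hops for a list of internal URLs and
# flag chains (more than one hop before the final response).
def redirect_chain(url):
    resp = requests.get(url, allow_redirects=True, timeout=10)
    # resp.history holds each intermediate redirect response in order.
    return [r.url for r in resp.history] + [resp.url]

for url in ["https://example.com/old-page", "https://example.com/promo"]:
    chain = redirect_chain(url)
    hops = len(chain) - 1
    if hops > 1:
        print(f"{hops}-hop chain: " + " -> ".join(chain))
```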
Reduce crawl waste
For an enterprise site, crawl budget is a strategic resource. You want to avoid having crawlers spend time on pages that don’t matter. To see if this is happening, check for some common culprits (the log-analysis sketch after this list can help quantify them):
Excess crawl activity on faceted navigation and parameter URLs (filters, sorting, pagination variations).
Internal search results being indexed.
Thin or competing archive structures (tag, category, or date archives).
Out-of-stock or low-value product pages cluttering the index.
Thin, auto-generated, or outdated location pages.
Staging or test environments accidentally being indexed.
Legacy or irrelevant content that’s still crawlable.
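One way to quantify crawl waste is to tally Googlebot hits on suspect URL patterns straight from your access logs. A minimal sketch: the log path, log format, and patterns are assumptions to tune to your site, and note that matching the user-agent string alone doesn’t verify genuine Googlebot traffic.

```python
import re
from collections import Counter

# Minimal sketch: tally Googlebot hits on likely crawl-waste URL
# patterns from a combined-format access log. Path, format, and
# patterns are assumptions to adapt to your site's URL conventions.
WASTE_PATTERNS = {
    "faceted/parameter": re.compile(r"\?(?:sort|filter|page|color)="),
    "internal search": re.compile(r"^/search"),
    "tag archives": re.compile(r"^/tag/"),
}
request_re = re.compile(r'"(?:GET|HEAD) (\S+)')

hits = Counter()
with open("access.log") as log:
    for line in log:
        if "Googlebot" not in line:  # crude UA check, not verification
            continue
        m = request_re.search(line)
        if not m:
            continue
        path = m.group(1)
        for label, pattern in WASTE_PATTERNS.items():
            if pattern.search(path):
                hits[label] += 1

for label, count in hits.most_common():
    print(f"{label}: {count} Googlebot requests")
```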
3. Site performance
If your site is hard to use, it wastes the organic traffic that you’ve worked hard to get. Yelp and Pinterest are two examples of organizations that invested in site performance and saw revenue and engagement lifts.
Yelp reported a 15% increase in conversion rate after improving page performance and reducing load times.
Pinterest reported that after launching its Progressive Web App, time spent increased 40%, user-generated ad revenue rose 44%, and core engagements grew 60%.
What requests should you prioritize?
Fix backend bottlenecks first
When the backend is performing poorly, it impacts everything from site speed and crawl efficiency to user experience metrics. Check for problems like:
High Time to First Byte (TTFB) on key templates (see the measurement sketch after this list).
Sluggish performance on high-traffic pages.
Heavy CMS processing or middleware overhead that delays page generation.
Slow database queries that lengthen the server response time.
Some action items that can address these issues include (a quick TTFB spot check follows this list):
Implementing full-page or edge caching for high-traffic templates.
Optimizing database queries and reducing CMS processing overhead on dynamic pages.
Upgrading hosting or moving to a scalable cloud infrastructure for traffic spikes.
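To find the high-TTFB templates worth escalating, a quick spot check is often enough. A minimal sketch: with stream=True, the requests library stops at the response headers, so resp.elapsed roughly approximates time to first byte. All URLs below are placeholders.

```python
import requests

# Hypothetical key templates; swap in your own high-traffic URLs.
TEMPLATES = {
    "home": "https://www.example.com/",
    "category": "https://www.example.com/category/widgets/",
    "product": "https://www.example.com/product/widget-123/",
}

for name, url in TEMPLATES.items():
    # stream=True stops requests from downloading the body, so
    # resp.elapsed approximates time-to-first-byte, not full page load.
    resp = requests.get(url, stream=True, timeout=15)
    ttfb_ms = resp.elapsed.total_seconds() * 1000
    print(f"{name:10s} {resp.status_code} TTFB ~{ttfb_ms:.0f} ms")
    resp.close()
```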
Reduce JavaScript and rendering bottlenecks
Enterprise sites face more navigation issues — especially with filters or JavaScript — and accumulate script bloat. Tag managers, personalization engines, testing platforms, and third-party widgets stack up over time.
Unfortunately, no one wants to remove them because no one is sure they’re still needed. Reducing that execution overhead can improve interactivity and stability without redesigning the site.
Here are some problems to look for:
Large JavaScript bundles that are loading sitewide.
Third-party scripts that are blocking rendering.
Poor Interaction to Next Paint (INP) scores.
Core content that’s dependent on client-side rendering.
Some high-impact fixes to consider (markup examples follow this list):
Audit and remove unused or redundant third-party scripts.
Defer or lazy-load any non-critical JavaScript.
Shift critical content to render before JavaScript execution by deploying server-side rendering or hybrid rendering where possible.
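In markup, the first two fixes can look like the sketch below. The file paths are placeholders, and whether a given script is safe to defer depends on your site.

```html
<!-- Non-critical third-party script: defer it so it doesn't block parsing.
     (The src path is a placeholder.) -->
<script src="/js/analytics-wrapper.js" defer></script>

<!-- Below-the-fold image: let the browser lazy-load it natively.
     Explicit width/height also prevents layout shift. -->
<img src="/img/case-study.jpg" alt="Case study screenshot"
     width="800" height="450" loading="lazy">
```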
Improve what users see first
Site performance is also about perceived speed and the first meaningful interaction for users. This is another area where Google’s Core Web Vitals become useful as a diagnostic tool.
Common culprits that cause issues in the user experience category include:
Hero images that are loading late.
Any render-blocking CSS or JavaScript.
Layout shifts that are caused by ads or dynamic elements.
Above-the-fold content that’s being delayed by non-critical assets.
When considering what to fix, focus on structural optimizations that change how the browser prioritizes what matters most (see the markup sketch after this list):
Preload and properly size all above-the-fold images.
Inline critical CSS and defer any non-essential styles/scripts.
Reserve static space in the layout for dynamic or third-party elements (ads, embeds) to prevent layout shifts.
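Here’s what those three optimizations can look like in the page head and body. Paths and sizes are placeholders, and the media="print" trick for non-blocking CSS is one common pattern, not the only option.

```html
<head>
  <!-- Fetch the hero image early (path is a placeholder). -->
  <link rel="preload" as="image" href="/img/hero.webp">

  <!-- Inline only the CSS needed for above-the-fold layout... -->
  <style>/* critical above-the-fold rules go here */</style>

  <!-- ...and load the full stylesheet without blocking render. -->
  <link rel="stylesheet" href="/css/site.css" media="print"
        onload="this.media='all'">
</head>
<body>
  <!-- Reserve the ad slot's space so it can't shift the layout when it loads. -->
  <div class="ad-slot" style="min-height:250px"></div>
</body>
```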
Improve speed
Improving page speed also helps indexing. The slower and heavier your pages are, the fewer of them Google will crawl. That isn’t an issue if your site has 500 pages. It is an issue when you’re trying to get a million pages indexed.
The Google Search Console Crawl Stats report is an underutilized tool. The report shows how Googlebot is crawling your site, including the total number of crawl requests, total download size, and average response time for fetched resources.
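Server logs tell the same story with more granularity. Here’s a minimal sketch that tallies Googlebot hits by top-level directory from a combined-format access log. The log path and format are assumptions, and in production you’d verify Googlebot by reverse DNS or published IP ranges, since user agents can be spoofed.

```python
import re
from collections import Counter

LOG_PATH = "access.log"  # assumption: combined log format

# combined format: ip - - [time] "GET /path HTTP/1.1" status bytes "ref" "ua"
line_re = re.compile(r'"(?:GET|HEAD) (\S+)[^"]*" (\d{3}) (\d+|-) ".*?" "([^"]*)"')

hits = Counter()
for line in open(LOG_PATH, encoding="utf-8", errors="replace"):
    m = line_re.search(line)
    if not m or "Googlebot" not in m.group(4):
        continue
    path = m.group(1)
    section = "/" + path.lstrip("/").split("/", 1)[0]  # top-level directory
    hits[section] += 1

# The sections eating the most Googlebot requests are your crawl-budget hotspots.
for section, count in hits.most_common(15):
    print(f"{count:7d}  {section}")
```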
About 63% of website traffic is mobile, according to Statista. But the majority of sites aren’t prioritizing their mobile experiences, according to a study by the Baymard Institute.
For example:
95% of sites put ads in key areas of the homepage that cause interaction issues.
61% don’t use the correct keyboard layouts, causing accidental typos.
66% place tappable elements too close together, and 32% of sites have tappable elements that are too small.
A responsive website is the baseline. But mobile experiences go beyond this foundation. The most successful enterprises are thinking about how to create sites that are dialed in for mobile users.
While most would agree that many UX functions fall outside the realm of technical SEO, the ability of your site to retain and convert mobile traffic is a shared goal for SEO and UX teams.
With that in mind, you can analyze your mobile experiences alongside your colleagues by thinking through the following questions (a parity spot check follows this list):
Are your most important pages meeting Core Web Vitals thresholds?
Is your critical content fully visible on mobile, or is it hidden behind tabs, accordions, or scripts?
Are you optimizing for mobile-first indexing by ensuring that structured data, internal links, etc., match desktop versions?
Is your content formatted for mobile scanning with short paragraphs, clear visual hierarchy, and fast-loading media?
Are you accounting for emerging user behaviors in your content, like voice queries and AI-generated summaries?
Is your navigation mobile-friendly, as in simple, thumb-friendly menus, intuitive hierarchy, and easy access to key actions?
Have you evaluated any gesture-based interactions, simplified checkout flows or reduced any input friction for mobile users?
Are you measuring real-user mobile performance (not just lab scores) to identify any friction in the wild?
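For the structured data parity question, a rough scripted check can catch server-rendered gaps between desktop and mobile responses. A sketch, with caveats: the URL and user-agent strings are placeholders, the regex extraction is naive, and anything injected client-side by JavaScript won’t show up here.

```python
import json
import re
import requests

URL = "https://www.example.com/key-page/"  # placeholder

# Abbreviated UA strings; real crawler UAs are longer.
UAS = {
    "desktop": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "mobile": "Mozilla/5.0 (Linux; Android 10; Pixel) Mobile",
}

def jsonld_types(html: str) -> set:
    """Collect @type values from JSON-LD blocks (naive regex extraction)."""
    types = set()
    for block in re.findall(
            r'<script[^>]+application/ld\+json[^>]*>(.*?)</script>',
            html, re.S | re.I):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue
        items = data if isinstance(data, list) else [data]
        types |= {i.get("@type", "?") for i in items if isinstance(i, dict)}
    return types

results = {name: jsonld_types(
               requests.get(URL, headers={"User-Agent": ua}, timeout=10).text)
           for name, ua in UAS.items()}
print("desktop-only structured data:", results["desktop"] - results["mobile"])
print("mobile-only structured data:", results["mobile"] - results["desktop"])
```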
Build momentum with high-impact technical wins
Technical SEO can feel overwhelming, especially when you don’t control the entire process. Focusing on fundamentals like site structure, crawlability, and user experience sets the stage for everything else in your SEO program.
Prioritize the areas that deliver the biggest impact for the least resistance, and build momentum from there.
Local SEO has a visibility problem, but it’s not where most teams think. It’s not about rankings for “near me” or service keywords.
It’s everything that happens before that moment, when customers are trying to figure out what’s wrong, what it means, and whether they need help at all. That gap is why so much high-intent demand slips through the cracks.
Service-first site structures miss real search behavior
Most local service websites are built the same way: a homepage at the top, then service pages, and often location pages underneath. It’s a good, clean structure, and it makes sense because it mirrors how the business thinks.
You offer drain cleaning, furnace repair, and emergency roof replacement, and you want to show up for “drain cleaning Brookline, MA,” or “furnace repair near me.” That structure also aligns with how Google’s local algorithm has historically rewarded local businesses.
The issue is that customers don’t always start with the service name. A lot of the time, they start with the problem in front of them.
“I need drain cleaning” isn’t always the first thing that pops into a homeowner’s mind. Instead, they might be thinking, “My kitchen sink is backed up, it smells, and I don’t want to make this worse.”
A property manager isn’t necessarily thinking of “HVAC maintenance.” They’re thinking, “This unit is blowing cold air again, and tenants are already complaining.”
If your site is built only around service names, you can miss a big part of the search journey, where people are diagnosing, comparing options, and trying to decide if this is a DIY or a “call someone now” situation.
That mismatch is why so many local sites underperform on some of the highest-value searches in their market. They may have strong service pages, but they don’t have pages designed for the way people actually search when the situation is unfolding. Jobs-to-be-done pages are a practical fix for that gap.
What is a jobs-to-be-done page?
A jobs-to-be-done (JTBD) page is built around what the searcher is trying to accomplish in real life, not what the service is called. It’s a “help + hire” page that lets the reader understand what’s happening, what their options are, and what a smart next step looks like, while also making it easy to contact a professional when they’re ready.
At a glance, it can look like a blog post because it’s informational, but its intent is different. A blog post often exists to attract traffic or cover a topic broadly. A JTBD page exists to support a decision and convert the right visitors into calls and estimate requests.
You can usually feel the difference immediately. A JTBD page doesn’t open with a long introduction. It opens by confirming the situation in plain language and offering a quick path forward if the issue is urgent. The goal is to reduce uncertainty fast, because uncertainty is what keeps people bouncing between search results instead of picking up the phone.
Service pages are still quite important, and they’re still the best fit for searches where the customer already knows exactly what they want and is choosing between providers. These pages tend to win for hire-ready searches like:
“Near me” searches.
“Best” searches.
Service + town searches.
The gap is that a huge portion of local demand shows up earlier as problem-first searches. People search for symptoms. They search “why,” “how,” “what does it cost,” and “is this dangerous.”
If your site only offers service pages, you’re often invisible during the earlier stage where trust is formed. The business that helps someone understand the problem is often the one they call when they decide it’s time.
JTBD pages help you show up earlier without drifting into generic informational content that doesn’t lead anywhere.
The JTBD pages that perform best tend to follow the same decision sequence customers follow in their heads. They start with symptoms, then move into likely causes, then options, then cost context, and then a clear line for when it’s time to call a pro.
1. Start with symptoms, not marketing
Starting with symptoms helps the reader self-identify quickly. You’re not trying to impress them yet. You’re trying to confirm they landed on the right page. A short symptoms section mirrors their lived experience and makes the content feel immediately relevant.
Right after symptoms is usually the best place for a small conversion nudge that’s practical, not salesy. Something like: “If you need this fixed today, call. If not, keep reading to understand what’s likely going on.”
2. Explain likely causes without pretending you can diagnose remotely
This is where a lot of local content goes wrong in either direction. Some sites oversimplify and turn every issue into a one-line answer. Others write a technical essay that overwhelms the reader.
A better approach is to list the most likely causes, ordered from common and simple to less common and more serious, and use conditional reasoning to show what would change the diagnosis. For example:
If it’s only one fixture, it’s often a localized issue.
If multiple fixtures are affected, it’s more likely downstream.
That kind of conditional guidance is useful, and it signals competence.
3. Give options: Safe checks, pro fixes, and what to avoid
After identifying the causes, people want to know what they can do right now. You don’t need a full DIY tutorial. The goal is triage.
Provide a few low-risk checks to help someone avoid an unnecessary call, along with clarity on when continuing to “try things” becomes risky or wasteful.
A simple options section often includes:
A few safe checks that take 5–10 minutes and don’t require special tools.
What a professional typically does on a service call, described in outcomes.
What not to do, focusing on the common actions that create damage.
This is also where conversions happen without pressure. When someone can visualize what a pro will do, the process feels less intimidating.
A lot of local conversions are anxiety conversions. People aren’t just buying the fix, they’re buying relief and certainty.
4. Include cost context without boxing yourself in
Pricing content doesn’t need to promise exact numbers. People are going to look it up anyway. If your page helps them understand realistic ranges and what drives cost, you become the safer choice.
A strong cost section usually covers:
A realistic range for the common, simple scenario.
The main factors that push costs higher (e.g., access, severity, time sensitivity, parts availability, recurring issues).
A quick note on how to avoid surprises.
The tone matters. You’re not selling a coupon. You’re reducing uncertainty.
5. Draw a bright line for ‘when to call a pro’
This is the conversion center of a JTBD page. Many pages just hint at it. The best ones state it clearly and make the triggers specific and unmissable.
Examples of “call a pro” triggers include:
The issue keeps returning within a day or two.
Multiple fixtures or rooms are affected.
There’s evidence of leaks, water damage, or sewage odors.
There’s anything involving gas, electrical proximity, or structural risk.
Delaying is likely to make the repair more expensive.
The reader wants permission to stop guessing. When you give them that permission after guiding them through symptoms, causes, options, and cost context, your CTA feels like the logical next step, not a marketing maneuver.
Where these pages should live on a local website
If you want these pages to feel like service assets rather than “blog content,” placement matters. Don’t bury them in a dated blog feed. Put them in a dedicated section like:
Problems we fix.
Help.
Homeowner guides.
Service resources.
This signals permanence and usefulness and makes internal linking cleaner. A good rule is to include clear conversion moments throughout the page without overdoing it:
Near the top for urgency.
Near “when to call a pro” for decision.
At the end for readiness.
Example: ‘Kitchen sink draining slow’ as a JTBD page
An effective version of this page opens with a plain-language title: “Kitchen sink draining slow? Here’s what causes it and what to do next.” The intro stays brief and sets expectations: most slow drains are caused by grease, soap scum, or buildup in the trap or branch line, and this guide covers safe checks, realistic options, and clear signs it’s time to call.
Symptoms come first, helping the reader quickly confirm they’re in the right place: slow draining, gurgling, odor, or backup when the dishwasher runs. From there, the page moves into likely causes, using conditional guidance to help narrow things down.
Next comes options: a few low-risk checks, a short “what not to do,” and a plain explanation of what a plumber typically does on a service call. This leads naturally into pricing context, with realistic ranges and the factors that influence cost.
Finally, “when to call a pro” makes the decision easy. Recurring clogs, multiple drains, leakage, sewage odor, or shared-building situations where DIY mistakes affect others all signal it’s time to bring in help.
The page is informational, but it’s decisional. It helps the reader choose a next step. That’s why it converts.
JTBD pages complement and support existing service pages rather than replacing them. A simple model is to keep your main service pages as core conversion targets, then add a “Problems we fix” cluster around your highest-value services.
For internal linking, JTBD pages link to the relevant service page as the “solve this quickly” path, and service pages link back to JTBD pages as the “not sure what’s causing it” path.
This expands your footprint into problem-first searches and funnels visitors into your service pages with more trust and clarity than they would have had if they arrived cold.
The easiest way to pick JTBD topics is to start with what customers say before they know the service name. Better starting points than a keyword tool include:
Transcripts.
Estimate requests.
Google reviews.
The questions your team answers every week.
Those phrases become your most natural page titles and headings because they’re already written in the customer’s language.
Once you have a starter list, use your favorite keyword tool to expand it and sanity-check demand. You’re looking for problem-first patterns like:
“Why is this happening.”
“What causes it.”
“Is this dangerous.”
“Should I shut it off.”
“How much does it cost.”
These queries are usually informational in intent and often sit one step before a call, especially when the symptom is urgent or recurring.
A quick way to qualify topics is to ask whether the query has a clear “hire” outcome hiding underneath it. “Furnace blowing cold air” does. “Toilet keeps running” does. “Why does my house have hard water” might, depending on the business. If the query is purely academic or doesn’t naturally lead to a service call, it’s usually better as a blog post, not a JTBD page.
Finally, don’t build these pages randomly. Cluster them around your highest-value services first, and make sure each JTBD page has a straightforward internal link path to the related service page as the “solve this quickly” option. That’s what turns a helpful page into booked work.
3 common mistakes that make these pages underperform
Even well-structured JTBD pages can fall short if they miss a few fundamentals.
Writing generic content
If the page could belong to any business in any city, it won’t earn trust or conversions. The fix is to include “what to expect” language and provide relevant local context without turning the page into geo-stuffing.
Over-teaching DIY
When a page becomes a full tutorial, it attracts the wrong audience and increases the chance of damage or liability. Keep DIY checks low-risk and focused on triage.
Avoiding the decision moment
If you don’t clearly state when to call a professional, you miss the main conversion opportunity on the page.
How JTBD pages support AI-driven search visibility
JTBD pages also tend to align with the queries that trigger AI answers in the first place. A lot of AI Overviews show up for problem-first searches, especially:
“Why is this happening.”
“What should I do next.”
“Is this serious.”
JTBD pages are designed to satisfy that moment, while a standard service page usually assumes the customer has already decided what they need.
The structure helps, too. When a page is organized into symptoms, likely causes, options, cost context, and clear “call a pro” thresholds, it becomes easier for systems to summarize accurately and cite specific passages without guessing.
If you want one simple upgrade, add a short “Quick take” paragraph near the top that summarizes the likely causes and next step in three to four sentences. It helps rushed readers and creates a clean block of text that AI systems can lift without distorting your meaning.
Local businesses don’t lose jobs because they lack service pages. They lose jobs because they’re invisible or unconvincing during the moment customers are trying to understand what’s happening.
Jobs-to-be-done pages are a practical way to meet customers earlier, answer the problem they’re actually searching for, and guide them toward a safe next step, including a clear path to book service.
When built with the right structure and intent, they become some of the most useful pages on a local website for both search performance and real-world leads.
For many advertisers, the 30-day click window is the default conversion setting in Google Ads. Once that’s set, it’s rarely revisited. But what if your customers convert within a week, or even two days?
One of my clients, a DTC retailer in an intensely competitive industry, has an average conversion lag of 2.2 days. Yet we were optimizing campaigns using a 30-day click window, which meant conversions were credited weeks after the initial interaction. This muddied the waters when assessing the true incremental impact of different advertising efforts, especially when trying to capture impulse-buying behavior.
With that in mind, we transitioned the account from a 30-day click window to a 7-day click window in January. Here’s what changed and what we learned.
Inside the 7-day attribution test
This client allocates the majority of its marketing budget to Meta Ads. So, when looking at platform reporting, Meta Ads (unsurprisingly) accounted for the majority of sales. Since Google Ads operated on a 30-day click window at the time, that platform also accounted for a large percentage of sales.
When your average conversion lag is about two days, allowing 30 days of click credit can inflate perceived contribution in-platform. Because of this, neither platform’s incremental impact was clear, making it difficult for our client to know where to invest the majority of their advertising dollars.
Before making any changes, we analyzed conversion path data to understand how long customers were actually taking to purchase. Over the last three months, users converted in an average of 2.2 days, with the majority of conversions happening in less than a day.
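If you want to run the same check on your own account, the math is simple once you have click and conversion timestamps. A pandas sketch; the file and column names are hypothetical and depend on how you export your conversion path data.

```python
import pandas as pd

# Hypothetical export: one row per conversion, with the originating
# click time and the conversion time.
df = pd.read_csv("conversion_paths.csv",
                 parse_dates=["click_time", "conversion_time"])

lag_days = (df["conversion_time"] - df["click_time"]).dt.total_seconds() / 86400

print(f"average lag: {lag_days.mean():.1f} days")
print(f"median lag:  {lag_days.median():.1f} days")
print(f"converted within 1 day:  {(lag_days < 1).mean():.0%}")
print(f"converted within 7 days: {(lag_days <= 7).mean():.0%}")
```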
We didn’t just flip the switch. We hypothesized that since the average conversion lag was 2.2 days, we shouldn’t see too much volatility. To be safe, we first set up this new conversion action as a secondary conversion.
So it looked like this:
Step 1: Duplicate the primary purchase conversion with a 7-day click window and set it as a secondary conversion action.
Step 2: Monitor performance for two weeks.
Step 3: Transition it to primary optimization on January 12, 2026.
When you change a primary conversion action, smart bidding recalibrates, and learning phases reset. This phased approach allowed us to compare reporting side by side and prepare for any volatility.
We compared the 30 days post-conversion action change to the previous period, which included peak holiday shopping season.
Results (in-platform)
Cost: Down 6.3%
Conversions: Up 42.9%
Conversion value: Up 52.1%
ROAS: Up 62.3%
Initial results looked great, but we wanted to see if there was any measurable impact on the business.
Using Shopify sales data, we saw that total sales increased 20%, and net profit increased 30%.
More importantly, marketing mix modeling (MMM) data showed a shift in incremental contribution:
Google’s incremental ROAS increased 10% to 1.82.
Meta incremental ROAS dropped 25% to 0.59.
This was the strongest indication that shortening the attribution window helped clarify channel contribution.
Now, in full transparency, we were also restructuring campaigns, adjusting budgets, and refining bidding during this time. So, we can’t give all the credit to the shorter attribution window. But we can say performance wasn’t negatively affected, and the contribution percentage improved.
With overlapping attribution between Meta and Google, both channels looked over-credited in-platform. By shortening Google’s click window, we limited its ability to claim delayed conversions that were likely influenced by other touchpoints. Tightening this window reduced cross-platform duplication and gave us a clearer view of incremental impact.
Additionally, instead of waiting weeks to understand campaigns’ actual ROAS, we could evaluate performance within days and make adjustments more confidently.
By reducing to a 7-day click window, we:
Decreased delayed attribution.
Tightened optimization feedback loops.
Improved performance diagnostics.
This change also significantly affected Smart Bidding behavior. Automated bidding strategies, such as target return on ad spend, optimize based on conversion signals. With a 30-day window, those signals are extended, meaning the algorithm reacts more slowly to performance shifts, such as bid adjustments, seasonality shifts, and budget reallocations.
Moving to a 7-day window continuously feeds fresher signals to Smart Bidding strategies. This created tighter alignment between spend and actual buying behavior. Combined with Marketing Mix Modeling data, the picture became even clearer.
The cleaner attribution structure gave us stronger confidence in making account optimizations and, even better, helped our client make more informed business decisions about where to invest ad dollars.
In short, tightening the conversion window didn’t just change reporting. It improved the quality of the signal driving optimization decisions.
Shortening an attribution window could work for you, but you should consider the trade-offs.
Reported conversion volume will likely drop, at least initially. Removing delayed conversion credit can make performance appear weaker overnight, even if actual sales haven’t changed. That can create internal concern if your client or other stakeholders aren’t prepared.
Smart Bidding will need to recalibrate. Changing a primary conversion action is a significant change to an account. This will trigger a learning phase and short-term volatility, especially in accounts using automated bid strategies such as target ROAS and Maximize Conversion Value.
Most importantly, this approach only works if it aligns with your sales cycle. For high-consideration or longer purchase journeys, a 7-day window may undercount legitimate conversions, suppress ROAS, and limit optimization data. A shorter attribution window is only better if it reflects how your customers are actually buying.
Adjusting attribution wasn’t the silver bullet here. In this case, other account improvements were happening simultaneously, and this was just one lever.
Ultimately, this change wasn’t about improving platform metrics. It was about improving business insights.
For this client, aligning the attribution window with a 2.2-day conversion cycle improved conversion signal quality, enhanced Smart Bidding, clarified cross-channel impact, and gave leadership stronger confidence in where to invest.
Whether a 7-day click model makes sense depends on how closely your attribution settings reflect your account’s buying cycle.
Buyers ask a question. You answer it clearly. That’s the premise behind the “They Ask, You Answer” (TAYA) framework, and it holds up in AI-driven discovery.
In theory, it’s simple. In practice, teams struggle to anchor their approach and get started. The result is predictable: generic questions that produce generic content.
That’s a problem, especially as AI shifts search behavior from short queries to more detailed, contextual questions. The difference comes down to the questions you choose to answer. And that’s where a simple concept makes a big difference: buyer personas.
The problem with generic questions
The generic question trap happens when marketing teams brainstorm content ideas and start with topics like:
What is CRM software?
What is marketing automation?
What is warehouse management?
Odds are, you and many of your competitors have already answered these questions somewhere, or easily could. They’re reasonable questions. But they’re also questions no real buyer actually asks.
Real buyers ask questions that reflect their situation and their problem. Something more like this:
“What CRM should a 10-person sales team use?”
“Why are leads slipping through the cracks in our marketing?”
“Why is our warehouse picking speed so slow?”
The difference is subtle but important. The second set of questions includes a person and a problem. That context completely changes the quality of the content.
This matters even more with AI assistants. Instead of typing short keywords, buyers ask detailed, contextual questions:
“I run a 15-person marketing team, and we’re struggling to track leads properly. What should we do?”
The AI explains the problem, outlines solutions, and suggests vendors. In other words, the buyer is having a consultation with an AI.
If your content explains why a specific persona experiences a specific problem, you have a much better chance of shaping how that problem is understood in the first place.
This puts you into the conversation and consideration set earlier, making it more likely you’ll stay in as the user refines their thinking.
Consider this scenario. I’ll use myself as an example.
Marcus.
50 years old.
Meeting some old friends in Birmingham, UK.
Looking for ideas of things to do for the day.
I start by asking a somewhat broad opening question:
“I’m looking for some ideas of things to do with friends in Birmingham on the weekend. I’m 50, and I have several male friends coming down to get together for a day. There will be some beers, no doubt, but we need some activities as well.”
Answers then include a bunch of top-level suggestions — bars, food, and activity-type bars. One of these suggestions is for an F1 gaming arcade. I like games, but not so much cars, so this leads my follow-up to dig in a bit more:
“Ah, we all like games. What about gaming arcades? What gaming arcades could you recommend?”
I get a bunch of recommendations, one of which is for a pinball arcade in Digbeth (a sub-area of Birmingham).
“Pinball Factory in Digbeth sounds fun. What else is there to do around there, food- and drinks-wise?”
I then get a set of responses that helps me narrow the list and formulate a perfect day and evening out for a group of old friends.
Being in the early part of the conversation lets you shape the dialogue and increases your chances of being part of the eventual solution.
Personas are the tools that let you think like your customers and figure out the kinds of questions they ask long before they get to what you have to offer.
When you can identify a customer segment, you can dig into that persona, understand their problems and goals, and think like your target customer to generate content ideas that help them decide earlier.
Now, instead of writing content for a generic avatar, write for specific people. For example, instead of “Things to do in Birmingham?” you might write, “The best day out in Birmingham for a group of 50-year-old gamers.”
You’re still addressing the same underlying topic. But now the content speaks directly to a real person experiencing a real problem.
That shift usually leads to much more useful content. This helps you work your way into those conversations, rather than relying on the brutal battleground of commercial queries.
A simple way to uncover better questions
You don’t need a complicated persona framework to make this work. In most cases, a simple three-question exercise will uncover the kinds of problems your buyers are actually trying to solve.
For each persona you serve, ask:
What are they responsible for? For example:
Hitting sales targets.
Generating marketing leads.
Running warehouse operations.
What problems make that responsibility difficult? Examples might include:
Missed sales targets.
Inefficient warehouse processes.
Poor lead tracking.
Slow picking speeds.
What would they ask Google or an AI assistant when that problem occurs?
Now the questions start to look very different. Instead of broad category topics like: “What is CRM software?”
You start to see questions like:
“Why are leads slipping through the cracks in our CRM?”
“What CRM should a small sales team use?”
“Why is our warehouse picking speed so slow?”
Those questions reflect real situations experienced by real people — exactly where the best content opportunities exist.
‘They Ask, You Answer’ works better with personas
Now we revisit the big five topic areas from TAYA: cost, problems, comparisons, reviews, and best-of. These topics already give us a powerful structure for content.
But when they’re approached generically, they often lead to content that looks exactly like everyone else’s.
So you can go from the typical, generic kinds of questions:
“How much does CRM software cost?”
“What problems do warehouse systems have?”
“HubSpot vs. Salesforce”
“Best CRM systems”
“Salesforce review”
To questions that are more connected to the needs of our target audience:
“What does CRM cost for a 10-person sales team?”
“Why do my warehouse managers struggle with picking accuracy?”
“HubSpot vs. Salesforce for a small B2B marketing team”
“Best CRM for growing sales teams”
“Is Salesforce worth it for a mid-size sales organization?”
The topic hasn’t changed, but the question now reflects the buyer’s reality. This shift produces more useful content and aligns with how people interact with AI assistants.
Those questions include their role, company size, or situation:
“We’re a small marketing team struggling to track leads properly. What CRM should we use?”
If your content already answers these persona-driven questions, you increase the chances that your explanation becomes part of that conversation.
In other words, personas don’t replace They Ask, You Answer. They make it more precise, moving you from answering generic topics to answering the exact questions buyers ask when solving a real problem.
Persona-driven questions improve TAYA content for three simple reasons.
They mirror how buyers actually think: People rarely search for textbook definitions. They search for solutions to problems. Personas keep the content anchored in those problems.
They produce more useful content: When you know who the content is for, it naturally includes better examples, more practical advice, and clearer explanations. In other words, content that genuinely helps someone move forward.
They align with how AI explains problems: AI assistants increasingly start by explaining the problem before recommending a solution. Content that clearly describes why a specific persona experiences a specific challenge fits neatly into this pattern. This increases the chances that your explanation influences the AI’s response.
One of the most common mistakes companies make with content marketing is starting with their product.
But buyers rarely start their journey there. They start with a problem.
Personas help keep your content anchored in the buyer’s world rather than your own product — remember, it’s about the customer, not you.
And that simple shift often makes the difference between content that merely exists and content that actually influences decisions.
Where you enter the conversation matters
“They Ask, You Answer” remains one of the most powerful frameworks available to marketers. But the effectiveness of the framework depends entirely on the quality of the questions you answer.
Personas help you turn vague topics into real problems and ask better questions. When your content speaks directly to those problems, buyers and AI systems are far more likely to trust your answers.
YouTube is experimenting with a format that keeps ads visible even after users skip — potentially reshaping how advertisers think about skippable inventory.
What’s happening. YouTube is testing a sticky banner overlay that appears once a user skips an ad. Instead of the ad disappearing entirely, a branded card remains on-screen until the viewer actively dismisses it.
How it works. After hitting “skip,” users return to their video as normal, but a persistent banner tied to the original ad stays visible within the player, extending the advertiser’s presence beyond the initial skip.
Why we care. This test from YouTube creates a way to maintain visibility even when users skip ads, potentially increasing brand recall without requiring full ad views.
It also changes how skippable performance may be evaluated, as impressions and engagement could extend beyond the initial ad, giving brands more value from the same inventory within Google’s ecosystem.
Why it’s notable. Skippable ads have traditionally meant lost visibility once skipped. This format changes that dynamic by offering a second chance for exposure, even when users opt out of the full ad experience.
Impact for advertisers. The update creates an opportunity for extended brand visibility and recall, but could also influence engagement metrics and how users perceive ad interruptions.
The bottom line. If rolled out widely, the sticky banner test could redefine what a “skipped” ad means — turning it into continued, lower-friction exposure rather than a full exit for advertisers on YouTube.
First seen. This update was first spotted by Anthony Higman, founder and CEO of Adsquire, who shared it on LinkedIn.
Google is incrementally improving metric visibility in Performance Max, giving advertisers more insight into how creative choices — particularly video — impact performance.
What’s happening. Google Ads has introduced a new “Ads using video” segment within Performance Max channel performance reporting, allowing advertisers to break down results based on whether video assets were included.
Why we care. Marketers can now compare performance across placements that used video versus those that didn’t, offering a clearer view into the role video plays across Google’s automated inventory.
It helps answer a key question in an automated environment: whether investing in video assets is driving better results, allowing you to make more informed creative and budget decisions inside Google Ads.
Between the lines. As video becomes more central across surfaces like YouTube and beyond, this update gives advertisers a way to validate the impact of investing in video assets within automated campaigns.
The bottom line. The new segment adds a layer of clarity to Performance Max, helping advertisers better evaluate video’s contribution without changing how campaigns are run inside Google Ads.
First spotted. This update was first spotted by PPC News Feed founder Hana Kobzova.
Google is expanding Personal Intelligence across AI Mode, Gemini, and Chrome in the U.S., moving it beyond beta into broader consumer use.
Why we care. Personal Intelligence pushes Google further into fully personalized search, using first-party data like Gmail and Photos. That makes results harder to replicate, rank against, or track — especially in AI Mode, where outputs may vary based on user history, purchases, and behavior.
The details. Personal Intelligence now works across:
AI Mode in Google Search (available now in the U.S.)
Gemini app (rolling out to free users)
Gemini in Chrome (rolling out)
How it works. Users can connect apps like Gmail and Google Photos so Google can tailor responses using personal context. Examples Google shared include:
Shopping recommendations based on past purchases and brand preferences.
Tech troubleshooting using receipt data to identify exact devices.
Travel suggestions using flight details, timing, and past trips.
Personalized itineraries and local recommendations.
Hobby suggestions inferred from user interests.
Availability. These features are available only for personal accounts, not Workspace users, Google said.
Although Google continues to test ads in AI Mode, users who connect apps to enable Personal Intelligence won’t see ads — and that isn’t changing right now, a Google spokesperson confirmed.
Early results. Users find these business connections “helpful,” per Google.
But there’s a clear carveout: no ads for users who opt into app-connected, highly personalized experiences.
The details. Google today expanded Personal Intelligence in AI Mode as a beta to anyone in the U.S., allowing Gemini to generate more tailored responses by connecting data across its ecosystem, including Google Search, Gmail, Google Photos, and YouTube.
Opting into Personal Intelligence creates an ad-free experience inside AI Mode.
Why we care. Ads are coming to AI Mode, but Google is moving cautiously where personal data is deepest. Personal Intelligence experiences stay ad-free for now while Google works out the right balance.
What Google is saying. A Google spokesperson told Search Engine Land:
“There are currently no ads for people who choose to connect their apps with AI Mode. That isn’t changing right now.
“Over the past few months, we’ve been testing ads in AI Mode in the US. Our tests have shown that people find these connections to businesses helpful and open up new opportunities to discover products and services.
“In the future, we anticipate that ads will operate similarly for people who choose to connect their apps with AI Mode. Ads will continue to be relevant to things like your query, the context of the response and your interests.”
Bottom line. Personal Intelligence positions Google’s Gemini app as a more personalized assistant, setting the stage for future ad experiences built on richer, cross-platform user context.
Yahoo CEO Jim Lanzone said AI-powered search — especially Google’s AI Mode — is putting the open web’s core traffic model at risk, and argued AI search engines must send users back to publishers.
“I think that the LLMs are one big reason that they’re under threat, with AI Mode in Google being the biggest challenge.”
“Those publishers deserve [traffic], and we’re not going to have the content to consume to give great answers if publishers aren’t healthy.”
Why we care. Many websites are seeing less traffic from answer engines like Google and OpenAI — and I think it’ll only get worse. So it’s encouraging to see Yahoo trying to preserve the “search sends traffic” model. As he said: “We have very purposefully highlighted and linked very explicitly and bent over backwards to try to send more traffic downstream to the people who created the content.”
Yahoo’s AI stance. Yahoo is taking a different approach from chatbot-style interfaces, Lanzone said on the Decoder podcast. He added that Yahoo isn’t trying to compete as a full AI assistant:
“Ours looks a lot more like traditional search and it is more paragraph-driven. It’s not a chatbot that’s trying to act like it’s a person and be your friend.”
“We’re not a large language model. We’re not going to be the place you come to code. We’ve really launched Scout as an answer engine.”
What’s next: Personalization + agentic actions. Yahoo plans to expand Scout beyond basic answers and is embedding AI across its ecosystem:
“You are very shortly going to see us get into very personalized results. You’re going to see us get into very agentic actions that you can take.”
“There’s a button in Yahoo Finance that does analysis of a given stock on the fly… It is in Yahoo Mail to help summarize and process emails.”
Yahoo vs. Google isn’t a thing. Yahoo isn’t trying to win by converting Google users directly. Instead, Yahoo is prioritizing its existing audience and increasing usage frequency over immediate market share gains:
“Nobody chooses, you will not be surprised, Yahoo over Google or somewhere else to search. The way that we get our search volume is because we have 250 million US users and 700 million global users in the Yahoo network at any given time. There’s a search box there. And infrequently, they use it.”
A warning. Companies — including publishers — should be cautious about relying too heavily on AI platforms as intermediaries. Lanzone compared today’s AI partnerships to Yahoo’s past reliance on Google:
“You are tempting fate by opening up a way for consumers to access your product within a large language model.”
“The big bad wolf will come to your door and say everything’s cool.”
A nonprofit’s digital presence stopped being a “nice-to-have” a long time ago. It’s the central hub for mission delivery, donor engagement, and advocacy.
Many organizations struggle with the technical and strategic foundations needed to turn a website and a few social accounts into a high-performing digital ecosystem.
The goal isn’t simply to “be online.” It’s to build reliable infrastructure, so your organization owns its narrative, protects its assets, and measures the impact of “free” digital efforts.
Here’s a practical look at the critical elements of managing a nonprofit’s digital presence — and the common pitfalls to avoid — based on my experience helping several organizations throughout my career.
If you help an organization with digital marketing and they aren’t following these practices, your first step should be getting their digital house in order.
1. Own your foundations: Domains and account control
In my experience, the most overlooked risk in nonprofit digital management is the lack of direct ownership of technical assets.
A well-meaning volunteer or third-party agency often registers a domain or creates a social account using personal credentials. If that individual leaves the organization, you risk losing access to your primary digital channel — the domain you should own and control.
I’ve worked with several organizations that had to start over completely because they lacked control.
Domain ownership: Ensure the domain is registered in the organization’s name using a generic “admin@” or “info@” email address that multiple stakeholders can access. Set the domain to auto-renew and use a registrar that offers robust security features.
Website hosting and management: The organization also needs to control its website hosting and administration. Use a similar approach to the one recommended for domain ownership.
Social media governance: Again, use a similar process to the one described above to establish ownership of key social media channels. Grant volunteers access via delegation on individual channels rather than sharing passwords. This allows you to revoke access immediately if a staff member or volunteer moves on, protecting your brand’s voice and security.
2. Move beyond ‘winging it’: The editorial calendar
A common mistake for nonprofits is posting only when there’s an immediate need, which is often only when making a fundraising appeal. This “broadcast-only” approach often leads to donor fatigue and low engagement.
To build a community, you need a content plan that balances stories of impact with actionable requests.
The 70/20/10 rule: Aim for 70% value-based content (success stories, educational info), 20% shared content from partners or community members, and only 10% direct “asks.”
The editorial calendar: Use a simple tool, even a shared spreadsheet, to map out your themes and individual pieces of content for the month. This ensures you aren’t scrambling for a post on Giving Tuesday, that everyone knows what’s expected of them, and that your messaging and pace of content creation remain consistent across email, social, and your blog.
3. Tracking what matters (and ignoring what doesn’t)
Data is only useful if it informs future decisions. Many organizations get bogged down in “vanity metrics” like total likes or page views without understanding whether those numbers lead to real-world outcomes.
Set up conversion tracking: It isn’t enough to know that 1,000 people visited your site. You need to know how many of them clicked the “Donate” button or signed up for your newsletter (a minimal tracking snippet follows below).
Behavioral analytics: Use free tools like Google Analytics 4 and Microsoft Clarity to see where people drop off in your donation funnel. If 50% of visitors leave the site on your “Ways to Help” page, you may have a UX issue or a confusing call to action.
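For example, a donate-button click can be sent to GA4 as a custom event and then marked as a conversion in the GA4 interface. A minimal sketch using the standard gtag.js snippet; the measurement ID, button ID, and event name are all placeholders you’d replace with your own.

```html
<!-- Standard GA4 tag; replace G-XXXXXXX with your measurement ID. -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXX"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());
  gtag('config', 'G-XXXXXXX');

  // Fire a custom event when the donate button is clicked.
  // "donate_click" is a name we chose; mark it as a conversion in GA4.
  document.addEventListener('DOMContentLoaded', function () {
    var btn = document.querySelector('#donate-button');  // hypothetical id
    if (btn) {
      btn.addEventListener('click', function () {
        gtag('event', 'donate_click', { link_text: 'Donate' });
      });
    }
  });
</script>
```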
4. Optimize for the ‘mobile-first’ donor
Most global web traffic is now mobile, and for nonprofits, this is critical. Donors often engage with your content on social media on their phones and expect a seamless transition to your donation page.
Speed and simplicity: Fancy header videos, sliders, and bloated images slow down your site, like the nonprofit example in this article about bad website design. Less is more when speed is of the essence. Reduce friction to make your website more usable. For example, if your donation page takes more than three seconds to load or requires more form fields than necessary, you’re leaving donations on the table.
Payment flexibility: Incorporate digital wallets like Apple Pay, Google Pay, or PayPal. Reducing friction at the point of donation is one of the most effective ways to increase your conversion rate. Many nonprofits use third-party tools to manage donations, so keep payment flexibility in mind when choosing a payment partner.
Even well-intentioned nonprofits can undermine their digital presence with a few common mistakes.
Targeting ‘everyone’
One of the biggest mistakes is trying to reach everyone. A digital presence that tries to appeal to every demographic usually ends up appealing to no one. Define your “ideal supporter,” and tailor your language, imagery, and platform choice to them.
Neglecting accessibility
Accessibility is about inclusion. Ensure your images have alt text, your videos have captions, and your website colors have enough contrast for users with visual impairments. If a portion of your audience can’t interact with your site, you aren’t fulfilling your mission.
The ‘set it and forget it’ mentality
I often tell businesses to treat websites like any other business asset, and the same applies to nonprofits. Digital ecosystems require maintenance.
Links break, plugins need updates, and search algorithms change. A quarterly “digital audit” to check your site speed, broken elements, and SEO health is essential for long-term visibility.
Turning your digital ecosystem into a mission multiplier
A successful digital presence is built on the same principles as a successful mission: consistency, transparency, and clear communication. By owning your assets, planning your content, and grounding your decisions in data, you ensure that your digital ecosystem serves as a force multiplier for the people you’re trying to help.
If you’re a content strategist, you might feel this isn’t your territory. Keep reading, because it is. Everything you build feeds these five gates, and the decisions the algorithms make here determine whether the system recruits your content, trusts it enough to display it, and recommends it to the person who just asked for exactly what you sell.
The DSCRI infrastructure phase covers the first five gates: discovery through indexing. DSCRI is a sequence of absolute tests where the system either has your content or it doesn’t, and every failure degrades the content the competitive phase inherits.
The competitive phase, ARGDW (annotation through won), is a sequence of relative tests. Your content doesn’t just need to pass. It needs to beat the alternatives. A page that is perfectly indexed but poorly annotated can lose to a competitor whose content the system understands more confidently.
A brand that is annotated but never recruited into the system’s knowledge structures can lose to one that appears in all three graphs. The infrastructure phase is absolute: pass, stall, or degrade. The competitive phase is Darwinian “survival of the fittest.”
The DSCRI infrastructure phase determines whether your content even gets this far. The ARGDW competitive phase determines whether assistive engines use it.
Up until today, the industry has generally compressed these five distinct processes into one phrase: “rank and display.” That compression hid the fact that visibility is really several separate competitive mechanisms. Understanding and optimizing for all five will make all the difference in the world.
The competitive turn: Where absolute tests become relative ones
The transition from DSCRI to ARGDW is the most significant moment in the pipeline. I call it the competitive turn.
In the infrastructure phase, every gate is binary: does the system have this content or not? Your competitors face the same test, and each of you passes or fails independently. But the quality of what survives rendering and conversion fidelity creates differences that carry forward.
The differentiation through the DSCRI infrastructure gates is raw material quality, pure and simple, and you have an advantage in the ARGDW phase when better raw material enters that competition.
At the competitive turn, the questions change. The system stops asking “Do I have this?” and starts asking “Is this better than the alternatives?”
Every gate from annotation forward is a comparison. Your confidence score matters only relative to the confidence scores of every other piece of content the system has collected on the same topic, for the same query, serving the same intent.
You’ve done everything within your power to get your content fully intact. From here, the engine puts you toe to toe with your competitors.
Multi-graph presence as structural advantage in ARGD(W)
The algorithmic trinity — search engines, knowledge graphs, and LLMs — operates across four of the five competitive gates: annotation, recruitment, grounding, and display. Won is the outcome produced by those four gates. Presence in all three graphs creates a compounding advantage across ARGD, and that vastly increases your chances of being the brand that wins.
The systems cross-reference across graphs constantly. An entity that exists in the entity graph with confirmed attributes, has supporting content in the document graph, and appears in the concept graph’s association patterns receives higher confidence at every downstream gate than an entity present in only one.
This is competitive math. If your competitor has document graph presence (they rank in search) but no entity graph presence (no knowledge panel, no structured entity data), and you have both, the system treats your content with higher confidence at grounding because it can verify your claims against structured facts. The competitor’s content can only be verified against other documents, which is a fuzzier verification path: more interpretation, more ambiguity, lower confidence.
For me, this is where the three-dimensional approach comes into its own, and single-graph thinking becomes a structural liability. “SEO” optimizes for the document graph. Entity optimization (structured data, knowledge panel, and entity home) optimizes for the entity graph.
Consistent, well-structured copywriting across authoritative platforms optimizes for the concept graph. Most brands invest heavily in one (perhaps two) and ignore the others. The brands that win at the competitive gates are stronger than their competitors in all three graphs at every gate in ARGD(W).
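On the entity-graph side, the workhorse is schema.org markup that states who the entity is and ties the page to the entity’s other homes. A minimal illustration; every value below is a placeholder.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://www.example.com/#organization",
  "name": "Example Co",
  "url": "https://www.example.com/",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example-co",
    "https://en.wikipedia.org/wiki/Example_Co",
    "https://www.wikidata.org/wiki/Q000000"
  ]
}
</script>
```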
Annotation: The gate that decides what your content means across 24+ dimensions
Annotation is something I haven’t heard anyone else (other than Microsoft’s Fabrice Canel) talking about. And yet it’s very clearly the hinge of the entire pipeline. It sits at the boundary between the two phases: the last gate that applies absolute classification, and the first gate that feeds competitive selection. Everything upstream (in DSCRI) prepared the raw material. Everything downstream in ARGDW depends on how accurately the system can classify it.
At the indexing gate, the system stores your content in its proprietary format. Annotation is where the system reads what it stored and decides what it means. The classification operates across at least five categories comprising at least 24 dimensions.
Canel confirmed the principle, and confirmed there are (a lot) more dimensions than the ones I’ve mapped. What follows the quotes below is my reconstruction of the categories I can identify from observed behavior and educated guesses. Here’s Canel:
“We understand the internet, we provide the richness on top of HTML to a lot, lot, lot of features that are extracted, and we provide annotation in order that other teams are able to retrieve and display and make use of this data.”
“My job stops at writing to this database: writing useful, richly annotated information, and handing it off for the ranking team to do their job.”
So we know that annotation is a “thing,” and that all the other algorithms retrieve the chunks using those annotations.
Annotation classification runs across five types of specialist models operating simultaneously per niche:
One for entity and identity resolution (core identity).
One for relationship extraction and intent routing (selection filters).
One for claim verification (confidence multipliers).
One for structural and dependency scoring (extraction quality).
One for temporal, geographic, and language filtering (gatekeepers).
This five-model architecture is my reconstruction based on observed annotation patterns and confirmed principles. The annotation system is a panel of specialists, and the combined output becomes the scorecard every downstream gate uses to compare your content against your competitors.
Gatekeepers
They determine whether the content enters specific competitive pools at all:
Temporal scope (is this current?).
Geographic scope (where does this apply?).
Language.
Entity resolution (which entity does this content belong to?).
Fail a gatekeeper, and the content is excluded from entire query classes regardless of quality.
Core identity
This classifies the content’s substance: entities present, attributes, relationships between entities, and sentiment.
For example, a page about “Jason Barnard” that the system classifies as being about a different Jason Barnard has perfect infrastructure and broken annotation. The content was there, and the system read it, but filed it in the wrong drawer.
Selection filters
They add query routing: intent category, expertise level, claim structure, and actionability.
For example, content classified as informational never surfaces for transactional queries, regardless of how well it performs on every other dimension.
Extraction quality
Think:
Sufficiency (does this chunk contain enough to be useful?)
Dependency (does it rely on other chunks to make sense?)
Standalone score (can it be extracted and still work?)
Entity salience (how central is the focus entity?)
Entity role (is the entity the subject, the object, or a peripheral mention?)
Weak chunks get discarded before competition begins.
Confidence multipliers
These determine how much the system trusts its own classification: verifiability, provenance, corroboration count, specificity, evidence type, controversy level, consensus alignment, and more.
Two pieces of content can be classified identically on every other dimension and still receive wildly different confidence scores based on how verifiable and corroborated their claims are.
An important aside on confidence
Confidence is a multiplier that determines whether systems have the “courage” to use a piece of content for anything.
Once upon a time, content was king. Then, a few years ago, context took over in many people’s minds.
Confidence is the single most important factor in SEO and AAO, and always has been — we just didn’t see it.
To retain their users, search and assistive engines must provide the most helpful results possible. Give them a piece of content that, from a content and context perspective, appears super relevant and helpful, but that they have absolutely no confidence in for one reason or another, and they likely will not use it for fear of providing a terrible user experience.
What happens when annotation fails you (silently)
Annotation failures are the most dangerous failures in the pipeline because they are invisible. The content is indexed. But if the system misclassifies it, every competitive decision downstream inherits that misclassification.
I’ve watched this pattern repeatedly in our database: a page is indexed, it appears in search results, and yet the entity still gets misrepresented in AI responses.
Imagine this: A passage/chunk from your website is in the index, but confidence has degraded through the DSCRI part of the pipeline, and the annotation stage has received a degraded version.
The structural issues at the rendering and indexing gates didn't prevent indexing, but they left the system with a degraded version of the original content. That degradation makes the annotation less accurate, less complete, and less confident, and that annotative weakness propagates through every competitive gate that follows in ARGDW.
Even when your content is included in grounding or display, suboptimal annotation means it is underperforming. And you can always improve annotation.
Measuring annotation quality in ARGDW
Annotation is the most important gate in the AI engine pipeline, but unfortunately, you can't measure its quality directly. Every metric available to you is an indirect downstream effect.
The KPIs I suggest below are signals that clearly show where your content cleared indexing and failed annotation: the engine found the page, rendered it, indexed it, and then drew the wrong conclusions from it.
That distinction matters: beware of “we need more content” when the real problem is “the engine misread the content we have.”
Your brand SERP tells you exactly what the algorithm understood
These signals reveal how accurately the AI has understood who you are, what you do, and who you serve. The brand SERP (and AI résumé) is a readout of the algorithm's model of your brand, and because it is updated continuously, it makes a great KPI.
AI describes your brand using a competitor’s framing or category language.
Entity type is misclassified (person treated as organization, product treated as service).
AI can’t answer basic factual questions about your brand and offers without hedging.
If the algorithm can’t place you in a competitive set, it won’t recommend you
These signals reveal which entities the system considers comparable — a direct readout of how annotation classified them. Annotation places entities into competitive pools, and if your brand doesn’t appear in comparison sets where it belongs, the engine classified it outside that pool. Better content won’t fix that. Improving the algorithm’s ability to accurately, verbosely, and confidently annotate your content will.
Absent from “best [product] for [use case]” results where you qualify.
Absent from “alternatives to [competitor]” results.
Absent from “[brand A] vs. [brand B]” comparisons for your category.
Named in comparisons but with incorrect differentiators or misattributed features.
Consistently ranked below competitors with weaker real-world authority signals.
For me, that last one is the most telling. Weaker brand, higher placement.
Once again, what you're saying isn't the problem; how you're saying it and how you "package" it for the bots and algorithms is.
If the algorithm can’t surface you unprompted, you’re invisible at the moment of intent
These signals reveal whether the AI can place your brand at the point of discovery, before the user knows you exist. Clearing indexing means the engine has the content. Failing here means annotation didn’t connect that content to the broad topic signals that drive assistive recommendations.
The difference between a brand that appears in “how do I solve [problem]” answers and one that doesn’t is whether annotation connected the content to the intent.
Absent from “how do I solve [problem your product solves]” answers, even as a passing mention.
Not surfaced when the AI explains a concept you coined or own.
Absent from AI-generated roundups, guides, and “where to start” responses for your core topic.
Named as a generic example rather than a recommended solution.
The AI discusses your subject area at length and doesn’t name you as a practitioner or source.
Entity present in the knowledge graph but invisible in discovery queries on AI platforms.
The three taxes you're paying with suboptimal annotation
Three revenue consequences follow from annotation failure, one at each layer of the funnel.
The doubt tax is what you pay at BoFu when a buyer reaches your brand in the engine and the AI presents a confused, incomplete, or misframed version of what you offer.
The ghost tax is what you pay at MoFu when you belong in the consideration set and the algorithm doesn’t prominently include you.
The invisibility tax is what you pay at ToFu when the audience doesn’t know to look for you and the algorithm doesn’t introduce you.
Each tax is a direct read of how well annotation worked — or didn’t.
As an SEO/AAO expert, you can diagnose where to focus to reduce these three taxes for your client or company:
BoFu failures point to entity-level misunderstanding.
MoFu failures point to competitive cohort misclassification.
ToFu failures point to topic-authority disconnection.
Annotation should be your focus. My bet is that for the vast majority of brands, the gate in the pipeline with the biggest payback will be annotation. 99% of the time, my advice to you is going to be “get started on fixing that before you touch anything else.”
Recruitment: The universal checkpoint where competition becomes explicit
Recruitment is where the system uses your content for the first time. Every piece of content the system has annotated now competes for inclusion in the system’s active knowledge structures, and this is where head-to-head competition begins.
Every entry mode in the pipeline — whether content arrived by crawl, by push, by structured feed, by MCP, or by ambient accumulation — must pass through recruitment. No content reaches a person without being recruited first. We could call recruitment “the universal checkpoint.”
The critical structural fact: it recruits into three distinct graphs, each with different selection criteria, different confidence thresholds, and different refresh cycles. The three-graph model is my reconstruction.
The underlying principle (multiple knowledge structures with different characteristics) is confirmed by observing behavior across the algorithmic trinity through the data we collect (25 billion datapoints covering Google’s Knowledge Graph, brand search results, and LLM outputs).
The entity graph stores structured facts with low fuzz: who is this entity, what are its attributes, how does it relate to other entities, all held as binary edges. Knowledge graph presence is entity graph recruitment, with entity salience, structural clarity, source authority, and factual consistency as the selection criteria.
The document graph handles content with medium fuzz: the passages, pages, and chunks the system has annotated and assessed as worth retaining. Search engine ranking is the visible output, and relevance to anticipated queries, content quality signals, freshness, and diversity requirements drive selection.
The concept graph operates at a different level entirely, storing inferred relationships with high fuzz: topical associations, expertise patterns, and semantic connections that emerge from cross-referencing multiple sources. LLM training data selection is the mechanism, and corroboration patterns are the primary selection criterion.
The same content may be recruited by one, two, or all three graphs. Each graph has its own speed of ingestion and its own speed of output. I call these the three speeds, a pattern I formulated explicitly this year but have been observing empirically across 10 years of brand SERP experiments (sketched after this list):
Search results are daily to weekly.
Knowledge graph updates are monthly.
LLM updates are currently several months (when they choose to manually refresh the training data).
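Pulling the three graphs and the three speeds together, here's a compact way to picture the model. The field values restate my reconstruction above; none of this is a published spec:

```python
# The three-graph model and its refresh cadences, restated as data.
# Everything here is my reconstruction, not a published spec.
graphs = {
    "entity_graph": {
        "fuzz": "low",                 # structured facts, binary edges
        "visible_output": "knowledge graph presence",
        "selection_criteria": ["entity salience", "structural clarity",
                               "source authority", "factual consistency"],
        "refresh": "monthly",
    },
    "document_graph": {
        "fuzz": "medium",              # annotated passages, pages, chunks
        "visible_output": "search engine ranking",
        "selection_criteria": ["query relevance", "content quality",
                               "freshness", "diversity"],
        "refresh": "daily to weekly",
    },
    "concept_graph": {
        "fuzz": "high",                # inferred topical associations
        "visible_output": "LLM training data selection",
        "selection_criteria": ["corroboration patterns"],
        "refresh": "several months (manual training-data refreshes)",
    },
}

# The same chunk can be recruited by one, two, or all three graphs.
for name, graph in graphs.items():
    print(f"{name}: fuzz={graph['fuzz']}, refresh={graph['refresh']}")
```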
Grounding: Where the system checks its own work in real time
Recruitment stored your content in the system’s three knowledge structures. Grounding is where the system checks whether it should trust your content, right now, for this specific query.
Search engines retrieve from their own index. Knowledge graphs serve stored structured facts. Neither needs grounding. Only LLMs have the (huge) gap between stale training data and fresh reality that makes grounding necessary.
The need for grounding will gradually disappear as the three technologies of the algorithmic trinity converge and work together natively in real time.
In an assistive engine, the LLM is the lead actor. When the user asks a question or seeks a solution to a problem, the LLM assesses its confidence in its own answer.
If confidence is sufficient, it responds from embedded knowledge. If confidence is low, it sends cascading queries to the search index, retrieves results, dispatches bots to scrape selected pages, and synthesizes an answer from the fresh evidence (Perplexity is the easiest example to see this in action — an LLM that summarizes search results).
But that’s too simplistic. The three grounding sources model that follows is my reconstruction of how this lifecycle operates across the algorithmic trinity.
The search engine grounding the industry currently focuses on is this: the LLM queries the web index, retrieves documents, and extracts the answer. That’s high fuzz.
Now add this: the knowledge graph allows a simple, quick, and cheap lookup. That's low fuzz: binary edges, no interpretation required. And our data shows that Google does this already for entity-level queries.
My bet is that specialist SLM grounding is emerging as a third source. We know that once enough consistent data about a topic crosses a cost threshold, the system builds a small language model specialized for that niche, and that model becomes a domain-expert verifier. It would be foolish not to use that as a third grounding base.
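Here's how that grounding cascade might look, ordered cheapest-first. The threshold, the ordering, and every function are assumptions layered on my reconstruction, not a documented engine API:

```python
# Confidence-gated grounding cascade (my reconstruction, not a documented API).
CONFIDENCE_THRESHOLD = 0.8  # assumed value; real systems tune this per query class

def llm_generate(query, evidence=None):
    """Toy stub: returns (answer, confidence). A real LLM call goes here."""
    if evidence:
        return f"Answer to '{query}' grounded in {len(evidence)} source(s).", 0.95
    return f"Best-effort answer to '{query}'.", 0.4

def knowledge_graph_lookup(query):
    """Low fuzz: a cheap structured lookup, no interpretation required."""
    kb = {"who is jason barnard": "Jason Barnard is a digital marketer and author."}
    return kb.get(query.lower())

def slm_grounding(query, draft):
    """Speculative third source: a niche SLM verifier, when one exists."""
    return None  # no specialist model available for this topic

def search_and_scrape(query):
    """High fuzz: query the web index, scrape pages, hand passages back."""
    return ["scraped passage 1", "scraped passage 2"]

def answer(query):
    draft, confidence = llm_generate(query)     # answer from embedded knowledge
    if confidence >= CONFIDENCE_THRESHOLD:
        return draft                            # confident: no grounding needed
    fact = knowledge_graph_lookup(query)        # cheapest grounding path
    if fact:
        return llm_generate(query, evidence=[fact])[0]
    verified = slm_grounding(query, draft)      # niche verifier, if one is built
    if verified:
        return verified
    docs = search_and_scrape(query)             # most expensive path
    return llm_generate(query, evidence=docs)[0]

print(answer("who is Jason Barnard"))  # resolves via the knowledge graph path
```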
The competitive implication is huge. A brand with entity graph presence gives the system a low-fuzz grounding path. A brand without it forces the system onto the high-fuzz path (document retrieval), which means more interpretation, more ambiguity, and lower confidence in the result. The competitor with structured entity data gets verified faster and more accurately.
In short, focus on entity optimization because knowledge graphs are the cheapest, fastest, and most reliable grounding for all the engines.
Display: Where machine confidence meets the person
Your content has been annotated, recruited into its knowledge structures, and verified through grounding. Display is where the AI assistive engine decides what to show the person (and, looking to a future that is already arriving, where the AI assistive agent decides what to act upon).
Display is three simultaneous decisions: format (how to present), placement (where in the response), and prominence (how much emphasis). A brand can be annotated, recruited, and grounded with high confidence and still lose at display because the system chose a different format, placed the competitor more prominently, or decided the query deserved a different type of answer entirely.
This is essentially the same thing as Bing’s Whole Page Algorithm. Gary Illyes jokingly called Google’s whole page algorithm “the magic mixer.” Nathan Chalmers, PM for the whole page algorithm at Bing, explained how that works on my podcast in 2020. Don’t make the mistake of thinking this is out of date — it isn’t. The principles are even more relevant than ever.
UCD activates at display
You may have heard or read me talking obsessively about understandability, credibility, and deliverability. UCD is absolutely fundamental because it is the internal structure of display: the vertical dimension that makes this gate three-dimensional.
The same content, grounded with the same confidence, presents differently depending on who is asking and why.
A person arriving with high trust — they searched your brand name, they already know you — experiences display at the understandability layer, where the engine acts as a trusted partner confirming what they already believe. That's BoFu.
A person evaluating options — they asked “best [category] for [use case]” — experiences display at the credibility layer, where the engine presents evidence for and against as a recommender. That's MoFu.
A person encountering your brand for the first time — a broad topical question in which your name appears — experiences it at the deliverability layer, where the system introduces you. That's ToFu.
The user interaction reveals the funnel position. The funnel position determines which UCD layer fires.
This is why optimizing only for “ranking” misses reality: Display is a context-sensitive presentation, not a list, and the same piece of content can introduce, validate, or confirm depending on who asked.
The framing gap at display
The system presents what it understood, verified, and deemed relevant. The gap between that and your intended positioning is the framing gap, and it operates differently at each funnel stage.
At ToFu, the gap is cognitive: the system may know you exist, but doesn't associate you with the right topics.
At MoFu, the gap is imaginative: the system needs a frame to differentiate your proof from the competitor's, and most brands supply claims without frames.
At BoFu, the gap is about relevance: the system cross-references your claims against structured evidence, and either confirms or hedges.
After annotation, framing is the single most important part of the SEO/AAO puzzle, so I’ll talk a lot about both in the coming articles.
Won: The zero-sum moment where one brand wins and every competitor loses
Everything I’ve explained so far in this series collapses into a zero-sum point at the “won” gate. Here, the outcome is binary. The person (or agent) acts, or they don’t. One brand converts, and every competitor loses.
The system may have mentioned others at display, but at the moment of commitment, there can only be one winner for the transaction.
Three won resolutions in the competitive context
Won always resolves through three distinct mechanisms, each with different competitive dynamics.
Resolution 1: Imperfect click
The AI influences the person’s thinking at grounding and display, but the person decides independently: they choose one of several options offered by the engine, they walk into the store, or they book by phone.
This is what Google called the “zero moment of truth”: the competitive battle happens at display, and the engine has influenced the human, but the active choice is still very much the person's own.
Resolution 2: Perfect click
The AI recommends one brand and the person takes it. This is the natural next step, what I call the zero-sum moment.
This fires inside the AI interface, where the engine filtered for intent, context, and readiness, presented one answer, and the person converted.
Resolution 3: Agential click
The AI agent acts autonomously on the person's behalf. There is no person at the decision point, just an API settlement between the buyer's agent and the brand's action endpoint.
The competitive battle happened entirely within the engine: whichever brand had the highest accumulated confidence, the strongest grounding evidence, and a functional transaction endpoint is the winner. The person doesn’t choose. The system chooses for them.
The trajectory runs from oldest to newest: Resolution 1 was dominant up to late 2025, Resolution 2 is taking over, and Resolution 3 gained a lot of traction in early 2026. Stripe and Cloudflare are laying the transaction and identity rails. Visa and Mastercard are building the financial authorization infrastructure.
Anthropic’s MCP is providing the coordination layer. Google’s UCP and A2A are defining how agents communicate across the full consumer commerce journey. Apple has the closed-loop infrastructure to make it seamless on a billion devices the moment they choose to.
Microsoft is locking in the enterprise and government layer through Copilot in a way that will be extremely difficult to displace. No single company turns Resolution 3 on — but all of them together make it inevitable.
Competitive escalation across the five ARGDW gates
The competitive intensity increases at every gate — a progressive narrowing, a Darwinian funnel where the field shrinks at each stage. The narrowing pattern is my model based on observed outcomes across our database. The underlying principle (competitive selection intensifies downstream) is structural to any sequential gating system.
The field is large at annotation, where the algorithms create scorecards and your classification versus competitors’ determines downstream positioning.
Recruitment sets the qualifying round: multiple brands enter the system’s knowledge structures, but not all, and the selection criteria already favor multi-graph presence.
Grounding narrows the shortlist as confidence requirements tighten — the system verifies the candidates worth checking, not everyone.
Display reduces to finalists, often one primary recommendation with supporting alternatives.
Won is the binary outcome. The zero-sum moment you’re either welcoming with open arms or fearful of.
ARGDW: Relative tests. The scoreboard is on.
Five gates. Five relative tests. Competitive failures in ARGDW are significantly harder to diagnose than infrastructure failures in DSCRI because the fix is competitive positioning rather than technical.
Annotation failures mean the system misclassified what your content is or who it belongs to — write for entity clarity, structure claims with explicit evidence, and use schema markup to declare rather than expect the system to guess.
Recruitment failures increasingly mean you’re present in one graph while competitors have two or three — build entity graph presence (structured data, knowledge panel, entity home), document graph presence (content quality, topical coverage), and concept graph presence (consistent publishing across authoritative platforms) as a coordinated program.
Grounding failures mean the system is verifying you on the high-fuzz path — provide structured entity data for low-fuzz verification, and MCP endpoints if you need real-time grounding without the search step.
Display failures mean the framing gap is costing you at the three layers of the visible gate — assuming you fixed all the upstream issues, then closing that framing gap at every UCD layer is your pathway to gain visibility in AI engines.
Won failures mean the resolution mechanism doesn’t exist — Resolution 1 requires that you rank (good enough up to 2024), Resolution 2 requires that you dominate your market (good enough in 2026), and Resolution 3 requires a mandate framework and action endpoint (needed for 2027 onward).
After establishing the 10-gate AI engine pipeline, what’s next?
The aim of this series of articles is to give you the playbook for the DSCRI infrastructure phase and the strategy for the ARGDW competitive phase. This 10-gate AI engine pipeline breaks optimizing for assistive engines and agents into manageable chunks.
Each gate is manageable on its own. And the relative importance of each gate is now clear for you (I hope). In the remainder of this series of articles, I’ll provide solutions to the major issues at each gate that will help you manage each individually (and as part of the collective whole).
Aside: The feedback I have had from Microsoft on this series so far (thank you, Navah Hopkins) reminded me of something Chalmers said to me about Darwinism in Search back in 2020.
My explanations are often more absolute and mechanical than the reality. That’s a very fair point. But then reality is unmanageably nuanced, and nuance leads to a lack of clarity and often paralyzes people to the extent that they struggle to identify actionable next steps. I want to be useful.
I suggest we take this evolution from SEO to AAO step by step. Over the last 10+ years, I’ve always done my very best to avoid saying “it depends.”
People often say it takes 10,000 hours to become an expert. The framework presented here comes from tens of thousands of hours analyzing data, experimenting, working with the engineers who build these systems, and developing algorithms, infrastructure, and KPIs.
The aim is simple: reduce the number of frustrating “it depends” answers and provide a clear outline for identifying actionable next steps.
This is the fifth piece in my AI authority series.
Search strategy once meant ranking on Google. We optimized websites and invested heavily in organic visibility. Entire marketing strategies were built around capturing demand from Google search results.
But search behavior doesn’t live on a single platform. Today, people search on TikTok for recommendations, YouTube for tutorials, Reddit for honest opinions, and Amazon for product validation.
Search behavior now spans a much wider set of platforms, creating one of the most overlooked opportunities in digital marketing.
The findings from recent SparkToro research reinforce something many marketers are beginning to notice. Search is no longer confined to traditional search engines.
While Google still dominates search activity, a growing share of discovery now happens across a wider collection of platforms — a search universe, if you will.
The research suggests search activity is roughly distributed as follows:
Traditional search engines: ~80% of searches, with Google alone at ~73.7%
Commerce platforms (Amazon, Walmart, eBay): ~10%
Social networks: ~5.5%
AI tools (ChatGPT, Claude, etc.): ~3.2%
Consumers search directly on platforms where they expect to find the most useful answers, in the formats they prefer, rather than relying on Google to send them elsewhere.
The industry is focused on AI and missing the bigger mainstream shift
Much of the search industry conversation today is focused on AI. Questions like:
How do I rank in ChatGPT?
How do I optimize for AI search?
Will AI replace Google?
They’re constantly being posed, debated, and answered by SEO professionals on platforms like Search Engine Land.
I want to be clear: these are important questions. But the data within this study tells a more grounded story, especially when thinking about strategy over the next 12 months.
AI search tools currently account for roughly 3.2% of search activity, per SparkToro research. That’s meaningful. It will almost certainly reshape how people search and discover information in the future.
But today, AI search is still smaller than many established discovery platforms with far broader adoption. For example:
Amazon receives more searches than ChatGPT.
YouTube receives more searches than ChatGPT.
Even Bing receives more search activity.
Yet many brands are pouring disproportionate attention into AI visibility while overlooking platforms where millions of searches are already happening every day.
Social platforms are now search engines
For many users, social platforms are now core search destinations. People look to:
TikTok for recommendations, restaurants, travel ideas, and products.
YouTube for tutorials, reviews, and problem-solving.
Reddit for honest discussions and community opinions.
Pinterest for inspiration and visual discovery.
Each platform plays a different role in the discovery journey.
What people search for, by platform:
TikTok/Instagram: Discovery and recommendations.
YouTube: Learning, tutorials, and reviews.
Reddit: Real opinions and community discussions.
Pinterest: Inspiration and planning.
These platforms are more than entertainment destinations. Users head to them with real intent to find a solution to a problem, need, or desire.
Social content is now appearing directly in Google results
As users adopt social platforms for search, Google has begun aggregating and organizing information right within its SERPs. So yes, social and creator content appears directly inside Google search results.
Over the past year, Google has significantly expanded how it surfaces social content within SERPs. Search results now frequently include TikTok videos, YouTube Shorts, Reddit threads, Instagram posts, and forum discussions.
Google even partnered with platforms like Reddit to ensure that community discussions appear more prominently in search results. This means social content can now influence discovery in multiple ways.
Social platforms are also important sources for AI-generated answers. AI systems rely on content that reflects real-world experiences, discussions, and opinions.
That’s why platforms such as Reddit, YouTube, Quora, forums, and creator-led content (i.e., Instagram, TikTok, and YouTube Shorts) are frequently cited in AI-generated responses.
Google’s AI Overviews often reference Reddit threads and YouTube videos.
Other AI tools regularly draw insights from community discussions, reviews, and creator content when generating answers.
This means content created for social discovery can influence visibility across multiple layers of search, including social platforms, Google search results, and AI-generated responses.
A single piece of content can now travel much further across the search universe, consistently putting signals in front of audiences and developing a preference for one brand over another.
The compounding discoverability effect
When brands invest in social search visibility, they unlock a powerful compounding effect. For example, a useful YouTube tutorial could:
Rank in YouTube search.
Appear in Google search results.
Be referenced in AI-generated answers.
Be shared across social platforms.
Spread through private messaging and dark social channels.
Unlike traditional website content, social content can move across platforms, dramatically expanding its reach. This creates an entirely new layer of discoverability.
And at a time when marketing budgets are under increasing scrutiny, the ability for content to generate visibility across multiple platforms makes the ROI of content strategies far more compelling.
Despite these shifts, most search strategies still revolve around Google SEO, paid search, website content, and AI/LLM interfaces.
Few brands have structured strategies for TikTok search optimization, YouTube search visibility, Reddit community engagement, and creator-led discovery strategies.
While Google SEO is incredibly competitive, social search remains relatively under-optimized. Brands that move early can capture visibility (presence) in spaces where demand already exists, thereby developing preference for their brand.
When brands invest in social search visibility, they aren’t just unlocking the 5.5% of searches happening directly on social platforms. They’re also influencing traditional search results, AI-generated answers, and wider discovery across the web.
Search everywhere: A new model for discoverability
Search is more than a channel. It's a behavior that happens across an evolving search universe.
Your audience searches wherever they believe they’ll find the best answer in the most useful format — whether that’s Google, TikTok, YouTube, Reddit, Amazon, Pinterest, or increasingly, AI interfaces.
Winning search today means being discoverable wherever those searches happen. The brands that win won’t be the ones that rank in just one place, even as traditional SEO remains an important part of the mix. They’ll be the ones that are discoverable wherever their audience searches.
That is the future of search. That is “search everywhere.”
Google is expanding capabilities in Google Ads Editor to give advertisers more creative flexibility, automation control, and budget precision — especially as AI-driven campaign types continue to evolve.
What’s new. The 2.12 release introduces a wide set of updates across Performance Max, Demand Gen, and video campaigns, with a clear focus on scaling creative assets and improving workflow efficiency.
Creative expansion. Performance Max campaigns now support up to 15 videos per asset group, allowing advertisers to feed more variations into Google’s AI for testing. The addition of 9:16 vertical images also reflects growing demand for mobile-first formats, particularly across surfaces like short-form video.
Campaign upgrades. Demand Gen campaigns get several enhancements, including new customer acquisition goals, brand guideline controls, and hotel feed integrations. A new minimum daily budget and a streamlined campaign build flow aim to improve stability and setup.
Video & AI control. Updates to non-skippable video formats and real-time bid guidance give advertisers more control over performance, while new text and brand guidelines help ensure AI-generated assets stay on-brand and compliant.
Budgeting shift. A new total campaign budget feature allows advertisers to set a fixed spend across a defined period — ideal for promotions or seasonal bursts — with Google automatically pacing delivery.
Workflow improvements. Account-level tracking templates, better visibility into Final URL expansion performance, clearer campaign status filters, and bulk link replacement tools are designed to reduce manual work and improve account management at scale.
Why we care. This update to Google Ads Editor gives advertisers more creative flexibility and control over AI-driven campaigns, especially in Performance Max and Demand Gen. Features like increased video limits, vertical assets, and total campaign budgets help you test more, scale faster, and manage spend more efficiently.
It also improves workflows and brand safeguards, making it easier to guide automation while maintaining consistency and performance across Google Ads.
Between the lines. The update continues a broader trend: as automation increases, Google is giving advertisers more ways to guide AI rather than manually control every input.
The bottom line. Google Ads Editor 2.12 is less about one standout feature and more about incremental gains across creative, automation, and control — helping advertisers better manage increasingly AI-driven campaigns within Google Ads.
As Google rolls out AI Overviews, AI Mode in Search, and the Gemini ecosystem, we face a growing challenge: what happens when users get answers — and soon complete purchases — without leaving Google’s interfaces?
UCP is designed to help brands sell to consumers without leaving the Gemini or LLM experience. Consumers can check out within the LLM, add rewards points, and fully execute the transaction.
How Google’s Universal Commerce Protocol works
At its core, UCP standardizes how consumer AI interfaces communicate with merchant checkout systems. When a user tells Gemini, “Find me a highly rated, waterproof hiking boot in size 10 under $200 and buy it,” UCP is the invisible bridge that allows the AI to securely fetch inventory, process the payment, and confirm the order.
While Google’s developer documentation leans into technical jargon like “Model Context Protocol (MCP)” and “Agent2Agent (A2A) interoperability,” the implications are remarkably straightforward (a hypothetical exchange follows this list):
It uses your existing feeds: UCP plugs directly into your existing Google Merchant Center (GMC) shopping feeds. The inventory data you’re already managing for your campaigns is the same data that will power these AI transactions.
You keep the data: Unlike selling on some third-party marketplaces, where you lose the customer relationship, UCP ensures you remain the merchant of record. You process the transaction, you own the first-party customer data, and you control the post-purchase experience.
Frictionless checkout: By enabling checkouts directly within Google’s AI ecosystem, UCP can reduce cart abandonment and increase conversion rates among high-intent shoppers.
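To picture the flow end to end, here's a hypothetical agent-to-merchant exchange. Every endpoint name and payload field below is invented for illustration; map the real structures from the UCP open-source spec on GitHub before building anything:

```python
import json

# Hypothetical agent-to-merchant exchange. All payload fields are invented
# for illustration; the real shapes live in the UCP open-source spec on GitHub.

user_intent = {
    "query": "waterproof hiking boot, size 10, under $200, highly rated",
    "constraints": {"max_price_usd": 200, "size": "10", "min_rating": 4.5},
}

# 1. The AI matches the intent against your existing GMC feed data.
candidate = {"offer_id": "BOOT-774", "price_usd": 179.99, "rating": 4.7}

# 2. The AI opens a checkout session with the merchant's endpoint.
session_request = {
    "offer_id": candidate["offer_id"],
    "buyer_payment_token": "opaque-credential",  # invented placeholder
    "loyalty_id": "rewards-12345",               # rewards points stay attached
}

# 3. You confirm and process the payment as the merchant of record,
#    keeping the first-party customer data and post-purchase experience.
order_confirmation = {
    "order_id": "ORD-2026-0042",
    "status": "confirmed",
    "merchant_of_record": True,
}

print(json.dumps(order_confirmation, indent=2))
```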
Like many LLM optimization recommendations, these steps come down to the fundamentals of managing your shopping feed and Merchant Center account.
Google outlined a few best practices. If you follow these four steps, you’ll be well-positioned for success.
1. Master your feed data hygiene
In an agentic commerce environment, your product feed is your primary sales tool. To ensure the AI accurately matches your products to highly specific user queries, you need to enrich your feed with granular details.
Write product titles that are 30 or more characters long.
Expand product descriptions to 500 or more characters.
Include Global Trade Item Numbers (GTINs), where relevant, to ensure accurate product matching.
Include three or more additional images alongside your primary product photo to engage visual shoppers.
Use lifestyle images, not just standard product shots on white backgrounds.
Ensure your image quality meets the standard of 1,500×1,500 pixels.
Categorize your inventory by product type and share key product highlights.
Prepare specific feed attributes required for UCP, such as returns, support information, and policy information.
Support Google’s Native Checkout when possible (checkout logic integrated directly into the AI interface). Google also offers another option called Embedded Checkout (an iframe-based solution for highly bespoke branding). This will work, but is suboptimal at this time.
2. Pass trust and convenience signals through your feed
To set your brand apart when AI is helping consumers make immediate, confident purchasing decisions, you must pass trust and convenience signals directly through your feed. The data shows that these elements directly impact the bottom line (a feed sketch pulling these attributes together follows this list):
Indicate clearly if your brand offers free shipping.
Share your shipping speed (next day, two-day, etc.).
Display your return policy.
Submit sale prices when available. Regardless, ensure the feed represents the most accurate pricing details.
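Here's what those two checklists look like pulled together into a single feed item, sketched as a Python dict. The attribute names follow familiar Merchant Center conventions (title, description, gtin, additional_image_link, shipping, sale_price), but the values are invented and the UCP-specific fields are placeholders; verify everything against the current feed and UCP specs:

```python
# One feed item sketched as a dict; attribute names follow Google Merchant
# Center conventions, but values are invented. Verify against the live spec.
item = {
    "id": "BOOT-774",
    "title": "Trailhead Waterproof Leather Hiking Boot - Men's Size 10",  # 30+ chars
    "description": "...",          # aim for 500+ characters of genuine detail
    "gtin": "00012345678905",      # enables accurate product matching
    "image_link": "https://example.com/boot-774-main.jpg",  # 1,500x1,500px minimum
    "additional_image_link": [     # 3+ extra images, including lifestyle shots
        "https://example.com/boot-774-side.jpg",
        "https://example.com/boot-774-trail.jpg",
        "https://example.com/boot-774-detail.jpg",
    ],
    "product_type": "Footwear > Boots > Hiking Boots",
    "product_highlight": ["Fully waterproof membrane", "Recycled outsole"],
    "price": "199.99 USD",
    "sale_price": "179.99 USD",    # submit sale prices when available
    "shipping": {"country": "US", "service": "Two-day", "price": "0 USD"},  # free shipping
    # Returns, support, and policy attributes required for UCP are newer;
    # check Google's UCP documentation for the exact field names.
}

print(f"{item['id']}: {item['title']} at {item['sale_price']}")
```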
3. Prepare your technical integrations
The shift to UCP requires foundational updates to how your backend systems interact with Google. You must work hand in hand with your development and SEO teams to prepare for these AI search experiences (a structured data sketch follows this list).
Migrate from the Content API to the Merchant API to enable real-time inventory updates and programmatic access to data and insights.
Upgrade your tag in Data Manager and implement Conversion with Cart Data to effectively use first-party data in your campaigns.
Prioritize content-rich pages for indexing and crawling, and ensure structured data is always supported by visible content.
Create your Business Profile and claim your Brand Profile to highlight your business information and brand voice on Google platforms.
Have your development team explore and prototype with UCP open source on GitHub to map APIs for checkout, session creation, and order management.
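On the structured data point specifically, here's a minimal schema.org Product markup, shown as a Python dict you would serialize to JSON-LD in a script tag on the product page. The Product, Offer, and AggregateRating vocabulary is standard schema.org; the values are placeholders, and every claim in the markup should also be visible on the page itself:

```python
import json

# Minimal schema.org Product markup. Serialize and embed it as
# <script type="application/ld+json"> on the product page, and make sure
# every claim in the markup is also visible on-page.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Trailhead Waterproof Leather Hiking Boot",
    "image": "https://example.com/boot-774-main.jpg",
    "description": "Waterproof leather hiking boot with recycled outsole.",
    "gtin": "00012345678905",
    "brand": {"@type": "Brand", "name": "Trailhead"},
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.7",
        "reviewCount": "212",
    },
    "offers": {
        "@type": "Offer",
        "price": "179.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
        "url": "https://example.com/products/boot-774",
    },
}

print(json.dumps(product_jsonld, indent=2))
```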
4. Additional features and tools beyond UCP to consider
Google is actively rolling out pilot programs designed specifically for the agentic era. Be proactive in adopting these new solutions rather than waiting for wide release:
Prepare for the “Business Agent,” a virtual sales associate that acts like a brand representative to answer product questions right on Google.
Consider the “Direct Offers Pilot,” a new way for advertisers to present exclusive discounts directly in AI Mode.
Inquire about the “Conversational Attributes Pilot,” which introduces dozens of new Merchant Center attributes designed to enhance discovery in the conversational commerce era.
The launch of Google’s Universal Commerce Protocol signals a significant shift. The SERP is becoming a transactional engine that increasingly operates within large language models.
UCP presents a meaningful opportunity. By removing friction between discovery and purchase, conversion rates could increase.
However, taking advantage of this requires stepping outside the Google Ads interface and working directly in your feed data and technical integrations, much like with Google Shopping. While this isn’t new, it’s becoming more important.
Ultimately, this comes down to the quality of your product data.