Google’s AI Mode is citing Google more than any other site: Study

Google Search loop

Google’s AI Mode is increasingly citing Google itself — and often sending users back to another Google search, according to new SE Ranking research.

Why we care. AI search is meant to surface the best sources on the web. If Google increasingly cites itself, you may see fewer direct links and less traffic as more users stay inside Google.

The details. Google.com was the most cited source in AI Mode answers, accounting for 17.42% of all citations, SE Ranking found.

  • That makes Google.com the most referenced domain — more than the next six domains combined: YouTube, Facebook, Reddit, Amazon, Indeed, and Zillow.

Accelerating trend. In June 2025, Google cited itself in just 5.7% of AI Mode answers. That share has now roughly tripled.

  • Nearly one in five AI citations now comes from Google. Including YouTube, Google-controlled properties account for roughly 20% of sources.

Self-preferencing on steroids. AI Overviews already link heavily to Google properties like Maps, Images, and YouTube. AI Mode appears to extend that approach by pushing users deeper into Google’s ecosystem, often through additional search results rather than external sites.

  • This keeps users interacting with Google surfaces where ads, reviews, and other monetized content appear.

What changed. Earlier AI Mode research showed Google mainly citing Google Business Profiles. That’s no longer the case:

  • 59% of Google citations now point to traditional Google search results.
  • 36.1% still reference Google Business Profiles.
  • Smaller shares link to Google Support (1.7%), Google Flights (0.1%), and other Google properties.
  • In many cases, AI Mode citations now show a mini search results panel beside the answer — effectively turning the citation into another search experience.

Industry differences. Google dominates citations across most topics. Some niches rely on Google even more:

  • Travel: 53.18% of citations
  • Entertainment & hobbies: 48.74% of citations
  • Real estate: 30.54% of citations

The only category where Google wasn’t the top source was Careers and Jobs, where Indeed appeared 3.1x more often than Google.

About the data. SE Ranking analyzed 68,313 keywords across 20 industries and more than 1.3 million AI Mode citations to measure how often Google.com appears as a cited source.

The report. Is Google stealing your clicks in AI Mode? (1.3M+ citations analyzed)

The latest jobs in search marketing

Search marketing jobs

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • Job Description Content Marketing Manager Location: El Segundo, California, USA (HQ) Reports To: Director of Marketing Role Type: Full-Time, On-Site Compensation: $75,000-$90,000 annually About QuikStor QuikStor is the leading SaaS facility management platform for the self-storage industry, delivering a purpose-built, scalable system that serves as the foundation for intelligent automation and modern facility operations. We […]
  • (Hybrid) Description We are expanding our marketing team and seeking a Content Marketing Specialist to play a key role in executing our marketing strategy. You will own the creation and publication of content across blogs, social media, email campaigns, and the company website, helping bring DMC’s brand to life across digital channels. A portfolio of […]
  • Job Description Position Title: E-commerce and SEO Specialist Compensation Range: $55,000 – $75,000 Location: Hybrid / On-site – Englewood, CO About GOLFTEC Enterprises: GOLFTEC Enterprises is a dynamic, technology-driven leader in the golf industry, uniting two premier brands—GOLFTEC and SKYTRAK—with a shared mission: to help people play better golf. GOLFTEC, the world leader in golf […]
  • The Role Wpromote is seeking a Senior Technical SEO Manager dedicated exclusively to the Southwest Airlines account. This isn’t a typical SEO role — it’s an opportunity to shape how a leading travel company competes in a transforming search landscape. You’ll be a key player focused on organic discoverability, cross-channel collaboration, and measurable revenue impact. […]
  • Our digital marketing agency helps multi-location home service brands generate leads across dozens of local markets. Our flagship client is a PE-backed home services company operating 40+ locations across the U.S. and Canada under several brands, and we are expanding our SEO team to support rapid growth. We are looking for a hands-on SEO Manager […]
  • Job Description Salary: $45,000-$50,000 DOE Position Summary We are looking for a creative and motivated Content Marketing Specialist to join our team in the roofing industry. This role is ideal for someone with at least one year of marketing experience who thrives in a fast-paced environment and enjoys creating visually engaging, results-driven content. You will […]
  • Job Description Title: Director of Digital Marketing Reports To: President/CEO Location: Bellingham, WA or Waynesboro, TN (negotiable) Department: Marketing Salary: $115K-$135K annual base salary for the initial six months, with transition to an attractive incentive-based compensation package designed to reward performance and contribution. About Us Seeking Health is a fast-growing nutritional supplement company with $50M […]
  • The SEO Strategist is responsible for owning and executing comprehensive SEO strategies across their assigned portfolio of clients. This includes analyzing performance, creating actionable roadmaps, managing content and optimization planning, and resolving client-specific SEO issues. Strategists play a critical cross-functional role in ensuring SEO is aligned with the broader goals of client success, advertising, content, […]
  • We are seeking a “full-stack” technical SEO to join our growing team of talent. In the role, you’ll own technical SEO delivery for an amazing pool of clients in markets as diverse as fashion and finance to automotive and travel. You will be advising clients and colleagues on architecture, performance, and technical best practices and […]
  • Job Description Director of Digital Marketing Healthcare is increasingly unaffordable for many Americans. For those who can afford it, they are in a health insurance system that has become more confusing, restrictive, and lower value with each passing year. Here at WeShare our mission is to bring better healthcare to America at a better price. […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • Benefits 401(k) Bonus based on performance Competitive salary Dental insurance Free food & snacks Health insurance Opportunity for advancement Paid time off Training & development Vision insurance Are you passionate about supporting business owners and helping them grow? Do you enjoy being the go-to partner who helps clients show up at the right moment and […]
  • Overview Want to join an agency where your hard work will be recognized? An agency where you’ll have abundant opportunities to learn and grow while significantly impacting your skillset and your client’s bottom line? If so, hi, we’re Socium Media! We’re a performance marketing agency recently awarded as one of Adweek’s Fastest Growing Agencies in […]
  • About Greenlane Marketing Greenlane Marketing is not your average digital marketing agency. We were founded to be a true alternative to the standard agency model, one that prioritizes partnership, transparency, and data-driven results. We are a team of passionate, curious, and dedicated experts who are genuinely excited to tackle complex challenges and create custom strategies […]
  • ABOUT THE JOB Tapcheck is looking for a Sr. Performance Marketing Contractor to own and optimize our paid search programs across Google Ads and Microsoft Ads, with a strong emphasis on hands‑on execution and optimization. This is a 6-month part‑time contract role (approximately 15–20 hours per week) with opportunity for additional hours and transition to […]
  • Overview This role requires a hybrid schedule and will be based in our Fort Mill, SC Headquarters (Monday through Thursday) and work fully remotely on Fridays each week. This role is not open to visa sponsorship or transfer of visa sponsorship including those on H1-B, F-1, OPT, STEM-OPT, or TN visa, nor is it available […]

Other roles you may be interested in

Advertising Media Manager, Vetoquinol USA (Remote)

  • Salary: $100,000 – $110,000
  • Develop and implement strategic advertising plans for Etail (Ecomm/Retail) accounts.
  • Analyzing advertising performance data with related ROAS & TACoS evaluations.

Programmatic Advertising Manager, We Are Stellar (Remote)

  • Salary: $75,000
  • Manage the day-to-day programmatic campaign approach, execution, trafficking optimization, and reporting across the relevant DSPs for your clients.
  • Build and present directly to client stakeholders programmatic campaign performance, analysis, and insights.

Marketing Manager, Backstage (Remote)

  • Salary: $100,000 – $140,000
  • Manage and optimize campaigns daily across Meta Ads, Google Ads, and other key partners
  • Own forecasting, pacing, budget allocation, and optimization for high-scale monthly budgets.

Demand Generation Manager, Shoplift (Remote)

  • Salary: $100,000 – $110,000
  • Design and execute inbound-led outbound campaigns—reaching prospects who’ve shown intent (visited pricing page, downloaded resources, engaged with content) at precisely the right moment
  • Build and optimize Apollo sequences, LinkedIn outreach, and multi-touch campaigns that book qualified demos for AEs

Search Engine Optimization Manager, Confidential (Hybrid, Miami-Fort Lauderdale Area)

  • Salary: $75,000 – $105,000
  • Serve as a strategic SEO partner for client accounts, translating business goals into actionable search initiatives
  • Communicate SEO insights, priorities, and performance clearly to clients and internal stakeholders

Meta Ads Manager, Cardone Ventures (Scottsdale, AZ)

  • Salary: $85,000 – $100,000
  • Develop, execute, and optimize cutting-edge digital campaigns from conception to launch
  • Provide ongoing actionable insights into campaign performance to relevant stakeholders

Senior Manager of Marketing (Paid, SEO, Affiliate), What Goes Around Comes Around (Jersey City, NJ)

  • Salary: $125,000
  • Develop and execute paid media strategies across channels (Google Ads, social media, display, retargeting)
  • Lead organic search strategy to improve rankings, traffic, and conversions

Paid Search Marketing Manager, LawnStarter (Remote)

  • Salary: $90,000 – $125,000
  • Manage and optimize large-scale, complex SEM campaigns across Google Ads, Bing Ads, Meta Ads and other search platforms
  • Activate, optimize and make efficient Local Services Ads (LSA) at scale

Senior Manager, SEO, Turo (Hybrid, San Francisco, CA)

  • Salary: $168,000 – $210,000
  • Define and execute the SEO strategy across technical SEO, content SEO, on-page optimization, internal linking, and authority building.
  • Own business and operations KPIs for organic growth and translate them into clear quarterly plans.

Search Engine Optimization Manager, NoGood (Remote)

  • Salary: $80,000 – $100,000
  • Act as the primary strategic lead for a portfolio of enterprise and scale-up clients.
  • Build and execute GEO/AEO strategies that maximize brand visibility across LLMs and AI search surfaces.

Note: We update this post weekly. So make sure to bookmark this page and check back.

OpenAI’s big ChatGPT Instant Checkout plan just changed

AI shopping

OpenAI is backing away from putting checkout directly inside ChatGPT. Instead, purchases will shift to retailer apps that connect to ChatGPT, The Information reported.

Why we care. ChatGPT aims to be more than a discovery engine. Right now, though, product discovery inside ChatGPT is gaining traction faster than purchases. That suggests AI-powered shopping is only influencing the consideration stage (at least for now), not driving conversions.

What happened. OpenAI had planned to let shoppers buy products directly from listings in ChatGPT search results. Instead, an OpenAI spokesperson said that Instant Checkout is moving to Apps, where purchases happen inside connected services rather than natively in ChatGPT.

  • The company will now prioritize product search and discovery inside ChatGPT.
  • It will also keep working with Stripe on the Agentic Commerce Protocol to support app-based transactions.

What changed: OpenAI found that users research products in ChatGPT but don’t complete purchases there. Only a small number of merchants were actively using native ChatGPT checkout, according to the report.

  • In September, OpenAI positioned Instant Checkout as a big commerce opportunity. At the time, it said U.S. users could buy from Etsy sellers inside ChatGPT, with plans to expand to Shopify merchants, add multi-item carts, and roll out beyond the U.S.

Meanwhile. Shopify president Harley Finkelstein said this week that only about a dozen Shopify merchants were using AI tools, despite Shopify supporting integrations with ChatGPT, Gemini, and Copilot. That’s tiny relative to Shopify’s overall merchant base.

What to watch. Can OpenAI make ChatGPT more valuable as a shopping discovery engine without owning the final transaction? Also, how does OpenAI’s commerce strategy intersect with its advertising ambitions? If transactions stay outside ChatGPT, monetizing product discovery through ads could become even more important.

Why this is happening. Two forces are slowing agentic commerce, according to Leigh McKenzie, director of online visibility at Semrush: infrastructure and trust. Real-time catalog normalization across tens of millions of SKUs is a decade-scale problem Google already solved with Merchant Center, and consumers still default to checkout flows they trust — Apple Pay, Google Wallet, and Amazon one-click.

The report. OpenAI Scales Back Shopping Plans for ChatGPT (subscription required)

Google’s Liz Reid: Search and Gemini may converge, or diverge further

AI future paths

Google’s Liz Reid, VP and head of Search, drew a clearer line between Google Search and Gemini but said it’s still unclear whether the products will converge, diverge further, or be superseded.

The big picture. Reid said Search is an information product focused on helping people connect with the web, while Gemini is centered more on assisting with productivity and creation. She added that the boundaries are fluid, especially as AI products evolve quickly and agentic experiences reshape how people use the internet.

What she’s saying. In short, Reid said Search and Gemini share technology but have different product “north stars.” They could overlap more over time, but the long-term direction is still open. Here’s what she said in an interview on Access Podcast:

  • “I don’t know the answer is the short answer.”
  • “I think what we see is some areas they’re converging more and some areas they’re diverging more, right? And so what are they going to net out? Like, do the areas that diverged eventually all come together, or do the areas that diverge become even bigger over time? I think we’ll see.”
  • “So I don’t know, in all honesty, but I think we are right now at a point where depending on what angle you look at, you’d think they’re getting closer or they’re getting further apart.”
  • “Who knows, maybe agents will mean like the right product is neither of the two of them is a third product altogether that they merge into. I don’t know yet.”

Gemini vs. Search. Here’s the distinction Reid made:

  • On Gemini: “Gemini’s focus is on sort of being this assistant and so it tends to lean in more heavily on things like productivity or creation, right?”
  • On Search: “Search is more information based and it believes that often in those information use cases you also want to connect and hear from other people. And so how do you bring out the web?”

Agents and the web’s future. Reid also said Google expects a future with more agent-to-agent internet activity, not just humans browsing directly.

  • “I certainly think there will be a world in which sort of agents are doing a lot of interaction on the internet, not just people.”
  • “I do think that probably means there’s a world in which a lot of agents are talking with each other, and not just with humans, going forward as we evolve.”

Google vs. ChatGPT. Reid pushed back on the idea that AI is a simple winner-take-all battle between Google and ChatGPT.

  • “I don’t know, by the way, that we’re going to end up in a world where there’s only one product, right?”
  • “I think what we’re seeing is like simultaneously people are adopting more tools and search is growing, right? Because the possibility of the tech is just allowing many more questions.”

Trusted sources. Reid also said Google wants to do more to surface sources users trust or pay for.

  • “I think one thing Google is trying to do a lot more of and we’ve taken small steps so far but want to do more. How do you help when there is that relationship?”

She pointed to Google’s Preferred Sources feature and broader subscription-aware experiences:

  • “If you love this source and you do have a relationship with it then that content should surface more easily for you on Google.”
  • “We should surface the one that they’re paying for, and not the six that they can’t get access to, more.”

Why we care. Reid’s comments suggest Google hasn’t settled on Search’s long-term role in an AI-first ecosystem. So keep watching closely as AI assistants, agents, and search results evolve.

The interview. What happens to Google when AI answers everything? with Liz Reid


Google contacts advertisers with a mandatory EU political ads deadline

Google is reaching out directly to advertisers via email, requiring them to confirm whether their campaigns contain EU political ads — with a hard deadline of March 31, 2026.

Why we care. This isn’t optional. EU regulation now requires Google to verify political ad status across all active campaigns, and advertisers who don’t act before the deadline could face compliance issues.

What’s happening. Google is asking every advertiser to declare whether their existing campaigns include EU political ads. The requirement applies to all current campaigns and must be completed by March 31, 2026.

How to comply: Google has outlined three ways to submit the confirmation:

  • Campaign level — Go to Campaign Settings and select “EU political ads” to confirm individual campaigns.
  • Multiple campaigns — Go to the Campaigns tab and use the “EU political ads” option to confirm several at once.
  • Account level — Confirm for all new and existing campaigns in one go. Selecting “No” at account level automatically applies that answer to every campaign, including future ones. You can still override this for individual campaigns at any time.

Between the lines. The account-level option is the most efficient route for most advertisers who are confident none of their campaigns fall under the EU political ads definition. Google has made it straightforward to reverse or adjust the selection at any point, so there’s no risk in acting early.

The bottom line. Check your inbox — Google is contacting advertisers directly. If you run campaigns targeting EU audiences, log in and complete the confirmation before March 31, 2026, to stay compliant.

First seen. This update was spotted by paid search expert Arpan Banerjee, who shared details of the email on LinkedIn.

How structured data supports local visibility across Google and AI

Why schema matters more for local SEO in the AI search era

Until a few years ago, schema helped search engines extract basic facts and display visual enhancements like star ratings and sitelinks. 

However, in the AI-driven search world, schema plays a different and fundamental role for local SEO, helping Google and other AI systems understand who you are, what you do, where you operate, and how confidently your information can be reused.

Schema’s role now is less about directly improving rankings. Instead, it reduces ambiguity for Google and reinforces your business as a stable, trustworthy local entity across traditional search, local packs, AI Overviews, rich results, and external AI platforms.

Let’s dig into how schema helps local SEO in the AI search world.

How Google handles conflicting structured data

Google triangulates across multiple data points to understand a business and pull information into a search result:

  • On-page content.
  • Internal linking and site structure.
  • Google Business Profiles.
  • Citations and directories.
  • Reviews and reputation signals.
  • Schema markup.

When these signals align, Google’s confidence in your information increases. When they contradict each other, your correct information might not be pulled into search.

When structured data contradicts on-page content, Google Business Profile data, citations, or reviews, Google doesn’t attempt to reconcile the difference — it discounts the markup and often ignores the information altogether.

For example, consider a law firm that marks up:

  • Operating hours that differ from their GBP.
  • “Free consultation” in their schema, but not on the landing page.
  • Attorneys who are no longer listed on the “Our Team” page.

Each of these creates friction, leading to mixed signals for AI systems and search engines. One conflict may be ignored, but multiple conflicts can compound and result in lost search visibility for the whole site. 

False positives: The silent performance killer

False positives occur when schema asserts something that isn’t fully supported by other signals. 

Common examples include:

  • Marking a business as a medical provider without appropriate credentials.
  • Applying Person schema to non-professionals.
  • Using Product schema for services.

False positives are particularly damaging in AI-driven systems. AI models are conservative when confidence is low — if information appears inconsistent or exaggerated, it’s less likely to be reused or cited. 

Review and rating schema

When review markup contradicts visible content, Google doesn’t “average” the signals; it ignores the schema altogether.

If you mark up “5 stars” but your Google Business Profile shows “4.2 stars,” or if you mark up reviews that aren’t visible on the page, the signals conflict.

Note: Google strictly prohibits marking up third-party reviews, such as those from Yelp, Google Maps, or Avvo, as your own Review schema. You can only mark up reviews that are first-party (collected directly by your site) and clearly visible to the user. For details, refer to Google’s specific guidelines on Self-Serving Reviews.
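
As a hedged sketch (the reviewer name, date, review text, and counts are placeholders), compliant first-party review markup on the location entity could look like this, with the aggregate rating mirroring the 4.2-star figure a Google Business Profile might show:

```json
{
  "@context": "https://schema.org",
  "@type": "LegalService",
  "@id": "https://www.example-law.com/locations/dallas/#location",
  "name": "Example Law Group Dallas",
  "review": {
    "@type": "Review",
    "author": { "@type": "Person", "name": "Sample Client" },
    "datePublished": "2025-11-02",
    "reviewBody": "The team kept me informed at every step of my case.",
    "reviewRating": { "@type": "Rating", "ratingValue": 5, "bestRating": 5 }
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": 4.2,
    "reviewCount": 87
  }
}
```

Both the individual review and the aggregate figure must also be visible on the page itself; the markup alone is not enough.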

How other AI platforms use schema

Google is the most prominent platform, but AI is also integrated into assistants (such as Siri and Alexa), retrieval-based platforms (such as ChatGPT search), and more.

To pull information, they need to determine if:

  • Two references describe the same business.
  • Information is current.
  • A source is authoritative.

While external AI platforms do not necessarily parse schema the same way Google does, structured data contributes to clearer entity representation across the web. 

Importantly, these other systems tend to be less forgiving than Google when data is inconsistent: if confidence in the entity is low, the business may simply be excluded from results.

Dig deeper: The local SEO gatekeeper: How Google defines your entity

What is the search environment for local businesses now?

To understand why schema matters more now than it did five years ago, it’s important to understand how fragmented search has become. 

Local businesses no longer only surface in a single list of 10 blue links (the SERP). They appear across multiple interfaces, often simultaneously:

  • Traditional organic search results.
  • Local packs and Maps results.
  • Knowledge panels.
  • Rich results and enhanced listings.
  • AI Overviews.
  • Conversational and agent-based AI platforms.

Schema doesn’t guarantee visibility on any platform — it helps AI systems decide if your business information is reliable enough to reuse. 

For example, when Google generates an AI Overview, it synthesizes information from multiple sources. Schema helps ensure Google understands exactly who you are and how your business information connects to your services, locations, and employees, so that your target audience can find you.

New SEO metrics for local businesses

Site performance is still often measured using metrics like keyword rankings, organic traffic, and conversions. These metrics aren’t wrong, but they are incomplete. 

Local businesses now need to think about:

  • Visibility in AI Overviews and AI-generated answers.
  • Stability in the local pack over time.
  • Accuracy and persistence in knowledge panels.
  • Correct attribution when AI systems summarize local providers.
  • Reduced volatility during core and local algorithm updates.

If a local service business appears more frequently in AI-generated answers for informational and service-related queries, its brand visibility will improve, but organic clicks may stagnate or decline.

But there’s no need to panic.

In reality, what’s happening is a shift in how demand is fulfilled. In these scenarios, schema doesn’t create visibility; it helps ensure the business is represented accurately when it’s surfaced.

Dig deeper: GEO x local SEO: What it means for the future of discovery

Types of schema for local SEO

For local service-based businesses, a limited set of schema types is enough to support your visibility. Implementing too many types can lead to bloated, templated markup that introduces contradictions.

Let’s look at an example law firm and how they might implement different types of schema.

Subtype schema

Subtypes help Google and AI systems categorize businesses correctly and align them with the right expectations. A personal injury firm, a corporate law practice, and a family law mediator should not all be described the same way.

Effective LegalService schema should clearly answer four questions:

  • Who the firm is.
  • What type of law they practice.
  • Where they operate.
  • How they can be contacted.

This markup aligns directly with what users see on the page, what exists in Google Business Profiles, and what appears in legal directories like Avvo or Martindale-Hubbell.

Example: LegalService markup

{
  "@context": "https://schema.org",
  "@type": "LegalService",
  "@id": "https://www.example-law.com/locations/dallas/#location",
  "name": "Example Law Group Dallas",
  "url": "https://www.example-law.com/dallas/",
  "telephone": "+1-214-555-0100",
  "priceRange": "$$$",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "100 Main St, Suite 400",
    "addressLocality": "Dallas",
    "addressRegion": "TX",
    "postalCode": "75201",
    "addressCountry": "US"
  },
  "geo": {
    "@type": "GeoCoordinates",
    "latitude": 32.7767,
    "longitude": -96.7970
  },
  "openingHoursSpecification": [{
    "@type": "OpeningHoursSpecification",
    "dayOfWeek": ["Monday","Tuesday","Wednesday","Thursday","Friday"],
    "opens": "08:30",
    "closes": "17:30"
  }],
  "sameAs": [
    "https://www.facebook.com/examplelawdallas",
    "https://www.linkedin.com/company/example-law-group",
    "https://www.avvo.com/attorneys/example-profile"
  ]
}

You can view the full list of specific subtypes in the Schema.org LegalService definition.

Organization schema

Organization schema defines the parent entity behind locations, practitioners, and services. LocalBusiness (or LegalService) defines the physical location. This distinction becomes critical as companies scale, rebrand, or operate across multiple markets.

Without a clear Organization layer, Google may treat each location as a standalone entity. That can lead to fragmented knowledge panels, inconsistent brand attribution, and inaccurate AI citations.

Example: Graph-based hierarchy

{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://www.example-law.com/#org",
      "name": "Example Law Group",
      "url": "https://www.example-law.com/",
      "logo": "https://www.example-law.com/logo.png",
      "knowsAbout": ["Personal Injury Law", "Medical Malpractice"]
    },
    {
      "@type": "LegalService",
      "@id": "https://www.example-law.com/locations/dallas/#location",
      "name": "Example Law Group Dallas",
      "parentOrganization": { "@id": "https://www.example-law.com/#org" },
      "address": {
        "@type": "PostalAddress",
        "streetAddress": "100 Main St, Suite 400",
        "addressLocality": "Dallas",
        "addressRegion": "TX",
        "postalCode": "75201",
        "addressCountry": "US"
      }
    }
  ]
}

Dig deeper: Schema and AI Overviews: Does structured data improve visibility?

Person schema

For legal and professional service businesses, Person schema reinforces expertise and real-world credibility (E-E-A-T). Used incorrectly, it creates false authority signals that Google will ignore.

Person schema should only be applied when:

  • The professional has a visible bio on the site.
  • Bar admissions and credentials are clearly displayed.
  • Their relationship to the firm is real and current.

This helps Google and AI systems associate legal expertise with the firm rather than just its content. It also reduces the risk of misattribution when AI systems summarize legal advice.

Example: Attorney bio markup

{
  "@context": "https://schema.org",
  "@type": "Person",
  "@id": "https://www.example-law.com/attorneys/jane-doe/#person",
  "name": "Jane Doe, Esq.",
  "jobTitle": "Senior Partner",
  "worksFor": { "@id": "https://www.example-law.com/#org" },
  "affiliation": { "@id": "https://www.example-law.com/locations/dallas/#location" },
  "alumniOf": "Harvard Law School",
  "knowsAbout": ["Tort Law", "Civil Litigation"],
  "sameAs": [
    "https://www.linkedin.com/in/janedoe-law",
    "https://www.statebar.tx.us/member/janedoe"
  ]
}

Service and product schema

For law firms, consultants, and agencies, Service schema, particularly the OfferCatalog structure, is more appropriate and accurate than Product.

Using OfferCatalog creates a “menu” of services that AI systems can parse, conveying the breadth of your expertise and what the business actually offers without overreaching.

Example: OfferCatalog for legal services

{
  "@context": "https://schema.org",
  "@type": "LegalService",
  "@id": "https://www.example-law.com/locations/dallas/#location",
  "hasOfferCatalog": {
    "@type": "OfferCatalog",
    "name": "Legal Services",
    "itemListElement": [
      {
        "@type": "Offer",
        "itemOffered": {
          "@type": "Service",
          "name": "Personal Injury Consultation",
          "description": "Free case evaluation for auto accidents and workplace injuries."
        }
      },
      {
        "@type": "Offer",
        "itemOffered": {
          "@type": "Service",
          "name": "Medical Malpractice Litigation",
          "description": "Representation for victims of surgical errors and misdiagnosis."
        }
      }
    ]
  }
}

FAQPage schema

Originally, FAQPage schema helped search engines understand common questions and answers on a page. In an AI-driven search environment, well-written FAQs help define what a business does, what it doesn’t do, and what a user should expect. It helps AI systems as they look for boundaries, clarification, and intent resolution.

Example: AI-aligned FAQ schema

{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Do I have to pay a retainer for a personal injury case?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "No. We operate on a contingency fee basis, meaning you only pay legal fees if we win a settlement or verdict for you."
      }
    }
  ]
}

In AI Overviews, these answers may be paraphrased or summarized, but schema helps ensure the underlying meaning remains intact.

Schema maintenance: Why ‘set it and forget it’ fails

Schema is often implemented during a site launch or redesign, only to be ignored afterward. 

But businesses change constantly. Hours shift, locations open or close, staff turnover occurs, and services evolve. When schema isn’t updated to reflect these changes, inconsistencies are introduced that can erode information signals over time.

A sustainable schema strategy involves two steps:

  • Quarterly audit: Set a recurring calendar reminder to audit your schema code against your live site. Check for syntax errors, broken @id references, and deprecated properties.
  • Trigger-based updates: Establish a rule that whenever a “fact” changes in your business (e.g., you update your holiday hours on your Google Business Profile, or a partner leaves the firm), the schema should be updated immediately.
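The broken-`@id` check from the quarterly audit is easy to automate. Here is a minimal Python sketch, assuming your site's JSON-LD blocks have already been loaded as dicts; the helper name and sample data are illustrative, not part of any tool:

```python
def find_broken_ids(blocks):
    """Flag @id references that no block on the site actually defines."""
    defined = {b["@id"] for b in blocks if "@id" in b}
    broken = []
    for b in blocks:
        for value in b.values():
            if isinstance(value, dict) and "@id" in value and value["@id"] not in defined:
                broken.append(value["@id"])
    return broken

# Sample: the person block references a Dallas location block that was never published.
blocks = [
    {"@id": "https://www.example-law.com/#org", "@type": "Organization"},
    {
        "@id": "https://www.example-law.com/attorneys/jane-doe/#person",
        "worksFor": {"@id": "https://www.example-law.com/#org"},
        "affiliation": {"@id": "https://www.example-law.com/locations/dallas/#location"},
    },
]
print(find_broken_ids(blocks))
```

Running a check like this each quarter surfaces the dangling references that silently accumulate as pages are added and removed.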

Dig deeper: Local SEO sprints: A 90-day plan for service businesses in 2026

Schema is necessary in the AI search world

Structured data now acts as a trust signal, helping search engines and AI systems determine whether business information is accurate, consistent, and reliable enough to reuse at scale.

Schema that reinforces your correct information supports visibility across traditional search, local results, and AI-driven experiences. Inaccurate or outdated schema can hurt your company’s visibility.

Break down data silos: How integrated analytics reveals marketing impact

Can you answer the question every marketing leader dreads hearing from leadership: “Why isn’t our marketing effort doing more?”

How do you even go about answering that?

Let’s look at what I mean using a fictional location analytics company we’ll call Acme Area Analytics.

The Acme team reviews its reports. Nothing appears broken. Campaigns are running, leads are still coming in, and performance metrics are mostly stable. Yet sales momentum isn’t clearly accelerating, and it’s hard to pinpoint why.

Insights are scattered across site analytics, brand monitoring and SEO tools, CRM systems, and paid media dashboards. Each platform reflects part of the story, but none shows the full picture.

That fragmentation is exactly how well-intentioned “data-driven decisions” can go wrong. Let’s look at how that happens and how Acme, and you, can fix it.

When the data points in the wrong direction

In global, multi-channel campaigns like Acme Area Analytics’, the hardest moments are when nothing is obviously underperforming. Digital channels are running. Leads are coming in, and metrics are mostly stable, yet sales momentum is stalled and it’s unclear which lever to pull next.

At the same time, subtle signals raise concerns. Non-brand CPCs are creeping upward, and a competitor — Spotter Intelligence — is suddenly appearing more frequently in branded search.

Let’s say you’re part of the Acme marketing team. You go back to your reports and ask the question most marketers ask in this situation: Which tactic is underperforming?

When diving into the platform data, you uncover what looks like a clear answer: remarketing performance for your API has softened, conversion rates have dipped slightly, and efficiency has begun to decline.

On the surface, you have your answer. Spend should be pulled back to match demand because audiences have likely seen the creative too many times.

That decision could certainly make sense, and it’s what many teams actually end up doing. But it’s also often wrong. Why? Because you haven’t yet asked the right question.

The more useful question is harder to answer: “Is demand actually declining, or are we failing to create new interest upstream?”

Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts

The insight appears when you look across systems

The real issue becomes clear when you look across systems. The location analytics market still had strong growth potential, but your product was reaching too few engaged audiences receptive to its message. That disconnect only became visible once you looked beyond paid media.

Site engagement trends in analytics and brand search behavior in Search Console suggested interest in your type of location AI wasn’t disappearing. It just wasn’t converting yet.

The focus had shifted from reach to engaged awareness, with a priority on attention and engagement, not just exposure. So your Acme team decided to introduce additional campaign layers, including new content designed to build relevance and trust.

Crucially, you didn’t see any improvement right away. Cost-per-lead efficiency continued to decline, and it looked worse after increased upper-funnel investment. From a platform-only view, this looked like the time to pull back.

But looking across systems changed how performance was interpreted. Engagement from awareness activity began feeding remarketing pools, but the impact wouldn’t surface immediately for a product with long sales cycles like your API.

During that gap, the Acme team maintained confidence in its strategy by sharing early signs of upstream momentum. Only later did the results show up: remarketing efficiency improved, and integrated CRM data confirmed higher sales volumes for the API.

The takeaway for the Acme Area Analytics marketing team wasn’t just that “remarketing worked again,” or that upper funnel activity drives demand. It’s that the hardest marketing decisions are the ones you have to make — and hold — before success shows up in the metrics leadership typically trusts.  

Why the insight only appeared between dashboards

In our Acme example, each dashboard told a technically accurate story, but no single dashboard could fully articulate the whole picture.

  • Paid media dashboards reflected efficiency trends.
  • Analytics and Search Console showed shifts in engagement and demand.
  • CRM data lagged behind decisions by weeks or months.

Looking at any of those in a silo wouldn’t have allowed Acme’s marketing team to fully understand what was happening.

But we know that the insight didn’t live in any single view. When the question the team asked itself shifted to whether demand was moving effectively through the funnel, and dashboards were evaluated together in context, the decision changed.

This is what unsiloed analytics looks like in practice. It’s not about teams fighting over which touch led to the result, but recognizing that each part of a marketing plan plays a distinct and important role in creating momentum that grows demand and lifts sales.

Leadership wants proof. Pipeline and revenue might feel like the safest validation. But in complex, multi-channel programs, those are often lagging indicators of solid performance.

By the time pipeline clearly reflects demand creation, teams have often already pulled back awareness investment, cut channels that looked inefficient in isolation, and shifted budget toward short-term demand capture.

In the example above, waiting for proof would have meant that Acme reduced awareness and remarketing spend and possibly exited a market that would later show great promise.

Integrated data didn’t eliminate the risk of shifting investment from lead generation to awareness-building in a market that had declining metrics. Instead, it added credibility to the case for doing so.

Dig deeper: The end of SEO-PPC silos: Building a unified search strategy for the AI era

The same pattern at a smaller scale

This dynamic isn’t limited to complex, multi-channel programs. You can see it even within a single platform when multiple tactics work together.

Let’s look at a scenario where Acme’s brand search impression volume increased by roughly 50% year over year while Share of Voice remained flat. That means more people have been searching for Acme as the company has invested across out-of-home and other digital campaigns. Acme’s Google campaign then harvested the demand created by other channels.

If Acme’s brand search had been evaluated only in terms of its media plan efficiency, this signal of growing demand would have been easy to miss. In context, it confirmed that Acme’s awareness efforts were working, even though attribution couldn’t perfectly assign credit to individual channels.

What changes when data is integrated

In these examples, integrated data — unsiloed data — shifted the conversation.

Instead of Acme’s marketing teams debating budget cuts, they could monitor signs of early momentum, including longer time on site and rising brand search volume. Over time, that interest could be seen in the CRM as higher-quality leads that converted more frequently into closed deals.

The good news is that this doesn’t require new tools or perfectly stitched together data. It simply requires stepping back during planning and asking better questions about how potential customers signal interest as they consider your product.

Dig deeper: SEO vs. PPC vs. AI: The visibility dilemma

Seeing opportunity before it’s obvious

In my experience, the most valuable marketing insights come from understanding how different data points relate.

Unsiloing your data isn’t about proving causality or winning attribution debates. Instead, it’s about recognizing opportunity early enough to act on it and identifying which metrics suggest that demand is quietly being built in the background.

The teams that win aren’t only better at reporting results. They’re better at seeing momentum while it’s still forming and acting on it early.

‘Always be testing’ worked in 2016 — it’s risky in 2026

If I hear “always be testing” one more time, I might scream. It was great advice in 2016. In 2026, it’s a great way to light your budget on fire.

That mantra made sense when budgets were loose and platforms forgave a lot of chaos. Launch five audience tests simultaneously? Sure, why not! Swap out three creative variables at once? Go for it!

But the rules have changed. Our new reality has tighter budgets, longer learning phases, and signal fragmentation everywhere. One poorly structured test can distort your performance for weeks, not days. That performance hit compounds fast.

Modern experimentation is expensive and risky. Why pay that price when we have the power of agentic AI to help? And by help, I don’t mean slapping AI onto our existing process and asking it to generate more ad variants. That would just be an expedient way to light our budgets on fire.

Instead, it’s time to use agentic AI to design smarter experimentation systems.

The real cost of unstructured testing

In an “always be testing” era, it was all too easy to throw out tests at the scale Oprah gives out cars or Taylor Swift fills stadiums. It often led to unstructured testing where we launched ideas on a Monday and checked results on Friday, hoping for a lift. There was nary a risk model, overlap detection, or strategic sequencing in sight.

The costs of that approach are now exponentially higher. Take platform disruption. Algorithms crave stability. Industry benchmarks show ad sets stuck in learning phases often see CPAs 20-40% higher than stable sets.

Every time you significantly change creative, audience, or budget, you risk resetting that learning. If you’re running three overlapping tests that each trigger resets, you’re voluntarily paying a volatility tax on your entire media spend.

Then there’s waste. The majority of A/B tests deliver no statistically significant lift. If you aren’t ruthless about what deserves to run, you’re burning budget to prove most ideas don’t matter. “Always be testing” without guardrails turns into “always be destabilizing.”

From random tests to a real experimentation engine

The shift looks like this. Old approach: “AI, write me 10 new headlines.” New approach: “AI, design the smartest next experiment within our budget, risk tolerance, and current learning state.”

The reframe from creative generation to experimentation architecture is where real leverage lives.

Here’s a practical seven-step framework to turn testing from a tactical habit into strategic infrastructure.

Step 1: Set hard guardrails (humans draw the lines)

Before you let any AI near your experiments, lock in constraints. Without them, AI lacks proper context. With them, AI becomes a disciplined strategic partner.

Define and document five hard boundaries.

  • Budget allocation: Reserve a fixed percentage (e.g., 10%) explicitly for testing.
  • Maximum volatility: “No test can increase CPA by more than 15% for more than 5 days.”
  • Learning phase sensitivity: Document reset thresholds per platform.
  • Leading indicators: Use early signals (CTR, engagement drop-offs) to kill bad tests before they damage pipeline.
  • Brand risk: Define off-limits positioning (e.g., no discount-heavy testing in enterprise segments).

Document this in a single file (e.g., experimentation-guardrails.md) to teach AI the constraints that make ideas viable. Your AI agent must reference this before proposing any test.
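As a rough sketch, such a file might look like the following, with the thresholds taken from the bullets above; the exact format is an assumption, not a standard:

```markdown
# experimentation-guardrails.md

- Budget allocation: reserve 10% of media spend for testing
- Maximum volatility: no test may raise CPA by more than 15% for more than 5 days
- Learning phase: document reset thresholds per platform before any launch
- Leading indicators: kill tests early on CTR or engagement drop-offs
- Brand risk: no discount-heavy testing in enterprise segments
```

Keeping it as one plain file makes it trivial to paste into any AI agent's context before it proposes a test.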

Step 2: Let AI audit your experiment history

Most teams have the data sitting in spreadsheets, but never extract the lessons. Feed your last six months of test results into an AI agent and have it analyze variables changed, duration, performance delta, statistical confidence, and platform resets.

Ask it to find patterns, such as:

  • Over-tested variables: CTA buttons tested eight times with zero meaningful lift? That’s not a lever.
  • False failures: Many tests are declared losers simply because they never reached statistical significance. An AI agent can quickly assess statistical power and flag inconclusive results.
  • Volatility patterns: Often, your worst CPA weeks weren’t market shifts or a single bad creative, but rather the weeks where you launched three overlapping tests.

This is how AI becomes a true analytical partner.
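The “false failure” check can be approximated with a standard two-proportion z-test. Below is a minimal Python sketch; the conversion counts are made up for illustration, and a fuller audit would also compute statistical power before declaring a test conclusive:

```python
import math

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: conversions and sample size per variant."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# A "loser" with a tiny sample: |z| < 1.96 means inconclusive, not a loss.
z = z_test(conv_a=12, n_a=400, conv_b=15, n_b=400)
print(round(z, 2), "inconclusive" if abs(z) < 1.96 else "significant")
```

A test like this one, declared a loser in a dashboard, never had the sample size to clear the 95% significance bar in the first place.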

Step 3: Write real hypotheses

Rather than jumping straight from idea to launch, use AI to help you enforce hypothesis discipline.

  • Weak: “Let’s test a new headline.”
  • Strong: “If we emphasize ‘faster time-to-value’ over ‘ease of use,’ we expect a 10-15% lift in demo requests from mid-market companies because win/loss analysis shows speed is their top decision criterion.”

Structured hypotheses create institutional memory. Six months later, when someone suggests testing “speed messaging” again, you’ll know exactly who it worked for and why. Yes, it feels like paperwork, but this discipline can protect your budget from algorithm chaos.

Step 4: Risk-score every proposed test

Budget isn’t infinite and neither is algorithm stability. Your AI agent should evaluate each proposed test across five dimensions and assign a risk score.

  • Budget impact (e.g., <5% vs >15%).
  • Algorithm disruption level (minor refresh vs new campaign).
  • Audience overlap.
  • Brand sensitivity.
  • Learning value.

High risk + low learning = Kill it. Low risk + high insight = Green light.

Example: Testing a radical new enterprise positioning statement is high risk in a paid conversion campaign. Instead, your AI agent might suggest validating it first via organic LinkedIn content or low-budget audience polling. Low risk. High signal.
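A toy version of this scoring logic, with illustrative weights and thresholds you would tune to your own risk tolerance (the five dimensions come from the list above; everything else here is an assumption):

```python
def risk_score(budget_impact, disruption, overlap, brand_sensitivity, learning_value):
    """Each input is 1 (low) to 5 (high). Returns (risk, decision)."""
    risk = budget_impact + disruption + overlap + brand_sensitivity
    if risk >= 12 and learning_value <= 2:
        return risk, "kill"          # high risk + low learning
    if risk <= 8 and learning_value >= 4:
        return risk, "green-light"   # low risk + high insight
    return risk, "review"            # everything else gets a human look

print(risk_score(5, 4, 4, 3, 1))  # radical positioning test in a paid conversion campaign
print(risk_score(1, 2, 2, 1, 5))  # low-budget organic validation of the same idea
```

Even a crude rubric like this forces every proposed test to declare its cost before it competes for budget.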

Step 5: Pre-test with synthetic audiences

This is one of the most underused applications of AI in experimentation. Synthetic testing means simulating how different personas may react to messaging before spending media dollars, and the data backs it up.

A study involving researchers from Stanford and Google DeepMind found that digital agents trained on interview data matched human survey responses with 85% accuracy and mimicked social behavior with 98% correlation. 

This makes synthetic audiences surprisingly useful for early-stage signal gathering. While they don’t replace real-world data (at least not yet), they can act as creative QA.

Here’s how it works. Define psychographic archetypes.

  • The Skeptical CMO (burned by vendors, risk-sensitive).
  • The Growth VP (speed-obsessed).
  • The CFO (margin-focused).

Feed your proposed messaging into your AI system and ask, “How would the Skeptical CMO react to this?”

You might get feedback like: “The phrase ‘All-in-One’ triggers skepticism. It signals feature bloat. Consider reframing as ‘Integrated’ or ‘Modular.’”

That kind of signal costs pennies in API calls instead of thousands in paid testing.
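A minimal sketch of how such a persona prompt might be assembled; `build_prompt` and the persona descriptions are illustrative, and the actual LLM call is omitted since any chat-completion API will do:

```python
# Hypothetical psychographic archetypes, as described above.
personas = {
    "Skeptical CMO": "burned by vendors, risk-sensitive",
    "Growth VP": "speed-obsessed",
    "CFO": "margin-focused",
}

def build_prompt(persona, traits, message):
    """Assemble a synthetic-audience prompt for one persona."""
    return (
        f"You are {persona} ({traits}). "
        f"React honestly to this ad message and flag anything that "
        f"triggers skepticism:\n\n{message}"
    )

prompt = build_prompt(
    "Skeptical CMO",
    personas["Skeptical CMO"],
    "The All-in-One platform for modern marketing teams.",
)
print(prompt)
```

The same message can be run through every persona in the dictionary, giving you a cheap first-pass read before any media dollars are committed.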

Step 6: Sequence tests, don’t stack them

Changing audience, creative, and landing page in the same week teaches you almost nothing. Your AI agent should act like air traffic control: scan active campaigns, flag conflicts, and recommend sequencing.

A better flow:

  • Week 1-2: Audience test.
  • Week 3-4: Creative test on the winning audience.

If overlap is unavoidable, enforce clean holdout groups so you always have a source of truth.

Step 7: Build a living knowledge base

Treat tests like disposable experiments and you lose the compounding value. Have your AI auto-summarize every completed test: 

  • Why did it win? 
  • Who did it win with? 
  • How durable was the lift? 
  • What variables interacted?

Over time, this database becomes your moat. Everyone can buy the same targeting. Few teams have 100+ validated customer truths at their fingertips.

The bigger shift: From activity to architecture

“Always be testing” was a growth-era mindset. In 2026, the winning mindset is “always be compounding intelligence.”

Rather than more tests, build your competitive advantage through structured, risk-aware, insight-driven experimentation that protects algorithm stability and ties experimentation directly to revenue.

The next time your stakeholder asks why you aren’t testing more, show them your experimentation architecture and say, “We’re not just running experiments. We’re building an intelligence engine.”

Because intelligence compounds.

Why most video ads fail — and what video metrics actually matter

Video advertising has never been easier to distribute. Platforms can deliver impressions and views at an enormous scale across YouTube, paid social, short-form video, and connected TV.

But distribution isn’t the same as effectiveness. Many campaigns generate impressive platform metrics while producing little measurable business impact.

The problem usually isn’t targeting, budget, or platform choice. It’s a deeper strategic issue: campaigns are optimized for outputs like views and impressions rather than outcomes like attention, persuasion, and action.

Most video ads fail because they misunderstand attention

Poor targeting, limited budgets, and platform choice are rarely the real problem. The bigger issue is that many video ads are still produced as if they’re television commercials.

In the early days of online video, distribution was the challenge. Getting a video seen at all felt like a win. Today, distribution is abundant. Attention isn’t.

Every major platform — YouTube, paid social, short-form video, connected TV — competes for fragments of cognitive bandwidth. Users arrive with intent, habits, and expectations that have nothing to do with your campaign. We plan for reach, while viewers respond to relevance.

I’ve sat in many meetings where success was defined by impressions delivered or views accrued. But when you look downstream — search lift, site engagement, conversion — the connection often disappears.

Platforms will reliably deliver impressions. Turning those impressions into memory, persuasion, or action requires a fundamentally different mindset.

Dig deeper: From Video Action to Demand Gen: What’s new in YouTube Ads and how to win

The first five seconds are the entire negotiation

Skippable formats changed video advertising permanently, but many advertisers still haven’t adjusted creatively.

Early in my career, I believed strongly in branding up front. Logos, product shots, music cues — everything that signaled professionalism. Those ads looked great in presentations. They underperformed in market.

A clear pattern emerged over time. Ads that opened with a recognizable problem, a provocative statement, or an unexpected visual held attention longer — even when branding appeared later. Ads that opened with branding signals were skipped almost reflexively.

View-through rate isn’t persuasion. A “view” simply means the platform’s minimum threshold was met. It doesn’t mean the message landed, the brand registered, or the viewer cared.

In multiple brand lift analyses, most measurable impact occurred before the skip button appeared. If the opening didn’t earn attention, the rest of the ad didn’t matter.

What works: treat the opening frame like a headline, not a preamble. Lead with tension, a question, or a familiar problem. Design for sound-off environments. If the first frame wouldn’t stop a scroll, nothing that follows will matter.

Higher production value often correlates with lower performance

One of the most counterintuitive lessons in modern video advertising: polished ads frequently underperform scrappier ones.

I’ve seen simple, phone-shot videos outperform meticulously produced studio spots across YouTube, paid social, and short-form platforms. Not because quality doesn’t matter — but because perceived authenticity matters more.

Audiences are exceptionally good at identifying advertising. When something looks like an ad, they disengage. When it looks like content, they give it a chance.

Algorithms reinforce this: they reward watch time, retention, rewatches, and shares. They do not reward lighting setups or production budgets.

I’ve seen brands “upgrade” social video to look more premium, only to watch performance decline. The creative looked better. The results were worse.

The goal isn’t to look amateurish. It’s to look like you belong.

Match the platform’s visual grammar. Prioritize clarity over polish. Use real people and authentic voices whenever possible.

Ads that feel native get watched. Ads that feel inserted get skipped.

Dig deeper: How to get better results from Meta ads with vertical video formats

Length is a creative decision, not a media constraint

“Shorter is better” is one of the most persistent — and misleading — rules in video advertising.

Six-second ads can work. So can 60-second ads. I’ve seen both exceed expectations, and I’ve seen both fail badly. The difference was never duration — it was justification.

Some messages can be delivered instantly. Others require context, proof, or emotional buildup. Forcing every idea into the same runtime produces predictable results: safe, bland, forgettable ads.

I’ve reviewed retention graphs where a 45-second ad held viewers longer than a 15-second version, because the story justified its length. I’ve also seen six-second ads lose half their audience in the first two seconds because they wasted the opening.

Test multiple edits, not just multiple lengths. Watch retention curves, not averages. Build modular narratives: hook, then value, then proof, then action.

The “right” length is however long it takes to make the viewer feel their time was respected.

Metrics are signals

Platforms provide more data than ever. The problem isn’t a lack of metrics. It’s confusing metrics with outcomes.

I’ve seen campaigns praised for high completion rates that produced no measurable business impact. Strong engagement coexisting with low conversion. Impressive view counts that delivered zero lift.

This happens because platforms optimize for their success metrics, not yours. If your goal is to maximize views, the platform can do that easily. If your goal is to influence consideration, preference, or action, things get more complicated.

One uncomfortable question I’ve learned to ask early: what would failure look like here? If the answer is vague, the campaign is already at risk.

Define success in business terms before launch. Tie video metrics to downstream behavior wherever possible. Use lift studies, holdouts, or assisted conversions when they’re available. If you’re running a brand-building campaign, measure brand lift. If you’re running a performance campaign, measure conversions.

Dig deeper: AI for video advertising: 5 best practices for PPC campaigns

The brief is usually where things go wrong

Creative is often blamed when video ads underperform. In reality, creative usually does exactly what it was asked to do. The problem is the brief.

Vague objectives produce generic ads. “Brand awareness” without context leads to unfocused messaging. “Make it engaging” isn’t a strategy.

Strong video ads almost always begin with clear answers to three questions: 

  • Who is this really for? 
  • What do they care about right now? 
  • What should they think, feel, or do differently after watching? 

When those answers are clear, creative decisions become easier. When they aren’t, the work is compromised before production begins.

The deeper diagnostic questions are worth keeping close: 

  • Are viewers actually paying attention, or just passively present? 
  • What are they feeling — and which specific creative choices are driving that response?
  • Will they remember the brand once the ad ends? 
  • What will they do next — share it, recommend it, search for the product, or buy?

I’ve seen entire campaigns improve simply because the brief forced alignment around audience insight rather than assumptions.

Distribution strategy is part of the creative

Another common mistake is treating creative and distribution as separate decisions. They aren’t.

The way an ad is consumed — fullscreen versus feed, sound-on versus sound-off, lean-back versus lean-forward — should shape how it’s made.

A video designed for connected TV shouldn’t simply be resized for mobile. A short-form ad shouldn’t be a truncated long-form story without rethinking the hook entirely.

I’ve seen strong ideas underperform because the creative didn’t match the placement. The concept wasn’t wrong. The context was.

Design with placement in mind from the start. Create platform-specific versions, not one-size-fits-all assets.

Accept that “reuse” often means “rethink,” not “repurpose.” Distribution constraints aren’t limitations — they’re creative inputs.

Dig deeper: How to dominate video-driven SERPs

Testing should answer questions, not just generate variants

Testing is indispensable. It’s also frequently misunderstood.

Running endless A/B tests without a hypothesis rarely produces insight. It produces noise.

The most effective testing focuses on variables that materially affect attention and comprehension: opening frames, narrative structure, on-screen text versus voiceover, proof points versus emotional appeals.

It’s also important to recognize what testing can’t do. Algorithms are excellent at optimizing toward measurable signals. They don’t understand brand equity, long-term memory, or cumulative effect. Testing should inform judgment — not replace it.

Ultimately, the only thing that matters for creative effectiveness tools is whether their predictions actually correlate with real media and sales outcomes, reliably enough to inform strategy and media decisions.

The question worth asking of any such tool is simple: How often does what it predicts will happen actually happen?

For example, I frequently cite data from DAIVID, an AI-driven creative effectiveness platform. Why? Because in independent testing, DAIVID’s predictions aligned with real-world outcomes more than 80% of the time — a meaningful foundation for making creative decisions with greater confidence before a campaign goes live.

Optimize for people

Platforms will change. Formats will evolve. Algorithms will shift in opaque and sometimes frustrating ways. But attention, curiosity, and trust remain stubbornly human.

The best video ads I’ve worked on weren’t optimized for view counts or completion rates. They were optimized for relevance. They respected the viewer’s time. They said something worth hearing.

Video ads don’t succeed because they follow platform rules. They succeed because they understand people. And that principle outlasts every algorithm update.

AI Max increases revenue 13% but drives higher CPA: Study

Google AI Max drives revenue but at a higher cost, according to Smarter Ecommerce’s Mike Ryan, who analyzed 250+ campaigns. Outcomes vary, and much more testing is still needed.

Why we care. AI Max isn’t a minor update. It’s Google’s most significant reimagining of Search campaigns in years, shifting away from keyword syntax toward pure intent matching. For you, that’s both an opportunity (possible growth) and a risk (an efficiency tradeoff).

By the numbers. The result of the analysis:

  • Median revenue: +13%
  • Median CPA: +16%
  • ROAS range: +42% to -35%

Advertisers who activate AI Max typically see 14% more conversions or conversion value at a similar CPA or ROAS, rising to 27% for campaigns still relying on exact and phrase match keywords, Google says.

Turning on AI Max is essentially a coin toss: you may see a lift, but efficiency likely won’t follow, Ryan concluded.

What AI Max actually is. Rather than forcing Search campaigns into Performance Max, Google went the other direction — bringing PMax-style automation into classic Search. The result is three core features:

  • Search Term Matching (broad match expansion plus keywordless targeting),
  • Text Customization (dynamic ad copy), and
  • Final URL Expansion (automated landing page selection).

Four pitfalls Smarter Ecommerce identified:

  • Broad match cannibalization: Up to 63% of the time, AI Max recycled existing coverage rather than finding new queries.
  • Competitor hijacking: In one account, AI Max scaled so aggressively into competitor brand terms that it consumed 69% of total Search impressions.
  • Reporting overload: Search term and ad combination reports can run to tens of thousands of rows, making manual auditing nearly impossible without automation.
  • Search Partner Network blowouts: One campaign saw half a million monthly impressions land on SPN at a 0.07% conversion rate, versus 3.04% on standard Google Search.

Between the lines. Google’s 14% uplift stat conspicuously excludes retail — an omission Ryan flags as significant for ecommerce advertisers. There’s also a deeper irony: you’re most likely to adopt AI Max if you’re already running Broad Match, DSA, and PMax — yet Google says those accounts will see the lowest incremental benefit.

What’s next. In a conversation with Ryan, Google Ads Liaison Ginny Marvin confirmed that Google plans to deprecate Dynamic Search Ads and migrate the technology into AI Max for Search. No firm timeline was given, though past Google deprecations often run about a year from announcement.

Ryan recommends activating AI Max’s keywordless features in your existing Search campaigns now and beginning to wind down DSA — not migrating it to PMax.

Ryan’s verdict is cautious optimism. About 16% of advertisers are testing AI Max, and few have gone all in. Start small, audit aggressively, and don’t let FOMO around AI Overviews drive your decision.

The report. The Ultimate Guide to AI Max for Google Search

New finding: ChatGPT sources 83% of its carousel products from Google Shopping via shopping query fan-outs


Has OpenAI’s increasing independence from Microsoft (and, by extension, Bing) turned into an overly dependent relationship with Google?

Our study comparing shopping query fan-outs (QFOs) in ChatGPT from both Google and Bing carousels appears to have provided at least a partial answer to that question. Let’s take a look at how this study was conceived and what we found.

Brief shopping fan-out background and technical explainer

In November 2025, a few researchers in the AI space, including me, detected a mysterious field in ChatGPT’s source code: id_to_token_map. What that field revealed when decoded was even more intriguing.

The field’s value is base64-encoded. When we decoded it, it revealed what looked like Google Shopping parameters, such as productid and offerid, along with language and locale parameters. Even more interesting: the field also contained a query used to look up that particular product.

To categorically prove this was indeed a Google Shopping link, we would have to be able to reconstruct the shopping URL solely from the extracted parameters. 

Let’s look at an example of what this looks like using the ChatGPT product carousel for the prompt “best smartphones under $500.”

If we decode the relevant field, we can recreate the Google Shopping link from the extracted parameters.

The big question was: Would this link correspond to the exact product in the ChatGPT product carousel? So we tried it:

It turns out that, in fact, yes it does!
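The decode-and-rebuild step can be sketched in a few lines of Python. Note that the payload below is illustrative only (the real value comes from the id_to_token_map field), and the URL pattern is an assumption for demonstration, not the exact link format from the study:

```python
import base64
from urllib.parse import parse_qs

# Illustrative payload only -- parameter names mirror what we observed
# (productid, offerid, locale parameters, and the lookup query).
encoded = base64.b64encode(
    b"productid=123456789&offerid=987654&hl=en&gl=us"
    b"&q=best+smartphones+under+%24500"
).decode()

# Decode the base64 field and parse it as a query string.
decoded = base64.b64decode(encoded).decode()
params = parse_qs(decoded)

# Rebuild a Google Shopping-style product URL from the extracted parameters.
url = (
    "https://www.google.com/shopping/product/"
    f"{params['productid'][0]}?hl={params['hl'][0]}&gl={params['gl'][0]}"
)
print(url)
print(params["q"][0])  # the shopping fan-out query used to look up the product
```

Visiting the reconstructed URL is then the test: if it resolves to the same product shown in the carousel, the parameters really are Google Shopping identifiers.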

But this decoding technique alone doesn’t answer any of these important questions:

  • Is this retrieval process uniform across diverse product categories?
  • Does ChatGPT select from a certain number of Google product positions?
  • Does ChatGPT favor higher Google Shopping product positions?
  • How common is this process at scale?
  • Was this just a fluke or, given a large enough dataset, could we match these products with any online retailer or even Bing Shopping results?

Using Peec AI data, the following study aimed to robustly prove once and for all that ChatGPT does indeed mainly source from Google Shopping. 

To do this, we analyzed more than 40,000 carousel products and 200,000 organic products from each of Google and Bing. By comparing the similarity of the products, we got a very clear picture of what was really happening behind the scenes. Let’s dig into our findings.

Are shopping query fan-outs really that different from normal search query fan-outs?

To answer whether shopping query fan-outs are different from normal search query fan-outs, we analyzed 1.1M shopping query fan-outs from Peec AI data and compared them to the normal search query fan-outs for the same user prompt. We found that they are almost always different:

  • Shopping QFO unique to user prompt: 99.70%
  • Shopping QFO unique to normal search query fan-out: 98.31%

To dive deeper, we explored the average word counts of both of these query fan-out types by calendar week. 

The chart below clearly shows that normal fan-outs are significantly longer — 12 vs. seven words. That makes sense since search query fan-outs are used to retrieve contextual information. This means they need to be long enough to retrieve web results that are specific to the user prompt. Vector search (or comparing embeddings) works best with more context. 

Shopping fan-outs, on the other hand, typically target a specific shopping results page and therefore do not need to be as long. It appears the main goal is to retrieve products based on the shopping fan-out. Rather than compare chunks of text, the data in this study supports the hypothesis that ChatGPT relies heavily on Google organic shopping results to populate its carousel.

Further evidence of the distinct nature of shopping fan-outs surfaces when we look at how many are used per prompt. On average, 2.4 search fan-outs are used per prompt vs. just 1.16 shopping fan-outs. For reasons similar to those above, retrieving contextual information often requires more search fan-outs than simply retrieving products does. To populate an eight-product carousel in ChatGPT, it seems that, for the most part, one page of Google Shopping results is enough.

How similar are ChatGPT Carousel products to Google Shopping products?

To answer this question in the fairest possible way, we extracted around 5,000 ChatGPT carousels comprising 43,000 products from the Peec AI dataset. Prompts were chosen to be as diverse as possible (see Methodology for the creation process).

We then extracted the organic shopping pages and retrieved the top 40 organic products for both Google and Bing shopping results. Paid ads and sponsored products were excluded from the analysis. 

We used a three-step matching algorithm (see Methodology for exact details) to attain a similarity score between the ChatGPT product title and the title found in organic shopping results. This is necessary because not only is ChatGPT probabilistic, but so, to a certain extent, is Google Shopping. Product titles can be rewritten with or without certain product features, and results are very sensitive to the exact proxy location from which they are retrieved.

We counted a product as matching if it reached a threshold of 0.8 or above, effectively, if it was the same brand and product name and exhibited a very high degree of similarity.

The results are summarized in the chart below.

Impressively, across 43,000 highly diverse ChatGPT carousel products, 45.8% were found to have an exact title match in the corresponding Google top 40 organic shopping products for that exact shopping fan-out. 

For Bing, this exact match rate was just 0.48%. 

If we simply look at the percentage of strong product matches across all eight ChatGPT carousel positions, over 83% were found in the Google top 40 products, but that number drops to just under 11% for products found on Bing. This is very strong evidence that ChatGPT sources its carousel products from organic Google Shopping results.

We also see a very high number of weak matches in Bing, at over 62%. This implies that the top 40 returned products for each shopping fan-out differ significantly between Google and Bing. That makes sense, as there are many thousands of possible brand-and-product combinations that can surface in shopping results.

Even if Bing found around 11% of ChatGPT carousel products, how many of those products were found only by Bing? Across the 43,000 carousel products, Bing found just 70 that were not also found in Google Shopping, constituting 0.16%. This means that in almost every case where there was a match in Bing, there was also a match in Google.

It seems unlikely, then, that ChatGPT is also sourcing products from Bing Shopping in the vast majority of cases.

How does the ChatGPT carousel position affect the match rate?

Here we explore the most common positions (mean and median shown) of Google shopping product positions for each ChatGPT carousel position:

For example, for the first carousel position we can see that the average Google Shopping position is around five. Note that we see a sloping trendline for the carousel positions that correspond to higher Google Shopping positions. This implies that ChatGPT sources top carousel products from higher Google Shopping positions. 

Plotted another way, we can visualize the cumulative number of strong matches across organic Google Shopping positions. This chart allows us to see that 60% of the strong product matches are found in the top 10 Google shopping results alone. 

Comparing the top 20 vs. positions 21-40, ChatGPT’s favoritism for higher positions becomes clear, with an overwhelming majority of matches (almost 84%) coming from the top 20:

Finally, we explored whether the prompt being branded vs. non-branded made a difference to the product matching results.

The results show a similar high level of product matching for both branded and non-branded prompts, with only slightly higher match rates for non-branded:

Summary of findings

This study analyzed over 43,000 ChatGPT carousel products across 10 industry verticals and compared them against 200,000+ organic shopping results from both Google and Bing. The findings painted a clear picture.

ChatGPT sources its carousel products from Google Shopping, not Bing 

Over 83% of ChatGPT carousel products were found as strong matches in Google’s top 40 organic shopping results. For Bing, that figure was just 11%, and only 70 products across the entire dataset (0.16%) were found exclusively in Bing. In almost every case where Bing returned a match, Google returned the same product.

Product retrieval and contextual retrieval are separate processes 

The data strongly supports this. Shopping query fan-outs are distinct from normal search fan-outs 98.3% of the time. They are significantly shorter (seven vs. 12 words), and ChatGPT uses far fewer of them per prompt (1.16 vs. 2.4). This makes sense; populating a product carousel is a fundamentally different task from gathering contextual information to construct a written answer. One is about retrieving structured product listings from a shopping index, while the other is meant to retrieve web pages rich enough in context for vector search and re-ranking to work effectively.

ChatGPT favors higher Google Shopping positions 

The data shows a clear positional bias, with 60% of strong matches coming from the top 10 Google Shopping results and nearly 84% from the top 20. ChatGPT carousel position correlates with Google Shopping rank, meaning products that rank higher in Google Shopping are more likely to appear earlier in the ChatGPT carousel.

This points to systemic architectural behavior

Since these patterns hold across branded and non-branded prompts, and across all 10 verticals tested, this points to systemic architectural behavior rather than a category-specific or query-specific artifact.

What this means

For brands and retailers, the implication is straightforward: Your Google Shopping ranking strongly influences whether your products make it into ChatGPT’s carousel. These findings indicate that the selection set of carousel products in many cases is effectively the top 40 organic Google Shopping positions for the corresponding shopping fan-out query.

But while product ranking in Google Shopping plays a role, it doesn’t tell the full story. It is likely that other factors, such as overall product mentions and sentiment in the context sources retrieved, also factor into the final ChatGPT carousel selection and ranking. 

Understanding the full picture in terms of how your products are perceived across relevant sources, as well as how you show up on Google Shopping, could be the key to understanding ChatGPT product carousels.

For the AI research community, this study provides robust, large-scale evidence that ChatGPT’s product carousel operates as an independent retrieval pipeline for the selection set of products, separate from the contextual web search that powers the written portion of its responses. It is possible, and even likely, that for the final selection and ranking of products, ChatGPT uses contextual clues such as product sentiment from the sources retrieved by the normal search fan-outs.

As always, this represents a snapshot of current behavior. OpenAI could change its retrieval sources or methods at any time, but this behavior has been consistent in our findings for at least the last four months. 

Methodology

Objective

Measure how much product overlap there is between ChatGPT Shopping (via product carousels) and Google Shopping organic results for the same queries, across 10 industry verticals. This was contrasted with Bing Shopping results as a control, using an identical pipeline.

Specifically, the study evaluated:

  • How often ChatGPT recommends products that also appear in Google Shopping results
  • Where those overlapping products rank in each system

Prompt set creation

Prompts were created with the purpose of triggering ChatGPT carousels. To maximize diversity, a mixture of branded and non-branded prompts was used, as well as prompts that explicitly included a price and ones that did not.

Additionally, a diverse selection of verticals was chosen to make the findings more robust. These were: Apparel & Footwear, Baby & Kids, Beauty & Personal Care, Electronics, Home Improvement, Home & Kitchen, Office Supplies, Pet Supplies, Sports & Outdoors, Toys & Games.

Product matching 

The product matching algorithm compared each ChatGPT product title against the top 40 Google Shopping titles to find the best match. A match was determined using a cascade of three stages:

  • Stage 1: Exact match
    • Method: Case-insensitive string equality after removing whitespace
    • Score: 1.0
    • Label: exact
  • Stage 2: Near-exact match
    • Method: Uses the Python SequenceMatcher ratio on lowercased strings
    • Trigger: Activated if the best ratio across all candidates is 0.95 or higher
    • Purpose: To catch minor, trivial differences like spacing, punctuation, or different types of dashes
    • Score: The SequenceMatcher ratio (rounded to three decimal places)
    • Label: near-exact
  • Stage 3: Hybrid match
    • Method: A weighted average combining character-level similarity and token (word) overlap
    • Components and Weights:
      • SequenceMatcher Ratio (Character Similarity): 40% weight.
      • Token Overlap (Word Inclusion): 60% weight (fraction of tokens in the shorter title found in the longer one)
    • Selection: The candidate with the highest hybrid score is chosen, regardless of a specific threshold
    • Score: Calculated as (0.4 * SequenceMatcher Ratio) + (0.6 * Token Overlap) (rounded to 3 decimal places)
    • Label: hybrid

The approach was designed to be fairly conservative, and 0.8 was determined to be a reasonable threshold for a product match, as scores at or above it very often correspond to the same brand and product.
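The cascade described in the Methodology can be sketched for a single candidate title as follows. This is a simplified version: the study’s exact normalization steps aren’t fully specified, and the token-overlap component here uses a set intersection as an approximation of “fraction of tokens in the shorter title found in the longer one”:

```python
from difflib import SequenceMatcher

def match_score(chatgpt_title: str, candidate_title: str) -> tuple[float, str]:
    """Score one ChatGPT title against one shopping-result title."""
    a, b = chatgpt_title.lower(), candidate_title.lower()

    # Stage 1: exact -- case-insensitive equality after removing whitespace.
    if "".join(a.split()) == "".join(b.split()):
        return 1.0, "exact"

    # Stage 2: near-exact -- SequenceMatcher ratio of 0.95 or higher,
    # catching trivial differences like punctuation or dash variants.
    ratio = SequenceMatcher(None, a, b).ratio()
    if ratio >= 0.95:
        return round(ratio, 3), "near-exact"

    # Stage 3: hybrid -- 40% character similarity, 60% token overlap
    # (share of the shorter title's tokens found in the longer one).
    tokens_a, tokens_b = set(a.split()), set(b.split())
    shorter, longer = sorted((tokens_a, tokens_b), key=len)
    overlap = len(shorter & longer) / len(shorter) if shorter else 0.0
    return round(0.4 * ratio + 0.6 * overlap, 3), "hybrid"

# A pair from the examples table clears the 0.8 match threshold:
score, label = match_score(
    "Block Tech 250 Piece Set",
    "Block Tech 250 Piece Building Blocks Set",
)
```

Running the pair above lands in the hybrid stage with a score above 0.8, matching the “same brand and product, extra non-crucial words” tier in the table that follows.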

Real examples of matching thresholds from the data:

  • 1.0: Exact string match, no differences. ChatGPT: Hot Wheels RC 1:64 Mustang GTD. Google Shopping: Hot Wheels RC 1:64 Mustang GTD. Differences: none.
  • 0.95: Near exact; minor differences such as hyphen or punctuation only. ChatGPT: Learning Resources Snap-n-Learn Matching Dinos. Google Shopping: Learning Resources Snap‑n‑Learn Matching Dinos. Differences: the hyphen character differs in Unicode.
  • 0.9: Same brand and product; additional non-crucial words allowed. ChatGPT: Block Tech 250 Piece Set. Google Shopping: Block Tech 250 Piece Building Blocks Set. Differences: “Building” added, but product and brand are the same.
  • 0.85: Same product and brand; potentially slightly different word order and additional non-crucial words. ChatGPT: LEGO Japanese Red Maple Bonsai Tree. Google Shopping: Japanese Red Maple Bonsai Tree LEGO Botanicals. Differences: different word order and one additional word, “Botanicals”; same product and brand.
  • 0.8 (good-match threshold): Same brand and product, possibly with additional descriptors. ChatGPT: Cards Game Against FRIENDS – Limited Edition. Google Shopping: Cards Game Against FRIENDS – Limited Edition – Party Card Games For Adults. Differences: additional descriptors that don’t affect the match.
  • 0.75: Same brand and product line; very minor product differences such as size or dimensions. ChatGPT: My Sweet Love 14-inch My Cuddly Baby Doll. Google Shopping: My Sweet Love 8-Inch MinWeBaby Doll. Differences: same brand and product line but a different size.
  • 0.7: Same brand; often a slightly different product within the same category. ChatGPT: Adventure Force Ram Truck RC Car. Google Shopping: Adventure Force McLaren 765LT RC Car. Differences: same brand and category but a different individual product.
  • 0.65: Same brand; often a slightly different product within the same category. ChatGPT: Mattel 300‑Piece Puzzle. Google Shopping: Mattel 80th Anniversary Puzzle. Differences: same brand and category but a different individual product.
  • 0.6: Typically the same product category, but often a different brand and product line. ChatGPT: Tell Me Without Telling Me Party Card Game. Google Shopping: Elimino! Card Game. Differences: different brand and product line; same overall category of “card game.”
  • 0.55: Similar product category, but usually a different brand and/or product. ChatGPT: Furby Interactive Plush Toy. Google Shopping: Interactive Digital Pet Toy. Differences: different brand; similar category but a different specific product.

200+ AI audits reveal why some industries struggle in AI search


For 20 years, the web has run on a simple trade: publish content that meets a person’s needs, rank in search, earn traffic, then monetize that traffic through products, services, affiliate referrals, or ads.

Zero-click answers and AI search are rewriting that relationship. The new question is whether AI will cite you as a source — and whether that visibility can turn into revenue.

To understand who gets included and who gets routed around, I ran over 200 AI visibility audits across 10 industries.

The pattern was consistent: Most sites are easy to parse, but hard to justify citing. And the industries that rely on discovery traffic the most are often the ones making themselves the hardest to access.

How the audit was conducted

I ran 201 audits using the same rubric and captured an overall AI visibility score, plus four subscores: 

  • Freshness.
  • Structure.
  • Authority and evidence.
  • Extractability.

The dataset included 201 audits across 10 industries:

  • Coupons.
  • Affiliate reviews.
  • Travel booking.
  • Local directories.
  • Personal finance comparison.
  • Health information.
  • Legal directories.
  • Online courses.
  • Job boards.
  • Recipes.

Note that there was a page type skew — the sample is homepage-heavy (131 homepages, 13 articles, with the remainder a mix of pages). That matters because homepages tend to be marketing-heavy and evidence-light.

I also tracked access failures because “error” results are part of the story. 38 of the 201 audits (18.9%) returned an error, meaning the agent was likely blocked or couldn’t reliably access the content.

An additional eight audits were technically processed but scored 0 due to missing subscores, consistent with partial extraction or app-style rendering that yields little accessible content.

When I summarized score distributions, I focused on the successfully processed audits (163 sites), so “cannot access” didn’t get mixed with “low quality.” I treated error rate by industry as its own signal because it indicated whether AI systems could reliably use a site as a source.

Where industries stand in AI visibility

The table below shows how the industries in the dataset performed in the audits.

  • 1. Travel booking and trip planning: 33.3% error rate; median overall 45.5, authority 31.0, extractability 52.0. At risk: High.
  • 2. Job boards and career marketplaces: 40.0% error rate; median overall 64.0, authority 44.0, extractability 74.0. At risk: High.
  • 3. Legal directories and lead gen: 35.0% error rate; median overall 63.0, authority 44.0, extractability 74.0. At risk: High.
  • 4. Coupons and deals: 20.0% error rate; median overall 62.0, authority 36.0, extractability 74.0. At risk: High.
  • 5. Local directories and lead gen: 5.3% error rate; median overall 64.0, authority 38.0, extractability 74.0. At risk: Medium.
  • 6. Online courses and learning marketplaces: 30.0% error rate; median overall 67.5, authority 46.5, extractability 80.0. At risk: Medium.
  • 7. Health info and symptom lookups: 15.0% error rate; median overall 69.0, authority 52.0, extractability 80.0. At risk: Low.
  • 8. Personal finance comparison: 5.0% error rate; median overall 67.0, authority 52.0, extractability 78.0. At risk: Low.
  • 9. Affiliate product reviews: 0.0% error rate; median overall 69.5, authority 54.0, extractability 74.0. At risk: Low.
  • 10. Recipes and cooking content: 5.0% error rate; median overall 75.0, authority 55.5, extractability 81.5. At risk: Low.

What the audits actually revealed

The findings show that most websites aren’t built to be cited consistently. Here are the three numbers that matter.

Access is a bigger problem than most teams think

38 of 201 sites (18.9%) returned an error. In some categories, it was far worse: job boards (40%), legal directories (35%), travel booking (33%), and course marketplaces (30%). In those spaces, roughly a third or more of the market is effectively AI-dark by default.

Legal directories had the highest AI blocking of any industry.

Most sites are stuck in the middle

Across the 163 processed audits:

  • Average overall score: 61.6
  • Median overall score: 66
  • 70.6% landed in “Inconsistent visibility” (60 to 79)
  • Only 4.9% reached “Strong foundation” (80 to 94)
  • 0% hit “Exceptional” (95 plus)

Translation: Most brands aren’t built to be reliably used and cited.

The gap is proof, not formatting

Median subscores across processed audits:

  • Structure: 92
  • Extractability: 74
  • Authority and evidence: 48
  • Freshness: 45

Most pages are easy to parse. Far fewer are easy to justify citing. Two repeated findings explain why:

  • “No last modified header detected” showed up 114 times (machine-readable freshness is missing).
  • Citations or outbound references appeared only 13 times (machine-readable proof is rare).
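The first of those checks can be sketched as a simple header inspection. This is a toy version of the idea, not the audit tooling itself; the header names are standard HTTP, and including ETag alongside Last-Modified as a related freshness validator is my addition:

```python
def freshness_signals(headers: dict) -> list[str]:
    """Report which machine-readable freshness signals a response carries.

    The audit finding above concerns Last-Modified specifically; ETag is
    checked as a related validator (an assumption, not part of the audit).
    """
    normalized = {k.lower() for k in headers}
    return [name for name in ("last-modified", "etag") if name in normalized]

# A response with no freshness headers -- the pattern flagged 114 times:
missing = freshness_signals({"Content-Type": "text/html"})

# A response that does expose machine-readable freshness:
present = freshness_signals({
    "Last-Modified": "Tue, 01 Jul 2025 08:00:00 GMT",
    "ETag": '"abc123"',
})
```

In practice you would run this against the response headers of your key pages and treat an empty result as a gap worth fixing.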

That should change how you think about risk. More than losing traffic, the bigger threat is being removed from the consideration set.

Dig deeper: What 4 AI search experiments reveal about attribution and buying decisions

3 ways an industry vanishes from AI search

Industries disappear for three reasons. You can think of them as three failure modes.

1. Access failure: AI can’t reliably reach your content

If agents can’t consistently access your content, the model has less to work with and will either route around you or fill in the gaps from other sources.

What access failure looks like:

  • Bot protections, rate limiting, or web application firewall (WAF) rules that treat agents as hostile.
  • App-style rendering where meaningful content never arrives in initial HTML.
  • Content gated behind prompts, popups, or scripts that don’t resolve cleanly.

Why this causes vanishing:

  • If AI systems can’t reliably extract, they can’t reliably cite.
  • The user’s intent still gets satisfied — it just gets satisfied by someone else’s crawlable content or a native AI answer.

2. Trust failure: AI can read you, but can’t justify citing you

Trust failure is quieter. The agent can access your page, parse it, and summarize it, but the page doesn’t provide enough proof for the model to confidently cite it as a source.

This was the dominant pattern in the completed audits. In plain language: Your content is readable, but it isn’t defensible.

The clearest proof of this showed up when I compared page types:

  • Median authority score on article pages: 76
  • Median authority score on homepages: 45

A polished homepage isn’t proof. If you want to be cited for anything beyond your brand name, a typical homepage alone isn’t enough. Evidence usually lives in articles, explainers, data pages, policy pages, and methodology pages.

3. Utility failure: Even if you’re visible, the click may not happen

Utility failure is the most painful. You might get included. You might get cited. But if your value is only information, AI can compress it into an answer, and the user never needs to visit your site.

Visibility determines whether you appear in the conversation. Utility determines whether appearing turns into revenue.

A practical way to think about it:

  • If your page answers the question, AI can replace the page.
  • If your product or service completes the job, AI still needs you.

Access failure gets you excluded. Trust failure gets you skipped. Utility failure gets you summarized.

Why certain industries show up as vulnerable

Once access, trust, and utility get viewed together, the vulnerable industries stop looking random.

The categories that repeatedly showed high risk in my dataset share three traits:

  • Access is inconsistent (blocking and extraction problems).
  • The content is easy to compress into a single answer.
  • The business has no next step value once the answer is delivered.

That’s why travel booking, job boards, legal directories, and coupon sites clustered as the most exposed categories in this dataset.

The bigger takeaway? Your website can be built in a way that invites exclusion, even if your business is healthy.

Dig deeper: Why every AI search study tells a different story

The point you shouldn’t miss

Some industries will feel this harder than others. A site funded primarily by high-volume informational traffic is more exposed to zero-click behavior. But even in those categories, the path forward is to stop selling information alone. 

The big mistake right now is treating AI search like a ranking update, when it’s an economic update. The audits made two things obvious:

  • Many industries are making themselves hard to access, which guarantees the model will route around them.
  • Even when the model can read a page, it often can’t justify citing it because proof is missing.

The threat is invisibility. You don’t win by hiding. You win by becoming cite-worthy and by building something the user still needs after the answer is delivered.

Trust plus utility is the new moat. Anything else is just playing from yesterday’s playbook.

How to chunk content and when it’s worth it


How content is structured in an article or blog post might not seem controversial. But, apparently, Google doesn’t want you to create bite-sized chunks of content simply to please LLMs. Called “chunking,” this technique helps get your content noticed by AI models and reflects how readers actually engage with online content.

Chunking may make content more retrievable or citable in AI search, but ultimately, it improves the flow of content and makes concepts easier for people to understand. Let’s talk about how chunking works and when to use it.

What is chunking?

Chunking is the practice of organizing text into distinct, self-contained units of meaning. When content is chunked, information is segmented so each paragraph focuses on a single idea and contains everything the reader needs to understand the basics of that idea simply and quickly. 

Someone should be able to read a single paragraph and grasp the concept without having to hunt for context in the surrounding words. 

Does chunking help AI or people?

The recent criticism from Google suggests that chunking over-optimizes content specifically so it will show up in AI answers. That framing assumes what’s good for AI is somehow bad for human readers.

But really, chunking helps communicate ideas for both readers and search retrieval systems. When content is chunked, it doesn’t dumb down or artificially fragment ideas. It organizes information to match how people actually read online content, making articles easier to scan. 

Chunking also helps AI systems because they operate at the passage level rather than the page level. For example, when a system needs to identify an answer for “how to measure keyword cannibalization,” a heading that says exactly that, followed by a focused paragraph, would create a clear match.

In contrast, when an answer to that same question is buried in a dense paragraph covering three other topics, that information gets diluted. The AI might see relevant keywords, but if the text meanders between ideas, it will have a lower confidence that the passage definitively answers the query.
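The difference can be illustrated with a toy passage scorer. Real retrieval systems use embeddings rather than raw term overlap, so treat this strictly as an intuition aid; the headings and copy below are made up for the example:

```python
def chunk_score(query: str, heading: str, paragraph: str) -> float:
    """Fraction of query terms found in a heading-plus-paragraph chunk --
    a crude stand-in for passage-level relevance matching."""
    terms = set(query.lower().split())
    text = (heading + " " + paragraph).lower()
    words = set(text.replace(",", " ").replace(".", " ").split())
    return len(terms & words) / len(terms)

query = "how to measure keyword cannibalization"

# A focused chunk: the heading restates the question directly.
focused = chunk_score(
    query,
    "How to measure keyword cannibalization",
    "Compare the ranking URLs returned for each query over time.",
)

# A diluted chunk: the answer is buried under an unrelated heading.
diluted = chunk_score(
    query,
    "SEO housekeeping tips",
    "Audit redirects, fix broken links, and review page titles.",
)
```

The focused chunk matches every query term; the diluted one matches none, even though both could sit on the same page.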

Clear structure creates clear meaning.

Chunking helps both readers to scan content and AI systems to accurately identify what your content says. 

Dig deeper: Chunk, cite, clarify, build: A content framework for AI search

When to chunk content

When writing from scratch, integrate chunking into your process from the start.

However, it may not be worth your time to edit existing content solely to chunk it. You may find that some articles already follow chunking principles, even if they weren’t explicitly planned to do so. Others may be out of date or poorly structured, requiring more substantial rewrites.

If you want to chunk existing content, prioritize pieces that:

  • Receive significant traffic but have high bounce rates or low engagement.
  • Rank well, but aren’t being cited.
  • Cover complex topics where readers need to find specific information quickly.
  • Serve bottom-of-funnel audiences making decisions based on specific details.

Skip chunking edits for content that:

  • Already performs well and receives AI citations.
  • Is scheduled for comprehensive rewrites in the near future.
  • Covers topics where narrative flow matters more than information retrieval.

If you have content that is impactful because it creates an emotional arc, chunking or breaking it down into discrete chunks could hurt the piece. If your content succeeds by carrying readers through a journey rather than letting them jump to an answer, preserve that flow.

For example:

  • Thought leadership that builds to a provocative conclusion.
  • Opinion essays that require context before the thesis lands.
  • Brand storytelling that uses prose rhythm.

Dig deeper: Chunks, passages and micro-answer engine optimization wins in Google AI Mode

How to chunk content

A chunk in a piece of content should be long enough to explain one thought. This often results in shorter paragraphs — the defining feature is a singular focus, not the word count. 

These focused paragraphs sit under clear headings. The heading tells the reader what to expect, and the chunks beneath it deliver on that expectation. 

Build chunking into your content outline

To include chunking in your writing, the most effective approach is to integrate it from the start. 

Define for yourself or other writers which ideas or concepts in a given topic constitute a chunk, focusing on paragraphs and heading descriptions.

If using content briefs, make it clear in your outlines that each H2 or H3 should cover one complete concept and the content under that heading should fully explain the concept. 

How to edit existing content into chunks

Focus your efforts on high-value pages first when editing existing content. Prioritize pages that receive traffic but struggle with engagement or pages that rank well but aren’t being cited.

  • Evaluate your heading structure: Do your H2s and H3s clearly state what information each section contains? If not, rework the article’s overall structure first to cover the main points of the topic, then add paragraph chunks for any new subheadings.
  • Look for paragraphs that contain multiple ideas and break them apart: Each paragraph should stand on its own as a complete thought without depending on other ideas. 
  • Edit the article to delete any extra information: Make the paragraphs concise. Focus only on relevant information for each chunk.

To chunk or not to chunk?

Don’t let Google convince you that chunking is a hack. Chunking makes content work better for everyone and everything — from readers scanning for specific information to AI systems matching queries to answers. 

Dig deeper: How to build a context-first AI search optimization strategy

How the DOM affects crawling, rendering, and indexing


You’ve probably heard developers talk about the DOM. Maybe you’ve even inspected it in DevTools or seen it referenced in Google Search Console.

But what, exactly, is it? And why should SEOs care? Let’s take a look at what it is, why it’s important, and how to best optimize it.

What is the DOM?

The Document Object Model (DOM) is a browser’s live, in-memory representation of your webpage. It acts as the interface that allows programs like JavaScript to interact with your content.

The DOM is organized as a hierarchical tree, similar to a family tree:

  • The document: This is the root of the tree.
  • Elements: HTML tags like <body>, <p>, and <a> become branches (or “nodes”).
  • Relationships: Elements have parents, children, and siblings.

This hierarchy is critical because it allows the browser (and search engines) to understand the relationship between different parts of your content. For example, proper hierarchical order lets your browser understand that a specific paragraph belongs to a specific heading.

How to inspect the DOM

The DOM itself is actually a JavaScript object structure stored in memory, but browsers show it to you as markup that looks very much like HTML.

You can see this HTML representation of the DOM by right-clicking a page and selecting Inspect, which opens DevTools to the Elements panel. I’ve outlined it in the red box below: 

DevTools - Elements panel

In the Elements panel inside DevTools, you can:

  • Expand and collapse nodes to explore the structure.
  • Search for specific elements using Ctrl+F on a PC or Cmd+F on a Mac.
  • See which elements have been added or modified by JavaScript (they often flash briefly when changed).

Note that DevTools doesn’t necessarily show you what Googlebot sees. I’ll circle back to what that means later in this article.

How the DOM is constructed

To understand why the DOM often looks different from your HTML file, you first need to understand how the browser creates it. That begins with your browser building the DOM tree. 

Building the DOM tree

When your browser requests a page, the server sends back an HTML file. The browser reads this response line by line and translates it into “tokens” (tags like <html>, <body>, <div>).

These tokens are then converted into distinct “nodes,” which serve as the building blocks of the page. The browser links these nodes together in a parent-child hierarchy to form the tree structure.
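As a rough illustration of that token stream (again using Python’s `html.parser` as a stand-in for the browser’s tokenizer), here is the sequence of start tags, text, and end tags a parser emits before linking them into nodes:

```python
from html.parser import HTMLParser

# Illustrative sketch: log the "tokens" a parser emits while reading
# HTML, before they are connected into a parent-child tree of nodes.
class TokenLogger(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tokens = []

    def handle_starttag(self, tag, attrs):
        self.tokens.append(("start", tag))

    def handle_data(self, data):
        if data.strip():  # ignore whitespace-only text
            self.tokens.append(("text", data.strip()))

    def handle_endtag(self, tag):
        self.tokens.append(("end", tag))

logger = TokenLogger()
logger.feed("<p>Hello <a href='/'>home</a></p>")
print(logger.tokens)
# [('start', 'p'), ('text', 'Hello'), ('start', 'a'),
#  ('text', 'home'), ('end', 'a'), ('end', 'p')]
```

Each `start`/`end` pair becomes a node, and the nesting order of the tokens determines which node is whose parent.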

You can visualize the process like this:

Building the DOM tree

It’s important to know that the browser simultaneously creates a tree-like structure for CSS, known as the CSS Object Model (CSSOM), which allows JavaScript to read and modify CSS dynamically. However, for SEO, the CSSOM matters far less than the DOM.

JavaScript execution

JavaScript often executes while the tree is still being built. If the browser encounters a <script> tag (without defer or async attributes, which let the script load without blocking parsing), it pauses construction, runs the script, and then finishes building the tree.

During this execution, scripts can modify the DOM by injecting new content, removing nodes, or changing links. This is why the HTML you see in View Source often looks different from what you see in the Elements panel.

Here’s an example of what I mean. Each time I click the button below, it adds a new paragraph element to the DOM, updating what the user sees.

JavaScript execution

Your HTML is the starting point, a blueprint, if you will, but the DOM is what the browser builds from that blueprint.

Once the DOM is created, it can change dynamically without ever touching the underlying HTML file.

Dig deeper: JavaScript SEO: How to make dynamic content crawlable



Why the DOM matters for SEO

Modern search engines, such as Google, render pages using a headless browser (Chromium). This means that they evaluate the DOM rather than just the HTML response.

When Googlebot crawls a page, it first parses the HTML, then uses the Web Rendering Service to execute JavaScript and take a DOM snapshot for indexing.

The process looks like this:

Googlebot - crawling, rendering and indexing

However, there are important limitations to keep in mind for your website:

  • Googlebot doesn’t interact like a human. While it builds the DOM, it doesn’t click, type, or trigger hover events, so content that appears only after user interaction may not be seen.
  • Other crawlers may not render JavaScript at all. Unlike Google, some search engines and AI crawlers only process the initial HTML response, making JavaScript-dependent content invisible.
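That second limitation is easy to check for yourself. Here is a simple sketch of the difference: the `initial_html` string stands in for what an HTML-only crawler fetches, and `rendered_html` stands in for the DOM after JavaScript has injected content (both strings are hypothetical examples):

```python
# What a non-rendering crawler receives: an empty app shell.
initial_html = "<main><div id='app'></div></main>"

# What the browser (or Googlebot's Web Rendering Service) sees
# after JavaScript has populated the page.
rendered_html = (
    "<main><div id='app'>"
    "<h1>Spring Sale</h1><p>20% off all plans.</p>"
    "</div></main>"
)

key_phrase = "Spring Sale"

# A crawler that doesn't execute JavaScript never sees this content.
print(key_phrase in initial_html)   # False
print(key_phrase in rendered_html)  # True
```

If your key content only passes the second check, it is invisible to every crawler that skips rendering.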

Looking ahead, as the web becomes more AI-dependent, AI agents will increasingly need to interact with websites to complete tasks for users, not just crawl for indexing.

These agents will need to navigate your DOM, click elements, fill forms, and extract information to complete their tasks, making a well-structured, accessible DOM more critical than ever.

Verifying what Google actually sees

The URL inspection tool in Google Search Console shows how Google renders your page’s DOM, also known in SEO terms as the “rendered HTML,” and highlights any issues Googlebot might have encountered. 

This tool is crucial because it reveals the version of the page Google indexes, not just what your browser renders. If Google can’t see it, it can’t index it, which could impact your SEO efforts.

In GSC, you can access this by clicking URL inspection, entering a URL, and selecting View Crawled Page.

The panel below, marked in red, displays Googlebot’s version of the rendered HTML.

GSC URL inspection tool - rendered HTML

If you don’t have access to the property, you can also use Google’s Rich Results Test, which lets you do the same thing for any webpage.

Dig deeper: Google Search Console URL Inspection tool: 7 practical SEO use cases

Shadow DOM: An advanced consideration

The shadow DOM is a web standard that allows developers to encapsulate parts of the DOM. Think of it as a separate, isolated DOM tree attached to an element, hidden from the main DOM.

The shadow tree starts with a shadow root, and elements attach to it the same way they do in the light (normal) DOM. It looks like this:

Shadow DOM

Why does this exist? It’s primarily used to keep styles, scripts, and markup self-contained. Styles defined here cannot bleed out to the rest of the page, and vice versa. For example, a chat widget or feedback form might use shadow DOM to ensure its appearance isn’t affected by the host site’s styles.

I’ve added a shadow DOM to our sample page below to show what it looks like in practice. There’s a new div in the HTML file, and JavaScript then adds a div with text inside it.

Sample page - shadow DOM

When rendering pages, Googlebot flattens both shadow DOM and light DOM and treats shadow DOM the same as other DOM content once rendered.

As you can see below, I put this page’s URL into Google’s Rich Results Test to view the rendered HTML, and you can see the paragraph text is visible.

Tested page - shadow DOM

Technical best practices for DOM optimization

Follow these practices to ensure search engines can crawl, render, and index your content effectively.

Load important content in the DOM by default

Your most important content must be in the DOM and appear without user interaction. This is imperative for proper indexing. Remember, Googlebot renders the initial state of your page but doesn’t click, type, or hover on elements.

Content that is added to the DOM only after these interactions may not be visible to crawlers. One caveat is that accordions and tabs are fine as long as the content already exists in the DOM.
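One way to sanity-check this is to confirm that the collapsed content is already present in the markup. The sketch below (illustrative only, using Python’s `html.parser` on a hypothetical accordion snippet) collects text inside elements carrying a `hidden` attribute, proving the text exists in the DOM even before any click:

```python
from html.parser import HTMLParser

# Illustrative sketch: collect text that lives inside hidden elements,
# confirming accordion/tab content exists in the DOM before interaction.
class HiddenTextCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hidden_depth = 0   # > 0 while inside a hidden subtree
        self.hidden_text = []

    def handle_starttag(self, tag, attrs):
        if self.hidden_depth or "hidden" in dict(attrs):
            self.hidden_depth += 1

    def handle_endtag(self, tag):
        if self.hidden_depth:
            self.hidden_depth -= 1

    def handle_data(self, data):
        if self.hidden_depth and data.strip():
            self.hidden_text.append(data.strip())

accordion = (
    "<button>Shipping details</button>"
    "<div hidden><p>Orders ship within 2 business days.</p></div>"
)
collector = HiddenTextCollector()
collector.feed(accordion)
print(collector.hidden_text)
# ['Orders ship within 2 business days.']
```

The panel is visually hidden, but its text is in the markup, so crawlers can still index it.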

As you can see in the screenshot below, the paragraph text is visible in the Elements panel even when the accordion tab has not been opened or clicked.

Paragraph text is visible in the Elements panel

Use proper <a> tags for links

Links are fundamental to SEO. Search engines look for standard <a> tags with href attributes to discover new URLs, so make sure real links appear in the DOM. Otherwise, you risk crawl dead ends.

You should also avoid using JavaScript click handlers (e.g., <button onclick="...">) for navigation, as crawlers generally won’t execute them.
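Here is a crawler’s-eye sketch of why this matters (illustrative Python over hypothetical markup): only the `<a href>` link is discoverable, while the button’s JavaScript navigation is invisible to a crawler that doesn’t execute scripts:

```python
from html.parser import HTMLParser

# Illustrative sketch of URL discovery: collect href values from <a>
# tags, the way a crawler does. A <button onclick="..."> that
# navigates via JavaScript is never seen.
class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.urls.append(href)

markup = (
    '<a href="/pricing">Pricing</a>'
    '<button onclick="location.href=\'/contact\'">Contact</button>'
)
extractor = LinkExtractor()
extractor.feed(markup)
print(extractor.urls)  # ['/pricing'] — /contact is never discovered
```

The /contact page would be a crawl dead end unless it is linked with a real <a> tag somewhere else.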


Use semantic HTML structure

Use heading tags (<h1>, <h2>, etc.) in logical hierarchy and wrap content in semantic elements like <article>, <section>, and <nav> that correctly describe the site’s content. Search engines use this structure to understand pages.

A common issue with page builders is making DOMs full of nested <div> elements without semantic meaning. This does little to help search engines understand your page and sets up problems for you or future devs trying to maintain the code on your site.

Maintain the same semantic standards you’d follow in static HTML.

Here’s a snippet of semantic HTML as an example:

<!-- Semantic HTML -->
<nav>
  <ul>
    <li><a href="/">Home</a></li>
    <li><a href="/about">About</a></li>
  </ul>
</nav>

Here’s an example of “div soup” HTML that’s non-semantic and harder for search engines and assistive technologies to understand.

<!-- Non-Semantic HTML -->
<div class="nav">
  <div class="nav-list">
    <div class="nav-item"><a href="/">Home</a></div>
    <div class="nav-item"><a href="/about">About</a></div>
  </div>
</div>

Optimize DOM size to improve performance

Keep the DOM lean, ideally under ~1,500 nodes, and avoid excessive nesting. Remove unnecessary wrapper elements to reduce style recalculation, layout, and paint costs.
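A rough way to audit this is to count element nodes and track nesting depth. The sketch below (illustrative Python; the 1,500-node and 32-level thresholds echo common Lighthouse guidance, not hard rules) does both:

```python
from html.parser import HTMLParser

# Illustrative DOM-size audit: count element nodes and the deepest
# nesting level. (Void elements like <br> would need special handling;
# omitted here for brevity.)
class DomSizeAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.node_count = 0
        self.depth = 0
        self.max_depth = 0

    def handle_starttag(self, tag, attrs):
        self.node_count += 1
        self.depth += 1
        self.max_depth = max(self.max_depth, self.depth)

    def handle_endtag(self, tag):
        self.depth -= 1

auditor = DomSizeAuditor()
auditor.feed("<div><div><div><div><p>Contents</p></div></div></div></div>")
print(auditor.node_count, auditor.max_depth)  # 5 5

if auditor.node_count > 1500 or auditor.max_depth > 32:
    print("DOM may be too large or too deeply nested")
```

Run against a full page, numbers well beyond those thresholds are a signal to flatten wrappers and simplify the markup.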

Here’s an example from web.dev of excessive nesting and an unnecessarily deep DOM:

<div>
  <div>
    <div>
      <div>
        <!-- Contents -->
      </div>
    </div>
  </div>
</div>

While DOM size is not a Core Web Vital itself, excessive and deeply nested DOMs can indirectly impact performance, especially on lower-end devices.

To mitigate these impacts:

  • Limit layout-affecting DOM changes after initial render to reduce Cumulative Layout Shift (CLS).
  • Render critical above-the-fold content early to improve Largest Contentful Paint (LCP).
  • Minimize JavaScript execution and long tasks to improve Interaction to Next Paint (INP).

The DOM’s importance will only continue growing

A workable understanding of the DOM can help you not only diagnose SEO issues, but also effectively communicate with developers and others on your team.

We know that the DOM impacts Core Web Vitals, crawlability, and indexing. As AI agents increasingly interact with websites, DOM optimization becomes more critical. It’s important to master these fundamentals now to stay ahead of evolving search and AI technologies.

How to use AI for SEO without losing your brand voice

How to use AI for SEO without losing your brand voice

There’s a growing problem in SEO and content marketing that doesn’t get talked about enough: everything is starting to sound the same. The same phrasing and structure, the same bland tone, the same safe language, the same robotic rhythm.

The web is filling up with perfectly optimized content that no one actually enjoys reading. And that’s the real risk. Not that AI will replace SEOs, Google will penalize AI content, or automation will destroy search.

The real danger is that brands lose their voice, their personality, and their identity in the name of efficiency.

AI should make your SEO better, not blander. Faster, not flatter. Scalable, not soulless.

Here’s how to use AI without turning your brand into beige wallpaper — and without losing what makes it worth ranking in the first place.

AI works best when it supports strategy

AI doesn’t replace a marketing plan, positioning model, or clear brand direction. It supports them. In the same way that tools like Google Analytics, Semrush, and Screaming Frog help you understand what’s happening, AI helps you work more efficiently and supports your thinking.

If your SEO strategy is simply, “We use AI,” you don’t have a strategy. You have a software subscription. Without a clear understanding of your audience, what they care about, the problems they’re trying to solve, how they speak, what tone they respond to, and what your brand stands for, AI will just produce generic content at scale.

Where AI adds real SEO value

AI is genuinely good at certain parts of SEO, particularly areas that rely on scale, structure, and data processing. These include:

  • Analyzing large data sets.
  • Grouping keywords by intent.
  • Spotting patterns in SERPs.
  • Identifying content gaps.
  • Mapping topics.
  • Supporting internal linking.
  • Handling repetitive technical tasks.

This is where AI earns its place. It handles repetitive manual work, speeds up research, reduces basic human error, and helps teams operate more consistently at scale. None of that is threatening. It’s simply practical.

Used properly, AI removes friction from SEO work and gives teams more space to focus on strategy and decision-making. The problems begin when people expect AI to execute SEO work it isn’t built for, treating it as a shortcut rather than a support system. When used this way, the output inevitably falls short of expectations.

Dig deeper: How to train in-house LLMs on your brand voice

Where AI falls apart

AI struggles with the parts of marketing that build trust. Emotional intelligence, cultural awareness, tone, humor, empathy, and genuine understanding are difficult for it to replicate. It doesn’t truly grasp brand positioning, long-term thinking, or commercial judgment, and it can’t make ethical decisions in any meaningful way.

It can copy patterns, but it doesn’t understand meaning. It can recreate tone, but it doesn’t feel it. It can build structure, but it doesn’t create identity.

That’s why so much AI content feels fine but ultimately forgettable. It does the job, ticks the boxes, answers the question, follows SEO rules, and hits the word count. But it doesn’t create a connection that turns traffic into trust, and trust into customers.

The biggest risk with AI in SEO isn’t penalties or algorithm changes. It’s gradual brand dilution. Over time, content becomes more neutral, more generic, and less distinctive.

Visibility may stay the same, but identity weakens. Traffic grows, but loyalty doesn’t. Performance looks healthy, but trust doesn’t compound.

AI should handle structure, humans should handle soul

Effectively using AI in SEO requires role clarity. Let AI handle the structure and scale, but keep meaning firmly in human hands. 

AI is well-suited to researching, analyzing, clustering, outlining, drafting frameworks, data processing, repetitive optimizing, and detecting patterns. These are process-driven tasks where automation adds real value.

However, everything that defines the brand and the relationship with the audience — voice, tone, storytelling, personality, trust building, emotional connection, commercial messaging, ethical judgment, and real audience understanding — should remain a human endeavor.

AI can help you build faster, but it shouldn’t decide what you’re building. It supports the process, but the design still belongs to you.

Dig deeper: How to blend AI and human input in your content approach



Build your brand voice before you build with AI

If you don’t define your brand voice, AI will default to something neutral and generic. That doesn’t happen because the technology is broken. It happens because you haven’t given it anything clear to work with. 

Before using AI for content, clarify:

  • Who you’re speaking to.
  • How you speak.
  • The language you use and avoid.
  • The tone you adopt.
  • The personality you want to project.
  • The values you stand for.
  • The boundaries you won’t cross.

Many people assume better prompts can fix weak content. But prompts, no matter how detailed, don’t replace thinking, brand clarity, audience understanding, or positioning.

You can write the most detailed prompt in the world, but if your brand identity is fuzzy, the output will still be fuzzy. AI amplifies whatever you input, whether that’s clarity or chaos. There’s no middle ground.

Dig deeper: Content marketing in an AI era: From SEO volume to brand fame

Practical ways to use AI without losing your voice

Here’s what works in the real world, not just in tool demos.

  • Use AI for research: Let it gather data, insights, SERP patterns, questions, clusters, topics, and gaps. Then write the content yourself or heavily edit it.
  • Use AI to create frameworks: Outlines, structures, and content maps are perfect AI jobs. 
  • Train AI on your tone: Feed it examples of your writing, content, emails, site copy, and brand language. But still treat outputs as drafts and not finals.
  • Human-edit everything: Your job is brand editing. Does this sound like us? Would we say this? Would our customers recognize this voice? Does this feel human?
  • Protect your commercial pages: Blogs are one thing, but core service pages, product pages, and brand pages should always be human-led. These pages define your business identity.
  • Use AI to scale consistency, not sameness: Consistency is brand clarity. Sameness is brand death.

AI will amplify whatever your brand already is

Google doesn’t care whether content is AI-generated. It evaluates whether the content is useful, helpful, original, trustworthy, and valuable.

Low-quality human content gets punished. Low-quality AI content gets punished. High-quality content wins, regardless of who or what created it.

The myth that “AI content gets penalized” misses the point. What actually gets penalized is bad content, and AI simply makes it easier to produce bad content faster.

The brands that will lead SEO over the next few years won’t be the ones with the biggest AI tech stacks. They’ll be the ones that combine human strategy with AI efficiency, clear positioning with scalable systems, and strong brand voice with intelligent automation. They’ll use AI to move faster, but not to think for them.

Brands with clarity and identity will strengthen their position. Brands without them will simply become louder without standing out.

Dig deeper: How to balance speed and credibility in AI-assisted content creation

Accessibility can’t stop at the shelf: An $18 trillion lesson for marketers by AudioEye

 Illustration of an online storefront against a green background, featuring a digital shop window, clothing items, a “sold” sign, and icons representing growth, accessibility, and customers.

Every once in a while, a product launch doubles as a marketing masterclass. Recently, Selena Gomez’s Rare Beauty released a new fragrance, and it wasn’t just the scent that captured attention. It was the bottle. Designed with accessibility in mind, the easy-to-use packaging quickly became the story, sparking conversations and praise from accessibility advocates and consumers alike.

The takeaway is hard to miss. An inclusive design decision became the campaign itself, delivering more cultural impact than any ad spend could buy. The lesson for marketers is equally clear: accessibility drives loyalty, enhances brand reputation, ensures compliance, and acts as a measurable growth driver.

Accessibility as a campaign strategy

Rare Beauty’s commitment to accessibility wasn’t a one-off. From packaging to pricing to its ongoing mental health advocacy, the brand has consistently embedded inclusivity into its DNA. That authenticity matters. Consumers can tell the difference between a stunt and a strategy, and they reward brands that lead with values.

And Rare Beauty isn’t alone. Across industries, leading brands are increasingly surfacing accessibility as a differentiator, not a footnote. Apple has consistently highlighted accessibility features as part of its core product storytelling, positioning them as innovation rather than accommodation. Microsoft has done the same by showcasing inclusive design in mainstream campaigns, including adaptive gaming products that reframed accessibility as a driver of creativity and connection. In fashion and retail, brands like Tommy Hilfiger and Unilever have brought adaptive design into the spotlight, integrating accessibility into product launches and brand identity rather than siloing it as a niche offering.

According to studies from Edelman and McKinsey, 73% of Gen Z choose to buy from brands they believe in, and 70% say they try to purchase products from companies they consider ethical. These aren’t fringe preferences; they’re mainstream expectations that can redefine how marketers approach building trust and growth with their audiences.

The $18 trillion market marketers overlook

More than 1.3 billion people globally live with a disability, and together with their friends and family, they control over $18 trillion in spending power, according to the Return on Disability Group. For marketers, this isn’t just about compliance. It’s about growth, reputation, and building genuine trust in one of the world’s largest and most passionate consumer groups. That passion translates to powerful advocacy. 

In discussions with AudioEye’s A11iance Team, a group of individuals with disabilities who regularly share feedback on real-world accessibility experiences, one member stated, “If I find a website that works and works very well for me, I will always recommend it to friends and family because I want people to have the same experience that I have.”

As another A11iance Team member, Maxwell Ivey, put it, “The cheapest form of advertising is word of mouth, and people with disabilities can have some of the loudest voices when we find people willing to make the effort. Because it’s that sincere effort over time that really counts with us.”

When accessibility becomes part of the customer experience, it creates something money can’t buy: trust and loyalty that scale through advocacy. But the opposite is also true. In a survey of assistive technology users, 54% said they don’t feel eCommerce companies care about earning their business.

Most brands are still competing for the same oversaturated demographics while overlooking this opportunity hiding in plain sight. In doing so, they’re leaving loyalty, advocacy, and revenue on the table.

Here’s where many brands stumble: accessibility usually stops at the shelf. Marketers invest heavily in packaging, store displays and product design, while digital experiences, the first and often primary touchpoint for customers, lag behind.

As accessibility-led design continues to earn attention, loyalty and earned media, the gap between physical product innovation and digital experience has become harder to ignore.

AudioEye’s 2025 Digital Accessibility Index found an average of 297 accessibility issues per web page detectable by automation alone. Each one represents friction in the customer journey, a conversion lost, or a compliance risk under frameworks like the Americans with Disabilities Act (ADA) and the European Accessibility Act (EAA).

Just as no campaign would launch without a brand review or legal check, no digital touchpoint should go live without an accessibility review.

Four moves marketing leaders can make

Too often, accessibility is treated as a risk to manage instead of an advantage to leverage. The marketers who win will be the ones who flip that script. Here are four actions to start with.

1. Make accessibility your campaign hook

Don’t hide it, lead with it. Brands like Rare Beauty have proved that inclusive design is the story. Build campaigns where accessibility isn’t a footnote but the differentiator that captures attention and loyalty.

2. Bake it into your brand system

Accessibility shouldn’t sit off to the side. Make Web Content Accessibility Guidelines (WCAG) alignment part of your brand guidelines, right alongside typography, logos and tone of voice. When accessibility is codified, it becomes second nature across every campaign.

3. Use data as your proof point

Marketers are storytellers, and numbers seal the story. Track accessibility improvements such as fewer user-reported barriers, higher accessibility scores and fixes like improved alt text, color contrast or form usability. Connect those metrics to existing business outcomes like conversion, reach, and sentiment to show how accessibility drives ROI, not just compliance.

4. Protect accessibility like brand safety

Just as you’d never risk brand safety in ad placements, don’t risk it in your digital touchpoints. Every update, seasonal campaign, or product drop should be monitored for accessibility. Trust and reputation are too valuable to leave exposed.

The competitive advantage

Rare Beauty’s fragrance launch proved something powerful: when you lead with accessibility, the story writes itself. The loyalty builds authentically, and the momentum flows naturally.

But here’s the opportunity: most brands still don’t get it. They’re treating accessibility as a compliance checkbox instead of the growth strategy it really is.

For marketers, that’s the wake-up call. Accessibility builds loyalty. It enhances brand reputation. It keeps your brand compliant. And it drives measurable growth across marketing efforts.

Rare Beauty showed how accessibility can capture attention at the shelf. The next opportunity is making sure it carries through online. Because when every touchpoint welcomes everyone, every campaign maximizes its impact.

Google AI Mode updates recipe results to better connect people with recipe creators

Google is rolling out an update to AI Mode for recipe results that it hopes will make recipe bloggers happy. Google’s Robby Stein said on X, “We’ve heard feedback on recipe results in AI Mode, and we’re making updates to better connect people with recipe creators on the web.”

The changes aim to make it easier to click over to recipe sites, though I am not 100% certain yet whether the recipe summaries turn recipes into AI slop.

“Starting today, when you search for meal ideas like ‘easy dinners for two,’ you can tap on the dish to see links to relevant recipe sites, plus a short overview of the dish to help with inspiration,” Stein added.

What it looks like. Here is a video of it in action:

More recipe details too. Google is also adding more information to recipe results, including cook time, which Google said its “testers have found useful for deciding on a recipe.”

“We know there’s more work to be done on this, so stay tuned for future updates,” Robby Stein added.

Why we care. Recipe bloggers, and content creators in general, have not been happy that Google’s AI experiences send less traffic than traditional search results. Here we see Google making changes to encourage more searchers to click through from those AI experiences to bloggers’ websites.

Will it make a big difference? Time will tell.


Google Ads status dashboard flags Ad Manager reporting issue

Google Ad Manager

Google is investigating a disruption affecting Google Ad Manager, according to an update posted on the Google Ads Status Dashboard.

The incident began at 13:49 UTC on March 4. By 13:54 UTC, Google said it was reviewing reports that some users could access Ad Manager but weren’t seeing the most up-to-date data.

What’s happening. The issue appears to impact reporting consistency. Specifically, Ad Exchange match rate and Ad Exchange request values are not aligning between Ad Manager’s interactive reports and the legacy reporting query tool (now deprecated).

Why we care. Reporting discrepancies in Google Ad Manager can directly impact how you evaluate performance and optimize campaigns. If Ad Exchange match rates and request data don’t align across reporting tools, it becomes harder to trust the numbers driving pacing, forecasting and revenue decisions.

What it means. Users can still log into Ad Manager, but reporting discrepancies may affect data accuracy — at least temporarily. There’s no indication yet of a full outage, but for publishers and advertisers relying on real-time reporting, mismatched metrics could complicate performance monitoring and optimization decisions.

What’s next. Google says it’s actively investigating and will provide further updates. In the meantime, affected users are advised to monitor the status dashboard and contact support if they’re experiencing issues not listed there.

Google Merchant Center adds “build to order” for vehicle listings

Google Shopping Ads - Google Ads

Google introduced a new availability value in Google Merchant Center — built specifically for vehicle sellers who don’t carry every model on the lot. The new attribute, “build to order,” lets dealers flag vehicles that aren’t physically in inventory but can be customized and ordered by customers.

What needs to change. Sellers must update two areas: their structured data (set availability to BuildToOrder) and their Merchant Center feed (set availability to build to order). Consistency between structured data and feed submissions is critical to avoid disapprovals.

Instruction on when to use the availability [availability] attribute in GMC 

Why we care. Until now, sellers had limited ways to signal that a vehicle wasn’t available for immediate pickup. The new value better reflects how many modern automakers operate — especially direct-to-consumer brands like Tesla and Rivian, where buyers configure features before production. For dealers offering factory orders or custom builds, this means clearer expectations for shoppers — and cleaner data for Google.

The fine print. Vehicles marked “build to order” must have the condition attribute set to “new.” If a listing is marked “used,” it will be disapproved — Google considers build-to-order vehicles to be newly configured, not pre-owned.

Bottom line. If you sell customizable or factory-order vehicles, this update gives you a more accurate way to reflect availability — but only if your feed, structured data and condition fields are properly aligned.

First spotted. This update was spotted by Google Shopping specialist Emmanuel Flossie, who explained how to implement it on his blog.

Dig deeper. “Availability [availability]” Google Merchant Center help doc
