The latest jobs in search marketing

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • Job Description LK Distribution is a leading distributor of several brands and products, sold through both e-commerce and wholesale channels. We specialize in the Alternative Product category of the CBD/hemp industry, offering a wide range of products. We are seeking a creative and dynamic individual with experience with independent online storefronts for each of our brands […]
  • Job Description Benefits: 401(k) Paid time off Dental insurance Health insurance Vision insurance The Digital Marketing Specialist role at a leading real estate company requires a high-energy, creative, and data-driven team member who helps elevate our brand and our agents’ digital presence. As the Digital Marketing Specialist, you will be the “engine room” of our online strategy. […]
  • Director, Global Digital Marketing, Integrated Marketing Communication (IMC) Team Position Overview The Director of Digital Marketing is at the center of 10x Genomics’ digital marketing engine, delivering measurable business impact and innovating across channels to ensure leadership in scientific markets. This position reports to the Vice President of Integrated Marketing Communications and is responsible for […]
  • Job Description Digital Marketing Specialist OURCU is looking for a Digital Marketing Specialist who is equal parts data-driven strategist and collaborative teammate. This role is ideal for someone excited to build and optimize HubSpot from the ground up, create meaningful campaigns, and clearly demonstrate the why behind marketing performance. If you love blending creativity with […]
  • This role offers you the opportunity to deepen your SEO expertise and develop your leadership skills within a tight-knit agency team. Sr. SEO Analysts lead our client relationships and bring our outcome-driven strategies to life. They are responsible for delivering value and results to our clients through their high-quality work, commitment to building deep SEO […]
  • Job Summary We are seeking a versatile and data-driven Digital Marketing & Creative Specialist to join our team. In this multifaceted role, you will be responsible for the end-to-end marketing lifecycle, from high-quality visual content creation (design, photos, videos) to technical SEO and lead-generation strategy. In this role, creative and technical skills are blended to […]
  • About the Role We’re hiring a Senior SEO Specialist to lead strategy, identify high-impact opportunities, and serve as the organic growth lead across a portfolio of high-performing client accounts. You’ll guide partners through SEO strategic roadmaps, execute detailed audits, manage technical prioritization, and drive measurable performance across a wide range of industries and site types. […]
  • Job Description Content Marketing Manager (AI-Powered Content Engine) Location: Manhattan, NYC Job Type: Full-time: Hybrid – 3 days a week Job reports to: Director of Strategic Growth Salary Range: $110k-$125k annually About Jaan Health/Phamily Jaan Health is a strategic care transformation partner for health systems, medical groups, and large physician organizations. With over a decade […]
  • Job Description Trusted by many of the largest companies globally, Accertify is the leading digital platform assessing risk across the entire customer journey, from Account Monitoring and Payment Risk to Refund Fraud and Dispute Management. Accertify helps maximize revenues and user experience while minimizing loss and customer friction. We offer ultra-fast decision-making and precise control, […]
  • Job Description Company: E&K Contractors Website: www.ekcontractors.com Location: 1920 W Sylvania Ave, Toledo, OH 43613 Pay: $55,000–$90,000 per year Job Type: Full-time Work Location: In person Job Summary E&K Contractors is a Toledo-based construction company that has been serving Northwest Ohio since 1978. We’ve built a strong reputation over 45+ years, and we’re now looking […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • Job Description About Us At Prenuvo, we are on a mission to flip the paradigm from reactive “sick-care” to proactive health care. Our award-winning whole body scan is fast (under 1 hour), safe (MRI has no ionizing radiation), and non-invasive (no contrast). Our unique integrated stack of optimized hardware, software, and increasingly AI, […]
  • Your Role in Helping Us Shape the Future U.S. News & World Report is a multifaceted digital media company dedicated to helping citizens, consumers, business leaders and policy officials make important decisions in their lives. We publish independent reporting, rankings, data journalism and advice that has earned the trust of our readers and users for […]
  • Overview Growth Marketing Manager (Outbound & Lead Gen) at FlexAI. This role is 70% outbound/automation initially and will progressively move to 50/50 outbound + content & paid. If the following tools are not your day to day, it might not be a good fit: Clay, Instantly/lemlist, unify/cargo, apollo, etc. FlexAI is at the forefront of […]
  • Performance Marketing, Growth and Business Operations Manager Join to apply for the Performance Marketing, Growth and Business Operations Manager role at Atlassian. Working […]
  • THE ROLE We are seeking a Paid Search Associate Director who can confidently lead client relationships while also rolling up their sleeves to execute campaigns. This role requires a balance of strategic vision and hands‑on management. You’ll be responsible for guiding client strategy, overseeing performance, and ensuring flawless execution of paid search campaigns from end […]

Other roles you may be interested in

SEO Manager, Veracity Insurance Solutions, LLC (Remote)

  • Salary: $100,000 – $135,000
  • Lead, coach, and develop a high-performing team of SEO Specialists
  • Set clear expectations, quality standards, workflows, and growth paths across the team

Senior SEO Manager, Lunar Solar Group (Remote)

  • Salary: $80,000 – $100,000
  • Lead strategy, execution, and deliverables across 4–6 client accounts independently
  • Own end-to-end SEO strategy and execution across all core deliverables and processes

Performance Marketing Manager, Recruitics (Hybrid, Lafayette, CA)

  • Salary: $70,000 – $90,000
  • Work in platform to configure campaigns – set up budgets, targeting, creative, and run dates
  • Monitor ongoing performance to identify areas of opportunity

Marketing, Social Media & PR Manager, PARTNERS Staffing (Fort Myers, FL)

  • Salary: $75,000 – $85,000
  • Develop and execute integrated marketing campaigns for shows, content releases, events, and brand initiatives
  • Identify target audiences and create strategies to grow reach and engagement

Local Search & Listings Manager, TurnPoint Services (Remote)

  • Salary: $80,000 – $90,000
  • Own the strategy and governance for local search visibility across all business locations.
  • Develop optimization frameworks and standards for Google Business Profiles and other listing platforms.

Senior Branding Manager, rednote (Hybrid, New York, US)

  • Salary: $228,000 – $320,000
  • Define and drive rednote’s global brand strategy, shaping its positioning across key international markets
  • Lead integrated marketing initiatives end-to-end, ensuring alignment across creative development and media execution

Performance Marketing Manager, Hirewell (Remote)

  • Salary: $85,000 – $95,000
  • Paid Search: Lead daily execution and management of Google Ads. This is a “hands-on” role requiring deep platform expertise.
  • Multi-Channel Management: Oversee and optimize campaigns across Meta, LinkedIn, and Programmatic channels.

Senior Paid Media Manager, Brightly Media Lab (Remote)

  • Salary: $70,000 – $100,000
  • Directly build, manage, and optimize campaigns within Google Ads, Microsoft Ads, and Facebook Ads (Meta).
  • Serve as the lead point of contact for your book of clients, taking full ownership of their success and growth.

Marketing Specialist, The Bradford Group (Hybrid, Greater Chicago area)

  • Salary: $60,000 – $62,000
  • Launch and manage paid social campaigns primarily on Meta platforms.
  • Oversee daily budgets and performance optimizations against revenue and ROI goals, using data-driven insights to continuously improve results.

Paid Search Specialist, Maui Jim Sunglasses (Peoria, IL)

  • Salary: $65,000 – $70,000
  • Plan, set up, and manage paid search, display, and shopping campaigns on Google Ads.
  • Manage and optimize advertising budgets to achieve revenue and efficiency targets.

Note: We update this post weekly. So make sure to bookmark this page and check back.

Advertisers are testing ChatGPT ads — but uncertainty remains high

OpenAI is emerging as a new advertising channel, but early advertiser sentiment is mixed as brands grapple with limited data, unclear performance, and a rapidly evolving product.

Driving the news. Two months after launching ads in ChatGPT, advertisers are experimenting — but still lack clear measurement tools and performance benchmarks.

  • Early campaigns are largely impression-based, with little insight into outcomes.
  • CPMs have reportedly been high, with initial minimum spends in the six figures.
  • Some advertisers say the product feels early and slow to mature.

The vibe check. According to Ad Age reporting, advertiser sentiment sits somewhere between cautious optimism and frustration.

  • Optimism stems from ChatGPT’s position as a leading consumer AI platform.
  • Frustration centers on lack of transparency, targeting, and reporting.

Why we care. This report highlights both the opportunity and risk of investing in AI ad platforms early. While ChatGPT offers access to a fast-growing, high-intent audience, the lack of measurement and evolving product features make it a challenging channel to justify at scale.

It’s a signal to test thoughtfully and start building an AI strategy without overcommitting budget too soon.

The bigger picture. OpenAI’s ad push comes as it juggles multiple priorities — from AI development to enterprise growth — while facing rising competition from Google and Anthropic.

Some in the industry see OpenAI as having “cast too wide a net,” experimenting across video, commerce, and other products before refocusing. Its Instant Checkout commerce feature was quietly pulled back, while video ambitions have also lost ground to competitors.

How ads actually show up. Early tests suggest ads may influence user journeys — but not always directly.

In one example, a sponsored retailer appeared more prominently in recommendations, even when multiple options were listed. Still, platforms maintain that ads do not directly alter core answers.

Yes, but. There’s ongoing tension between consumer trust (keeping answers unbiased) and advertiser goals (increasing visibility and influence).

That balance will likely shape how AI ads evolve.

What marketers should do now. Experts say brands don’t need to rush in. Large brands may benefit from early testing; others can focus on strategy development while the space matures. The priority is understanding how AI fits into broader media and search behavior.

The bottom line. ChatGPT ads are still in their infancy — promising, but unproven — leaving advertisers to experiment carefully while waiting for the platform to catch up to expectations.

Google Ads API to require multi-factor authentication

Google is tightening security across its ads ecosystem, requiring multi-factor authentication (MFA) for API users — a move that could impact how developers and advertisers access and manage accounts.

Driving the news. Google will begin rolling out mandatory MFA for the Google Ads API starting April 21, with full enforcement expected over the following weeks.

The update applies to users generating new OAuth 2.0 refresh tokens through standard authentication workflows.

What’s changing. Users will now need to verify their identity with a second factor — such as a phone or authenticator app — in addition to their password when authenticating.

  • Existing OAuth refresh tokens will continue to work without interruption.
  • New authentications will require MFA by default.
  • Users without 2-step verification enabled will be prompted to set it up.

Why we care. This change affects how you access and manage Google Ads data through APIs and connected tools. While it improves account security and reduces the risk of unauthorized access, it may also require updates to workflows, especially for teams that regularly generate new credentials. Preparing early can help avoid disruptions.

Who’s affected. The change primarily impacts apps and workflows using user-based authentication.

  • User authentication workflows: Will require MFA for new token generation.
  • Service account workflows: Not affected, and recommended for automated or offline use cases.

The requirement also extends beyond the API to tools like Google Ads Editor, Scripts, BigQuery Data Transfer, and Data Studio.

The big picture. As ad platforms handle more sensitive data and automation, security is becoming a bigger priority — especially as API access expands across teams, tools, and integrations.

Yes, but. While the update improves protection against unauthorized access, it may add friction for teams that frequently generate new credentials or rely on manual authentication flows.

The bottom line. Google is making MFA standard for Ads API access, signaling a broader shift toward stricter security across advertising tools and workflows.

OpenAI begins rolling out ads in select markets

OpenAI is continuing its push into ad-supported monetization — a strategy it began earlier this year — by expanding ads to more countries while keeping premium tiers ad-free.

Driving the news. OpenAI is starting to roll out ads for users on Free and Go plans in Australia, New Zealand, and Canada.

  • The rollout applies only to lower-tier plans.
  • Paid tiers — including Pro, Business, Enterprise, and Education — will remain ad-free.

Why we care. This opens up a new and rapidly growing channel to reach users inside AI-driven experiences. As OpenAI expands ads into more markets, it signals early opportunities to test and understand how advertising works in conversational interfaces. It could also shape how future search and discovery happens, making it important to get in early.

The big picture. AI platforms have largely avoided traditional advertising so far, relying instead on subscriptions and enterprise deals.

This move suggests OpenAI is:

  • testing new revenue streams,
  • exploring how ads fit into conversational interfaces,
  • and balancing monetization with user experience.

Yes, but: OpenAI is clearly drawing a line between free and paid experiences — signaling that ad-free usage will remain a premium benefit.

The bottom line: OpenAI is cautiously entering the ads business, starting with limited markets and tiers as it experiments with how advertising works inside AI-driven products.

Google Ads tests direct Google Tag Manager integration for conversion setup

Google may be streamlining one of the most error-prone parts of campaign setup — conversion tracking — by reducing the need for manual tag implementation.

Driving the news. Google Ads is testing a new “Set up in Google Tag Manager” option within its conversion setup flow, according to screenshots shared by Google Ads specialist Natasha Kaurra.

The feature appears alongside existing installation methods and allows advertisers to push conversion tracking setups directly into Google Tag Manager.

What’s new. Instead of copying conversion IDs and labels between platforms, advertisers can click the new button to open a pre-filled tag setup inside GTM.

That means:

  • fewer manual steps,
  • less room for implementation errors,
  • and faster deployment across accounts.

Why we care. Conversion tracking is critical to measuring performance, and this update makes it faster and less error-prone to implement. By reducing manual steps between Google Ads and Google Tag Manager, it can help ensure data is set up correctly from the start. That means more reliable reporting and better optimization decisions.

How it works. Based on early screenshots, the flow prompts users to select a GTM container and then surfaces a suggested tag configuration ready to publish.

This could be especially useful for agencies managing multiple clients, teams working across multiple containers, or advertisers with complex tagging setups.

The bottom line. It’s a small UI change with outsized impact — making it easier for advertisers to get conversion tracking right the first time.

First seen. This update was shared by PPC News Feed, which credited Google Ads specialist Natasha Kaurra with spotting it.

Why bottom-of-funnel content is winning in AI search

Google search traffic is dropping. If you’ve spent years building organic strategies, watching it happen in real time is uncomfortable. But it’s also clarifying.

I started seeing the shift across SaaS clients. Pages that had driven steady traffic for years — educational, top-of-funnel (TOFU) content — were losing ground. Not because the content got worse, but because users no longer needed to click. AI Overviews were doing the job for them.

That forced a decision: keep defending the old model or adjust the strategy. I chose to adjust.

What became clear pretty quickly is that while informational content is losing clicks, bottom-of-funnel (BOFU) content is holding up — and in many cases, driving more qualified leads.

This isn’t just a trend. It’s a shift in how value is created through search.

The pivot: Making BOFU the priority

My approach now is straightforward: 60% to 80% of output goes toward bottom- and mid-funnel content, with the remainder covering supporting TOFU topics that fill content cluster gaps or address timely industry conversations.

When I pitched this shift to clients, the conversation was easier than I expected. I put it simply: 

  • “You have a choice between traffic and leads. If you want leads, here’s how we get there, even if it means less traffic.” 

I was upfront that overall traffic might dip. But whoever shows up is more likely to convert. That framing landed. Nobody argued for traffic when the alternative was a qualified pipeline.

The most effective bottom-of-funnel pieces are comprehensive comparison and listicle-style guides targeting high-intent queries.

One of the best examples is a guide to the best time-tracking software for construction. Before writing it, I built a reusable review methodology for the client. The guide called out pros and cons honestly, including the client’s own product, because that’s what builds credibility with readers evaluating their options.

It was factual, specific, and written for someone in the middle of a purchase decision, not someone casually browsing.

Within weeks, it became our most cited article in LLM responses. It’s now a cornerstone piece, regularly appearing in conversion paths and driving qualified leads. 

That single piece delivered more pipeline impact than a dozen informational posts from the previous quarter because it answers the question a buyer is actually asking, not the one that gets the most search volume.

Dig deeper: How to align your SEO strategy with the stages of buyer intent

TOFU isn’t dead. It just has a different job now.

I see many SEOs treating this as an either-or conversation. To be clear, I haven’t eliminated TOFU content. I’ve repositioned it.

TOFU’s job now is to build topical authority that helps BOFU pages rank. It’s the supporting structure, not the primary event. Guides and educational content:

  • Support the content cluster.
  • Establish expertise in Google’s eyes.
  • Pass internal link equity to BOFU pages.

For my clients’ content, we’ve revisited the best-performing TOFU pieces and made them work harder.

We added sections that connect the information directly to the client’s product, supported by screenshots and subject matter expert quotes. 

We also redesigned calls to action to match the context and placed them throughout the content, rather than just at the end. 

For several clients, this led to a measurable increase in visitors navigating to demo request pages, without changing the informational intent.

The key distinction: You should still produce a meaningful volume of TOFU content, but make sure it has a unique angle — something not widely known or discussed from your perspective. 

In a sea of AI-generated content, that specificity is what drives performance.

Why this works in AI-driven search

People arriving from AI platforms show up with context. They’ve already explored the problem. They’re evaluating options. This aligns with how AI Overviews are applied in search results.

AI Overviews still appear far more often for informational queries than commercial ones. Ecommerce searches trigger them far less frequently, which helps protect bottom-of-funnel content — at least for now, though coverage for commercial and transactional queries is rising quickly.

That shift in behavior changes what content performs. Informational content loses value when answers are summarized upfront, while decision-stage content becomes more useful because it helps users compare options, validate choices, and move forward.

That’s why bottom-of-funnel content holds up. It aligns with where the user is in the process, not just what they searched for.

The time tracking software comparison piece I mentioned is a clear example. It’s consistently cited when users ask about construction time tracking tools. That visibility doesn’t always show up as a click, but it appears later — in branded search, direct visits, and ultimately, leads.

The attribution problem you need to accept

Here’s the challenge: bottom-of-funnel content’s value is systematically underreported in traditional analytics.

Someone sees your solution mentioned in a ChatGPT response, researches your brand, and converts later through a direct visit or branded search. In GA4, that journey often shows up as direct traffic. It looks like SEO didn’t contribute — but it did.

That’s why I’ve shifted clients away from traffic as the primary success metric and toward a broader set of signals, including:

  • Brand search volume trends.
  • Citation frequency in LLM platforms.
  • Direct traffic movement after content publication.
  • Conversion rate changes, even when traffic stays flat.

The ROI of BOFU and LLM-optimized content is higher than what dashboards show. If you’re evaluating performance based only on immediate click attribution, you’re missing where SEO is actually creating value.

Your practical playbook for shifting to BOFU

Here’s how to turn this shift into a practical content strategy:

  • Audit your existing content for BOFU gaps: Before creating anything new, identify which high-intent, purchase-stage queries you have zero coverage on. These are often the easiest wins.
  • Build comparison content with real methodology: Create a review framework you can reuse. Be honest about pros and cons, including your client’s product. Credibility is what makes these pieces rank and get cited.
  • Retrofit your best TOFU pieces: Add product-connected sections, contextual CTAs, and subject matter expert input. Make the informational content do conversion work, too.
  • Build LLM tracking into GA4 now: A regex-based segment capturing ChatGPT, Perplexity, Claude, and other AI referrers gives you visibility into a channel most clients have zero data on.
  • Reset the success metrics conversation with clients: Traffic volume is increasingly a vanity metric. Lead quality, branded search growth, and conversion rate are what actually matter in this environment.
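For the LLM-tracking step above, here is a rough sketch of what such a referrer filter can look like. The hostname list is illustrative and incomplete (AI platforms change their referrer domains); verify the referrers actually appearing in your own data before relying on it, and paste the same pattern into a GA4 "matches regex" condition.

```python
import re

# Illustrative, non-exhaustive list of AI-assistant referrer hostnames.
# Use this as a starting point for a GA4 segment or a BigQuery filter,
# not as an authoritative registry.
AI_REFERRER_PATTERN = re.compile(
    r"(chatgpt\.com|chat\.openai\.com|perplexity\.ai|claude\.ai|"
    r"gemini\.google\.com|copilot\.microsoft\.com)",
    re.IGNORECASE,
)

def is_ai_referral(referrer: str) -> bool:
    """Return True if the referrer URL looks like an AI assistant."""
    return bool(AI_REFERRER_PATTERN.search(referrer))

print(is_ai_referral("https://chatgpt.com/c/abc"))   # True
print(is_ai_referral("https://www.google.com/"))      # False
```

The same regex body (everything inside the outer parentheses) works unchanged in GA4's regex-matching conditions, since GA4 uses RE2-compatible syntax for these simple alternations.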

AI Overviews have fundamentally changed the economics of informational content.

But that disruption creates a strategic opening. Bottom-of-funnel content has always converted better. AI is simply removing the incentive to keep over-investing in content that drives traffic without driving revenue.

The window to shift strategy is still open. It won’t stay that way.

AI traffic converts better than non-AI visits for U.S. retailers: Report

Traffic from AI sources increased 393% year-over-year in Q1 and 269% in March. But the real surprise? AI traffic is converting better than last year.

  • AI-driven visits converted 42% better than non-AI traffic in March. A year ago, AI traffic was 38% less likely to result in a purchase.

By the numbers. Traffic from AI sources showed 12% higher engagement, 48% more time on site, and 13% more pages per visit. Adobe also surveyed consumers and found that:

  • 39% have used AI for shopping. Of those, 85% said it improved the experience.
  • 66% believe AI tools provide accurate results.

What they’re saying. According to Vivek Pandya, director of Adobe Digital Insights:

  • “Notably, AI traffic continues to convert better (visits that result in purchases) than non-AI traffic, which covers channels such as paid search and email marketing.”

Yes, but. While consumer adoption is up, and traffic, engagement, and conversions are growing, many retail sites still aren’t fully optimized for AI visibility, especially on product pages, according to Adobe.

Why we care. Until now, reports have been mixed on whether AI traffic is better, equal to, or worse than organic search traffic (see our Dig deeper resources below). That may be changing, as we expected it would. Like generative AI, AI shopping today is as bad as it will ever be, meaning this channel’s value will only increase.

About the data. Adobe’s findings are based on direct transaction data from more than 1 trillion visits to U.S. retail websites. The company also surveyed more than 5,000 U.S. consumers to understand how they use AI to shop.

The report. Adobe report: U.S. retailers see surge in AI traffic, but many websites are not entirely readable by machines.

Dig deeper.

U.S. search ad revenue reached $114.2 billion in 2025

Search remained the largest force in digital advertising in 2025. However, its growth slowed as total U.S. ad revenue climbed to a record $294.6 billion.

Search still dominates. Search generated $114.2 billion, accounting for 38.8% of total digital ad revenue, according to the latest IAB/PwC Internet Advertising Revenue Report. But growth slowed to 11%, down from 15.9% in 2024, as advertisers shifted more budget into faster-growing formats and as AI began reshaping how users discover information.

Overall market growth accelerated as the year went on. It climbed from 12.2% in Q1 to 15.4% in Q4. The fourth quarter alone brought in $85 billion, even without major cyclical events like the U.S. election or the Olympics, which boosted 2024.

Video, social, and programmatic all grew faster than search. Digital video revenue jumped 25.4% to $78 billion, making it the fastest-growing major format. Social rose 32.6% to $117.7 billion, while programmatic increased 20.5% to $162.4 billion — continuing the shift toward automated, performance-driven buying.

The market is more concentrated. The top 10 companies now control 84.1% of U.S. digital ad revenue, up from 80.8% a year ago, reflecting the advantages of scale, first-party data, and AI-driven platforms.

AI is no longer just a tool layered onto campaigns. AI is increasingly shaping discovery, media buying, and measurement as consumer journeys fragment across platforms.

Why we care. Search still delivers the most scale, but it’s no longer growing the fastest. More budget is flowing into video, social, and programmatic, where automation and AI are more deeply embedded. That means more competition for budget, less visibility into performance, and a greater need to prove incrementality.

About the data. The IAB/PwC report is based on U.S. internet advertising revenue data compiled across the industry.

The report. Internet Advertising Revenue Report Full-year 2025 results (PDF)

No-JavaScript fallbacks in 2026: Less critical, still necessary

Google can render JavaScript. That’s no longer up for debate. But that doesn’t mean it always does — or that it does so instantly or perfectly.

Since Google’s 2024 comments suggesting it renders all HTML pages, many developers have questioned whether no-JavaScript fallbacks are still necessary. Two years later, the answer is clearer and more nuanced.

Google’s stance on JavaScript rendering

In July 2024, Google sparked debate during an episode of Search Off the Record titled “Rendering JavaScript for Google Search.” When asked how Google decides which pages to render, Martin Splitt said: 

  • “If it’s so expensive, how do we decide which page should get rendered and which one doesn’t?” 

Zoe Clifford, from Google’s rendering team, replied: 

  • “We just render all of them, as long as they’re HTML, and not other content types like PDFs.”

That comment quickly led developers, especially those building JavaScript-heavy or single-page applications, to argue that no-JavaScript fallbacks were no longer necessary.

Many SEOs weren’t convinced. The remark was informal, untested at scale, and lacking detail. It wasn’t clear:

  • How rendering fit into Googlebot’s process.
  • Whether pages were queued for later execution.
  • How the system behaved under resource constraints.
  • Whether Google might fall back to non-rendered crawling under load.

Without clarity on timing, consistency, and limits, removing fallbacks entirely still felt risky.


What Google’s documentation actually says

Google’s documentation now gives us a much clearer picture of how JavaScript rendering actually works. Let’s start with the “JavaScript SEO basics” page:

What Google says:

  • “Googlebot queues all pages with a 200 HTTP status code for rendering, unless a robots meta tag or header tells Google not to index the page. The page may stay on this queue for a few seconds, but it can take longer than that. Once Google’s resources allow, a headless Chromium renders the page and executes the JavaScript. Googlebot parses the rendered HTML for links again and queues the URLs it finds for crawling. Google also uses the rendered HTML to index the page.”

Google clearly states that JavaScript rendering doesn’t necessarily happen on the initial crawl. Once resources allow, a headless browser is used to parse JavaScript. 

Googlebot likely won’t click on all JavaScript elements, so this probably only includes scripts that don’t require user interactions to fire.

This is important because it tells us Google may make some basic determinations before JavaScript is rendered, via subsequent execution queues. 

If content is generated behind elements (content tabs, etc.) that Google doesn’t click, it likely won’t be discovered without no-JavaScript fallbacks.
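As a minimal illustration of that kind of fallback (the markup below is hypothetical, not a prescribed pattern): ship every tab's content in the initial HTML and let JavaScript merely enhance the presentation, instead of injecting panel content when a tab is clicked.

```html
<!-- Illustrative only: content for every tab is present in the initial
     HTML, so it is discoverable even if the tab-switching JS never runs
     or Googlebot never clicks a tab. -->
<section id="product-specs">
  <h2>Product specs</h2>
  <div class="tab-panel" data-tab="dimensions">
    <p>Dimensions content, fully present in the source HTML.</p>
  </div>
  <div class="tab-panel" data-tab="materials">
    <p>Materials content, also in the source HTML.</p>
  </div>
  <!-- JavaScript, when available, hides inactive panels and wires up
       the tab clicks; without JS, all panels simply remain visible. -->
</section>
```

The design choice is progressive enhancement: the fallback is not a separate page but the default state of the same page.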

Looking at Google’s “How Search works” documentation:

The language is much simpler. Google states it will attempt, at some point, to execute any discovered JavaScript. There’s nothing here that directly contradicts what we’ve seen so far in other Google documentation.

On March 31, Google published a post titled “Inside Googlebot: demystifying crawling, fetching, and the bytes we process,” which further clarifies JavaScript crawling.

The notes on partial fetching are particularly interesting. Google will only crawl up to 2MB of HTML. If a page exceeds this, Google won’t discard it entirely, but instead examines only the first 2MB of returned code.

Google explicitly states that extreme resource bloat, including large JavaScript modules, can still be a problem for indexing and ranking. 

If your JavaScript approaches 2MB and appears at the top of the page, it may push HTML content far enough down that Google won’t see it. The 2MB limit also applies to individual resources pulled into a page. If a CSS file, image, or JavaScript module exceeds 2MB, Google will ignore it.
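The 2MB cap is straightforward to audit against a saved HTML response. A minimal sketch (the 2MB figure comes from Google’s post; the helper function and its 90% warning threshold are illustrative assumptions, not anything Google publishes):

```python
# Sketch: flag response bodies that approach Googlebot's 2MB partial-fetch
# limit. The limit value reflects Google's documentation; the "near_limit"
# warning at 90% is an arbitrary illustrative threshold.

GOOGLEBOT_FETCH_LIMIT = 2 * 1024 * 1024  # 2MB

def check_fetch_limit(body: bytes, limit: int = GOOGLEBOT_FETCH_LIMIT) -> dict:
    """Report how much of a response body falls within the crawl limit."""
    size = len(body)
    return {
        "size_bytes": size,
        "within_limit": size <= limit,
        "truncated_bytes": max(0, size - limit),  # bytes Google would ignore
        "near_limit": size > limit * 0.9,         # warn before truncation
    }

# Example with a synthetic 3MB body: the final 1MB would never be parsed.
report = check_fetch_limit(b"x" * (3 * 1024 * 1024))
```

The same check applies per resource: run it against each CSS file, image, or JavaScript module the page pulls in, since oversized individual resources are ignored too.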

We’re beginning to see that Google’s claim that it renders all pages comes with important caveats. 

In practice, it seems unlikely that a page with no consideration for server-side rendering (SSR) or no-JavaScript fallbacks would be handled optimally. This highlights why it’s risky to take comments from Googlers at face value without following how the details evolve over time.

The question we opened with is also evolving. It’s less “Do I need blanket no-JavaScript fallbacks in 2026?” and more “Do I still need critical-path fallbacks and resilient HTML within my application?”

Google’s recent search documentation updates add more context:

Google has recently softened its language around JavaScript. It now says it has been rendering JavaScript for “multiple years” and has removed earlier guidance that suggested JavaScript made things harder for Search. 

It also notes that more assistive technologies now support JavaScript than in the past. 

Within that same documentation, Google still recommends pre-rendering approaches, such as server-side rendering and edge-side rendering.

So while the language is softer, Google isn’t suggesting developers can ignore how JavaScript affects SEO.

Looking again at the December 2025 updates:

Google states that non-200 pages may not receive JavaScript execution. This suggests no-JavaScript fallbacks for internal linking within custom 404 pages may still be important.

Google also notes that canonical tags are processed both before and after JavaScript rendering. If source HTML canonicals and JavaScript-modified canonicals don’t match, this can cause significant issues. Google suggests either omitting canonical directives from the source HTML so they’re only evaluated after rendering, or ensuring JavaScript doesn’t modify them.

These updates reinforce an important point: even as Google becomes more capable at rendering JavaScript, the initial HTML response and status code still play a critical role in discovery, canonical handling, and error processing.

Dig deeper: Google removes accessibility section from JavaScript SEO section

Get the newsletter search marketers rely on.


What the data shows

JavaScript rendering is introducing new inconsistencies across the web, according to recent HTTP Archive data:

We can see that since November 2024, the percentage of crawled pages with valid canonical links has dropped.

Via the HTTP Archive’s 2025 Web Almanac:

About 2-3% of rendered pages exhibit a “changed” canonical URL, something Google’s documentation explicitly states can be confusing for its indexing and ranking systems. That 2-3% doesn’t explain the larger drop in valid canonical deployment since November 2024.

Other factors are likely at play, such as the adoption of new CMS platforms that don’t properly handle canonicals. The rise of vibe-coded websites using tools like Cursor and Claude Code may also be contributing to these issues across the web.

In July 2024, Vercel published a study to help demystify Google’s JavaScript rendering process:

It analyzed more than 100,000 Googlebot fetches and found that all resulted in full-page renders, including pages with complex JavaScript. However, 100,000 fetches is a relatively small sample given Googlebot’s scale. 

The study was also limited to sites built on specific frameworks, so it’s unwise to assume Google always renders pages perfectly. It’s also unclear how deeply those renders were analyzed.

It does suggest that Google attempts to fully render most pages it encounters. Broadly speaking, Google can generate JavaScript-modified renders, but the quality of those renders is still up for debate. As noted earlier, the 2MB page and resource limits still apply.

Because this study dates to mid-2024, Google’s updated 2025–2026 documentation should take precedence wherever the two contradict each other.

Vercel also published a notable finding:

  • “Most AI crawlers don’t execute JavaScript. We tested the major ones (ChatGPT, Claude, and others), and the results were consistent: none of them render client-side content. If your Next.js site ships critical pages as JavaScript-dependent SPAs, those pages are inaccessible to the systems shaping how people discover information.”

So even if Google is far more capable with JavaScript than it used to be, that’s not true across the broader web ecosystem. Many systems still rely on HTML-first delivery. That’s why you shouldn’t rush to remove no-JavaScript fallbacks — they may still be critical to your future visibility.

Cloudflare’s 2025 review is also worth noting:

Cloudflare reported that Googlebot alone accounted for 4.5% of HTML request traffic. While this doesn’t directly explain how Google handles JavaScript, it does highlight the scale at which Google continues to crawl the web.

Dig deeper: How the DOM affects crawling, rendering, and indexing

No-JavaScript fallbacks in 2026

The question we set out to answer was whether no-JavaScript fallbacks are required in 2026.

Google is far more capable with JavaScript than in previous years. Its documentation shows that pages are queued for rendering, and that JavaScript is executed and used for indexing. For many sites, heavy reliance on JavaScript is no longer the red flag it once was.

However, the details of Google’s rendering process still matter. Rendering isn’t always immediate. There are resource constraints, and not all behaviors are supported.

At the same time, the broader web ecosystem hasn’t necessarily kept pace with Google. The risk of removing all no-JavaScript fallbacks hasn’t disappeared — it’s just changed shape.

Key takeaways:

  • Google doesn’t necessarily render JavaScript on the first crawl. There’s a rendering queue, and execution happens when resources allow.
  • Technical limits still exist, including a 2MB HTML and resource cap, and limited interaction with user-triggered elements.
  • Non-200 responses may not receive rendering treatment, which keeps basic HTML and linking important in some cases.
  • Differences between raw HTML and rendered output still exist at scale across the web.
  • Google’s guidance still leans toward SSR (server-side rendering), pre-rendering, and resilient HTML for critical content.
  • Other crawlers, especially AI-driven ones, often don’t execute JavaScript at all. As these systems become more important, the need for fallbacks may increase again.
  • Blanket, site-wide no-JavaScript fallbacks aren’t universally required in 2026, but critical content, links, and signals shouldn’t depend entirely on JavaScript. Many modern crawlers still rely on HTML-first delivery.

For now, no-JavaScript fallbacks for critical architecture, links, and content are still strongly recommended, if not required going forward.
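One crude way to audit this is to compare the links present in the raw HTML against the links that only appear after rendering. A minimal sketch (regex extraction is a deliberate simplification, and the URLs are hypothetical; a real audit would use an HTML parser plus a headless browser’s rendered DOM):

```python
import re

# Sketch: find internal links that exist only after JavaScript runs, and are
# therefore invisible to HTML-first crawlers such as most AI bots.
HREF_RE = re.compile(r'<a\s[^>]*href=["\']([^"\']+)["\']', re.IGNORECASE)

def links_in(html: str) -> set[str]:
    """All href targets found in an HTML string."""
    return set(HREF_RE.findall(html))

def js_dependent_links(raw_html: str, rendered_html: str) -> set[str]:
    """Links present in the rendered DOM but missing from the raw HTML."""
    return links_in(rendered_html) - links_in(raw_html)

# Hypothetical example: "/pricing" is injected client-side.
raw = '<a href="/home">Home</a>'
rendered = '<a href="/home">Home</a><a href="/pricing">Pricing</a>'
```

Any URL this surfaces is a candidate for a server-rendered fallback, since crawlers that don’t execute JavaScript will never discover it.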

Your ROAS looks great — but is it actually driving growth?


An ecommerce company hires your PPC agency to explore paid search. A solid plan follows, and after approval, the campaigns go live. Soon, you’re seeing stellar results: high conversion volumes and a healthy ROAS.

On the surface, the strategy is a resounding success.

But look closer.

Some of these conversions might have occurred anyway via direct or organic search traffic — meaning the campaigns may not be driving real growth. Too often, this goes unmeasured.

To truly understand performance, you need to look at incremental lift and marginal ROAS.

The truth about ROAS

Perhaps you’ve heard about eBay’s paid search experiment? They were spending heavily on brand PPC ads. Then they ran a controlled test, turning those ads off for a portion of users to measure impact.

Organic traffic picked up most of those conversions, with minimal impact on revenue. But guess what? Despite the clear results, eBay turned the branded ads back on. Fear, or smart? You tell me.

With search becoming increasingly automated, and the customer journey spreading across more surfaces than ever, attributing conversions to the right channels is harder than ever. Advertising platforms are quick to claim credit for these conversions, but be skeptical.

What most platforms report is attributed return, not causal lift. In other words, ROAS tells you how much revenue the platform says it influenced; it doesn’t tell you how much of that revenue would have happened without the ads.

When it comes to black-box automation like Performance Max and Advantage+, platforms have become exceptionally good at one thing: finding the path of least resistance to a conversion. They aren’t necessarily finding new customers. They’re often just becoming the most expensive touchpoint in a journey that was already destined to convert.

Without measuring incrementality, automation simply amplifies non-incremental signals, such as:

  • Brand search campaigns capturing existing demand.
  • Retargeting campaigns hitting users who were seconds away from purchasing.
  • Reporting that makes “safe” channels appear more valuable than they truly are.

Dig deeper: Paid media efficiency: How to cut waste and improve ROAS

Incrementality tells you whether marketing created something extra

Incrementality is causal lift — what changed because the campaign existed, typically measured by comparing exposed groups with holdout or control groups. So what did this campaign actually drive that wouldn’t have happened otherwise?

Even though you may not want to admit it, this is a much more useful lens for budget allocation than platform attribution alone.

A channel can have a fantastic in-platform ROAS and still generate a weak incremental impact. Why? Because it might be harvesting demand rather than creating it.

If you want to know whether a campaign genuinely drove growth, the better question is incrementality.

But it’s still not the full answer.

To decide what to do next, you also need marginal ROAS.

Dig deeper: Why incrementality is the only metric that proves marketing’s real impact


Marginal ROAS tells you what to do next

A channel may be incremental. But that still doesn’t tell you where the next $10,000 should go. That’s a marginal ROAS question.

Marginal ROAS measures the return on the next unit of spend, not the average return across all spend. Here’s how it works: the first tranche of budget often performs well, then the next performs worse.

Keep going, and the final dollars become dramatically less efficient than the average suggests. The same applies to CPA metrics: a blended CPA may look acceptable, while the last dollars spent were far less efficient, leaving many advertisers bidding beyond where they should.

Imagine you spend $10,000 and generate $50,000 in revenue (500% ROAS). You decide to scale and spend an additional $5,000. This extra spend generates only $5,000 in additional revenue.

  • Your new average ROAS: 366% 
  • Your marginal ROAS: 100% (You essentially traded $1 for $1.)

In this scenario, the last $5,000 you spent was entirely wasted, even though the total “average” performance still looks decent on your dashboard.
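The scenario above reduces to a few lines of arithmetic. A quick sketch using the article’s numbers (the `roas` helper is illustrative):

```python
# The average-vs-marginal ROAS arithmetic from the scenario above.

def roas(revenue: float, spend: float) -> float:
    """ROAS as a percentage: revenue returned per unit of ad spend."""
    return revenue / spend * 100

initial_spend, initial_revenue = 10_000, 50_000   # 500% ROAS
extra_spend, extra_revenue = 5_000, 5_000         # the scaled-up tranche

average_roas = roas(initial_revenue + extra_revenue,
                    initial_spend + extra_spend)   # blended dashboard view
marginal_roas = roas(extra_revenue, extra_spend)   # the last tranche only

# average_roas ~ 366.7 still looks healthy; marginal_roas = 100.0 means the
# extra spend traded $1 for $1 -- a loss once costs are counted.
```

The blended figure is what most dashboards show, which is exactly why the wasted tranche stays hidden.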

This is the trap of average ROAS. It makes a channel look scalable when it may only be efficient at lower spend levels, and it hides the difference between profitable core demand capture and weak incremental expansion.

To make better decisions, you need to look further. Platform ROAS helps with in-platform optimization, incrementality shows whether campaigns actually created value, and marginal ROAS tells you whether more budget should go there.

A strong ROAS can signal true efficiency, or it can mean the platform is capturing demand that would have converted anyway. That’s why you should focus more on incrementality tests.

Don’t ask whether the channel has been efficient. Ask whether the next dollar is efficient enough — that’s what determines smart scaling.

Dig deeper: The marketing measurement flywheel: A 4-step framework for proving impact

Options for incrementality testing

You don’t need a perfect measurement lab before you start. Geo tests, holdouts, audience exclusions, and controlled spend reductions can all teach you more than another month of attribution debates.

  • Geo-split testing: Divide your markets into two comparable geographic groups, keep your ads running in the “test” group, and turn them off in the “control” group. The difference in total revenue between the two regions reveals the true incremental lift of your ads.
  • Search lift tests (holdouts): Use platform tools to create holdout groups, a small percentage of users who are intentionally not shown your ads. By comparing their behavior to the exposed group, you can see the direct impact of your (for example) Search or YouTube campaigns.

Beyond these, you can also test the impact of remarketing, branding, awareness campaigns, or additional social channels.
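The geo-split calculation itself is simple. A sketch with hypothetical figures (the function names and all numbers are illustrative, and this ignores noise, seasonality, and statistical testing, which a real geo experiment must handle):

```python
# Sketch of the geo-split arithmetic: ads stay on in the test markets and
# are paused in comparable control markets. All figures are hypothetical.

def incremental_lift(test_revenue: float, control_revenue: float) -> float:
    """Revenue attributable to the ads running in the test markets."""
    return test_revenue - control_revenue

def incremental_roas(lift: float, ad_spend: float) -> float:
    """Return on spend counting only incremental revenue, as a percentage."""
    return lift / ad_spend * 100

# Hypothetical example: comparable regions, $20k spend in the test group.
lift = incremental_lift(test_revenue=120_000, control_revenue=95_000)
iroas = incremental_roas(lift, ad_spend=20_000)
# lift = 25_000 and iroas = 125.0, likely far below what platform-attributed
# ROAS for the same campaigns would claim.
```

Comparing this incremental ROAS against the platform-reported figure for the same period shows how much of the attributed revenue was demand that would have converted anyway.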

The real shift: From reporting performance to allocating capital

Too many marketing teams still use measurement to explain what happened. The better use of measurement is to decide what should happen next.

Incrementality helps you understand whether a channel created value. Marginal ROAS helps you understand whether more investment is justified. Together, they move marketing measurement out of the reporting function and into capital allocation.

ROAS tells you who gets credit. Incrementality tells you what actually moved. Marginal ROAS tells you where the next budget should go. But be aware: incrementality is not the same as attribution. Attribution tells you who, or which channel, should get the credit, while incrementality shows you whether or not it was worth it.

Dig deeper: How to take your marketing measurement from crawl to sprint

How to optimize for keywords you can’t use


Sometimes the keywords you need to rank for are the ones you’re not allowed to use. Whether it’s trademark restrictions, brand guidelines, or industry stigma, you might be asked to capture demand without using the exact terms people search. 

Here’s how to navigate that challenge, align with search behavior, and still build visibility.

When the keyword you need is off-limits

It’s a common scenario in SEO:

  • “We want to rank for (insert super competitive search term),” and, in the next breath, “Don’t use (that exact same phrase) on the page.”

My very first SEO job, over 10 years ago, set the goal of ranking in the top 3 for the term “custom koozies.” I’ve been in heated debates over the proper term for these drink coolers.

In my household, they were called “coolie cups.” The general term is “can coolers,” but search volume tells us the vast majority of the U.S. would call these products “koozies.” 

Search volume data settled the debate, but Koozie® was a registered trademark. We worked our way to the top of the search results without relying on the restricted term as the primary on-page language.

A few years later, I landed at a marketing agency that specializes in the senior living industry. There were many new terms to familiarize myself with: assisted living, independent living, skilled nursing, and continuing care retirement community (CCRC), among others.

Keyword research showed that users were searching “nursing home,” but it turned out that many of the organizations had begun to steer away from the term “nursing home” because of its negative connotations.

The problem is, they’re a nursing home, and that’s what real people call them. I felt like I was having déjà vu, with a new goal of ranking for a term that I wasn’t allowed to use.

Dig deeper: Branded search and SEO: What you need to know

How to rank for keywords you can’t use

You don’t need to use the exact keyword to rank for it, but you do need to send the right signals when a term is restricted, discouraged, or off-limits — even if it reflects how people actually search.

1. Pull the data and confirm direction

In some cases, you can get an “aha” moment just by showing the data. 

When I tell clients that “skilled nursing near me” has 4,400 monthly search volume, but “nursing home near me” gets searched 27,100 times per month, it sometimes softens their stance. 

Pulling local search volume or localized search terms can be beneficial, too. Do the research and follow the data.

It’s important to get clear on exactly how off-limits a term is. Is it acceptable to use it in non-focal page copy with a different primary term, or is it OK to use it alongside a preferred term?

Confirm what’s allowed versus unacceptable use.

2. Use all the terms around it

Create a list of terms related to your primary term, and be sure to hit on those, too. For Koozies, this would be “beer,” “drink,” “keeping your drink cold,” and common uses such as “bachelorette party” and “wedding.” These help build context for search engines.

3. Use similar terms and break down phrases

Consider whether there are similar terms to the primary term, and be sure to use those, too. For Koozies, we could use “cozies” and “coolies.”

If your primary term is more than one word, use each individual word frequently. For nursing homes, we can discuss “nursing” care and use “home” throughout the page.

4. Use the term indirectly

This tactic involves referencing the term on the page, but not necessarily directly describing your product.

In the case of nursing homes, you could say “More than a nursing home” in a header, or “Looking for a nursing home in Ann Arbor?” in the page copy.

Dig deeper: Semantic SEO: How to optimize for meaning over keywords


5. Incorporate the product that can’t be named onto the page

This was key for the Koozies situation. The non-Koozie® brand products couldn’t be called Koozies, but when we added a Koozie brand product alongside the best-selling non-Koozie brand products, suddenly we could call this category “Can Coolers & Koozies.” 

The average person wouldn’t consider these separate products, but, officially, we did.

6. Get creative with anchor text

The text that links to your page can significantly influence how search engines understand it.

Consider where you can control the anchor text, and use your primary term in both off-site and internal linking.

7. Use the term in non-visible elements

Alt text is perfect for keyword placement for those terms that the industry frowns upon but that are publicly accepted. This avoids the text appearing on the page while still describing the product. I’d use caution with trademark terms when using this tactic to avoid misleading or violating trademark guidelines.

Don’t sleep on title tags. This might be the most important tactic: find a way to get your primary term in the title tag.

For those frowned-upon terms, this can be simple if you have approval to use the term in non-focal areas. Although the title tag is the first thing someone may see in the search results, it’s not necessarily visible on the page, making it a strong opportunity to balance the language of the searcher with your brand voice.

For trademarked products, this works well with tactic five. If possible, including the trademarked products on the named page allows you to put both the trademarked term and the generic term in the title tag.

8. Add definitions

Defining terms on your website is a great way to incorporate them and clarify the relationship between your offering and the common terms. Definition-focused content is great for SEO and AI visibility.

Dig deeper: The shift to semantic SEO: What vectors mean for your strategy

Your game plan for off-limits terms

Make sure to seek legal counsel’s approval for any tactics around trademarked terms. They can help give you guidelines and rules for clarity.

Try all of these tips, or a combination of a few, to help your site start ranking for those coveted terms that you aren’t allowed to use. Gather the data, create a strategic approach, test, and refine. 

Microsoft makes it easier to import Google PMax campaigns

Microsoft Ads: How it compares to Google Ads and tips for getting started

Microsoft Advertising is rolling out a slate of updates aimed at making Performance Max campaigns easier to manage, measure, and migrate — especially for advertisers already using Google Ads.

Driving the news. Microsoft now lets advertisers import Google PMax campaigns that use new customer acquisition (NCA) goals, a feature that has been generally available in Microsoft Advertising since early this year.

The update is now live for all advertisers.

That means marketers can more easily port over campaigns designed to prioritize first-time buyers without rebuilding them from scratch.

What’s new. Microsoft says imported Google PMax campaigns with NCA goals will carry over if they don’t already exist in the advertiser’s account. Existing Microsoft NCA settings won’t be overwritten.

For audience lists:

  • Google website visitor segments will convert into Microsoft remarketing lists.
  • Google’s “all visitors” and “all converters” lists will map to Microsoft equivalents.
  • Unsupported lists, like Customer Match, will prompt advertisers to use fallback options.

Microsoft also says it takes a more conservative approach to “unknown” customers, classifying them as existing customers to avoid overcounting new customer conversions.

Why we care. This could make cross-platform campaign expansion faster and lower the friction of testing Microsoft’s PMax inventory by removing the need to rebuild campaigns from scratch. The added landing page reporting and search term visibility also give marketers better insight into what’s driving performance, which can help improve optimization and budget decisions.

More visibility for PMax. Microsoft is also adding landing page (Final URL) reporting for PMax campaigns. Advertisers can now see spend, clicks, impressions, conversion value, and ROAS by landing page.

They can also segment by campaign, asset group, and other dimensions.

Microsoft also said search term reporting is becoming more visible by default, with more transparency updates — including auction insights and added publisher URL metrics — planned later.

Other key updates:

  • Seasonality adjustments now support portfolio bid strategies, expanding a tool advertisers use for short-term events like promotions.
  • Campaign name limits are increasing from 128 to 400 characters, helping agencies and enterprise teams manage naming conventions at scale.
  • Autogenerated assets are expanding to underbuilt Responsive Search Ads to improve ad relevance and performance.
  • Merchant Center users can now update store names and domains directly without contacting support.

The bottom line. These updates make it easier to scale across platforms, save time on campaign setup, and get better visibility into what’s actually driving performance — giving advertisers more control over both efficiency and results.

ChatGPT citations reward ranking and precision over length: Study


ChatGPT citations favor pages that rank well, match the query in their headings, and stay tightly focused, according to an AirOps study of 16,851 queries. The top retrieval result was cited 58% of the time, and pages that answered the main query more narrowly outperformed broader, more comprehensive guides.

Why we care. This study clarifies how to earn ChatGPT citations: win retrieval, mirror the query in your headings, and answer one question extremely well. In this study, that mattered more than breadth.

The findings. Retrieval rank was the strongest signal. Pages in the top search position were cited 58.4% of the time, versus 14.2% for pages in position 10.

  • Heading relevance was the strongest on-page factor. Pages with the strongest heading-query match were cited 41.0% of the time, compared with roughly 30% for weaker matches.
  • Focused pages also beat comprehensive ones. Pages that answered the main query more narrowly outperformed broader, more comprehensive guides, undercutting the usual “ultimate guide” approach.

What drove ChatGPT citations. In this study, pages that won citations usually ranked well, used headings that closely matched the query, and stayed focused on answering it.

  • Structure helped, but only slightly: Pages with JSON-LD markup posted a 38.5% citation rate versus 32.0% for pages without it, and articles with 4 to 10 subheadings performed best.
  • Beyond a certain point, length hurt performance: Pages between 500 and 2,000 words performed best, but pages longer than 5,000 words were cited less often than pages under 500 words.

Freshness helps, up to a point. Pages published 30 to 89 days earlier performed best, while pages newer than 30 days performed worse. This suggests new content may need time to build retrieval signals.

  • Pages more than 2 years old were cited less often, which suggests that content refreshes could help if you’re already ranking for the right queries.

About the data. AirOps said it scraped ChatGPT’s interface, not the API, and analyzed 50,553 responses generated from 16,851 unique queries run three times each. The dataset included 353,799 pages and more than 1.5 million fan-out detail rows across 10 verticals and four query types.

The study. The Fan-Out Effect: What Happens Between a Query and a Citation

Google AI Mode in Chrome now lets you search deeper with fewer tabs

Google announced Chrome updates that let searchers use AI Mode in a more engaging, deeper way. Chrome lets you do it all without switching tabs and potentially losing your place.

What’s new. Chrome added three new features:

  • Search side-by-side: In AI Mode on Chrome desktop, clicking a link opens the webpage next to AI Mode. That makes it easier to visit relevant sites, compare details, and ask follow-up questions without losing the context of your search. Here’s what it looks like:
  • Search across your tabs: On Chrome desktop or mobile, you can tap the new “plus” menu on the New Tab page, or the existing plus menu in AI Mode, to add recent tabs to your search. That lets AI Mode deliver more tailored responses and suggest more sites to explore.
  • Multi-input and easy tool access: You can also mix and match multiple tabs, images, or files like PDFs and bring that context into AI Mode. Tools like Canvas and image creation are also available wherever you see the new plus menu in Chrome.

Why we care. These new Chrome-specific features for U.S. users unlock more AI Mode capabilities. They’re limited to Chrome for now, but they show the direction Google is taking AI Mode.


Gemini helped Google block more than 99% of bad ads before they ran

Google is making Gemini a core part of ad enforcement, saying the AI upgrade helped catch more scams while sharply reducing mistaken suspensions of legitimate advertisers. The move shows how quickly ad safety is turning into an AI fight over speed, scale, and accuracy.

The details. In its 2025 Ads Safety Report, Google said it blocked or removed 8.3 billion ads and suspended 24.9 million advertiser accounts last year. It said more than 99% of policy-violating ads were stopped before they ran.

  • Google credited Gemini with cutting incorrect advertiser suspensions by 80%, processing 4x more user reports than the year before, and spotting scam signals faster by better understanding ad intent.
  • Scams were a major focus. Google said it removed 602 million scam-related ads and suspended 4 million scam-linked accounts.

By the numbers:

  • 602 million scam-related ads removed
  • 4 million scam-linked accounts suspended
  • 4.8 billion ads restricted
  • 480 million web pages blocked or restricted
  • 245,000+ publisher sites actioned
  • 35 policy updates made in 2025

The U.S. picture: Google said it removed 1.7 billion ads and suspended 3.3 million advertiser accounts in the U.S. in 2025. The most common violations included abuse of the ad network, misrepresentation, sexual content, personalization violations, and dating and companionship ads.

Why we care. This directly affects whether campaigns launch, stay live, or get flagged. Google is signaling that AI will play a bigger role in deciding which ads run and which accounts get stopped. For advertisers, that raises the stakes on policy compliance while also promising fewer costly false suspensions.

How it works: Google said Gemini analyzes hundreds of billions of signals, including account age, behavior patterns, and campaign activity, to detect malicious intent earlier than older systems built more heavily around keywords and rule matching.

The company also said that by the end of 2025, most Responsive Search Ads would be reviewed instantly at submission, blocking harmful ads before launch. It plans to expand that capability to more formats this year.

Yes, but. Faster automated enforcement does not always mean smoother enforcement. Some advertisers in the U.K. and U.S. have recently reported bulk ad disapproval alerts despite finding no actual policy issues. That adds pressure on Google to prove tighter AI enforcement will not create new disruptions for legitimate brands.

Bottom line: Google wants advertisers to see Gemini as both shield and filter — tougher on scams, but more precise with legitimate accounts. The real test is whether that balance holds as enforcement gets faster and more automated.

Google’s blog post. Gemini is stopping harmful ads before people ever see them

Why your website is now the source of truth in local AI search


Open ChatGPT, then search for a local business you know has a strong online presence. Ask for a recommendation in that category. Chances are, it comes up. If you check what the AI cites as sources, you’ll almost certainly find the business’s own website in the mix.

That tells you something important: AI doesn’t conjure answers out of thin air. It pulls from whatever it can find. If your website isn’t the best, most complete, most authoritative source of information about your business, the AI will assemble its answer from scraps. You lose control of your own narrative.

That’s what’s driving a growing question among business owners and marketers: “Do I even need a website anymore? If AI answers everything, why does it matter?”

Your website isn’t just a marketing tool anymore. It’s a source document. AI treats it as an authoritative input. The real question is who gets to define your business: you or someone else. Here’s what’s changing, where conventional wisdom falls short, and what to do about it.

Zero-click doesn’t mean zero opportunity

A lot of marketers are seeing the same thing right now: impressions holding steady or rising, but clicks dropping. People get what they need without ever landing on a page, leading some to declare websites obsolete. That’s the wrong read.

Fewer clicks don’t mean less importance. They mean the nature of the click has changed. Look at where AI Overviews actually appear.

According to our analysis of Ahrefs data, of the 46 million+ keywords that trigger an AI Overview, nearly 99% are informational. Navigational keywords account for just 0.13%. Someone wanted a quick fact, got it, and moved on. Those were never high-intent visits anyway.

The clicks that drive revenue, the ones tied to bookings, calls, purchases, and consultations, still happen. Commercial and transactional keywords make up just 12.5% and 3.5% of AI Overview triggers, respectively. 

(Note: These percentages exceed 100% in total because keywords can carry multiple intent classifications; a single keyword can be both informational and commercial, for example.)

Those are exactly the queries where people are closest to a decision. They just happen further down the funnel, after a recommendation has already been made. When someone is ready to decide, they validate and check the website.

Dig deeper: Your homepage matters again for SEO — here’s why

AI recommends, your customer decides. Know the difference.

When someone asks an AI assistant, “Who’s the best plumber near me?”, the AI might surface a few names. It’s pattern-matching based on reviews, location signals, website content, and business profile data. It’s offering a starting point, not a final verdict.

The AI isn’t picking up the phone or handing over a credit card. Especially for high-stakes local decisions (a contractor in your home, a doctor for your kid, a mechanic for your car), most people aren’t going to act on an algorithm’s suggestion without doing their own digging first.

What actually happens after the AI recommends? The customer: 

  • Googles the business. 
  • Reads the reviews. 
  • Looks at photos. 
  • Checks the website to see if you offer exactly what they need, and at a price they can stomach.

That validation phase is where decisions are made. And your website is at the center of it. AI might have gotten you in the door, but your website is what closes it.

Dig deeper: If you can’t say what problem your brand solves, AI won’t either

AI is actually making your website more valuable

AI systems are reading your content to determine what you do, who you serve, and how you help. They’re cross-referencing your site with your Google Business Profile, directory listings, and reviews to ensure consistency. 

When everything lines up, they gain confidence recommending you. When it doesn’t, you get skipped. This means your website is now effectively a source document for AI.

Either it provides clear, structured information, or AI fills the gaps with third-party content — a stale Yelp review from 2019, an outdated directory listing with the wrong hours, or a competitor’s blog post that happens to rank well.

I know which one I’d rather have the AI pulling from.

Dig deeper: Why local SEO is thriving in the AI-first search era

The visibility gap between traditional search and AI is enormous

If you want a sense of how selective AI is compared to traditional search, SOCi’s 2026 Local Visibility Index, which analyzed nearly 350,000 locations across 2,751 multi-location brands, puts it starkly:

  • Only 1.2% of locations were recommended by ChatGPT.
  • 11% by Gemini.
  • 7.4% by Perplexity.
  • 35.9% appeared in Google’s traditional local 3-pack.

AI is up to 30 times more selective than traditional local search. Here’s the kicker: strong performance in the local pack doesn’t guarantee AI visibility. 

SOCi found that in retail, only 45% of brands leading in traditional local search also appeared in AI recommendations. More than half were invisible to AI entirely.

The brands making it into AI recommendations? 

The ones with accurate, consistent information across platforms, strong review volume and sentiment, and well-structured website content. That last one is where most local businesses are leaving the most value on the table.

Your website is the only place you control the narrative

Everywhere else — Google, Yelp, review sites, social media, and AI summaries — you’re at the mercy of other people’s opinions and platform algorithms. You don’t get to decide what gets shown or how it’s framed.

Your website is different. You decide what to highlight, the story to tell, and the objections to address. You can showcase what makes you different and guide visitors exactly where you want them to go.

More importantly, you can feed AI the narrative you want it to use. If your site has well-structured service pages, detailed FAQs, and content that answers real questions your customers ask, AI can pull directly from that when generating responses. You’re essentially writing your own introduction.

On the flip side, if your site is thin or generic, AI fills in the blanks with whatever else it can find. You lose the ability to define yourself.

Dig deeper: Your website still matters in the age of AI

What to actually do about it

This doesn’t require a rebuild, just more intentional structure and content. Here’s where to focus.

Treat your website as a source of truth

Stop writing vague claims like “we’re the best in the business.” AI doesn’t know what to do with that. Write specific, factual, helpful content about what you do, who you serve, and what results you deliver.

Every piece of information on your website — your services, hours, location, and pricing approach — should align with what’s on your Google Business Profile and across your directory listings.

As Search Engine Land contributor Will Scott notes: “Disambiguation through context is critical. When they’re building their ontologies, their map of relationships of knowledge, consistency matters a lot.”

Structure your content so AI can actually read it

AI reads for structure, not just keywords. An AirOps analysis of 217,508 retrieved pages found that only 15% of the pages ChatGPT retrieves actually earn a citation in the response.

Being crawled isn’t enough. How your content is organized determines whether it gets used. That means:

  • Schema markup: Specifically LocalBusiness, FAQPage, and Service schemas. This markup acts as a cheat sheet, telling AI and search engines exactly what your business is, what it offers, and where it’s located.
  • Clear headings and short sentences: Use H2s and H3s to break content into scannable sections, and keep your sentences tight. The AirOps research found that pages averaging 11 to 14 words per sentence had roughly a 7% higher likelihood of being cited, likely because shorter sentences are easier for AI to parse and extract cleanly. Don’t bury critical information in long paragraphs.
  • An FAQ section: Built around the actual questions you hear in emails, calls, and consultations. Write answers in natural language. This directly mirrors how people search conversationally, and AI loves it. The same research found that pages with 7 to 26 list sections were 6% to 15% more likely to earn a citation.
  • Individual service pages: Not one catch-all “Services” page. Separate pages for each service with details about what’s included, who it’s for, and what to expect. Pages with 5 to 7 statistics supporting their claims had a 20% higher likelihood of being cited, so don’t just describe your services, back them up with specific, concrete details AI can confidently pull from.
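As a sketch of the first item above, here is what minimal LocalBusiness JSON-LD might look like, generated in Python and wrapped in the script tag you would place in a page. Every business detail below is a placeholder, not a real recommendation:

```python
import json

# Minimal LocalBusiness JSON-LD sketch. All business details are
# placeholders; swap in your own name, address, phone, and hours.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Plumbing Co.",  # hypothetical business name
    "url": "https://www.example.com",
    "telephone": "+1-555-010-0000",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Springfield",
        "addressRegion": "IL",
        "postalCode": "62701",
    },
    "openingHours": "Mo-Fr 08:00-17:00",
}

# Build the <script> tag that would go in the page's <head>.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(local_business, indent=2)
    + "\n</script>"
)
print(snippet)
```

The same pattern extends to FAQPage and Service types; the key is that the values in the markup match what is visible on the page and on your Google Business Profile.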

Write for your customer’s questions

Most business websites are written for the business, not the customer. Corporate speak, vague value propositions, and industry jargon nobody searched for. Customers don’t search for buzzwords. They search for questions:

  • “Do you take my insurance?”
  • “How long does the repair take?”
  • “What’s the difference between [service A] and [service B]?”
  • “Can you help with [specific problem]?”

If your website answers those questions directly and clearly, you become the best answer AI can find when someone asks. Not sure what questions your customers are actually asking? 

Check your Google Business Profile Q&A section, your customer service emails, transcripts of your calls or meetings, and your reviews. The questions are already in front of you.

Dig deeper: How to apply ‘They Ask, You Answer’ to SEO and AI visibility

Do an AI audit of your own business right now

Here’s an exercise worth doing today: open ChatGPT, Perplexity, and Google AI Mode, and ask each one about your business. Ask contextual questions a real customer might ask, such as: 

  • “What do people say about [your business]?”
  • “Is [your business] good for [specific service]?”

This is actually the first thing we do when onboarding a new client. We build a brand interpretation document. 

It’s a snapshot of what AI systems currently know about a brand, pulled from the most important third-party sources in that industry. It tells us whether what’s being said about the brand is accurate, current, and coming from the right places, or whether it’s outdated, wrong, and sourced from somewhere you’d never choose yourself.

Ask your preferred AI what it knows about your business, then have it summarize consensus from key industry sources. Pay close attention to what comes back and where it came from. 

  • Is it citing your website? 
  • Your Google Business Profile? 
  • A review platform? 
  • A third-party directory? 
  • Is any of it inaccurate or out of date?

That audit tells you exactly where your information gaps are and how to fix them.

What’s at stake if you let your site go stale

If your website is thin, outdated, or poorly structured, AI fills the gaps with whatever it can find. That content may be inaccurate, negative, or just plain wrong. Maybe an old review mentions a service you no longer offer, or a directory has the wrong phone number. AI doesn’t fact-check. It aggregates.

Beyond accuracy, there’s the positioning problem. Without a strong website, what you’re known for and what makes you different gets shaped by third-party sources. Your expertise gets undersold. Your unique value gets lost in the noise.

AI might surface your name, but your website builds the trust that turns a recommendation into a call, a booking, or a sale. That’s where the decision happens.

Dig deeper: How AI is reshaping local search and what enterprises must do now

How to fix a suspended Google Merchant Center account

Google has unique policies for Google Shopping that are stricter than its general advertising policies. If Google thinks you’ve violated any of them, it can suspend your Merchant Center.

That cuts off access to Google Shopping, Local Inventory Ads, product feeds in Performance Max and dynamic remarketing, and free listings for products. That means losing your highest-ROI channel overnight.

Here’s how Google’s system works — and what you can do to fix suspensions and get back online.

Case study: How we reinstated a suspended Merchant Center

A UK-based ecommerce retailer came to us after their Google Merchant Center account was suspended for “Misrepresentation,” cutting off their Shopping ads entirely.

Like many legitimate merchants, they were blindsided. Their store was real, their products were accurate, and they had no idea what Google’s specific objection was.

We started with a full compliance audit of their website and Merchant Center account, working through every area Google scrutinizes.

What we found wasn’t one big violation. It was a long list of smaller gaps that, in combination, signaled untrustworthiness to Google’s systems.

The website’s Contact Us page lacked a physical address, a domain-based email address, and clear customer service hours, all of which Google expects from a legitimate business.

Their policy pages (shipping, returns, refunds, and payment) either didn’t exist or lacked the specific detail Google looks for. Missing elements included cancellation windows, defective item procedures, and accepted payment methods.

Beyond policies, their site lacked an order tracking feature and a cookie consent mechanism (required under UK law). A bot blocker was preventing Google’s automated crawlers from crawling the site.

Inside Google Merchant Center itself, Shopify’s automatic shipping sync was creating conflicting data. 

We documented every required change in detail and handed the client a clear, prioritized action list. Once they made all the changes, we requested a review from Google.

Google approved the appeal and reinstated the account.

Key takeaway: Google evaluates the totality of your website and feed, not just individual policy pages. A successful reinstatement almost always requires fixing multiple issues across your site before submitting an appeal.

Dig deeper: Google Ads account suspensions: What advertisers need to know

Step 1: Identify the type of suspension

Google will email you the policy they believe you’ve violated.

Merchant Center suspension email

You can also find this information on the Needs attention tab in your Merchant Center.

Needs attention tab

Read the suspension notice carefully because Google’s description, vague as it often is, will be your starting point for the following audit steps.

Misrepresentation

Misrepresentation is the most common policy we see cited for Google Merchant Center suspensions.

This policy covers a wide range of problems, from inaccurate information in Merchant Center, to missing policy pages on your website, to bad reviews about your business on third-party websites.

Follow the steps outlined in this guide to focus on improving four key areas:

  • Your Merchant Center settings.
  • Your product feed.
  • Your website.
  • Your online reputation.

Counterfeit products

You’re most likely to see this suspension reason if you’re reselling products from other brands (such as Pokémon cards, Prada bags, or Nike sneakers).

Helpful actions to take:

  • Say on your website whether you have a relationship with the manufacturer.
    • Are you an authorized reseller?
    • Do you purchase directly from the manufacturer?
    • Do you purchase from third parties?
  • Explain your authentication process.
  • Don’t list prices significantly lower than the manufacturer’s suggested retail price (MSRP).

Website needs improvement

Rather than citing a specific policy violation, Google is flagging that your website doesn’t appear sufficiently complete or functional.

Use incognito mode and multiple devices to check your website for:

  • Placeholder images or text.
  • Missing policy pages.
  • Problems adding products to cart or finishing the checkout process.

Unsupported shopping content

Google has a list of things that can be advertised via “regular” Google ads, but not via Google Shopping.

Services as a category can’t be advertised on Google Shopping at all, which is why you won’t see ads there for lawyers, doctors, or consultants.

It gets tricky when services are bundled with products (you can advertise car tires, but you can’t advertise the labor to replace the tires on your car).

Google tends to aggressively flag things as services, or unsupported digital goods, that don’t actually fall within those policies.

What to do:

  • Separate services from physical products on your website.
  • Add explanation text to product pages clearly stating that what you’re selling is a physical good and not a service.
  • Avoid keywords like ebook and PDF that could trigger Google to think you’re selling disallowed digital goods.

Healthcare and medicines

Google restricts advertising healthcare-related products. The policies are country-specific, so be sure to carefully read the policy for the country, or countries, you’re targeting.

To sell prescription and over-the-counter drugs in the U.S., advertisers must undergo third-party certification through a company such as LegitScript and a separate certification process with Google.

Google explicitly lists pharmaceuticals and supplements that aren’t allowed to be advertised. Unfortunately, this list is not comprehensive. We’ve had cases where Google support informed us that products not on this list are not allowed to be advertised.

What to do:

  • Get certified (if you meet the certification requirements).
  • Avoid making claims about the benefits of what you sell that can’t be directly verified by linking to studies from your product pages.
  • Add appropriate disclaimers to your product pages and customer testimonials.

Dig deeper: A guide to Google Ads for regulated and sensitive categories

DMCA violation

If someone reports your website for content that violates the Digital Millennium Copyright Act (DMCA), Google will suspend your Merchant Center. These reports are filed in the Lumen database, where you can see what content has been flagged and when the report was made.

What to do:

  • If you’re violating copyright, remove the content from your website.
  • If you’re not violating copyright, document how this content is original to your website and why you believe the report was wrong.
  • After requesting a review of your suspension, you will probably have to engage in back-and-forth with Google support to argue why you should be allowed back on their platform.

Step 2: Audit your Merchant Center settings

Merchant Center settings are misconfigured in almost every suspension case we work on.

Go through every single page in your Merchant Center to make sure you’ve entered as much information as possible and that everything you’ve entered is accurate and matches what’s on your website.

Business info

  • Your store name must comply with Google’s policies.
  • Your physical address needs to be exactly right (no misplaced words or numbers) and should match the physical address on your website’s Contact page.
  • You should have accurate contact information, a link to your Contact page, and relevant social media profiles.

Shipping and returns

  • Every product in your feed needs to be covered by at least one shipping rule and a return policy.
  • The shipping methods, handling and shipping times, cost structure, return timeline, refund process, exceptions, and restocking fees need to exactly match the information on the Shipping and Returns policy pages on your website.

Step 3: Audit your product feed data quality

Think of your product feed as your ads. Just as saying inaccurate things in your ads can lead to disapprovals, providing inaccurate or insufficient product data to Google can result in item disapprovals and account suspensions.

Item disapprovals

In addition to account-level suspensions, Google often disapproves specific products for product-level violations.

There are many things that can cause item disapprovals. Top issues include:

  • Links or images that don’t load.
  • Mismatches between pricing or availability.
  • Missing weight or shipping information.
  • Invalid GTINs.
  • Unsupported product categories like weapons, digital goods, or services.

These problems don’t necessarily cause account suspensions, but you should fix as many as possible before requesting a review. You want Google to see you as committed to sending high-quality data and not violating any of their policies.

Wrong prices and URLs

The price in your product feed must match the price shown when someone lands on that product’s page. Two common mistakes:

  • Using a parent product URL with a product variant’s price, which causes a mismatch between the price in the ad and the price on the product page.
  • Putting a sale price in the feed that is not on the product page, or vice versa.
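Mismatch checks along these lines can be automated before you upload a feed. Here is a minimal sketch, assuming you already have your feed data and the prices scraped from each landing page; all SKUs, URLs, and prices below are fabricated:

```python
# Hypothetical sketch: flag feed entries whose price doesn't match the
# price actually shown on the landing page. In practice page_prices
# would come from a crawl of your own site; here it's hardcoded.
feed = [
    {"id": "sku-1", "link": "https://example.com/p/1", "price": 19.99},
    {"id": "sku-2", "link": "https://example.com/p/2", "price": 24.99},
]
page_prices = {  # price scraped from each landing page
    "https://example.com/p/1": 19.99,
    "https://example.com/p/2": 22.99,  # sale price shown only on the page
}

mismatches = [
    item["id"]
    for item in feed
    if page_prices.get(item["link"]) != item["price"]
]
print(mismatches)  # ['sku-2']
```

Running a check like this on every feed refresh catches the parent-URL and sale-price mistakes described above before Google does.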

GTINs

Global Trade Item Numbers (GTINs) are the numbers, such as UPCs and ISBNs, that manufacturers assign to their products.

  • If your products don’t have GTINs, you can set the value of the field identifier_exists in your feed to FALSE.
  • If your products have GTINs and you have access to them, send those numbers to Google in your feed.

You don’t have to send a GTIN, but if you do, it must be accurate.

We’ve seen cases where advertisers created fake GTINs, thinking it would help their products perform better. Instead, Google suspended the entire account.
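One way to catch bad GTINs before Google does is the standard GS1 check-digit calculation, which every valid GTIN must pass. A minimal sketch (the sample numbers are illustrative EAN-13 codes):

```python
def gtin_is_valid(gtin: str) -> bool:
    """GS1 check-digit validation for GTIN-8/12/13/14."""
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    *body, check = (int(d) for d in gtin)
    # Weights alternate 3, 1, 3, 1, ... starting from the digit
    # immediately to the left of the check digit.
    total = sum(
        d * (3 if i % 2 == 0 else 1)
        for i, d in enumerate(reversed(body))
    )
    return (10 - total % 10) % 10 == check

print(gtin_is_valid("4006381333931"))  # True  (valid EAN-13)
print(gtin_is_valid("4006381333932"))  # False (bad check digit)
```

A passing check digit only proves the number is well-formed, not that it actually belongs to your product, so it complements rather than replaces sourcing GTINs from the manufacturer.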

Copied product photos and descriptions

Resellers who copy product images and descriptions from manufacturers may run into problems, especially if you don’t provide the product GTINs in the feed.

Ideally, you should take your own product images and write your own product descriptions, so that everything on your website is original.

Dig deeper: Google Ads’ three-strikes system: Managing warnings, strikes, and suspension

Step 4: Audit your website

Even if your Merchant Center settings and product feed are clean, your website itself can be the reason you’re suspended.

Crawl issues

Google will suspend your account if they’re not able to crawl your website.

For example, we’ve seen clients block visits from countries from which a high volume of spam traffic was originating. This accidentally blocked Google’s robots from accessing the website and caused a suspension.

We’ve also seen mistakes with the robots.txt file accidentally excluding Google’s bots from accessing key pages, which looks to Google like you’re trying to hide something.
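You can test for this kind of misconfiguration locally before it costs you a suspension. A sketch using Python’s standard-library robots.txt parser, with a hypothetical file that blocks Googlebot from product pages:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: blocks cart pages for everyone, but a
# misconfigured group accidentally blocks Googlebot from /products/.
robots_txt = """
User-agent: *
Disallow: /cart/

User-agent: Googlebot
Disallow: /products/
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

# Googlebot is locked out of the pages Shopping ads depend on:
print(parser.can_fetch("Googlebot", "https://example.com/products/widget"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/about"))            # True
```

Running each Shopping landing page URL through a check like this, for both Googlebot and Googlebot-Image, is a cheap way to confirm a robots.txt change hasn’t locked Google out.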

Missing information

You need clear and distinct policy pages on your website, including:

  • Privacy.
  • Shipping.
  • Refund and return.
  • Terms of service.
  • Order tracking.
  • Payment.

You also need accurate contact information on your Contact page and a comprehensive About page.

Inaccurate or inconsistent information

Any claims you make on your website must be true. For example, if you say you offer free shipping on orders over $25, then you have to actually give free shipping when a cart value is greater than $25.

We often see inconsistencies on websites, such as:

  • Different return windows mentioned on the Return policy page than in the Return policy pop-up on the Shopify checkout page.
  • Old phone numbers that no longer work and haven’t been removed.
  • Template language referencing other businesses or products you don’t sell that you never removed from policy pages.

Badges and awards

Adding badges and awards (such as the Better Business Bureau badge and Trustpilot review widgets) to your website is a way to demonstrate credibility.

When you add badges, awards, or “As seen on” logos to your website, make sure to hyperlink them to supporting pages, or else Google may think you’re making unsupported claims.

Step 5: Audit your digital footprint

Google wants only trusted businesses to run Google Shopping ads, so they look beyond your website and Merchant Center at your digital footprint as a whole.

Reviews

If you don’t have reviews on third-party websites like Trustpilot and BBB, or worse, if there are many negative reviews about your business, Google will view you with more suspicion.

Make a focused effort to ask your customers for reviews and respond professionally to all reviews (positive or negative), so that Google sees you’re an active, engaged business.

Social media

Google expects websites to have profiles on social media platforms like Facebook and Instagram.

There is even a place in your Merchant Center where you can directly link to your social profiles.

It can be helpful to claim profiles for your business and make sure that the business info in those profiles (domain, phone number, physical and email addresses) matches what’s on your website.

Authorized resellers

If you’re an authorized reseller for another brand, establish as much of a connection to that brand online as possible. For example:

  • Ask the brand to link to your website from their social media profiles and website.
  • Post any information you’re legally allowed to share about your contract on your website so that Google sees you’re being transparent.
  • Create an authentication guide that details how you authenticate the products you sell.

Step 6: Request a review

After you have followed steps 1-5 to identify and fix as many potential problems as possible, you are ready to ask Google to review your suspension.

To request a review:

  • Log in to your Google Merchant Center account.
  • Click Products & store.
  • Click Products.
  • Click Needs attention.
  • In the box that says “Suspended account for policy violation,” click Fix.
  • Click the button labeled “I disagree with the issue.”

Google sometimes makes the button unclickable until you go through identity verification, and in some cases, it also requires a video verification process.

Google doesn’t let you write any context when you request a review. Clicking the button is your only option.

Google limits how many reviews you may request. The limit varies by account, but is often three or fewer. Once you’ve reached that limit, Google will tell you it will no longer accept additional review requests, and the button will no longer be clickable.

Google will not review your appeal unless there is at least one product in your Merchant Center.

What if I’m suspended for multiple things?

Google sometimes flags Merchant Centers with multiple policy violations at the same time. Fix everything possible on your website and in your account, and then appeal the suspensions one at a time.

Start with the suspension that looks the most comprehensive. For example, misrepresentation is a more “egregious” suspension in Google’s eyes than sale of service, so start by appealing the former.

If one policy issue is a suspension and another is a warning (suspended for misrepresentation and warned for website needs improvement), appeal the warning first.

Common questions about Google Merchant Center suspensions

Why is my Google Merchant Center suspended?

Google will tell you what policy it believes you’ve violated via email and in a notification on the Needs attention tab in your Merchant Center.

These policies are usually quite broad, and narrowing down exactly why you were suspended can be difficult, which is why it’s vital that you fix as many potential problems as possible before appealing your suspension.

How long does a Google Merchant Center suspension last?

In most cases, it lasts forever unless you successfully appeal the suspension.

That said, we’ve seen cases where Google re-crawled a website after changes were made and automatically reinstated an account prior to the advertiser requesting a review (but don’t count on this happening).

Can Google Merchant Center support help me?

Sometimes, if you know how to ask the right questions, Google Merchant Center support will provide some ideas about what went wrong, or will point to specific data issues with your products.

What happens if Google rejects my appeal?

Typically, Google will put your Merchant Center into a cool-down period during which you can’t request another review.

The first cool-down period is usually seven days, and the timeline gets longer with subsequent rejections.

How many times can I appeal a Google Merchant Center suspension?

Google typically limits appeals to between one and three attempts, though exceptions exist.

Why does Google keep suspending my Merchant Center account?

It’s not uncommon for Google to accept an appeal of a Merchant Center suspension and then suspend that account again for the same policy.

This could be due to Google’s automated systems re-flagging you for something that its manual reviewers decided was not a violation.

It could also be because Google is unfortunately inconsistent with how it flags policy violations and enforces its policies.

Can I ask customers to write reviews of my business online?

You can. If you’re sending product reviews to Merchant Center, you must disclose to Google if you incentivize customers to leave reviews.

Dig deeper: Dealing with Google Ads frustrations: Poor support, suspensions, rising costs

Preventing Google Merchant Center suspensions

All of the steps outlined in this guide to fix suspensions are things you should proactively do to help prevent suspensions from happening.

Doing these things before you’re suspended can potentially save you tremendous time, frustration, and opportunity cost.

Here are a few more ideas to help stop suspensions:

  • Check your website weekly via incognito mode on mobile and desktop devices to make sure your website functions properly.
  • Get a real physical business address, and feature that address on your Contact page and in your website footer.
  • Regularly ask your clients to write reviews about you, and respond professionally to every single review.
  • Consistently read the policies on your website to make sure they are still accurate, and update them immediately if you change your processes.
  • Monitor your Merchant Center daily for disapprovals, and quickly fix anything that Google says needs attention.

Google has policies in place because it wants to protect consumers.

By following Google’s policies and showing that you’re a legitimate advertiser, you can protect your ability to use one of the most important channels available for growing an ecommerce brand.

Why log file analysis matters for AI crawlers and search visibility

One of the biggest challenges in AI search is that visibility is being shaped by systems you can’t directly observe.

Nothing like Google Search Console exists for ChatGPT, Claude, or Perplexity. No reporting layer showing what’s crawled, how often, or whether your content is considered at all.

Yet these systems are actively crawling the web, building datasets, powering retrieval, and generating answers that shape discovery — often without sending traffic back to the source.

This creates a gap. In traditional SEO, performance and behavior are connected. You can see impressions, clicks, indexing, and some level of crawl data. In AI search, that feedback loop doesn’t exist.

Log files are the closest thing to that missing layer. They don’t summarize or interpret activity. They record it — every request, every URL, every crawler. 

For AI systems, that raw data is often the only way to understand how your site is actually being accessed.

Some visibility is emerging — just not from AI platforms

That lack of visibility hasn’t gone entirely unaddressed. 

Bing is one of the first platforms to introduce this natively. Through Bing Webmaster Tools, Copilot-related insights are beginning to show how AI-driven systems interact with websites. It’s still early, but it’s a meaningful shift — and the first real example of an AI system exposing even part of its behavior to site owners.

Beyond that, a new category of tools is emerging. Platforms like Scrunch, Profound, and others focus on AI visibility, tracking how content appears in AI-generated responses and how different agents interact with a site. 

In some cases, they connect directly to sources like Cloudflare or other traffic layers, making it easier to monitor crawler activity without manually exporting and analyzing raw logs.

That visibility is useful, especially as AI systems evolve quickly. But it isn’t complete. 

Most of these tools operate within a defined window. Some only surface a limited timeframe of agent activity, making them effective for near-term monitoring, but less useful for understanding longer-term patterns or changes in crawl behavior.

AI crawler activity isn’t consistent. Unlike Googlebot, which crawls continuously, many AI agents appear sporadically or in bursts. Without historical data, it’s difficult to determine whether a change in activity is meaningful or normal variation.

Log files solve for that. They provide a complete, unfiltered record of crawler behavior — every request, every URL, every user agent. With continuous retention, they enable analysis of patterns over time and revisiting data when something changes.

Dig deeper: Log file analysis for SEO: Find crawl issues & fix them fast

Not all AI crawlers behave the same way

In log files, everything appears as a user agent string. On the surface, it’s easy to treat them the same, but they represent different systems with different objectives. That distinction matters, because it directly affects how they access and interact with your site.

AI-related crawlers generally fall into two groups: training and retrieval.

Training crawlers

Training crawlers, such as GPTBot, ClaudeBot, CCBot, and Google-Extended, collect content for large-scale datasets and model development.

Their activity isn’t tied to real-time queries, and they don’t behave like traditional search crawlers. You’ll typically see them less frequently, and when they do appear, their crawl patterns are broader and less targeted.

Because of that, their presence – or absence – carries a different implication. If these crawlers don’t appear in your logs at all, it’s not just a crawl issue. It raises the question of whether your content is included in the datasets that influence how AI systems understand topics over time.

At the same time, it’s important to consider how much data you’re analyzing. Training crawlers don’t operate on a continuous crawl cycle like Googlebot.

Their activity is often sporadic, which means a short log window (a few hours, or even a single day) can be misleading. You may not see them simply because they haven’t crawled within that timeframe.

That’s why analyzing log data over a longer period matters. It helps distinguish between true absence and normal variation in how these systems crawl.

Retrieval and answer crawlers

Retrieval crawlers operate differently. Agents like ChatGPT-User and PerplexityBot are more closely tied to live, or near-real-time, responses. Their activity tends to be event-driven and more targeted, often limited to a small number of URLs.

That makes their behavior less predictable and easier to misinterpret. You won’t see the same volume or consistency you would from Googlebot, but patterns still matter.

If these crawlers never reach deeper content, or consistently stop at top-level pages, it can indicate limitations in how your site is discovered or accessed.

Traditional crawlers still matter, but they’re no longer the full picture

Googlebot and Bingbot still provide the baseline. Their crawl behavior is consistent and typically gives a reliable view of how well your site can be discovered and indexed.

The difference is that AI crawlers don’t always follow the same paths. It’s common to see strong, deep crawl coverage from Googlebot alongside much lighter, or more shallow, interaction from AI systems. That gap doesn’t show up in Search Console, but becomes clear in log files.

What AI crawler behavior actually tells you

Once you isolate AI crawlers in your log files, the goal isn’t just to confirm they exist. It’s to understand how they interact with your site – and what that behavior implies about visibility.

AI systems crawl the web to train models, build retrieval indexes, and support generative answers. But unlike with Googlebot, there’s very little direct visibility into how that activity plays out.

Log files make that behavior observable. There are a few key patterns to focus on.

Discovery: Are you being accessed at all?

Start by checking whether AI crawlers appear in your logs.

In many cases, they don’t — or appear far less frequently than traditional search crawlers. That doesn’t always indicate a technical issue, but highlights how differently these systems discover and access content.

If AI crawlers are completely absent, they may be blocked in robots.txt, rate-limited at the server or CDN level, or simply not discovering your site.
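If robots.txt is the suspect, you can test your rules offline before digging into server or CDN configs. Here’s a minimal sketch using Python’s built-in robotparser, with a hypothetical robots.txt that blocks one training crawler and allows everything else:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: blocks one training crawler, allows everything else
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

print(parser.can_fetch("GPTBot", "/blog/post-1"))  # blocked outright
print(parser.can_fetch("CCBot", "/blog/post-1"))   # falls through to the * group
```

Running your real robots.txt through a check like this tells you quickly whether an absent crawler is being told to stay away or simply isn’t visiting.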

Presence alone is a signal. Absence is one too.

Crawl depth: How far into your site do they go?

When AI crawlers do appear, the next question is how far they get.

It’s common to see them limited to top-level pages – the homepage, primary navigation, and a small number of high-level URLs. Deeper content, such as long-tail pages and location-specific content, often goes untouched.

If crawlers aren’t reaching those sections, they’re not seeing the full structure of your site. That limits how much context they can build and reduces the likelihood that deeper content is surfaced in AI-generated responses.
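One rough way to quantify this is to treat URL path segments as a proxy for depth and track the deepest page each crawler requests. A sketch, with illustrative bot names and paths:

```python
def crawl_depth(path):
    """Depth = number of non-empty path segments ('/' is 0, '/products/' is 1)."""
    return len([seg for seg in path.split("/") if seg])

# Hypothetical (user agent, requested path) pairs pulled from parsed logs
requests = [
    ("GPTBot", "/"),
    ("GPTBot", "/products/"),
    ("Googlebot", "/products/widgets/blue-widget-42/"),
]

# Deepest URL each crawler reached
max_depth = {}
for bot, path in requests:
    max_depth[bot] = max(max_depth.get(bot, 0), crawl_depth(path))

print(max_depth)  # GPTBot stops near the surface; Googlebot goes deeper
```

A large gap between an AI crawler’s maximum depth and Googlebot’s is exactly the pattern described above made measurable.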

Crawl paths: How AI systems actually see your site

When AI crawlers access a site, they don’t build a comprehensive map the way traditional search engines do.

Their behavior is more selective and influenced by what’s immediately accessible, which means your site structure plays a larger role in what they reach.

In log files, this appears as concentrated activity around a small set of URLs. 

  • Requests are typically clustered around the homepage, primary navigation, and pages that are directly linked or easy to discover. 
  • As you move deeper into the site, crawl activity often drops off, sometimes sharply, even when those pages matter from a business or SEO perspective.

The practical implication: pages buried behind JavaScript-heavy navigation or weak internal linking are significantly less likely to be accessed.

As a result, the version of your site AI systems interact with is often incomplete. Entire sections can be effectively invisible because they sit outside the paths these crawlers can follow. 

This is where log file analysis becomes particularly useful, because it exposes the difference between what exists and what’s actually accessed.

Crawl friction: Where access breaks down

Log files also surface where crawlers encounter issues. This includes:

  • 403 responses (blocked requests).
  • 429 responses (rate limiting).
  • Redirects and redirect chains.
  • Unexpected status codes.

For AI crawlers, these issues can have an outsized impact. Their activity is already limited, and failed requests reduce the likelihood they continue deeper into the site.

Cross-system comparison: How does this differ from Googlebot?

Comparing AI crawler behavior to Googlebot provides useful context.

Googlebot typically shows consistent, deep crawl coverage across a site. AI crawlers often behave differently – appearing less frequently, accessing fewer pages, and stopping at shallower levels.

That difference highlights where your site is accessible for traditional search, but not necessarily for AI-driven systems. As those systems become more influential in discovery, crawl accessibility becomes a multi-system concern – not just a Google one.


How to analyze AI crawler behavior with log files

You don’t need a complex setup to start getting value from log files. Most hosting platforms retain access logs by default, even if only for a short window.

You’ll find that retention varies across hosting providers, but it’s often limited to anywhere from a few hours to a few days. Kinsta, for example, typically retains logs for a short rolling window, which is enough to get started but not for long-term analysis.

Start with the logs you already have

The first step is simply to export access logs from your hosting environment.

Even a small dataset can surface useful patterns, particularly when you’re looking for presence, crawl paths, and obvious gaps. At this stage, you’re not trying to build a complete picture over time. You’re looking for directional insight into how different crawlers are interacting with your site right now.
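To make that concrete, here’s a minimal sketch of parsing a single access-log entry in Combined Log Format with Python’s standard library. The sample line and its GPTBot user agent string are illustrative, not taken from a real server:

```python
import re

# Combined Log Format: ip ident user [time] "request" status bytes "referer" "user-agent"
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

def parse_line(line):
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

# A hypothetical GPTBot request
sample = ('203.0.113.7 - - [10/Mar/2025:04:12:55 +0000] '
          '"GET /blog/post-1 HTTP/1.1" 200 5120 "-" '
          '"Mozilla/5.0 (compatible; GPTBot/1.2; +https://openai.com/gptbot)"')

hit = parse_line(sample)
print(hit["agent"], hit["status"], hit["path"])
```

Once each line becomes a structured record like this, the presence, depth, and friction questions below are all simple filters and counts.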

Use a log analysis tool to make the data usable

Raw log files are difficult to work with directly, especially at scale.

Tools like Screaming Frog’s Log File Analyser make it possible to process that data quickly. Logs can be uploaded in their raw format and broken down by user agent, URL, and response code, allowing you to move from raw requests to structured analysis without additional preprocessing.

This is where the data becomes usable.


Segment by crawler type

Once the logs are loaded, segmentation becomes the priority. Start by isolating user agents so you can compare AI crawlers, Googlebot, and Bingbot.

This is critical, because behavior varies significantly across systems. Without segmentation, everything blends together. With it, patterns start to emerge.

To filter your views by bot, select your bot at the top right of the Log File Analyser. This will update all subsequent analysis to the bot you’ve selected.

You can begin to see:

  • Whether AI crawlers appear at all.
  • How their activity compares to traditional search.
  • Whether their behavior aligns or diverges.
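As a sketch of that segmentation, here’s one way to bucket requests by user-agent token. The token lists are illustrative starting points, not an exhaustive registry of AI crawlers:

```python
from collections import Counter

# Rough buckets; extend these token lists as new agents appear in your logs
AI_TRAINING = ("GPTBot", "ClaudeBot", "CCBot", "Google-Extended")
AI_RETRIEVAL = ("ChatGPT-User", "PerplexityBot")
SEARCH = ("Googlebot", "bingbot")

def classify(user_agent):
    ua = user_agent.lower()
    for label, tokens in (("ai-training", AI_TRAINING),
                          ("ai-retrieval", AI_RETRIEVAL),
                          ("search", SEARCH)):
        if any(token.lower() in ua for token in tokens):
            return label
    return "other"

agents = [
    "Mozilla/5.0 (compatible; GPTBot/1.2; +https://openai.com/gptbot)",
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "Mozilla/5.0 (compatible; PerplexityBot/1.0)",
    "Mozilla/5.0 (Windows NT 10.0) Chrome/120.0",  # a regular browser
]
print(Counter(classify(agent) for agent in agents))
```

The training/retrieval split mirrors the distinction drawn earlier: the two groups warrant different interpretations, so it helps to count them separately from the start.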

Analyze crawl behavior against your site structure

From there, shift from presence to behavior.

Look at which URLs are being accessed, how frequently they appear, and how that maps to your site structure. This is where the earlier analysis becomes practical.

You’re not just asking what was crawled. You’re asking:

  • Are crawlers reaching deeper content?
  • Which sections of the site are being skipped entirely?
  • Does this align with how your site is structured and linked?

This is where crawl paths, accessibility, and prioritization start to surface as real, observable patterns.

Use response codes to identify friction

Filtering by response code adds another layer of insight.

This helps surface where crawlers are encountering issues, including:

  • Blocked requests.
  • Rate limiting.
  • Redirect chains.
  • Unexpected responses.

For AI crawlers, these issues can have a greater impact. Their activity is already limited, so failed requests reduce the likelihood that they continue further into the site.
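A simple tally of friction responses per crawler makes this measurable. The status values here are invented for illustration:

```python
from collections import Counter

# (crawler, status) pairs from parsed logs; values invented for illustration
hits = [
    ("GPTBot", 200), ("GPTBot", 403), ("GPTBot", 403),
    ("PerplexityBot", 429),
    ("Googlebot", 200), ("Googlebot", 301),
]

FRICTION = {403, 429}  # blocked and rate-limited responses

friction_by_bot = Counter(bot for bot, status in hits if status in FRICTION)
failure_rate = {
    bot: sum(1 for b, s in hits if b == bot and s >= 400)
         / sum(1 for b, _ in hits if b == bot)
    for bot in {b for b, _ in hits}
}

print(friction_by_bot)  # who is hitting blocks or rate limits
print(failure_rate)     # share of failed requests per crawler
```

A high failure rate concentrated on AI agents, while Googlebot sails through, usually points at a WAF or bot-management rule rather than a site-wide problem.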

Cross-reference crawlable vs. crawled

One of the most valuable steps is comparing what can be crawled with what is actually being crawled.

Running a standard crawl alongside your log analysis allows you to identify this gap directly. Pages that are accessible in theory, but never appear in logs, represent missed opportunities for discovery.
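In practice this comparison is a set difference: export the URLs a site crawl can reach, export the URLs that appear in your logs, and diff them. A minimal sketch with placeholder URL sets:

```python
# Placeholder URL sets; in practice, export these from your crawler and your logs
crawlable = {"/", "/products/", "/products/blue/", "/locations/austin/", "/blog/post-1"}
crawled_by_ai = {"/", "/products/", "/blog/post-1"}

never_requested = sorted(crawlable - crawled_by_ai)
coverage = len(crawlable & crawled_by_ai) / len(crawlable)

print(never_requested)               # accessible pages AI crawlers never touched
print(f"AI crawl coverage: {coverage:.0%}")
```

The `never_requested` list is the actionable output: those are the pages that exist and are reachable, yet play no part in how AI systems see the site.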

Understand what your logs don’t show

As you work through log data, it’s also important to understand its limitations.

Server-level logs only capture requests that reach your origin. In environments that include a CDN or a security layer like Cloudflare, some requests may be filtered before they ever reach the site. That means certain crawler activity, particularly blocked or rate-limited requests, won’t appear in your logs at all.

This becomes relevant when interpreting absence. If specific AI crawlers don’t appear in your data, it doesn’t always mean they aren’t attempting to access the site. In some cases, they may be getting filtered upstream.

How to scale: Continuous log retention

Log file analysis breaks down quickly if you’re only looking at short timeframes.

A few hours of data, or even a single day, can show you what happened. It can also make it look like nothing is happening at all. With AI crawlers, that distinction matters.

Their activity isn’t continuous. Training crawlers may appear intermittently, and retrieval agents are often tied to specific events or queries. 

A short log window can easily lead you to the wrong conclusion. A crawler that doesn’t appear in your data may still be active. It just hasn’t shown up within that window.

This is where retention changes the analysis. Once you’re working with a longer dataset, you’ll see how often it appears, where it shows up, and whether that behavior is consistent over time. What looked like absence starts to resolve into patterns.

Moving beyond your hosting limits

At that point, the limitation isn’t analysis. It’s access to data over time.

Most hosting environments aren’t designed for long-term log retention. Even when logs are available, they’re typically tied to a short rolling window. That makes it difficult to revisit behavior, compare time periods, or understand how crawler activity evolves.

To get beyond that, you need to store logs outside of your hosting environment. Log storage options include: 

  • Amazon S3 is one of the most common approaches. It provides flexible, low-cost storage that allows you to retain logs continuously and query them when needed. If the goal is to build a historical view of crawler behavior, it’s a practical and widely supported option.
  • Cloudflare R2 serves a similar purpose and can be a better fit for sites already using Cloudflare. It keeps storage within the same ecosystem and simplifies how log data is handled, particularly when edge-level logging is part of the setup.

The specific platform matters less than the shift itself. You’re moving from whatever your host happened to keep to a dataset you control.

Bridging the gap with automation

Not every setup supports continuous streaming, and most teams aren’t going to build that infrastructure upfront.

If your retention window is limited, automation becomes the practical way to extend it.

Instead of manually downloading logs, you can schedule the process. Many hosting providers expose logs over SFTP, which makes it possible to pull them at regular intervals before they expire.

A scheduled SFTP job – whether built in a workflow tool like n8n, or scripted – is enough to turn a short retention window into something you can actually analyze over time. That’s often the difference between one-off analysis and something repeatable.
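As a sketch of that pattern, here’s the scheduling-side logic: date-stamp each pulled file and skip anything already archived, so repeated runs extend your history instead of overwriting it. The transfer itself is stubbed out; in a real job you’d swap in an SFTP client library (the `fetch` callable below is a placeholder, not a real API):

```python
import datetime
from pathlib import Path

ARCHIVE = Path("log-archive")  # local long-retention copy you control

def archive_name(remote_name, day):
    """Date-stamp each file so a rolling remote name never collides locally."""
    return f"{day:%Y-%m-%d}_{remote_name}"

def pull_new(remote_files, fetch, day=None):
    """Download only what hasn't been archived yet.

    `fetch(name) -> bytes` stands in for the real transfer step,
    e.g. an SFTP get via a client library (not shown here).
    """
    day = day or datetime.date.today()
    ARCHIVE.mkdir(exist_ok=True)
    pulled = []
    for name in remote_files:
        dest = ARCHIVE / archive_name(name, day)
        if dest.exists():  # already pulled on an earlier run
            continue
        dest.write_bytes(fetch(name))
        pulled.append(dest.name)
    return pulled

# Stubbed transfer for demonstration; a scheduler (cron, n8n) would run this daily
demo = pull_new(["access.log"],
                fetch=lambda name: b"203.0.113.7 - - ...\n",
                day=datetime.date(2025, 3, 10))
print(demo)
```

The dedupe check is what makes the job safe to run on a schedule: missing a run loses at most one window, and re-running never duplicates data.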

Getting closer to a complete view

As your dataset grows, so does the need to understand its boundaries. Log files show you what reached your site. They don’t always show you what tried to.

In environments that include a CDN, or security layer, some requests may be filtered before they reach your origin. That becomes more noticeable over time, particularly when certain crawlers appear less frequently than expected.

At that point, edge-level logging becomes a useful addition. It provides visibility into requests that are blocked or filtered upstream and helps explain gaps in origin-level data.

It’s not required to get value from log analysis, but it becomes relevant once you’re trying to build a more complete picture of crawler behavior across systems.

Log files show you what reached your site. They don’t show everything, but they’re the only place this interaction becomes visible at all.

You’re not optimizing for one crawler anymore. And the teams that start measuring this now won’t be guessing later.

Why your Google Ads results keep repeating the same outcomes


Paid search success used to be driven by optimizations. You adjusted bids, restructured campaigns, refined match types, and added negatives. Performance moved accordingly.

That’s still how many accounts are managed. When I audit them, they often look “well optimized”: active management, no glaring structural deficiencies, and targets that match achieved ROAS. On paper, everything checks out. But performance is quietly stuck.

Google Ads no longer responds to isolated optimizations. It builds on what you’ve been rewarding. So when I hear, “That didn’t work,” it usually means the change didn’t override months of prior signals.

What most advertisers still call optimization is actually training. They’re teaching the system the wrong lessons.

Why isolated optimizations don’t move the needle anymore

Today’s Google Ads environment is dominated by Smart Bidding, Performance Max, broad match expansion/AI Max, and modeled conversions. These systems don’t reset when you make a change. They learn cumulatively.

If you raise a ROAS target this week, that action doesn’t override six months of reinforced signals. If you launch a new campaign but shut it down after 10 days, the system doesn’t “forget” that volatility was punished. If brand revenue consistently carries the account, Google learns that safe, predictable demand is the highest priority.

The platform continuously optimizes toward the behaviors that survive, get funded, hit targets, and avoid being paused.

When accounts plateau despite strong management, it’s rarely because bids are wrong. It’s because the system has been trained to avoid uncertainty, but uncertainty is where growth lives.

What training looks like in a Google Ads account

On the back end, Google Ads is constantly answering one question: What does success look like here?

It infers the answer from:

  • Which conversions you include.
  • How you value them.
  • Which campaigns are protected during volatility.
  • How quickly you react to performance swings.

Over time, those signals shape the system’s behavior:

  • Which queries it expands into.
  • Which audiences it prioritizes.
  • How aggressively it competes in auctions.
  • Whether it explores new demand or recycles existing buyers.

Training is about the direction you reinforce over months. If repeat customers hit your ROAS target easily and prospecting campaigns fluctuate, which one do you think the system will prioritize over time?

Here’s a pattern I’ve seen more than once.

  • Month 1: Non-brand drives 52% of revenue.
  • Month 6: Non-brand drives 36%.

ROAS improves, and everyone’s happy. Except new customer growth flattens. The system has simply learned that predictable revenue is more important than incremental revenue. That’s training.

How you might be training Google Ads wrong

These mistakes are subtle and are often framed as good management. That’s what makes them dangerous.

Mistake 1: Training on the easiest revenue

Branded search converts well, returning customers convert well, and promo periods convert very well — so we lean in. We scale budgets behind what works and protect it.

Over time, Google learns that predictable revenue is the safest path to success.

Here’s a simplified example:

Month | Branded cost % | Account ROAS
1     | 33%            | $5.44
2     | 35%            | $5.03
3     | 40%            | $6.10
4     | 38%            | $6.69
5     | 42%            | $7.06
6     | 46%            | $7.39

ROAS improved during this period, but incremental demand declined due to the account’s conservative training. This is one of the most common ceilings we see.
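The mechanics behind that pattern can be shown with simple arithmetic: if brand converts at a much higher ROAS than non-brand, shifting the cost mix toward brand lifts blended ROAS even when neither segment improves at all. The segment-level ROAS figures below are assumed for illustration, not taken from the account above:

```python
# Assumed segment-level ROAS; not figures from a real account
BRAND_ROAS = 10.0
NONBRAND_ROAS = 3.0

def blended_roas(brand_cost_share):
    """Weighted-average ROAS for a given share of spend on brand."""
    return brand_cost_share * BRAND_ROAS + (1 - brand_cost_share) * NONBRAND_ROAS

for share in (0.33, 0.46):
    print(f"{share:.0%} branded cost -> blended ROAS ${blended_roas(share):.2f}")
```

Blended ROAS climbs purely from the mix shift, which is why a rising account-level number can coexist with shrinking incremental demand.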

Mistake 2: Punishing volatility

This one hits close to home for most teams. Short-term inefficiency is part of prospecting, but most advertisers respond to it immediately:

  • Tightening ROAS targets after one soft week.
  • Pulling budget during learning phases.
  • Pausing campaigns that explore new or expanded audiences.

From a human perspective, this feels responsible, but from a training perspective, it sends a clear message: exploration (uncertainty) is unacceptable.

The system adapts by prioritizing stability over expansion. It narrows the query mix. It leans harder into repeat purchasers. It becomes increasingly efficient, and increasingly stagnant. If everything in your account feels equally clean, you’re probably recycling demand.

Even if ROAS fluctuates, a prospecting or awareness campaign can still drive meaningful new customer lift if given time to mature.

The difference between plateaued accounts and growing accounts is rarely skill. It’s tolerance for controlled volatility.

Mistake 3: Pretending all purchases are equal

In most DTC setups, every purchase is treated equally, but a first-time, full-price buyer, a repeat customer, and a promo-driven order aren’t equal signals.

When every purchase sends the same signal, Google will favor the one that’s easiest to reproduce. That’s usually repeat behavior. Then we wonder why new customer acquisition gets harder.

In one client account, implementing lapsed-customer targeting and valuation led to a 53% YoY increase in orders, vs. a 12% YoY increase in the three months prior.


What intentional training actually looks like

This is where many teams get uncomfortable, because it requires letting go of short-term ROAS obsession in favor of aligning Google Ads with the actual business model.

If a client’s business depends on new customer growth, but you’re optimizing purely to blended ROAS, you’ve misaligned the system from the start. If mis-training is cumulative, so is intentional training. Here’s what that looks like in practice:

Maintain efficiency lanes

Efficiency lanes exist to protect baseline revenue. They’re tightly managed. They often include brand campaigns and high-intent non-brand terms with predictable performance.

These campaigns can carry stricter ROAS or CPA targets. They stabilize cash flow. They help CEOs sleep at night. They are not your growth engine.

Build growth lanes

Growth lanes are structured differently. They often include broader match types, category expansion, new audience layering, or creative angles that introduce new use cases. They have looser yet realistic targets.

If your efficiency campaigns run at a 500% ROAS target, your growth campaigns might operate at 350%, with the explicit understanding that they exist to expand demand and acquire new customers.

Here’s the key: you don’t tighten the growth lane every time it fluctuates. You let it learn.

In one DTC account, separating these lanes and holding growth campaigns to a slightly lower ROAS threshold led to a 43% lift in YoY new customers in Q4, while blended ROAS actually improved 10%.

In that account, increased investment in new customers drove measurable change, while the reduced spend on returning customers didn’t harm the bottom line.

This controlled asymmetry is how you scale smarter.

Change signals slowly

If you adjust ROAS targets every two weeks, you’re resetting the system constantly.

Targets shouldn’t be adjusted weekly in response to noise. Campaigns shouldn’t pause during early learning unless structurally broken. Creative testing should be protected long enough to produce a clear signal.

Give it time and let data compound. In one account, simply holding ROAS targets steady for 60 days — instead of tightening them after minor dips — resulted in broader query expansion and improved non-brand impression share without increasing spend.

The performance didn’t spike overnight. It grew gradually — that’s training working.

What it means to manage a trained system

If any of the mistakes feel familiar, ask yourself:

  • Do we tighten targets faster than we loosen them?
  • Has our revenue mix shifted toward brand and repeat customers over time?
  • Do we pause exploratory campaigns within the first 2–3 weeks?
  • Have our core conversion definitions changed multiple times in the last 60 days?
  • Is query expansion flat despite budget headroom?

If the answer is often “yes,” the system isn’t failing you. It’s doing exactly what you trained it to do.

That’s the shift. Paid search used to be about making better decisions than the auction in real time. Now it’s about designing the environment the auction learns from. That’s a different job.

Automation doesn’t reward who moves fastest. It reflects what you’ve been teaching it.

Once you see the account as something you’re training, the question changes. It’s no longer “Why isn’t this working?” It’s “What have we been rewarding?”
