
Microsoft makes it easier to import Google PMax campaigns

16 April 2026 at 23:15

Microsoft Advertising is rolling out a slate of updates aimed at making Performance Max campaigns easier to manage, measure, and migrate — especially for advertisers already using Google Ads.

Driving the news. Microsoft now lets advertisers import Google PMax campaigns that use new customer acquisition (NCA) goals, a feature that has been generally available in Microsoft Advertising since early this year.

The update is now live for all advertisers.

That means marketers can more easily port over campaigns designed to prioritize first-time buyers without rebuilding them from scratch.

What’s new. Microsoft says imported Google PMax campaigns with NCA goals will carry over if they don’t already exist in the advertiser’s account. Existing Microsoft NCA settings won’t be overwritten.

For audience lists:

  • Google website visitor segments will convert into Microsoft remarketing lists.
  • Google’s “all visitors” and “all converters” lists will map to Microsoft equivalents.
  • Unsupported lists, like Customer Match, will prompt advertisers to use fallback options.

Microsoft also says it takes a more conservative approach to “unknown” customers, classifying them as existing customers to avoid overcounting new customer conversions.

Why we care. This could make cross-platform campaign expansion faster and lower the friction of testing Microsoft’s PMax inventory by removing the need to rebuild campaigns from scratch. The added landing page reporting and search term visibility also give marketers better insight into what’s driving performance, which can help improve optimization and budget decisions.

More visibility for PMax. Microsoft is also adding landing page (Final URL) reporting for PMax campaigns. Advertisers can now see spend, clicks, impressions, conversion value, and ROAS by landing page.

They can also segment by campaign, asset group, and other dimensions.

Microsoft also said search term reporting is becoming more visible by default, with more transparency updates — including auction insights and added publisher URL metrics — planned later.

Other key updates:

  • Seasonality adjustments now support portfolio bid strategies, expanding a tool advertisers use for short-term events like promotions.
  • Campaign name limits are increasing from 128 to 400 characters, helping agencies and enterprise teams manage naming conventions at scale.
  • Autogenerated assets are expanding to underbuilt Responsive Search Ads to improve ad relevance and performance.
  • Merchant Center users can now update store names and domains directly without contacting support.

The bottom line. These updates make it easier to scale across platforms, save time on campaign setup, and get better visibility into what’s actually driving performance — giving advertisers more control over both efficiency and results.


ChatGPT citations reward ranking and precision over length: Study

16 April 2026 at 21:02

ChatGPT citations favor pages that rank well, match the query in their headings, and stay tightly focused, according to an AirOps study of 16,851 queries. The top retrieval result was cited 58% of the time, and pages that answered the main query more narrowly outperformed broader, more comprehensive guides.

Why we care. This study clarifies how to earn ChatGPT citations: win retrieval, mirror the query in your headings, and answer one question extremely well. In this study, that mattered more than breadth.

The findings. Retrieval rank was the strongest signal. Pages in the top search position were cited 58.4% of the time, versus 14.2% for pages in position 10.

  • Heading relevance was the strongest on-page factor. Pages with the strongest heading-query match were cited 41.0% of the time, compared with roughly 30% for weaker matches.
  • Focused pages also beat comprehensive ones. Pages that answered the main query more narrowly outperformed broader, more comprehensive guides, undercutting the usual “ultimate guide” approach.

What drove ChatGPT citations. In this study, pages that won citations usually ranked well, used headings that closely matched the query, and stayed focused on answering it.

  • Structure helped, but only slightly: Pages with JSON-LD markup posted a 38.5% citation rate versus 32.0% for pages without it, and articles with 4 to 10 subheadings performed best.
  • Beyond a certain point, length hurt performance: Pages between 500 and 2,000 words performed best, but pages longer than 5,000 words were cited less often than pages under 500 words.

Freshness helps, up to a point. Pages published 30 to 89 days earlier performed best, while pages newer than 30 days performed worse. This suggests new content may need time to build retrieval signals.

  • Pages more than 2 years old were cited less often, which suggests that content refreshes could help if you’re already ranking for the right queries.

About the data. AirOps said it scraped ChatGPT’s interface, not the API, and analyzed 50,553 responses generated from 16,851 unique queries run three times each. The dataset included 353,799 pages and more than 1.5 million fan-out detail rows across 10 verticals and four query types.

The study. The Fan-Out Effect: What Happens Between a Query and a Citation

Google AI Mode in Chrome now lets you search deeper with fewer tabs

16 April 2026 at 21:00

Google announced Chrome updates that let searchers use AI Mode in a deeper, more engaging way, all without switching tabs and potentially losing your place.

What’s new. Chrome added three new features:

  • Search side-by-side: In AI Mode on Chrome desktop, clicking a link opens the webpage next to AI Mode. That makes it easier to visit relevant sites, compare details, and ask follow-up questions without losing the context of your search.
  • Search across your tabs: On Chrome desktop or mobile, you can tap the new “plus” menu on the New Tab page, or the existing plus menu in AI Mode, to add recent tabs to your search. That lets AI Mode deliver more tailored responses and suggest more sites to explore.
  • Multi-input and easy tool access: You can also mix and match multiple tabs, images, or files like PDFs and bring that context into AI Mode. Tools like Canvas and image creation are also available wherever you see the new plus menu in Chrome.

Why we care. These new Chrome-specific features for U.S. English users unlock more AI Mode capabilities. Again, they’re limited to Chrome users for now, but they show the direction Google is taking AI Mode.


Gemini helped Google block more than 99% of bad ads before they ran

16 April 2026 at 19:06

Google is making Gemini a core part of ad enforcement, saying the AI upgrade helped catch more scams while sharply reducing mistaken suspensions of legitimate advertisers. The move shows how quickly ad safety is turning into an AI fight over speed, scale, and accuracy.

The details. In its 2025 Ads Safety Report, Google said it blocked or removed 8.3 billion ads and suspended 24.9 million advertiser accounts last year. It said more than 99% of policy-violating ads were stopped before they ran.

  • Google credited Gemini with cutting incorrect advertiser suspensions by 80%, processing 4x more user reports than the year before, and spotting scam signals faster by better understanding ad intent.
  • Scams were a major focus. Google said it removed 602 million scam-related ads and suspended 4 million scam-linked accounts.

By the numbers:

  • 602 million scam-related ads removed
  • 4 million scam-linked accounts suspended
  • 4.8 billion ads restricted
  • 480 million web pages blocked or restricted
  • 245,000+ publisher sites actioned
  • 35 policy updates made in 2025

The U.S. picture: Google said it removed 1.7 billion ads and suspended 3.3 million advertiser accounts in the U.S. in 2025. The most common violations included abuse of the ad network, misrepresentation, sexual content, personalization violations, and dating and companionship ads.

Why we care. This directly affects whether campaigns launch, stay live, or get flagged. Google is signaling that AI will play a bigger role in deciding which ads run and which accounts get stopped. For advertisers, that raises the stakes on policy compliance while also promising fewer costly false suspensions.

How it works: Google said Gemini analyzes hundreds of billions of signals, including account age, behavior patterns, and campaign activity, to detect malicious intent earlier than older systems built more heavily around keywords and rule matching.

The company also said that by the end of 2025, most Responsive Search Ads would be reviewed instantly at submission, blocking harmful ads before launch. It plans to expand that capability to more formats this year.

Yes, but. Faster automated enforcement does not always mean smoother enforcement. Some advertisers in the U.K. and U.S. have recently reported bulk ad disapproval alerts despite finding no actual policy issues. That adds pressure on Google to prove tighter AI enforcement will not create new disruptions for legitimate brands.

Bottom line: Google wants advertisers to see Gemini as both shield and filter — tougher on scams, but more precise with legitimate accounts. The real test is whether that balance holds as enforcement gets faster and more automated.

Google’s blog post. Gemini is stopping harmful ads before people ever see them

Why your website is now the source of truth in local AI search

16 April 2026 at 19:00

Open ChatGPT, then search for a local business you know has a strong online presence. Ask for a recommendation in that category. Chances are, it comes up. If you check what the AI cites as sources, you’ll almost certainly find the business’s own website in the mix.

That tells you something important: AI doesn’t conjure answers out of thin air. It pulls from whatever it can find. If your website isn’t the best, most complete, most authoritative source of information about your business, the AI will assemble its answer from scraps. You lose control of your own narrative.

That’s what’s driving a growing question among business owners and marketers: “Do I even need a website anymore? If AI answers everything, why does it matter?”

Your website isn’t just a marketing tool anymore. It’s a source document. AI treats it as an authoritative input. The real question is who gets to define your business: you or someone else. Here’s what’s changing, where conventional wisdom falls short, and what to do about it.

Zero-click doesn’t mean zero opportunity

A lot of marketers are seeing the same thing right now: impressions holding steady or rising, but clicks dropping. People get what they need without ever landing on a page, leading some to declare websites obsolete. That’s the wrong read.

Fewer clicks don’t mean less importance. They mean the nature of the click has changed. Look at where AI Overviews actually appear.

According to our analysis of Ahrefs data, of the 46 million+ keywords that trigger an AI Overview, nearly 99% are informational. Navigational keywords account for just 0.13%. Someone wanted a quick fact, got it, and moved on. Those were never high-intent visits anyway.


The clicks that drive revenue, the ones tied to bookings, calls, purchases, and consultations, still happen. Commercial and transactional keywords make up just 12.5% and 3.5% of AI Overview triggers, respectively. 

(Note: These percentages exceed 100% in total because keywords can carry multiple intent classifications; a single keyword can be both informational and commercial, for example.)

Those are exactly the queries where people are closest to a decision. They just happen further down the funnel, after a recommendation has already been made. When someone is ready to decide, they validate and check the website.

Dig deeper: Your homepage matters again for SEO — here’s why

AI recommends, your customer decides. Know the difference.

When someone asks an AI assistant, “Who’s the best plumber near me?”, the AI might surface a few names. It’s pattern-matching based on reviews, location signals, website content, and business profile data. It’s offering a starting point, not a final verdict.

The AI isn’t picking up the phone or handing over a credit card. Especially for high-stakes local decisions (a contractor in your home, a doctor for your kid, a mechanic for your car), most people aren’t going to act on an algorithm’s suggestion without doing their own digging first.

What actually happens after the AI recommends? The customer: 

  • Googles the business. 
  • Reads the reviews. 
  • Looks at photos. 
  • Checks the website to see if you offer exactly what they need, and at a price they can stomach.

That validation phase is where decisions are made. And your website is at the center of it. AI might have gotten you in the door, but your website is what closes it.

Dig deeper: If you can’t say what problem your brand solves, AI won’t either

AI is actually making your website more valuable

AI systems are reading your content to determine what you do, who you serve, and how you help. They’re cross-referencing your site with your Google Business Profile, directory listings, and reviews to ensure consistency. 

When everything lines up, they gain confidence recommending you. When it doesn’t, you get skipped. This means your website is now effectively a source document for AI.

Either it provides clear, structured information, or AI fills the gaps with third-party content — a stale Yelp review from 2019, an outdated directory listing with the wrong hours, or a competitor’s blog post that happens to rank well.

I know which one I’d rather have the AI pulling from.

Dig deeper: Why local SEO is thriving in the AI-first search era

The visibility gap between traditional search and AI is enormous

If you want a sense of how selective AI is compared to traditional search, SOCi’s 2026 Local Visibility Index, which analyzed nearly 350,000 locations across 2,751 multi-location brands, puts it starkly:

  • Only 1.2% of locations were recommended by ChatGPT.
  • 11% by Gemini.
  • 7.4% by Perplexity.
  • 35.9% appeared in Google’s traditional local 3-pack.

AI is up to 30 times more selective than traditional local search. Here’s the kicker: strong performance in the local pack doesn’t guarantee AI visibility. 

SOCi found that in retail, only 45% of brands leading in traditional local search also appeared in AI recommendations. More than half were invisible to AI entirely.

The brands making it into AI recommendations? 

The ones with accurate, consistent information across platforms, strong review volume and sentiment, and well-structured website content. That last one is where most local businesses are leaving the most value on the table.

Your website is the only place you control the narrative

Everywhere else — Google, Yelp, review sites, social media, and AI summaries — you’re at the mercy of other people’s opinions and platform algorithms. You don’t get to decide what gets shown or how it’s framed.

Your website is different. You decide what to highlight, the story to tell, and the objections to address. You can showcase what makes you different and guide visitors exactly where you want them to go.

More importantly, you can feed AI the narrative you want it to use. If your site has well-structured service pages, detailed FAQs, and content that answers real questions your customers ask, AI can pull directly from that when generating responses. You’re essentially writing your own introduction.

On the flip side, if your site is thin or generic, AI fills in the blanks with whatever else it can find. You lose the ability to define yourself.

Dig deeper: Your website still matters in the age of AI

What to actually do about it

This doesn’t require a rebuild, just more intentional structure and content. Here’s where to focus.

Treat your website as a source of truth

Stop writing vague claims like “we’re the best in the business.” AI doesn’t know what to do with that. Write specific, factual, helpful content about what you do, who you serve, and what results you deliver.

Every piece of information on your website — your services, hours, location, and pricing approach — should align with what’s on your Google Business Profile and across your directory listings.

As Search Engine Land contributor Will Scott notes:

  • “Disambiguation through context is critical. When they’re building their ontologies, their map of relationships of knowledge, consistency matters a lot.”

Structure your content so AI can actually read it

AI reads for structure, not just keywords. An AirOps analysis of 217,508 retrieved pages found that only 15% of the pages ChatGPT retrieves actually earn a citation in the response.

Being crawled isn’t enough. How your content is organized determines whether it gets used. That means:

  • Schema markup: Specifically LocalBusiness, FAQPage, and Service schemas (a minimal example follows this list). This markup acts as a cheat sheet, telling AI and search engines exactly what your business is, what it offers, and where it’s located.
  • Clear headings and short sentences: Use H2s and H3s to break content into scannable sections, and keep your sentences tight. The AirOps research found that pages averaging 11 to 14 words per sentence had roughly a 7% higher likelihood of being cited, likely because shorter sentences are easier for AI to parse and extract cleanly. Don’t bury critical information in long paragraphs.
  • An FAQ section: Built around the actual questions you hear in emails, calls, and consultations. Write answers in natural language. This directly mirrors how people search conversationally, and AI loves it. The same research found that pages with 7 to 26 list sections were 6% to 15% more likely to earn a citation.
  • Individual service pages: Not one catch-all “Services” page. Separate pages for each service with details about what’s included, who it’s for, and what to expect. Pages with 5 to 7 statistics supporting their claims had a 20% higher likelihood of being cited, so don’t just describe your services, back them up with specific, concrete details AI can confidently pull from.
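
A minimal sketch of that LocalBusiness schema, generated here with Python so the structure is easy to read. Every business detail below is a placeholder; swap in your real name, address, hours, and services, paste the printed JSON-LD into a script tag on your page, and validate it before publishing.

    import json

    # Placeholder business details -- replace every value with your own verified information.
    local_business = {
        "@context": "https://schema.org",
        "@type": "LocalBusiness",
        "name": "Example Plumbing Co.",
        "url": "https://www.example.com",
        "telephone": "+1-555-555-0199",
        "address": {
            "@type": "PostalAddress",
            "streetAddress": "123 Main St",
            "addressLocality": "Springfield",
            "addressRegion": "IL",
            "postalCode": "62701",
            "addressCountry": "US",
        },
        "openingHours": "Mo-Fr 08:00-17:00",
        # Describe each service you offer so AI systems can match it to specific queries.
        "makesOffer": [
            {"@type": "Offer", "itemOffered": {"@type": "Service", "name": "Water heater repair"}},
            {"@type": "Offer", "itemOffered": {"@type": "Service", "name": "Drain cleaning"}},
        ],
    }

    # Paste the printed block into a <script type="application/ld+json"> tag on your page.
    print(json.dumps(local_business, indent=2))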

Write for your customer’s questions

Most business websites are written for the business, not the customer. Corporate speak, vague value propositions, and industry jargon nobody searched for. Customers don’t search for buzzwords. They search for questions:

  • “Do you take my insurance?”
  • “How long does the repair take?”
  • “What’s the difference between [service A] and [service B]?”
  • “Can you help with [specific problem]?”

If your website answers those questions directly and clearly, you become the best answer AI can find when someone asks. Not sure what questions your customers are actually asking? 

Check your Google Business Profile Q&A section, your customer service emails, transcripts of your calls or meetings, and your reviews. The questions are already in front of you.

Dig deeper: How to apply ‘They Ask, You Answer’ to SEO and AI visibility

Do an AI audit of your own business right now

Here’s an exercise worth doing today: open ChatGPT, Perplexity, and Google AI Mode, and ask each one about your business. Ask contextual questions a real customer might ask, such as: 

  • “What do people say about [your business]?”
  • “Is [your business] good for [specific service]?”

This is actually the first thing we do when onboarding a new client. We build a brand interpretation document. 

It’s a snapshot of what AI systems currently know about a brand, pulled from the most important third-party sources in that industry. It tells us whether what’s being said about the brand is accurate, current, and coming from the right places, or whether it’s outdated, wrong, and sourced from somewhere you’d never choose yourself.

Ask your preferred AI what it knows about your business, then have it summarize consensus from key industry sources. Pay close attention to what comes back and where it came from. 

  • Is it citing your website? 
  • Your Google Business Profile? 
  • A review platform? 
  • A third-party directory? 
  • Is any of it inaccurate or out of date?

That audit tells you exactly where your information gaps are and how to fix them.
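
If you want to rerun that audit on a schedule rather than typing the questions by hand, a short script can help. This is a sketch only: it assumes the openai Python package, an OPENAI_API_KEY environment variable, and placeholder business and model names, and answers from the API can differ from what the consumer ChatGPT interface shows.

    from openai import OpenAI  # assumes the openai package is installed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    BUSINESS = "Example Plumbing Co. in Springfield, IL"  # placeholder business name

    questions = [
        f"What do people say about {BUSINESS}?",
        f"Is {BUSINESS} good for emergency water heater repair?",
        f"What sources would you rely on for information about {BUSINESS}?",
    ]

    for q in questions:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption -- use whichever model you have access to
            messages=[{"role": "user", "content": q}],
        )
        print(f"\nQ: {q}\nA: {response.choices[0].message.content}")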


What’s at stake if you let your site go stale

If your website is thin, outdated, or poorly structured, AI fills the gaps with whatever it can find. That content may be inaccurate, negative, or just plain wrong. Maybe an old review mentions a service you no longer offer, or a directory has the wrong phone number. AI doesn’t fact-check. It aggregates.

Beyond accuracy, there’s the positioning problem. Without a strong website, what you’re known for and what makes you different gets shaped by third-party sources. Your expertise gets undersold. Your unique value gets lost in the noise.

AI might surface your name, but your website builds the trust that turns a recommendation into a call, a booking, or a sale. That’s where the decision happens.

Dig deeper: How AI is reshaping local search and what enterprises must do now

How to fix a suspended Google Merchant Center account

16 April 2026 at 18:00

Google has unique policies for Google Shopping that are stricter than its general advertising policies. If Google thinks you’ve violated any of them, it can suspend your Merchant Center.

That cuts off access to Google Shopping, Local Inventory Ads, product feeds in Performance Max and dynamic remarketing, and free listings for products. That means losing your highest-ROI channel overnight.

Here’s how Google’s system works — and what you can do to fix suspensions and get back online.

Case study: How we reinstated a suspended Merchant Center

A UK-based ecommerce retailer came to us after their Google Merchant Center account was suspended for “Misrepresentation,” cutting off their Shopping ads entirely.

Like many legitimate merchants, they were blindsided. Their store was real, their products were accurate, and they had no idea what Google’s specific objection was.

We started with a full compliance audit of their website and Merchant Center account, working through every area Google scrutinizes.

What we found wasn’t one big violation. It was a long list of smaller gaps that, in combination, signaled untrustworthiness to Google’s systems.

The website’s Contact Us page lacked a physical address, a domain-based email address, and clear customer service hours, all of which Google expects from a legitimate business.

Their policy pages (shipping, returns, refunds, and payment) either didn’t exist or lacked the specific detail Google looks for. Missing elements included cancellation windows, defective item procedures, and accepted payment methods.

Beyond policies, their site lacked an order tracking feature and a cookie consent mechanism (required under UK law). A bot blocker was preventing Google’s automated crawlers from crawling the site.

Inside Google Merchant Center itself, Shopify’s automatic shipping sync was creating conflicting data. 

We documented every required change in detail and handed the client a clear, prioritized action list. Once they made all the changes, we requested a review from Google.

Google approved the appeal and reinstated the account.

Key takeaway: Google evaluates the totality of your website and feed, not just individual policy pages. A successful reinstatement almost always requires fixing multiple issues across your site before submitting an appeal.

Dig deeper: Google Ads account suspensions: What advertisers need to know

Step 1: Identify the type of suspension

Google will email you the policy they believe you’ve violated.


You can also find this information on the Needs attention tab in your Merchant Center.


Read the suspension notice carefully because Google’s description, vague as it often is, will be your starting point for the following audit steps.

Misrepresentation

Misrepresentation is the most common policy we see cited for Google Merchant Center suspensions.

This policy covers a wide range of problems, from inaccurate information in Merchant Center, to missing policy pages on your website, to bad reviews about your business on third-party websites.

Follow the steps outlined in this guide to focus on improving four key areas:

  • Your Merchant Center settings.
  • Your product feed.
  • Your website.
  • Your online reputation.

Counterfeit products

You’re most likely to see this suspension reason if you’re reselling products from other brands (such as Pokémon cards, Prada bags, or Nike sneakers).

Helpful actions to take:

  • Say on your website whether you have a relationship with the manufacturer.
    • Are you an authorized reseller?
    • Do you purchase directly from the manufacturer?
    • Do you purchase from third parties?
  • Explain your authentication process.
  • Don’t list prices significantly lower than the manufacturer’s suggested retail price (MSRP).

Website needs improvement

Rather than citing a specific policy violation, Google is flagging that your website doesn’t appear sufficiently complete or functional.


Use incognito mode and multiple devices to check your website for:

  • Placeholder images or text.
  • Missing policy pages.
  • Problems adding products to cart or finishing the checkout process.

Unsupported shopping content

Google has a list of things that can be advertised via “regular” Google ads, but not via Google Shopping.

Services as a whole may not be advertised, which is why you won’t see ads for lawyers, doctors, or consultants on Google Shopping.

It gets tricky when services are bundled with products (you can advertise car tires, but you can’t advertise the labor to replace the tires on your car).

Google tends to aggressively flag things as services, or unsupported digital goods, that don’t actually fall within those policies.

What to do:

  • Separate services from physical products on your website.
  • Add explanation text to product pages clearly stating that what you’re selling is a physical good and not a service.
  • Avoid keywords like ebook and PDF that could trigger Google to think you’re selling disallowed digital goods.

Healthcare and medicines

Google restricts advertising healthcare-related products. The policies are country-specific, so be sure to carefully read the policy for the country, or countries, you’re targeting.

To sell prescription and over-the-counter drugs in the U.S., advertisers must undergo third-party certification through a company such as LegitScript and a separate certification process with Google.

Google explicitly lists pharmaceuticals and supplements that aren’t allowed to be advertised. Unfortunately, this list is not comprehensive. We’ve had cases where Google support informed us that products not on this list are not allowed to be advertised.

What to do:

  • Get certified (if you meet the certification requirements).
  • Avoid making claims about the benefits of what you sell that can’t be directly verified by linking to studies from your product pages.
  • Add appropriate disclaimers to your product pages and customer testimonials.

Dig deeper: A guide to Google Ads for regulated and sensitive categories

DMCA violation

If someone reports your website for content that violates the Digital Millennium Copyright Act (DMCA), Google will suspend your Merchant Center. These reports are filed in the Lumen database, where you can see what content has been flagged and when the report was made.

What to do:

  • If you’re violating copyright, remove the content from your website.
  • If you’re not violating copyright, document how this content is original to your website and why you believe the report was wrong.
  • After requesting a review of your suspension, you will probably have to engage in back-and-forth with Google support to argue why you should be allowed back on their platform.

Step 2: Audit your Merchant Center settings

Merchant Center settings are misconfigured in almost every suspension case we work on.

Go through every single page in your Merchant Center to make sure you’ve entered as much information as possible and that everything you’ve entered is accurate and matches what’s on your website.

Business info

  • Your store name must comply with Google’s policies.
  • Your physical address needs to be exactly right (no misplaced words or numbers) and should match the physical address on your website’s Contact page.
  • You should have accurate contact information, a link to your Contact page, and relevant social media profiles.

Shipping and returns

  • Every product in your feed needs to be covered by at least one shipping rule and a return policy.
  • The shipping methods, handling and shipping times, cost structure, return timeline, refund process, exceptions, and restocking fees need to exactly match the information on the Shipping and Returns policy pages on your website.

Step 3: Audit your product feed data quality

Think of your product feed as your ads. Just as saying inaccurate things in your ads can lead to disapprovals, providing inaccurate or insufficient product data to Google can result in item disapprovals and account suspensions.


Item disapprovals

In addition to account-level suspensions, Google often disapproves specific products for product-level violations.


There are many things that can cause item disapprovals. Top issues include:

  • Links or images that don’t load.
  • Price or availability mismatches between your feed and the product page.
  • Missing weight or shipping information.
  • Invalid GTINs.
  • Unsupported product categories like weapons, digital goods, or services.

These problems don’t necessarily cause account suspensions, but you should fix as many as possible before requesting a review. You want Google to see you as committed to sending high-quality data and not violating any of their policies.

Wrong prices and URLs

The price in your product feed must match the price shown when someone lands on that product’s page. Two common mistakes:

  • Using a parent product URL with a product variant’s price, which causes a mismatch between the price in the ad and the price on the product page.
  • Putting a sale price in the feed that is not on the product page, or vice versa.
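
A quick way to catch the first kind of mismatch before Google does is to compare each feed price against the price the landing page itself publishes in its structured data. The sketch below assumes the requests package, a feed exported as a list of rows, and pages that expose price in JSON-LD; adapt the extraction to however your product pages actually mark up price.

    import re
    import requests

    # Illustrative feed rows -- in practice, load these from your exported product feed.
    feed_items = [
        {"id": "SKU-1", "link": "https://www.example.com/products/widget", "price": "19.99 USD"},
    ]

    def page_prices(url):
        """Collect every "price" value found in the page's JSON-LD blocks."""
        html = requests.get(url, timeout=10).text
        prices = set()
        for block in re.findall(r'<script[^>]*application/ld\+json[^>]*>(.*?)</script>', html, re.S):
            prices.update(re.findall(r'"price"\s*:\s*"?([\d.]+)', block))
        return prices

    for item in feed_items:
        feed_price = item["price"].split()[0]   # "19.99 USD" -> "19.99"
        found = page_prices(item["link"])
        if feed_price not in found:
            print(f'{item["id"]}: feed says {feed_price}, page says {found or "no price found"}')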

GTINs

Global Trade Item Numbers (GTINs) are the numbers, such as UPCs and ISBNs, that manufacturers assign to their products.

  • If your products don’t have GTINs, you can set the value of the field identifier_exists in your feed to FALSE.
  • If your products have GTINs and you have access to them, send those numbers to Google in your feed.

You don’t have to send a GTIN, but if you do, it must be accurate.

We’ve seen cases where advertisers created fake GTINs, thinking it would help their products perform better. Instead, Google suspended the entire account.
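
Those two situations look something like this in a feed. The rows below are illustrative only, using attribute names from Google’s product data specification (gtin, brand, identifier_exists); format them however your feed is actually delivered, whether that’s a spreadsheet, XML, or the Content API.

    # Illustrative feed rows using Google product data specification attribute names.
    products = [
        {   # Branded product with a real GTIN: send it, and make sure it's accurate.
            "id": "SKU-100",
            "title": "Acme Running Shoe, Size 10",
            "brand": "Acme",
            "gtin": "00012345678905",
        },
        {   # Unbranded or handmade product with no GTIN: say so explicitly.
            "id": "SKU-200",
            "title": "Handmade ceramic mug",
            "identifier_exists": "FALSE",  # as described above -- never invent a GTIN instead
        },
    ]

    # Simple pre-upload sanity check: flag anything sending an empty or suspicious GTIN.
    for p in products:
        gtin = p.get("gtin", "")
        if p.get("identifier_exists", "TRUE") != "FALSE" and not gtin.isdigit():
            print(f'{p["id"]}: missing or non-numeric GTIN -- fix it or set identifier_exists to FALSE')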

Copied product photos and descriptions

Resellers who copy product images and descriptions from manufacturers may run into problems, especially if you don’t provide the product GTINs in the feed.

Ideally, you should take your own product images and write your own product descriptions, so that everything on your website is original.

Dig deeper: Google Ads’ three-strikes system: Managing warnings, strikes, and suspension

Step 4: Audit your website

Even if your Merchant Center settings and product feed are clean, your website itself can be the reason you’re suspended.

Crawl issues

Google will suspend your account if they’re not able to crawl your website.

For example, we’ve seen clients block visits from countries from which a high volume of spam traffic was originating. This accidentally blocked Google’s robots from accessing the website and caused a suspension.

We’ve also seen mistakes with the robots.txt file accidentally excluding Google’s bots from accessing key pages, which looks to Google like you’re trying to hide something.
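
A quick way to confirm you haven’t locked Google out is to test your live robots.txt with Python’s built-in robotparser. The domain, user agents, and paths below are only examples; check every crawler and page template that matters to your Shopping setup.

    from urllib.robotparser import RobotFileParser

    SITE = "https://www.example.com"  # placeholder -- use your own domain

    rp = RobotFileParser()
    rp.set_url(f"{SITE}/robots.txt")
    rp.read()

    # Check a few representative URLs against the crawlers Google Shopping relies on.
    for agent in ("Googlebot", "Googlebot-Image", "Storebot-Google"):
        for path in ("/", "/products/example-product", "/collections/sale"):
            allowed = rp.can_fetch(agent, f"{SITE}{path}")
            print(f"{agent} -> {path}: {'allowed' if allowed else 'BLOCKED'}")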

Missing information

You need clear and distinct policy pages on your website, including:

  • Privacy.
  • Shipping.
  • Refund and return.
  • Terms of service.
  • Order tracking.
  • Payment.

You also need accurate contact information on your Contact page and a comprehensive About page.

Inaccurate or inconsistent information

Any claims you make on your website must be true. For example, if you say you offer free shipping on orders over $25, then you have to actually give free shipping when a cart value is greater than $25.

We often see inconsistencies on websites, such as:

  • Different return windows mentioned on the Return policy page than in the Return policy pop-up on the Shopify checkout page.
  • Old phone numbers that no longer work and haven’t been removed.
  • Template language referencing other businesses or products you don’t sell that you never removed from policy pages.

Badges and awards

Adding badges and awards (such as the Better Business Bureau badge and Trustpilot review widgets) to your website is a way to demonstrate credibility.

When you add badges, awards, or “As seen on” logos to your website, make sure to hyperlink them to supporting pages, or else Google may think you’re making unsupported claims.

Step 5: Audit your digital footprint

Google wants only trusted businesses to run Google Shopping ads, so they look beyond your website and Merchant Center at your digital footprint as a whole.

Reviews

If you don’t have reviews on third-party websites like Trustpilot and BBB, or worse, if there are many negative reviews about your business, Google will view you with more suspicion.

Make a focused effort to ask your customers for reviews and respond professionally to all reviews (positive or negative), so that Google sees you’re an active, engaged business.

Social media

Google expects websites to have profiles on social media platforms like Facebook and Instagram.

There is even a place in your Merchant Center where you can directly link to your social profiles.

It can be helpful to claim profiles for your business and make sure that your business info in those profiles (domain, phone number, physical and email addresses) matches what’s on your website.

Authorized resellers

If you’re an authorized reseller for another brand, establish as much of a connection to that brand online as possible. For example:

  • Ask the brand to link to your website from their social media profiles and website.
  • Post any information you’re legally allowed to share about your contract on your website so that Google sees you’re being transparent.
  • Create an authentication guide that details how you authenticate the products you sell.

Step 6: Request a review

After you have followed steps 1-5 to identify and fix as many potential problems as possible, you are ready to ask Google to review your suspension.

To request a review:

  • Log in to your Google Merchant Center account.
  • Click Products & store.
  • Click Products.
  • Click Needs attention.
  • In the box that says “Suspended account for policy violation,” click Fix.
  • Click the button labeled “I disagree with the issue.”

Google sometimes makes the button unclickable until you go through identity verification, and in some cases, it also requires a video verification process.

Google doesn’t let you write any context when you request a review. Clicking the button is your only option.

Google limits how many reviews you may request. The limit varies per account but is often three or fewer. Once you’ve reached that limit, Google will tell you that it will no longer accept additional review requests, and the button will no longer be clickable.

Google will not review your appeal unless there is at least one product in your Merchant Center.

What if I’m suspended for multiple things?

Google sometimes flags Merchant Centers with multiple policy violations at the same time. Fix everything possible on your website and in your account, and then appeal the suspensions one at a time.

Start with the suspension that looks the most comprehensive. For example, misrepresentation is a more “egregious” suspension in Google’s eyes than sale of service, so start by appealing the former.

If one policy issue is a suspension and another is a warning (suspended for misrepresentation and warned for website needs improvement), appeal the warning first.

Common questions about Google Merchant Center suspensions

Why is my Google Merchant Center suspended?

Google will tell you what policy it believes you’ve violated via email, and in a notification in the “Needs Attention” tab in your Merchant Center.

These policies are usually quite broad, and narrowing down exactly why you were suspended can be difficult, which is why it’s vital that you fix as many potential problems as possible before appealing your suspension.

How long does a Google Merchant Center suspension last?

In most cases, it lasts forever unless you successfully appeal the suspension.

That said, we’ve seen cases where Google re-crawled a website after changes were made and automatically reinstated an account prior to the advertiser requesting a review (but don’t count on this happening).

Can Google Merchant Center support help me?

Sometimes, if you know how to ask the right questions, Google Merchant Center support will provide some ideas about what went wrong, or will point to specific data issues with your products.

What happens if Google rejects my appeal?

Typically, Google will put your Merchant Center into a cool-down period during which you can’t request another review.

The first cool-down period is usually seven days, and the timeline gets longer with subsequent rejections.

How many times can I appeal a Google Merchant Center suspension?

Google typically limits appeals to between one and three attempts, though exceptions exist.

Why does Google keep suspending my Merchant Center account?

It’s not uncommon for Google to accept an appeal of a Merchant Center suspension and then suspend that account again for the same policy.

This could be due to Google’s automated systems re-flagging you for something that its manual reviewers decided was not a violation.

It could also be because Google is unfortunately inconsistent with how it flags policy violations and enforces its policies.

Can I ask customers to write reviews of my business online?

You can. If you’re sending product reviews to Merchant Center, you must disclose to Google if you incentivize customers to leave reviews.

Dig deeper: Dealing with Google Ads frustrations: Poor support, suspensions, rising costs

Preventing Google Merchant Center suspensions

All of the steps outlined in this guide to fix suspensions are things you should proactively do to help prevent suspensions from happening.

Doing these things before you’re suspended can potentially save you tremendous time, frustration, and opportunity cost.

Here are a few more ideas to help stop suspensions:

  • Check your website weekly via incognito mode on mobile and desktop devices to make sure your website functions properly.
  • Get a real physical business address, and feature that address on your Contact page and in your website footer.
  • Regularly ask your clients to write reviews about you, and respond professionally to every single review.
  • Consistently read the policies on your website to make sure they are still accurate, and update them immediately if you change your processes.
  • Monitor your Merchant Center daily for disapprovals, and quickly fix anything that Google says needs attention.

Google has policies in place because it wants to protect consumers.

By following Google’s policies and showing that you’re a legitimate advertiser, you can protect your ability to use one of the most important channels available for growing an ecommerce brand.

Why log file analysis matters for AI crawlers and search visibility

16 April 2026 at 17:00

One of the biggest challenges in AI search is that visibility is being shaped by systems you can’t directly observe.

Nothing like Google Search Console exists for ChatGPT, Claude, or Perplexity. No reporting layer showing what’s crawled, how often, or whether your content is considered at all.

Yet these systems are actively crawling the web, building datasets, powering retrieval, and generating answers that shape discovery — often without sending traffic back to the source.

This creates a gap. In traditional SEO, performance and behavior are connected. You can see impressions, clicks, indexing, and some level of crawl data. In AI search, that feedback loop doesn’t exist.

Log files are the closest thing to that missing layer. They don’t summarize or interpret activity. They record it — every request, every URL, every crawler. 

For AI systems, that raw data is often the only way to understand how your site is actually being accessed.

Some visibility is emerging — just not from AI platforms

That lack of visibility hasn’t gone entirely unaddressed. 

Bing is one of the first platforms to introduce this natively. Through Bing Webmaster Tools, Copilot-related insights are beginning to show how AI-driven systems interact with websites. It’s still early, but it’s a meaningful shift — and the first real example of an AI system exposing even part of its behavior to site owners.

Beyond that, a new category of tools is emerging. Platforms like Scrunch, Profound, and others focus on AI visibility, tracking how content appears in AI-generated responses and how different agents interact with a site. 

In some cases, they connect directly to sources like Cloudflare or other traffic layers, making it easier to monitor crawler activity without manually exporting and analyzing raw logs.

That visibility is useful, especially as AI systems evolve quickly. But it isn’t complete. 

Most of these tools operate within a defined window. Some only surface a limited timeframe of agent activity, making them effective for near-term monitoring, but less useful for understanding longer-term patterns or changes in crawl behavior.

AI crawler activity isn’t consistent. Unlike Googlebot, which crawls continuously, many AI agents appear sporadically or in bursts. Without historical data, it’s difficult to determine whether a change in activity is meaningful or normal variation.

Log files solve for that. They provide a complete, unfiltered record of crawler behavior — every request, every URL, every user agent. With continuous retention, they enable analysis of patterns over time and revisiting data when something changes.

Dig deeper: Log file analysis for SEO: Find crawl issues & fix them fast

Not all AI crawlers behave the same way

In log files, everything appears as a user agent string. On the surface, it’s easy to treat them the same, but they represent different systems with different objectives. That distinction matters, because it directly affects how they access and interact with your site.

AI-related crawlers generally fall into two groups: training and retrieval.

Training crawlers

Training crawlers, such as GPTBot, ClaudeBot, CCBot, and Google-Extended, collect content for large-scale datasets and model development.

Their activity isn’t tied to real-time queries, and they don’t behave like traditional search crawlers. You’ll typically see them less frequently, and when they do appear, their crawl patterns are broader and less targeted.

Because of that, their presence – or absence – carries a different implication. If these crawlers don’t appear in your logs at all, it’s not just a crawl issue. It raises the question of whether your content is included in the datasets that influence how AI systems understand topics over time.

At the same time, it’s important to consider how much data you’re analyzing. Training crawlers don’t operate on a continuous crawl cycle like Googlebot.

Their activity is often sporadic, which means a short log window (a few hours, or even a single day) can be misleading. You may not see them simply because they haven’t crawled within that timeframe.

That’s why analyzing log data over a longer period matters. It helps distinguish between true absence and normal variation in how these systems crawl.

Retrieval and answer crawlers

Retrieval crawlers operate differently. Agents like ChatGPT-User and PerplexityBot are more closely tied to live, or near-real-time, responses. Their activity tends to be event-driven and more targeted, often limited to a small number of URLs.

That makes their behavior less predictable and easier to misinterpret. You won’t see the same volume or consistency you would from Googlebot, but patterns still matter.

If these crawlers never reach deeper content, or consistently stop at top-level pages, it can indicate limitations in how your site is discovered or accessed.
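
One practical way to keep that distinction in view is a simple mapping of user agent substrings to crawler purpose, reused whenever you segment logs. The list below is a starting point based on publicly documented agents, not an exhaustive inventory, and the strings these systems send can change over time.

    # Substrings to look for in the user agent field of each log line.
    # (Google-Extended, mentioned above, is a robots.txt control token rather than
    # a user agent you will see in server logs, so it isn't listed here.)
    AI_CRAWLERS = {
        # Training / dataset collection
        "GPTBot": "training",
        "ClaudeBot": "training",
        "CCBot": "training",
        # Retrieval / answer generation
        "ChatGPT-User": "retrieval",
        "PerplexityBot": "retrieval",
    }

    def classify(user_agent):
        """Return 'training', 'retrieval', or 'other' for a raw user agent string."""
        for token, group in AI_CRAWLERS.items():
            if token in user_agent:
                return group
        return "other"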

Traditional crawlers still matter, but they’re no longer the full picture

Googlebot and Bingbot still provide the baseline. Their crawl behavior is consistent and typically gives a reliable view of how well your site can be discovered and indexed.

The difference is that AI crawlers don’t always follow the same paths. It’s common to see strong, deep crawl coverage from Googlebot alongside much lighter, or more shallow, interaction from AI systems. That gap doesn’t show up in Search Console, but becomes clear in log files.

What AI crawler behavior actually tells you

Once you isolate AI crawlers in your log files, the goal isn’t just to confirm they exist. It’s to understand how they interact with your site – and what that behavior implies about visibility.

AI systems crawl the web to train models, build retrieval indexes, and support generative answers. But unlike Googlebot, there’s very little direct visibility into how that activity plays out.

Log files make that behavior observable. There are a few key patterns to focus on.

Discovery: Are you being accessed at all?

Start by checking whether AI crawlers appear in your logs.

In many cases, they don’t — or appear far less frequently than traditional search crawlers. That doesn’t always indicate a technical issue, but highlights how differently these systems discover and access content.

If AI crawlers are completely absent, they may be blocked in robots.txt, rate-limited at the server or CDN level, or simply not discovering your site.

Presence alone is a signal. Absence is one too.

Crawl depth: How far into your site do they go?

When AI crawlers do appear, the next question is how far they get.

It’s common to see them limited to top-level pages – the homepage, primary navigation, and a small number of high-level URLs. Deeper content, including long-tail pages, or location-specific content, is often untouched.

If crawlers aren’t reaching those sections, they’re not seeing the full structure of your site. That limits how much context they can build and reduces the likelihood that deeper content is surfaced in AI-generated responses.

Crawl paths: How AI systems actually see your site

When AI crawlers access a site, they don’t build a comprehensive map the way traditional search engines do.

Their behavior is more selective and influenced by what’s immediately accessible, which means your site structure plays a larger role in what they reach.

In log files, this appears as concentrated activity around a small set of URLs. 

  • Requests are typically clustered around the homepage, primary navigation, and pages that are directly linked, or easy to discover. 
  • As you move deeper into the site, crawl activity often drops off, sometimes sharply, even when those pages are important from a business, or SEO, perspective.

The practical implication: pages buried behind JavaScript-heavy navigation, or weak internal linking, are significantly less likely to be accessed.

As a result, the version of your site AI systems interact with is often incomplete. Entire sections can be effectively invisible because they sit outside the paths these crawlers can follow. 

This is where log file analysis becomes particularly useful, because it exposes the difference between what exists and what’s actually accessed.

Crawl friction: Where access breaks down

Log files also surface where crawlers encounter issues. This includes:

  • 403 responses (blocked requests).
  • 429 responses (rate limiting).
  • Redirects and redirect chains.
  • Unexpected status codes.

For AI crawlers, these issues can have an outsized impact. Their activity is already limited, and failed requests reduce the likelihood they continue deeper into the site.

Cross-system comparison: How does this differ from Googlebot?

Comparing AI crawler behavior to Googlebot provides useful context.

Googlebot typically shows consistent, deep crawl coverage across a site. AI crawlers often behave differently – appearing less frequently, accessing fewer pages, and stopping at shallower levels.

That difference highlights where your site is accessible for traditional search, but not necessarily for AI-driven systems. As those systems become more influential in discovery, crawl accessibility becomes a multi-system concern – not just a Google one.

How to analyze AI crawler behavior with log files

You don’t need a complex setup to start getting value from log files. Most hosting platforms retain access logs by default, even if only for a short window.

You’ll find that retention varies across hosting providers, but it’s often limited to anywhere from a few hours to a few days. Kinsta, for example, typically retains logs for a short rolling window, which is enough to get started but not for long-term analysis.

Start with the logs you already have

The first step is simply to export access logs from your hosting environment.

Even a small dataset can surface useful patterns, particularly when you’re looking for presence, crawl paths, and obvious gaps. At this stage, you’re not trying to build a complete picture over time. You’re looking for directional insight into how different crawlers are interacting with your site right now.

Use a log analysis tool to make the data usable

Raw log files are difficult to work with directly, especially at scale.

Tools like the Screaming Frog Log File Analyser make it possible to process that data quickly. Logs can be uploaded in their raw format and broken down by user agent, URL, and response code, allowing you to move from raw requests to structured analysis without additional preprocessing.

This is where the data becomes usable.
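
If you would rather start without a dedicated tool, a short script gives you the same first pass. This sketch assumes a standard combined-format access log exported as access.log, and counts requests, status codes, and top URLs per bot; adjust the path and the user agent list to your environment.

    import re
    from collections import Counter, defaultdict

    LOG_PATH = "access.log"   # placeholder path to an exported access log
    BOTS = ["GPTBot", "ClaudeBot", "CCBot", "ChatGPT-User", "PerplexityBot",
            "Googlebot", "bingbot"]

    # Combined log format: ... "METHOD /path HTTP/x.x" status size "referer" "user agent"
    LINE = re.compile(r'"(?:GET|POST|HEAD) (?P<path>\S+) [^"]*" (?P<status>\d{3}) .*"(?P<ua>[^"]*)"$')

    hits = Counter()                 # total requests per bot
    statuses = defaultdict(Counter)  # status codes per bot
    urls = defaultdict(Counter)      # most-requested paths per bot

    with open(LOG_PATH, encoding="utf-8", errors="replace") as f:
        for line in f:
            m = LINE.search(line)
            if not m:
                continue
            bot = next((b for b in BOTS if b in m["ua"]), None)
            if bot:
                hits[bot] += 1
                statuses[bot][m["status"]] += 1
                urls[bot][m["path"]] += 1

    for bot, count in hits.most_common():
        print(f"\n{bot}: {count} requests, status codes {dict(statuses[bot])}")
        for path, n in urls[bot].most_common(5):
            print(f"  {n:>5}  {path}")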


Segment by crawler type

Once the logs are loaded, segmentation becomes the priority. Start by isolating user agents so you can compare AI crawlers, Googlebot, and Bingbot.

This is critical, because behavior varies significantly across systems. Without segmentation, everything blends together. With it, patterns start to emerge.

To filter your views by bot, select your bot at the top right of the Log File Analyser. This will update all subsequent analysis to the bot you’ve selected.

You can begin to see:

  • Whether AI crawlers appear at all.
  • How their activity compares to traditional search.
  • Whether their behavior aligns or diverges.

Analyze crawl behavior against your site structure

From there, shift from presence to behavior.

Look at which URLs are being accessed, how frequently they appear, and how that maps to your site structure. This is where the earlier analysis becomes practical.

You’re not just asking what was crawled. You’re asking:

  • Are crawlers reaching deeper content?
  • Which sections of the site are being skipped entirely?
  • Does this align with how your site is structured and linked?

This is where crawl paths, accessibility, and prioritization start to surface as real, observable patterns.

Use response codes to identify friction

Filtering by response code adds another layer of insight.

This helps surface where crawlers are encountering issues, including:

  • Blocked requests.
  • Rate limiting.
  • Redirect chains.
  • Unexpected responses.

For AI crawlers, these issues can have a greater impact. Their activity is already limited, so failed requests reduce the likelihood that they continue further into the site.

Cross-reference crawlable vs. crawled

One of the most valuable steps is comparing what can be crawled with what is actually being crawled.

Running a standard crawl alongside your log analysis allows you to identify this gap directly. Pages that are accessible in theory, but never appear in logs, represent missed opportunities for discovery.
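
In practice that comparison is a simple set difference: the URLs your crawl or sitemap says exist, minus the URLs that appear in the logs for a given bot. A sketch, assuming you have already exported both lists to text files with one normalized path per line:

    # Assumes crawlable_urls.txt (from a site crawl or sitemap export) and
    # gptbot_requested_paths.txt (paths from your log analysis) exist, one entry
    # per line, normalized to the same form (for example, path only, no domain).
    with open("crawlable_urls.txt") as f:
        crawlable = {line.strip() for line in f if line.strip()}

    with open("gptbot_requested_paths.txt") as f:
        requested = {line.strip() for line in f if line.strip()}

    never_requested = sorted(crawlable - requested)

    print(f"{len(never_requested)} of {len(crawlable)} crawlable pages were never requested:")
    for path in never_requested[:25]:
        print(" ", path)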

Understand what your logs don’t show

As you work through log data, it’s also important to understand its limitations.

Server-level logs only capture requests that reach your origin. In environments that include a CDN, or security layer like Cloudflare, some requests may be filtered before they ever reach the site. That means certain crawler activity, particularly blocked, or rate-limited, requests, won’t appear in your logs at all.

This becomes relevant when interpreting absence. If specific AI crawlers don’t appear in your data, it doesn’t always mean they aren’t attempting to access the site. In some cases, they may be getting filtered upstream.

How to scale: Continuous log retention

Log file analysis breaks down quickly if you’re only looking at short timeframes.

A few hours of data, or even a single day, can show you what happened. It can also make it look like nothing is happening at all. With AI crawlers, that distinction matters.

Their activity isn’t continuous. Training crawlers may appear intermittently, and retrieval agents are often tied to specific events or queries. 

A short log window can easily lead you to the wrong conclusion. A crawler that doesn’t appear in your data may still be active. It just hasn’t shown up within that window.

This is where retention changes the analysis. Once you’re working with a longer dataset, you’ll see how often it appears, where it shows up, and whether that behavior is consistent over time. What looked like absence starts to resolve into patterns.

Moving beyond your hosting limits

At that point, the limitation isn’t analysis. It’s access to data over time.

Most hosting environments aren’t designed for long-term log retention. Even when logs are available, they’re typically tied to a short rolling window. That makes it difficult to revisit behavior, compare time periods, or understand how crawler activity evolves.

To get beyond that, you need to store logs outside of your hosting environment. Log storage options include: 

  • Amazon S3 is one of the most common approaches. It provides flexible, low-cost storage that allows you to retain logs continuously and query them when needed. If the goal is to build a historical view of crawler behavior, it’s a practical and widely supported option.
  • Cloudflare R2 serves a similar purpose and can be a better fit for sites already using Cloudflare. It keeps storage within the same ecosystem and simplifies how log data is handled, particularly when edge-level logging is part of the setup.

The specific platform matters less than the shift itself. You’re moving from whatever your host happened to keep to a dataset you control.
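
The upload step itself is small. Here is a sketch using boto3, assuming AWS credentials are already configured and an illustrative bucket name; Cloudflare R2 is S3-compatible, so the same client works if you point endpoint_url at your R2 account.

    from datetime import date

    import boto3  # assumes credentials are configured via environment variables or ~/.aws

    s3 = boto3.client("s3")  # for Cloudflare R2, add endpoint_url=... for your account

    BUCKET = "example-crawl-logs"   # placeholder bucket name
    local_file = "access.log"
    # Date-partitioned keys make it easy to pull a specific window later.
    key = f"raw/{date.today():%Y/%m/%d}/access.log"

    s3.upload_file(local_file, BUCKET, key)
    print(f"uploaded {local_file} to s3://{BUCKET}/{key}")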

Bridging the gap with automation

Not every setup supports continuous streaming, and most teams aren’t going to build that infrastructure upfront.

If your retention window is limited, automation becomes the practical way to extend it.

Instead of manually downloading logs, you can schedule the process. Many hosting providers expose logs over SFTP, which makes it possible to pull them at regular intervals before they expire.

A scheduled SFTP job – whether built in a workflow tool like n8n, or scripted – is enough to turn a short retention window into something you can actually analyze over time. That’s often the difference between one-off analysis and something repeatable.
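
That scheduled pull can be as small as the sketch below, run from cron or a workflow tool. It assumes the paramiko package plus placeholder host, user, key, and paths; swap in whatever your host actually exposes, and pair it with the storage step above if you want the files to land somewhere permanent.

    import os
    from datetime import date

    import paramiko  # assumes the paramiko package is installed

    HOST = "sftp.example-host.com"   # placeholder SFTP details
    USER = "example-user"
    REMOTE_LOG = "/logs/access.log"  # path your host exposes over SFTP
    LOCAL_COPY = f"access-{date.today():%Y%m%d}.log"

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(HOST, username=USER, key_filename=os.path.expanduser("~/.ssh/id_ed25519"))

    sftp = ssh.open_sftp()
    sftp.get(REMOTE_LOG, LOCAL_COPY)  # copy the log down before the host rotates it out
    sftp.close()
    ssh.close()

    print(f"saved {REMOTE_LOG} as {LOCAL_COPY}")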

Getting closer to a complete view

As your dataset grows, so does the need to understand its boundaries. Log files show you what reached your site. They don’t always show you what tried to.

In environments that include a CDN or security layer, some requests may be filtered before they reach your origin. That becomes more noticeable over time, particularly when certain crawlers appear less frequently than expected.

At that point, edge-level logging becomes a useful addition. It provides visibility into requests that are blocked or filtered upstream and helps explain gaps in origin-level data.

It’s not required to get value from log analysis, but it becomes relevant once you’re trying to build a more complete picture of crawler behavior across systems.

Log files show you what reached your site. They don’t show everything, but they’re the only place the interaction between AI crawlers and your content becomes visible at all.

You’re not optimizing for one crawler anymore. And the teams that start measuring this now won’t be guessing later.

Why your Google Ads results keep repeating the same outcomes

16 April 2026 at 16:00

Paid search success used to be driven by optimizations. You adjusted bids, restructured campaigns, refined match types, and added negatives. Performance moved accordingly.

That’s still how many accounts are managed. When I audit them, they often look “well optimized”: active management, no glaring structural deficiencies, and targets that match achieved ROAS. On paper, everything checks out. But performance is quietly stuck.

Google Ads no longer responds to isolated optimizations. It builds on what you’ve been rewarding. So when I hear, “That didn’t work,” it usually means the change didn’t override months of prior signals.

What most advertisers still call optimization is actually training. They’re teaching the system the wrong lessons.

Why isolated optimizations don’t move the needle anymore

Today’s Google Ads environment is dominated by Smart Bidding, Performance Max, broad match expansion/AI Max, and modeled conversions. These systems don’t reset when you make a change. They learn cumulatively.

If you raise a ROAS target this week, that action doesn’t override six months of reinforced signals. If you launch a new campaign but shut it down after 10 days, the system doesn’t “forget” that volatility was punished. If brand revenue consistently carries the account, Google learns that safe, predictable demand is the highest priority.

The platform continuously optimizes toward the behaviors that survive, get funded, hit targets, and avoid being paused.

When accounts plateau despite strong management, it’s rarely because bids are wrong. It’s because the system has been trained to avoid uncertainty, but uncertainty is where growth lives.

What training looks like in a Google Ads account

On the back end, Google Ads is constantly answering one question: What does success look like here?

It infers the answer from:

  • Which conversions you include.
  • How you value them.
  • Which campaigns are protected during volatility.
  • How quickly you react to performance swings.

Over time, those signals shape the system’s behavior:

  • Which queries it expands into.
  • Which audiences it prioritizes.
  • How aggressively it competes in auctions.
  • Whether it explores new demand or recycles existing buyers.

Training is about the direction you reinforce over months. If repeat customers hit your ROAS target easily and prospecting campaigns fluctuate, which one do you think the system will prioritize over time?

Here’s a pattern I’ve seen more than once.

  • Month 1: Non-brand drives 52% of revenue.
  • Month 6: Non-brand drives 36%.

ROAS improves, and everyone’s happy. Except new customer growth flattens. The system has simply learned that predictable revenue is more important than incremental revenue. That’s training.

How you might be training Google Ads wrong

These mistakes are subtle and are often framed as good management. That’s what makes them dangerous.

Mistake 1: Training on the easiest revenue

Branded search converts well, returning customers convert well, and promo periods convert very well — so we lean in. We scale budgets behind what works and protect it.

Over time, Google learns that predictable revenue is the safest path to success.

Here’s a simplified example:

  • Month 1: 33% branded cost, $5.44 account ROAS
  • Month 2: 35% branded cost, $5.03 account ROAS
  • Month 3: 40% branded cost, $6.10 account ROAS
  • Month 4: 38% branded cost, $6.69 account ROAS
  • Month 5: 42% branded cost, $7.06 account ROAS
  • Month 6: 46% branded cost, $7.39 account ROAS

ROAS improved during this period, but incremental demand declined due to the account’s conservative training. This is one of the most common ceilings we see.

Mistake 2: Punishing volatility

This one hits close to home for most teams. Short-term inefficiency is part of prospecting, but most advertisers respond to it immediately:

  • Tightening ROAS targets after one soft week.
  • Pulling budget during learning phases.
  • Pausing campaigns that explore new or expanded audiences.

From a human perspective, this feels responsible, but from a training perspective, it sends a clear message: exploration (uncertainty) is unacceptable.

The system adapts by prioritizing stability over expansion. It narrows the query mix. It leans harder into repeat purchasers. It becomes increasingly efficient, and increasingly stagnant. If everything in your account feels equally clean, you’re probably recycling demand.

Even if ROAS fluctuates, a prospecting or awareness campaign can still drive meaningful new customer lift if given time to mature.

The difference between plateaued accounts and growing accounts is rarely skill. It’s tolerance for controlled volatility.

Mistake 3: Pretending all purchases are equal

In most DTC setups, every purchase is treated equally, but a first-time, full-price buyer, a repeat customer, and a promo-driven order aren’t equal signals.

When every purchase sends the same signal, Google will favor the one that’s easiest to reproduce. That’s usually repeat behavior. Then we wonder why new customer acquisition gets harder.

For the client above, the implementation of lapsed customer targeting and valuation led to a 53% YoY increase in orders vs. a 12% YoY increase the three months prior.
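
The mechanics vary by platform (Google Ads exposes this through conversion value rules and new customer acquisition goals), but the underlying idea is simple enough to sketch. The weights below are purely illustrative, not recommendations:

```python
# Illustrative only: scale the conversion value you report by customer type, so a
# first-time, full-price buyer teaches the system more than a promo-driven repeat order.
VALUE_WEIGHTS = {
    "new_full_price": 1.3,
    "new_promo": 1.0,
    "returning_full_price": 0.7,
    "returning_promo": 0.4,
}

def adjusted_conversion_value(order_value: float, customer_type: str) -> float:
    """Weight reported conversion value by how much this order type matters to growth."""
    return round(order_value * VALUE_WEIGHTS[customer_type], 2)

print(adjusted_conversion_value(100.0, "new_full_price"))   # 130.0
print(adjusted_conversion_value(100.0, "returning_promo"))  # 40.0
```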

What intentional training actually looks like

This is where many teams get uncomfortable, because it requires letting go of short-term ROAS obsession in favor of aligning Google Ads with the actual business model.

If a client’s business depends on new customer growth, but you’re optimizing purely to blended ROAS, you’ve misaligned the system from the start. If mis-training is cumulative, so is intentional training. Here’s what that looks like in practice:

Maintain efficiency lanes

Efficiency lanes exist to protect baseline revenue. They’re tightly managed. They often include brand campaigns and high-intent non-brand terms with predictable performance.

These campaigns can carry stricter ROAS or CPA targets. They stabilize cash flow. They help CEOs sleep at night. They are not your growth engine.

Build growth lanes

Growth lanes are structured differently. They often include broader match types, category expansion, new audience layering, or creative angles that introduce new use cases. They have looser yet realistic targets.

If your efficiency campaigns run at a 500% ROAS target, your growth campaigns might operate at 350%, with the explicit understanding that they exist to expand demand and acquire new customers.

Here’s the key: you don’t tighten the growth lane every time it fluctuates. You let it learn.

In one DTC account, separating these lanes and holding growth campaigns to a slightly lower ROAS threshold led to a 43% lift in YoY new customers in Q4, while blended ROAS actually improved 10%.

You can see the spend and order relationship below, where increased investment in new customers drove measurable change and the reduced spend on returning customers didn’t harm the bottom line.

This controlled asymmetry is how you scale smarter.

Change signals slowly

If you adjust ROAS targets every two weeks, you’re resetting the system constantly.

Targets shouldn’t be adjusted weekly in response to noise. Campaigns shouldn’t pause during early learning unless structurally broken. Creative testing should be protected long enough to produce a clear signal.

Give it time and let data compound. In one account, simply holding ROAS targets steady for 60 days — instead of tightening them after minor dips — resulted in broader query expansion and improved non-brand impression share without increasing spend.

The performance didn’t spike overnight. It grew gradually — that’s training working.

What it means to manage a trained system

If any of the mistakes feel familiar, ask yourself:

  • Do we tighten targets faster than we loosen them?
  • Has our revenue mix shifted toward brand and repeat customers over time?
  • Do we pause exploratory campaigns within the first 2–3 weeks?
  • Have our core conversion definitions changed multiple times in the last 60 days?
  • Is query expansion flat despite budget headroom?

If the answer is often “yes,” the system isn’t failing you. It’s doing exactly what you trained it to do.

That’s the shift. Paid search used to be about making better decisions than the auction in real time. Now it’s about designing the environment the auction learns from. That’s a different job.

Automation doesn’t reward whoever moves fastest. It reflects what you’ve been teaching it.

Once you see the account as something you’re training, the question changes. It’s no longer “Why isn’t this working?” It’s “What have we been rewarding?”

Before yesterday — Search Engine Land

March 2026 Google core update more volatile than December — here’s what changed

15 April 2026 at 21:48
Google core update-volatility

The March 2026 Google core update drove far higher ranking volatility than the December 2025 core update. Nearly 80% of top-three results shifted, and almost one in four top-10 pages fell out of the top 100, according to SE Ranking data shared exclusively with Search Engine Land.

The data. Volatility increased across every ranking tier.

  • In the top 3, 79.5% of URLs changed positions, up from 66.8% in December. In the top 10, 90.7% shifted, compared to 83.1%.
  • Stability dropped sharply. Only 20.5% of top 3 URLs held their exact position, down from 33.1%. In the top 10, that fell to 9.3%, from 16.9%.
  • Churn intensified at the top. About 24.1% of pages ranking in the top 10 fell out of the top 100 entirely, versus 14.7% after the December update.

It’s (sort of) complicated. The March 2026 core update began rolling out a day after the March 2026 spam update completed. This complicated attribution, according to SE Ranking:

  • Based on historical patterns and the scale of movement, most volatility was likely driven by the core update, with the spam update amplifying disruption.
  • That overlap likely skews direct comparisons to December, though March still appeared more volatile.

More core update analysis. Meanwhile, independent analysis by Aleyda Solis, using Sistrix data from March 26 to April 11, found a consistent shift in where visibility concentrates. Rankings appeared to move from intermediary sites toward stronger destination sources. Website types gaining search visibility:

  • Official and institutional.
  • Specialist and niche.
  • Established brands.
  • Dominant platforms.

Losses were more common among aggregators, directories, and comparison-driven sites.

Winners and losers. Among the vertical shifts Solis highlighted:

  • Dictionary and language reference sites declined, while larger reference platforms and major destinations gained visibility.
  • Job aggregators like ZipRecruiter and Glassdoor lost ground, while employer sites and specialized platforms like USAJobs and Amazon.jobs surged.
  • Government and institutional domains, including Census.gov and BLS.gov, saw strong gains on fact-driven queries.
  • Travel and real estate visibility shifted away from broad discovery platforms toward stronger brands and primary destinations.
  • Health results were re-sorted. Broad consumer health sites declined, while clinical, research-driven, and specialist sources gained.
  • One exception: YouTube had the largest visibility loss in the dataset.

Why we care. The data suggests Google’s March 2026 core update raised the bar for ranking. Strong brands, owned data, and direct query value won. Intermediaries now look increasingly exposed.

SMX Now: The automation drift and how to correct course

15 April 2026 at 21:00

Automation doesn’t fail on its own — it does exactly what it’s trained to do. The problem is that when Google Ads is fed incomplete, misaligned, or overly broad signals, it can optimize toward the wrong outcome faster than most advertisers realize.

In our second installment of SMX Now, our new monthly series, Ameet Khabra of Hop Skip Media will break down a real account where a 417% jump in conversions turned out to be the wrong kind of success. She’ll use that case study to explain the four key ways automation drift enters an account: signal drift, query drift, inventory drift, and creative drift.

You’ll leave with a practical framework for diagnosing drift early, understanding where human oversight matters most, and managing automation more deliberately so it works toward real business goals — not just platform-reported wins.

Join us May 6 at noon ET.

Save your spot

Google adds campaign-level filtering to bulk ad review appeals

15 April 2026 at 19:41
Google Ads may be over-crediting your conversions- A 7-day test tells a different story

Google is giving advertisers more control when appealing disapproved ads in bulk — a small but meaningful update that could save time and reduce accidental resubmissions.

Driving the news. Google has added a new option in its bulk ad review workflow that lets advertisers select ads from specific campaigns when requesting a policy re-review.

Previously, advertisers appealing disapproved ads in bulk often had to resubmit all eligible ads across an account — including older campaigns that hadn’t been updated.

That created extra work and could clutter the review process with ads that weren’t actually fixed.

What’s new. Advertisers can now click a new “Select eligible campaigns” option on the Google Ads policy violations page when filing a bulk appeal.

That means they can:

  • send only recently fixed ads for review,
  • avoid including outdated campaigns,
  • and streamline the appeal process.

Why we care. Bulk appeals are often used after widespread disapprovals or policy issues. Being able to narrow submissions by campaign should make the process faster, more precise, and easier to manage at scale.

For agencies and large accounts, the update could also reduce the risk of confusion when handling multiple policy fixes at once.

The bottom line. This isn’t a flashy product launch, but it’s the kind of workflow improvement advertisers have been asking for — giving teams more control and less friction when fixing disapproved ads.

First spotted. This update was first spotted by Hana Kobzová of PPC News Feed.

Your homepage matters again for SEO — here’s why

15 April 2026 at 19:00

In the early days of the web and my career, web architecture was simple: we built “filing cabinet” websites designed around a single, grand entryway. Visitors arrived at your homepage, a.k.a. the “front door,” and navigated through the site to find what they needed.

Then SEO came along and changed everything. Suddenly, every page became a possible entrance point, and people could be dropped in directly at the page most relevant to their current need.

But today, in this AI environment, it seems that things are changing again. As users turn to AI tools like Gemini, ChatGPT, and, increasingly, the assistants embedded in our mobile devices, search engines, and browsers to handle the research stage, they’re more likely to once again land on your homepage.

Your homepage is once again becoming the most important page for SEO, and we must revisit the time-proven lessons of information architecture to ensure it can capture and convert this traffic.

How SEO inverted web design

In the early 2000s, as search engines improved and became the primary source of website traffic, those of us working in the field had to learn and adapt quickly.

We had to take what we knew about information architecture and layer over SEO thinking, which meant the standard, linear route through a site from the homepage to a destination changed.

We now had users landing much closer to where we wanted them — typically on inner pages or blog posts — and then routing them back toward the relevant product or service we wanted to promote.

Homepages were still important, but they became less of a “must be everything to everybody” battleground and could focus more on brand and more general keywords. The money terms were often mapped to more relevant, easily rankable, high-converting long-tail blogs and product pages.

In short, we stopped worrying so much about the homepage, and our attention spread across the spidery maze of deeper pages and reverse-conversion paths. But the pendulum is swinging back.

The great AI reversal 

The informational long-tail traffic that sustained those deep-link landing pages is being swallowed by AI Overviews and LLMs like Gemini, Perplexity, and ChatGPT.

AI tools now handle the heavy lifting — research, comparison, and summarization are easier than ever. When users finally visit your site, they aren’t looking for more answers — they’re looking for you.

This shift is driving a resurgence in branded search, funneling users back to your homepage. The problem is, while these users may be warmed up by their research, we now know a lot less about them when they arrive.

If your information architecture isn’t ready to greet users on your homepage and funnel them where they need to be, you’ll alienate and lose these warm users and send them swiftly into the arms of your competitors.

Fortunately, there are lessons from the past that can guide us forward. 

The problem: The erosion of the deep link

In traditional SEO thinking, nearly every page could be a landing page.

  • Your informational content is an upper-funnel landing page that can direct people to your product or service pages.
  • Your product or service pages are mid-funnel landing pages that can drive leads and sales.
  • Your case studies and testimonials are lower-funnel credibility content that can push people to make the final decision.

That approach is losing ground. Industry consensus is clear. Traditional informational click-through rates (CTR) are facing a significant decline as AI provides immediate answers in search results.

When a user asks, “What are the benefits of a headless CMS?” they get a 300-word summary from an AI. They no longer need to click your “Headless CMS – Pros & Cons” blog post.

However, once the AI has convinced them that your brand is a leader in headless CMS, they don’t search for the topic again. They search for your brand name. They arrive at your homepage — warmed up and ready, highly motivated, but we know very little about them. We lose the segmentation and context that a deeper page landing provides.

The psychology of AI: the path of least resistance

Humans are a lazy bunch, somewhat by design. If something makes our lives easier, we seek it out, and our behavior changes. This helped us as hunter-gatherers, but now, with our cars, smartphones, food delivery, and many other modern conveniences, maybe not so much.

Search engines are one of the things that made our lives easier, at least for a while, and changed our behavior as things got easier.

Then, of course, we marketers got involved, competition ramped up, and the web became littered with ads, pop-ups, remarketing, and other tactics. Frankly, seeking things online often became a bit of a drag, making much marketing as much a game of attrition as it was science, skill, or art.

But AI is now making our lives easy again. No scrolling past ads, decoding SERPs, avoiding pop-ups, identifying marketing content, or filtering out noise — just clean, simple answers. The change has brought some chaos, but it’s also a much-needed reset for the web.

People now enjoy a frictionless, conversational research phase, with the heavy lifting done by AI tools. Questions are answered, advice is given, options are summarized and compared. They can then move on via a branded search, which typically brings them to this homepage entry point.

As Steve Krug famously argued in “Don’t Make Me Think” — a highly recommended book that has stood the test of time — users on the web behave like foragers. They look for the scent of information and take the path of least resistance. If they land on your homepage and can’t find their specific path, such as “pricing for enterprise” or “developer docs,” within seconds, they’ll disengage and bounce.

Things are different. Users may invest a little more time now after they’ve sunk effort into the research phase, but you can’t expect to take users from the low-friction environment of AI to a site where they have to work too hard to figure things out.

Your homepage and overall information architecture can’t fail. You must let people know they’re in the right place, that they can trust you, then segment, signpost, and steer them to their intended destination.

Solution: The filing cabinet site

To handle this influx of branded, front-door traffic, we must return to the fundamentals of information architecture.

Drawing from the definitive guide, “Information Architecture: For the Web and Beyond” (the Polar Bear Book — another great read), we must treat our site structure like a filing cabinet.

  • Logical grouping: Related content must be grouped into clear, intuitive categories. If your “Service A” and “Service B” are buried under a vague “What We Do” menu, you’re creating friction. Keep it clear, and don’t confuse people with your fancy branding.
  • Structural context: SEO may drive fewer people to your deeper pages, but AI tools still conduct queries to identify information and pull content from your site via RAG. You still need the right content structured in the right way to ensure you’re covering all the angles across SEO, AI, and PPC traffic.
  • The 3-click rule: Modern UX research, championed by the Nielsen Norman Group (NN/g), emphasizes that users should be able to reach any content within three clicks. In the AI age, this is a non-negotiable performance metric, and you should be measuring these paths in your analytics (a minimal crawl-depth check is sketched below).
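
One way to check that metric is a small internal crawl that records how many clicks each page sits from the homepage. A rough sketch, assuming the requests library and a hypothetical start URL (a real audit would respect robots.txt and compare the results against your full URL inventory):

```python
import re
from collections import deque
from urllib.parse import urljoin, urlparse

import requests  # pip install requests

START = "https://www.example.com/"  # hypothetical homepage
MAX_DEPTH = 3
HREF_RE = re.compile(r'href="([^"#]+)"')

depth_of = {START: 0}
queue = deque([START])
domain = urlparse(START).netloc

while queue:
    url = queue.popleft()
    if depth_of[url] >= MAX_DEPTH:
        continue
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        continue
    for link in HREF_RE.findall(html):
        full = urljoin(url, link)
        if urlparse(full).netloc == domain and full not in depth_of:
            depth_of[full] = depth_of[url] + 1
            queue.append(full)

print(f"Found {len(depth_of)} internal pages within {MAX_DEPTH} clicks of the homepage.")
```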

Remember, while users may come directly to your homepage, AI agents still conduct these deeper searches and consume your information, so traditional SEO is still important.

Implementation: The ALCHEMY framework

This is all great to know, but you also need a framework to help you put this process on rails and build a website that’s structured for humans coming via the front door, search engines indexing and categorizing, and AI crawlers hitting those deeper pages. 

The ALCHEMY website planning guide addresses this exact issue. It breaks the process down into seven strategic steps designed to bridge the gap between business strategy and technical execution:

  • Audience research: Identifying personas, segments, and jobs.
  • Learning: Deep-dive competitor and performance audits to see what’s working.
  • Clarify aim: Setting SMART goals so the site has a purpose beyond looking pretty.
  • Hierarchy: Building the visual sitemap and navigation.
  • Essential features: Defining the technical must-haves before code is written.
  • Mapping: Planning the content and goals for every single page.
  • Yield: Generating the final, battle-hardened, marketing-savvy brief for developers.

The process purposely starts with the audience — who are the audience segments that matter? And how does this inform the structure and navigation for the site? 

The process then walks you through mapping out your site to work for users, search engines, and AI.

By following this approach, you ensure that your homepage and category pages aren’t just based on the opinion of the highest-paid person in the room, but on the documented needs of your AI-driven audience.

From AI recommendation to homepage conversion

Your website’s information architecture now serves two masters — human users and AI agents. A clean, hierarchical structure with clear taxonomies helps both navigate and interpret your site with confidence. 

If an AI reads your site and sees a perfectly organized filing cabinet, it’s far more likely to recommend your brand as a structured, authoritative source. Your site needs to consider two directions of user journey:

  • Front door: Users arriving without context, finding what they’re looking for.
  • Back door(s): Users, search engines, and AI coming in directly to deeper content.

For a website to be successful in 2026 and beyond, you have to account for both. Build strong information architecture and signposting for front-door users and powerful SEO for back-door search engines and AI visits.

Don’t let your homepage be a dead end — turn it into a map.

Agentic engine optimization: Google AI director outlines new content playbook

15 April 2026 at 18:28
Agentic engine optimization

Addy Osmani, Google Cloud AI’s director of engineering, published an article April 11 on Agentic Engine Optimization (AEO). In it, Osmani said sites should restructure content for AI agents that fetch, parse, and act on pages differently than humans do.

He compared this AEO (not to be confused with Answer Engine Optimization) to SEO, but for a different consumer.

What is AEO. He defined it as the practice of structuring and serving technical content so AI agents can use it, not just render it. That includes discoverability, parsability, token efficiency, capability signaling, and access control.

The token problem. Osmani said long, bloated pages can be truncated, skipped, or chunked poorly by agents working within limited context windows, raising the odds of incomplete answers or hallucinated implementations.

How content needs to change. Token count is now a core optimization factor, Osmani said, adding his advice (a quick way to measure token counts is sketched after this list):

  • Keep quick starts under roughly 15,000 tokens, conceptual guides under 20,000, and individual API references under 25,000 when possible.
  • Pages should front-load the answer within the first 500 tokens because agents have “limited patience for preamble.”
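
For measuring a page against budgets like these, a tokenizer library is enough. A minimal sketch, assuming the tiktoken library and its cl100k_base encoding (Osmani’s thresholds aren’t tied to a specific tokenizer, so treat the counts as approximate):

```python
import tiktoken  # pip install tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # counts vary by tokenizer

def token_count(text: str) -> int:
    return len(encoding.encode(text))

page_text = open("quickstart.md", encoding="utf-8").read()
count = token_count(page_text)
status = "over the ~15,000-token quick-start budget" if count > 15_000 else "within budget"
print(f"{count} tokens ({status})")
```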

Markdown over HTML. Osmani also pushed for serving clean markdown, exposing token counts, creating llms.txt as a discovery layer, and using skill.md or AGENTS.md files to help agents understand capabilities, constraints, and key docs before spending context budget on full pages.

  • He released an open-source audit tool, agentic-seo, to check for some of those signals.
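
For reference, the llms.txt file Osmani points to is a plain markdown file served at the site root. A minimal example might look like the sketch below; the structure follows the public llms.txt proposal, but the sections and links are made up:

```
# Example Product Docs

> One-paragraph summary of what the product does and what these docs cover.

## Docs

- [Quick start](https://example.com/docs/quickstart.md): Install and make a first request
- [API reference](https://example.com/docs/api.md): Endpoints, parameters, and error codes

## Optional

- [Changelog](https://example.com/docs/changelog.md): Release history
```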

Why we care. Osmani’s recommendations align with what many SEOs are already testing for AI retrieval: shorter, cleaner pages, clearer semantic signals, machine-readable formats, and content that gets to the point fast. These all affect whether your content appears in AI-driven responses.

Between the lines. To be clear, the type of AEO Osmani discussed in his article is unrelated to Google Search or organic search rankings. What his article highlights is that content may now need to work for two audiences at once: humans reading pages and agents extracting them.

The article. Agentic Engine Optimization (AEO)


The PACT framework for PPC: How to move beyond ‘it depends’

15 April 2026 at 18:00
PACT framework PPC

There’s a phrase PPC experts reach for whenever they get a tough question. At conferences, online, and on client calls. Two words, a smug smile, and absolutely zero useful information: “It depends.”

This has been bugging me for as long as I can remember. Turns out it’s not just a PPC thing, either. Aleyda Solis gave an excellent presentation calling out the exact same pattern in SEO. So we’re dealing with an industry-wide epidemic here. Two disciplines, same cop-out.

Not every question is equally hard to answer. 

  • “What’s the maximum number of RSAs per ad group?” Just look it up.
  • “Why did my CPA spike last week?” That takes data plus interpretation. 
  • “What will my ROAS look like if I increase budget by 30%?” Now you need context, too. 
  • “What bid strategy should I use?” That requires data, interpretation, context, and an understanding of someone’s priorities.

It makes sense that “It depends” clusters around the hardest questions. More variables, more context needed, more ways to be wrong. I get it. But since when is “This is hard” a reason to give up on being useful?

So I built a framework for giving useful answers instead. I call it PACT, which stands for Process, Anchors, Conditions, and Trade-offs.

The PACT framework assumes a broader audience context where you don’t have the asker’s data in front of you. If you do, great — crunching the numbers and statistical models become additional answer options.

Not all questions are created equal

If we borrow from the world of analytics, questions come in four flavors, each progressively harder to answer.

Descriptive questions: Asking what happened or how something works

“What’s my impression share?” or “How does broad match work?” 

These are answered with data and facts. You know them or look them up. Nobody says “It depends” here because nobody needs to. I’ll ignore this category for the rest of this article.

Diagnostic questions: Asking why something happened

“Why did my conversion rate drop?” 

These need data plus your interpretation of that data. “It depends” already starts creeping in here because something clearly changed, and pinpointing the cause is rarely straightforward. 

Predictive questions: Asking what will happen or what good looks like

“What if I decrease my target ROAS by 30%?” or “What’s a good CTR for my industry?” 

These are harder. You need interpretation, but you also need context about the specific business and market. This is where “It depends” starts to feel earned.

Prescriptive questions: Asking ‘What should I do?’ or ‘What’s the best solution?’ 

“What bid strategy should I use?” or “Should I consolidate my campaigns?” 

These need everything: data, interpretation, context, and an understanding of someone’s priorities. If “It depends” has a permanent home, it’s here.

The PACT framework

There are many useful answers you could offer your audience instead of “It depends,” such as explaining how it depends, outlining the trade-offs, or sharing benchmarks and flowcharts. 

I tried to categorize the answers into four concrete response types. (Whether the category names were chosen for clarity or reverse-engineered from a four-letter word is between me and my thesaurus.)

The diagram below shows which response types fit which question types. (There’s overlap, and that’s fine.)

The PACT framework for replacing “it depends”

Process: Give a structured path

For many diagnostic questions and for some prescriptive questions, a process is the best answer. Show your audience which steps to take, in which order, to reach their answer (and, increasingly, steps you can hand to an AI agent with a skill). 

If you work at an agency, you need good processes anyway. As David Rodnitzky would say:

  • “An agency without process is just a bunch of people running around doing things.”

Suggested formats

Flow charts: The first time I fell in love with a flow chart was in 2012, when the Rimm-Kaufman Group (now Merkle) shared a performance troubleshooting flowchart in their Dossier 3.2. It’s an excellent example of a helpful answer to the question, “Why did my CPA increase (or ROAS decrease)?” 

The Rimm-Kaufman performance troubleshooting flowchart

Decision trees: Prescriptive “Should I?” questions can also be helped with a decision tree. They can be simple, funny-but-true ones like this one from Tom Orbach:

Tom Orbach’s founder marketing decision tree

Or more professional ones, like Aleyda Solis’ SEO Flowcharts for SEO Decision Making.

Aleyda Solis’ flowchart: “Should I index this product URL?”

Anchors: Ground it with data and examples

Anchors are the “quick and easy” evidence-based answers that are still better than “It depends.” 

Suggested formats

Benchmarks: Everybody loves a good benchmark. If you have enough data from comparable businesses, you can use it to answer “What does good look like?” questions. 

When someone asks, “What’s the average ecommerce conversion rate?” don’t say “It depends.” Say: 

  • “For health and beauty, it’s 3.3%. For electronics, it’s 1.9%.” The more specific the benchmark, the better.

Usual suspects: Think of the usual suspects as a “light version” of a process for diagnostic questions using the 80/20 Pareto principle: 80% of outcomes result from 20% of causes. 

Instead of a 25-step flowchart, you can share a ranked list of the most likely causes ordered by frequency. Basically saying:

  • “Check these five things first, because 80% of the time it’s one of them.”

Case study: When someone asks, “What will happen if I do X?”, telling them what actually happened when a similar account did X is worth more than any theoretical answer. 

  • “We consolidated 12 campaigns into four for an ecommerce account spending $50,000/month. CPA improved 20% after the learning period, but we lost visibility into product category performance.” 

The key is specificity: industry, budget range, what changed, and the trade-off. Vague case studies (“We saw great results”) are just “It depends” wearing a suit.

Conditions: Name the hidden variables

This is the most direct replacement for “It depends,” as you’ll say, “It depends on these specific things” instead.

Suggested formats

Checklist: For diagnostic questions, this could be a segmentation drill-down. Slice the data by device, geo, time of day, campaign, match type, audience, etc., until the anomaly isolates to one segment. This expands “Why did it happen?” to “Where did it happen?” which can be just as useful.
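
In practice, that drill-down is just a series of group-bys. Here’s a minimal sketch assuming pandas and a hypothetical export with date, device, geo, campaign, clicks, and conversions columns:

```python
import pandas as pd

df = pd.read_csv("performance_export.csv", parse_dates=["date"])

# Split the data into a recent window and a baseline to compare against.
cutoff = df["date"].max() - pd.Timedelta(days=7)
recent, baseline = df[df["date"] > cutoff], df[df["date"] <= cutoff]

def conv_rate(frame: pd.DataFrame, dim: str) -> pd.Series:
    g = frame.groupby(dim)[["conversions", "clicks"]].sum()
    return g["conversions"] / g["clicks"]

# The segment with the biggest drop turns "Why did it happen?" into "Where did it happen?"
for dim in ["device", "geo", "campaign"]:
    delta = (conv_rate(recent, dim) - conv_rate(baseline, dim)).sort_values()
    print(f"\nBiggest conversion rate drops by {dim}:\n{delta.head(3)}")
```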

If [x] then [y]: For example, “What will happen if I double my budget?” Then you follow up with questions like: 

  • “What’s your current impression share?” 
  • “Are you budget-constrained or bid-constrained?” 
  • “How steep is the diminishing returns curve in your auction?” 

If you’re at 60% impression share and purely budget-limited, doubling your budget could get you close to 80% more conversions. If you’re already at 95% impression share, that extra budget is going to buy you mostly junk.

Reversibility test: For a quick filter on prescriptive “Should I?” questions, use one condition: reversibility. Categorize decisions by how easy they are to undo. Low-stakes reversible decisions (e.g., testing a new ad copy) get a “Just try it” answer.

High-stakes irreversible decisions (such as restructuring your entire account) get the full trade-off analysis (and move to the next category). This helps your audience judge how much thought a decision actually deserves. 

Jeff Bezos famously calls these irreversible Type 1 (one-way door) and reversible Type 2 (two-way door) decisions. He also warns us not to treat Type 2 decisions as Type 1 decisions.

Trade-offs: Surface the choices

Some questions don’t have a right answer. Instead, they involve choosing between competing priorities. 

When someone asks “What’s the best approach?”, they often don’t realize they’re asking “Which trade-off am I most comfortable with?” The fix is to make the trade-offs visible.

Suggested formats

Trade-off explanation: Replace “What’s the right answer?” with “Here’s what each option gains and sacrifices.”

For example, “Should I consolidate my campaigns into fewer, bigger ones?” Instead of “It depends on your goals,” surface the actual trade-off: 

  • “Consolidation gives you more data per campaign, which helps Smart Bidding learn faster. But it reduces your control over budget allocation and makes it harder to optimize for different segments.” 
  • “So the real question is: Do you value algorithmic learning speed more than granular control right now? That depends on whether your current structure is data-starved or if you’re already getting strong results and just want more precision.”

Now the person isn’t stuck. They have a choice to make, and they understand what’s at stake on both sides.

Calculators: If the calculator presents the trade-off as an input field, it can yield a useful answer. One of my all-time favorites is the Build vs. Buy calculator from Baremetrics, which helps you decide whether to buy a tool or build it internally.

Closer to the daily life of a PPC practitioner, we created two free calculators to determine your target CPA or target ROAS. When you enter “% of margin willing to invest in acquisition,” you’re resolving the subjective part of the trade-off yourself. The calculator just runs the math on your decision.
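
The exact formulas behind those calculators aren’t spelled out here, but one plausible version of the math, with illustrative inputs, looks like this:

```python
def targets(avg_order_value: float, gross_margin: float, margin_share_for_acquisition: float):
    """Turn 'what share of margin am I willing to invest?' into bid targets.

    gross_margin and margin_share_for_acquisition are fractions (0.40 = 40%).
    """
    target_cpa = avg_order_value * gross_margin * margin_share_for_acquisition
    target_roas = avg_order_value / target_cpa  # equivalently 1 / (margin * share)
    return round(target_cpa, 2), round(target_roas, 2)

# $100 AOV, 40% gross margin, willing to invest half of that margin in acquisition:
print(targets(100, 0.40, 0.50))  # (20.0, 5.0) -> $20 target CPA, 500% target ROAS
```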

The ‘it depends’ cheat sheet

Next time your gut says, “It depends,” check which type of question you’re dealing with and pick the format that fits.

Wijnand Meijer’s PACT formats cheat sheet

I’m not naive enough to think we’ll eradicate “It depends” overnight. But I do think we can hold ourselves to a higher standard. If you’re speaking at a conference, writing a blog post, or answering a client question, try replacing your next “It depends” with one of these four response types.

And if you find a question that genuinely can’t be answered with a process, anchor, condition, or trade-off, I’d love to hear it. I haven’t found one yet. But I’m probably not done looking.

Google to retire Dynamic Search Ads in favor of AI Max

15 April 2026 at 17:00
Google Ads logo on laptop screen.

Google is retiring legacy Search automation tools, including Dynamic Search Ads (DSA), in favor of AI Max, its broader AI-powered campaign suite. This will affect you if you use DSA, automatically created assets (ACA), or campaign-level broad match settings.

Driving the news. AI Max for Search campaigns is exiting beta after adoption by “hundreds of thousands” of advertisers globally, Google said.

  • Starting in September, eligible campaigns using DSA, ACA, or campaign-level broad match will be automatically migrated to AI Max.
  • Google will stop allowing advertisers to create new DSA campaigns through Google Ads, Ads Editor, and the Ads API once automatic upgrades begin.
  • The company expects all eligible migrations to be completed by the end of September.

Why we care. These tools are being phased out, whether you act or not. Moving early to AI Max gives you more control over targeting, creative, and landing page settings before automatic upgrades begin. It also offers potential performance gains, with Google reporting an average 7% lift in conversions or conversion value at similar efficiency.

What Google says. AI Max delivers “an average of 7% more conversions or conversion value at a similar CPA/ROAS for non-retail” when you use its full feature set — including search term matching, text customization, and final URL expansion — compared with search term matching alone.

Catch up quick. DSA has long helped advertisers capture additional traffic beyond keyword-based campaigns by dynamically generating headlines and directing users to relevant landing pages.

  • But Google says consumer search behavior is becoming more complex and less predictable.
  • AI Max is designed to go beyond website landing page signals by using broader real-time intent data.

How AI Max works:

  • Uses advertiser inputs, such as website content and existing ads.
  • Expands reach to additional relevant search queries.
  • Dynamically customizes ad copy and landing page destinations.
  • Adds more controls for advertisers, including brand, location, and text guidance settings.

What you should do now. Google is urging advertisers to upgrade before September to keep more control over setup and avoid disruption.

Phase 1: Voluntary upgrades (starting now)

  • DSA users: Google is rolling out upgrade tools this week to help move campaign history, settings, and data into standard ad groups.
  • ACA and broad match users: Advertisers will see in-platform prompts to switch to AI Max.

Phase 2: Automatic upgrades (starting September)

For advertisers who don’t switch manually:

  • DSA campaigns will convert dynamic ad groups into standard ad groups, with legacy settings and URL controls preserved.
  • ACA campaigns will move to AI Max with search term matching and text customization turned on by default.
  • Broad match setting campaigns will move with search term matching enabled by default.

What Google is saying. I asked Google whether this update reduces the role of manual keyword strategy and feed-based search structures. A Google spokesperson responded that keywords remain essential and that the update is meant to help with keyword management:

  • ‘Keywords remain an essential component of a successful campaign strategy, providing the “fuel” for our AI and for the intent signals necessary to drive performance.’ 
  • ‘Rather than reducing their role, this upgrade is designed to help advertisers simplify management and expand beyond keywords while remaining in control.’

Bottom line. Google is making AI Max the default path for Search automation, signaling a broader shift away from manual campaign management toward AI-led optimization. If you migrate early, you’ll have more time to test settings and fine-tune performance before the forced switch.

Google spam reports can trigger manual actions, may be shared with site owners

15 April 2026 at 16:19

Google may now use your search spam reports for manual actions, and the text in those reports may be sent “verbatim” to the site owner you report.

What Google said. Google wrote it has “Clarified that Google may use spam report submissions to take manual action against violations.”

The new text says:

“Ranking manipulation techniques that attempt to compromise the quality of Google’s search results violate our spam policies and can negatively impact a site’s ranking. Google may use your report to take manual action against violations. If we issue a manual action, we send whatever you write in the submission report verbatim to the site owner to help them understand the context of the manual action. We don’t include any other identifying information when we notify the site owner; as long as you avoid including personal information in the open text field, the report remains anonymous.”

Spam reports used for manual actions. Google framed this as a clarification — that it may use spam reports for manual actions. However, it seems to contradict Google’s earlier statements that it doesn’t use spam reports for manual actions. This feels like more than a clarification to me.

Your spam report text sent along. Google also said it may send the text you include in a spam report directly to the site owner. Google wrote:

  • “Send whatever you write in the submission report verbatim to the site owner to help them understand the context of the manual action. We don’t include any other identifying information when we notify the site owner; as long as you avoid including personal information in the open text field, the report remains anonymous.”

Google also warned that you should avoid including personal information or anything you don’t want the site owner to see.

Why we care. This appears to be a significant change from how Google previously handled spam reports. If you submit them, be aware of these changes and adjust your reports accordingly going forward.

The new PPC playbook: From media buyer to profit engineer

15 April 2026 at 16:00
New PPC playbook

Roll the clock back five, 10, or 15 years, and a PPC practitioner’s value was directly tied to tactical proficiency. Not anymore.

Today, Google and Microsoft automate much of the tactical work. Machine learning and AI manage bids, test creatives, and find audiences faster and more efficiently than any human could.

Unfortunately, this reality has left many veteran practitioners in a mid-career identity crisis. If algorithms pull the levers, what exactly are we getting paid to do? Where is our sustainable value to the business?

Here’s what that evolution looks like in practice and how the hard skills in your playbook have changed.

PPC shifted from tactical execution to designing systems

I’ve been in the paid search trenches for 24 years — long enough to witness the wild west of early Overture, the rise of Google AdWords, the shift to mobile, and now, the total “algorizing” of the ad platforms.

It used to be that if you could diligently research thousands of new keywords, methodically change bids, split-test ad copy until your eyes bled, and sculpt the perfect exact-match account structure, you were a lean, mean PPC advertising machine.

If your toolbox is still mostly tactical execution, you’re positioning yourself as a backroom lever-puller, and your days in this industry are numbered. Today’s most valuable practitioners aren’t media buyers. They’ve made the leap to become true engineers of revenue and profit.

An engineer doesn’t blindly pull levers. They design systems. Our sustainable value is in programming the coordinates and telling the machine where to go. If you want to be a revenue and profit engineer, you must:

  • Be an expert at data analysis and signaling.
  • Possess deep business acumen to understand how the company or your client makes money.
  • Cultivate your executive presence to explain your strategy confidently to the C-suite.

That intersection is your career golden ticket. The next four steps will help you achieve just that.

Dig deeper: 10 keys to a successful PPC career in the AI age

1. Map the account directly to the P&L

If you sit in an interview, client pitch, or meeting with your boss and say, “I’m going to reexamine your metrics,” you sound like every other media buyer. They’ll politely nod and move on.

But if you say, “I’m going to map your paid search program directly into your profit and loss statement so every dollar we spend is engineered for maximum margin,” you instantly become the most valuable person in the room. You’re no longer selling clicks. You’re selling an unfair business advantage.

Most PPC accounts are structured around a website’s navigation — a campaign for shoes, a campaign for shirts, etc. While not inherently wrong, this approach reflects limited thinking. Instead, build a more nuanced, precise account structure that aligns directly with what drives the P&L, moves inventory, or generates high-value leads.

How to execute this

While every business is unique, the process to get there follows a universal framework.

  • The margin interrogation: Sit down with your client or your finance team and work to learn the profit margins on their core offerings. You will often find that the product driving the most volume has the tightest margin, while an obscure, niche service has massive profitability.
  • The architecture shift: Restructure your campaigns by margin tier and business value, not just product category. You should have completely different target ROAS (tROAS) or target CPA (tCPA) goals based on what the business can afford to spend to acquire that specific customer type. 

If you treat a low-margin conversion the same as a high-margin conversion in your account architecture, you’re risking revenue and profit leak — no matter how pretty your in-platform metrics look.

Separate the engine room from the boardroom

Once mapped, you must segregate your metrics. 

  • In the “engine room” (your daily platform optimizations), you still look at click-through rates (CTR) and cost per click (CPC). They are vital leading indicators used to steer the ship. 
  • But in the “boardroom” (leadership reporting), you never lead with them. Your conversation is strictly about the engineered outcome: “We shifted budget into the high-margin tier and successfully protected our $150 CPA target, ensuring our overall profitability remained stable.”

Dig deeper: Why PPC teams are becoming data teams

2. Master the art and science of signal engineering

This is the most critical hard skill for the modern paid search profit engineer. Algorithms are hungry, but they inherently lack intelligence and the ability to reason. They only know what you tell them. 

In our brave new world of automated bidding, properly “feeding the machine” is what separates the experts from the obsolete. If you only feed Google Ads data about who filled out a form, the machine will go find you more people who like to fill out forms — even if those people are terrible leads who never actually convert.

A massive part of your job today is understanding and analyzing first-party backend data and strategically feeding it back to the machine to get the best results. You’re no longer optimizing the bid. You’re optimizing the signal.

How to execute this

You have to move past basic pixel tracking. You must implement robust offline conversion tracking (OCT) or direct CRM integrations (like HubSpot or Salesforce into Google Ads). 

If you’re managing larger, more complex programs, leveraging enterprise tools like Search Ads 360 (SA360) or similar platforms is a massive advantage for signal engineering. These tools allow you to seamlessly ingest, weight, and share these critical business signals across multiple search engines from a single centralized hub.

For lead generation 

Stop optimizing for a generic lead. Map your client’s sales stages directly into the ad platform. Assign specific monetary values to each stage based on historical close rates. 

For example, tell the algorithm a raw lead is worth $10, a marketing-qualified lead (MQL) is worth $50, and a closed/won deal is worth $500. Then switch your bidding strategy from Maximize Conversions to value-based bidding (Target ROAS). You’re programming the AI to pursue lead quality and pipeline revenue, not just form-fill volume.
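
A simple way to derive those stage values is to multiply average deal value by the historical close rate from each stage. The numbers below just reverse-engineer the example above and aren’t benchmarks:

```python
AVG_DEAL_VALUE = 500  # illustrative average closed/won deal value

# Historical probability that a record at each stage eventually closes.
CLOSE_RATES = {
    "raw_lead": 0.02,    # 2% of raw leads close   -> $10 signal value
    "mql": 0.10,         # 10% of MQLs close       -> $50 signal value
    "closed_won": 1.00,  # already won             -> $500 signal value
}

stage_values = {stage: round(AVG_DEAL_VALUE * rate, 2) for stage, rate in CLOSE_RATES.items()}
print(stage_values)  # {'raw_lead': 10.0, 'mql': 50.0, 'closed_won': 500.0}
```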

For ecommerce

Ecommerce is a distinct beast with its own complexities. Tracking top-line revenue to hit a basic ROAS target is table stakes. To truly engineer profit, you must manipulate signals around inventory, margins, and lifetime value:

  • Feed engineering: The modern ecommerce practitioner doesn’t just upload a product feed; they strategically engineer it. Use Custom Labels to segment products by business reality — such as inventory velocity (overstocked vs. low inventory) or historical return rates (see the sketch after this list). If a specific apparel item has a 40% return rate, pushing it heavily destroys backend profitability, even if the in-platform ROAS looks incredible.
  • Profit margin bidding: Don’t just track gross revenue. Use custom conversion variables (or cart data integration) to pass profit margin data back into the ad platform. When the algorithm understands the difference between a $100 sale with a 10% margin and a $100 sale with a 90% margin, it fundamentally changes how it bids in the auction.
  • New customer acquisition (NCA): Algorithms gravitate toward the path of least resistance, which often means taking credit for returning brand loyalists. You must integrate your first-party customer lists to differentiate a net-new buyer from a repeat buyer, allowing you to bid aggressively for market share on the former while protecting margins on the latter.
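
Here’s a minimal sketch of that kind of feed engineering, assuming pandas and a hypothetical product export with price, cost, and return-rate columns; the tier cutoffs and thresholds are arbitrary examples:

```python
import pandas as pd

feed = pd.read_csv("products.csv")  # hypothetical export: id, price, cost, return_rate, ...
feed["margin"] = (feed["price"] - feed["cost"]) / feed["price"]

# custom_label_0: margin tier, so campaigns can carry different tROAS targets per tier.
feed["custom_label_0"] = pd.cut(
    feed["margin"],
    bins=[0, 0.2, 0.4, 1.0],
    labels=["low_margin", "mid_margin", "high_margin"],
)

# custom_label_1: flag products whose return rates quietly erode backend profit.
feed["custom_label_1"] = (feed["return_rate"] > 0.30).map(
    {True: "high_returns", False: "normal_returns"}
)

feed.to_csv("products_with_labels.csv", index=False)
```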

Dig deeper: Why better signals drive paid search performance

3. Debug the post-click pipeline

Because ad platforms are largely automated, your biggest performance bottlenecks rarely sit inside ad accounts. Your revenue and profit leaks happen after the click. True profit engineers don’t just throw traffic over the fence and hope for the best; they take responsibility for the entire user journey.

If your campaigns drive highly qualified traffic but the backend system is suboptimal, the business still loses money. You have to debug the pipeline.

How to execute this

Make it a quarterly habit to mystery-shop your client’s business and tear down the post-click experience.

  • Stress-test the sales handoff (lead gen): Submit a test lead through the website. How long does it take the sales team to call you back? If it takes 48 hours, it doesn’t matter how finely tuned your value-based bidding is — the sales team is letting those expensive leads go cold. You need the data to show the CEO that the leak isn’t the traffic; it’s the speed-to-lead.
  • Audit the checkout flow (ecommerce): Go through the process of buying a product from your client’s site. Is checkout a clunky, five-step ordeal? Do unexpected shipping costs appear at the end? If your drop-off from add-to-cart to purchase is massive, your ROAS isn’t suffering from a bad keyword match type. It’s suffering from UX friction.
  • Listen to the tape: Ask the client or the call center for call recordings of leads generated specifically by paid search. Are the leads complaining about pricing? Are they confused about the specific service offered?

When you walk into a boardroom and say, “I listened to 15 sales calls this week, and your team is struggling to overcome pricing objections, so I’ve updated our ad copy to explicitly pre-qualify users on price,” you instantly elevate yourself from a disposable media buyer to an indispensable business partner.

Dig deeper: How to diagnose and fix the biggest blocker to PPC growth

4. Cultivate executive presence

You can be the most brilliant revenue engineer in the world, properly weighting every CRM signal into the algorithm, but if you can’t communicate that strategy like a true business partner, the rest doesn’t matter.

You’re in a never-ending battle of misconceptions about what PPC is and what the expectations are. I’ve lost count of how many times I’ve heard from clients or in-house bosses things like: “Why aren’t we in Position 1?” or “If we increase spend by X, then we’ll get Y more leads.” How you handle that battle dictates your career trajectory.

How to execute this

Executive presence means you don’t flinch when a CEO challenges your spend in a boardroom. You don’t get defensive, you don’t blame the algorithm, and you never dive into a nervous rant about impression share.

You calmly control the room by anchoring your response in the business’s goals: 

  • “We deliberately pulled back spend on the low-margin product line to fund the enterprise push you mentioned in last month’s all-hands meeting. Top-line lead volume is down by 10%, but because we engineered our data signals to target MQLs, our projected pipeline revenue is actually up 14%.”

Adopt the “So what?” reporting model. For every metric you present, ask yourself, “So what?” and answer it before they have to. Speak the language of the boardroom: pipeline velocity, profit margin, customer acquisition cost, and lifetime value.

Dig deeper: How to deliver PPC results to executives: Get out of the weeds

Sweating the small stuff (the right way)

Years ago, I wrote that you need to “sweat the small stuff” — meaning you need to know every detail of your account. That principle remains exactly the same today, but the definition of the small stuff has changed.

Today, sweating the small stuff doesn’t mean manually adjusting a bid by three cents. It means:

  • Obsessing over data hygiene.
  • Understanding exactly how your client’s CRM tags a lead so your signal engineering doesn’t break.
  • Having the guts to tell your boss bad news — like their backend sales process is broken, and no amount of algorithmic bidding will fix it until they do.

The machines have taken many repetitive tasks off our plates. Good riddance.

Today, you have the freedom — and the obligation — to step into the role of a revenue and profit engineer. Master your data signals, stop playing in the weeds, start engineering the P&L, and watch your career take off.

Dig deeper: What 10 years of PPC testing reveals about breaking best practices

The Reddit detour distorting PPC signals

15 April 2026 at 15:00

At $50+ CPCs, Reddit beats every vendor organically 67.3% of the time across 8,566 keywords. 

The study from Ross Simmonds and his team focused on B2B SaaS, but the underlying dynamics don’t stop there. The higher the advertising competition on a term, the more likely a Reddit thread sits above every brand in organic results. 

If you’re in legal, financial services, premium home services, or insurance, those CPCs aren’t unusual territory. This study is worth your attention.

The SEO community has been talking about this for a while, and the conversation has largely stayed in SEO territory: Reddit is eating organic search, so build your glossaries and invest in content strategy. These are great suggestions, but I’m not an SEO, so I can’t speak to them. 

What I keep thinking about isn’t mentioned in the study: What does this actually do to the signal layer your PPC campaigns depend on?

The problem starts before anyone clicks your ad

When a buyer searches a high-intent term and lands on a Reddit thread instead of your page, two things happen. 

  • The buyer gets peer opinions, real comparisons, and experiences from people who’ve already been where they are.
  • Google records a behavioral signal: someone searched this query, engaged with this result, and didn’t need to go further. 

That signal feeds back into Google’s understanding of what satisfies that query, and over time, it shapes how the algorithm models relevance on that term. 

Your page didn’t just lose a click. It contributed to a pattern of signal degradation on a term you’re actively paying to compete on, originating entirely outside your account, with no report that surfaces it.

This is what makes it an automation drift problem. The algorithm is updating its model based on the behavioral data it can see, while your account operates in the dark about where that data is coming from.


The problem continues after someone clicks

The buyer who spent three days on Reddit before clicking your ad arrives as a different person than someone who searched and converted in the same session. They’ve compared options, read real experiences, and already filtered out most of the noise. 

Smart Bidding has no idea any of that happened. It sees a $50 click and waits to see if a conversion fires within your attribution window. 

If you’re running a short window and the buyer spent several of those days in a research phase before coming back, you absorb 100% of the cost while the eventual conversions are still sitting in that detour, invisible to the system. 

The system interprets this as underperformance and starts pulling back on the exact terms producing your most qualified buyers, not because anything went wrong inside the account, but because the signal it was given told it to.

The automation is doing exactly what it was built to do. The signal just doesn’t reflect the full picture of what’s happening.

What UCaaS gets right that others don’t

Simmonds’ study covers four verticals. In three of them, Reddit beats every vendor simultaneously on more than half of shared keywords. 

In the unified communication and contact center as a service (UCaaS) category, the vendors win. RingCentral, Nextiva, and Dialpad consistently outrank Reddit on the same terms where every other vertical loses.

It’s not because of domain authority or budget. It’s that they built informational content at scale years ago — glossaries, category explainers, how-to-choose guides — and never stopped. Google had something real to point to on those terms beyond an ad, and the behavioral signals on those queries reflect that.

That’s a content investment conversation, and a worthwhile one. But the principle connects directly to the bidding side: the algorithm makes better decisions when the signals around a term are cleaner, and cleaner signals don’t happen by accident.

Dig deeper: A smarter Reddit strategy for organic and AI search visibility

Where the fix lives

On the bidding side, offline conversion tracking is the mechanism that closes the gap. 

When you import downstream outcomes back into the algorithm — which leads qualified, which closed, and what they were actually worth — you give Smart Bidding the context it needs to understand that a longer, more research-heavy path at a higher CPC can still be the right outcome.

Google’s own data shows a median 10% lift in conversions for advertisers using first-party data alongside click IDs for offline measurement. Without it, the system keeps optimizing toward the fastest path to a conversion, which is rarely the path your most informed buyers take.
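
As a rough illustration of what that import looks like in practice, here is a minimal Python sketch that turns a hypothetical CRM export into a click-ID keyed upload file. The lead data, stage-to-value mapping, and file name are invented for the example; verify the column headers and conversion time format against the import template in your own Google Ads account before uploading anything.

```python
import csv
from datetime import datetime

# Hypothetical CRM export: each lead carries the GCLID captured at form fill,
# its pipeline stage, and the deal value once it closes.
crm_leads = [
    {"gclid": "Cj0KCQiA_example1", "stage": "closed_won", "value": 4800.00,
     "closed_at": datetime(2026, 4, 10, 14, 30)},
    {"gclid": "Cj0KCQiA_example2", "stage": "mql", "value": 0.0,
     "closed_at": datetime(2026, 4, 12, 9, 15)},
]

# Map pipeline stages to the conversion actions and values you want Smart
# Bidding to optimize toward (illustrative numbers, not a recommendation).
STAGE_TO_ACTION = {
    "mql": ("Qualified Lead", 150.0),       # fixed proxy value for a qualified lead
    "closed_won": ("Closed Deal", None),    # None = use the real deal value
}

def build_offline_import(leads, path="offline_conversions.csv"):
    """Write a click-ID keyed upload file; check headers and time format
    against the current Google Ads import template before uploading."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Google Click ID", "Conversion Name", "Conversion Time",
                         "Conversion Value", "Conversion Currency"])
        for lead in leads:
            action = STAGE_TO_ACTION.get(lead["stage"])
            if not action:
                continue  # stages you don't want fed back to bidding
            name, fixed_value = action
            value = fixed_value if fixed_value is not None else lead["value"]
            writer.writerow([lead["gclid"], name,
                             lead["closed_at"].strftime("%Y-%m-%d %H:%M:%S"),
                             f"{value:.2f}", "USD"])

build_offline_import(crm_leads)
```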

On the organic side, getting more intentional about where your business shows up in the conversations your buyers are already having is worth considering.

That might mean investing in content that actually answers the questions Reddit threads are currently answering for you, or thinking about whether your business has a presence in the communities where your buyers are doing their research.

The UCaaS vendors didn’t beat Reddit by outspending everyone. They beat it by showing up consistently in the right places with the right content, long before anyone was ready to click an ad.

The terms where you’re spending the most are the same terms where Reddit is most likely sitting between your ad and your buyer, quietly shaping the signals your automation depends on.

That’s what automation drift looks like when it starts entirely outside the account.

Dig deeper: Stop chasing Reddit and Wikipedia: What actually drives AI recommendations

Meta is on track to overtake Google in global ad revenue for the first time

14 April 2026 at 21:48
Inside Meta’s AI-driven advertising system: How Andromeda and GEM work together

A major shift is underway in digital advertising: Meta Platforms is projected to generate more ad revenue than Google in 2026, signaling how marketers are increasingly favoring automated, performance-driven platforms.

Driving the news. According to Emarketer, Meta is expected to bring in $243.46 billion in global ad revenue this year, narrowly topping Google’s projected $239.54 billion.

  • Meta is forecast to capture 26.8% of global ad spend.
  • Google is projected to take 26.4%.
  • It would be the first time Google has lost the top spot in digital ad revenue.

Why we care. Meta’s growth suggests brands are getting more value from automated, performance-focused tools, which could influence how they split budgets between Meta and Google. It’s also a reminder that platform dynamics are changing fast, so media strategies need to stay flexible.

Catch up quick: Google has long dominated digital advertising through Search ads, Display ads across the web, and YouTube.

But its core ad business is growing more slowly than in previous years.

Meanwhile, Meta has benefited from AI-powered ad automation, stronger performance measurement tools, and continued scale across Facebook, Instagram, and WhatsApp.

Why Meta is winning now. Advertisers are increasingly prioritizing platforms that can deliver both reach and measurable return.

Meta’s advantage has been its ability to automate creative and targeting faster, optimize campaigns with less manual input, and make it easier for brands to prove ROI.

That’s especially appealing in a tighter economic environment where marketers are under pressure to do more with less.

Yes, but. Google is still enormous — and still growing.

Its search business remains one of the most profitable ad engines in the world, and YouTube continues to attract brand budgets. But the company faces mounting pressure from AI search disruption, antitrust scrutiny, and slowing growth in traditional search advertising.

The bottom line. Meta passing Google in ad revenue would mark more than a symbolic milestone — it reflects a broader power shift toward platforms that make advertising easier to automate, measure, and scale.

Google Ads advertisers report wave of unexplained ad disapprovals

14 April 2026 at 20:59
Google Local Services Ads vs. Search Ads: Which drives better local leads?

A growing number of advertisers say their Google Ads campaigns were suddenly hit with mass disapprovals tied to DNS and 500 server errors — even when their sites appeared to be working normally. The issue is raising fresh concerns about platform reliability and the risk of sudden performance disruptions.

Driving the news. PPC advertisers began flagging widespread problems this week across Google Ads accounts, with multiple agency leaders saying clients were affected at the same time.

  • Ryan Berry, managing director at Cornerhouse Media, said more than 1,500 ads were disapproved in a single account around 1:30 p.m. UTC.
  • Others said they received overnight emails warning that ads had been disapproved.

Why we care. Sudden mass disapprovals can instantly pause traffic, leads, and revenue — even if nothing is actually wrong with the advertiser’s website. If Google’s systems are incorrectly flagging DNS or server errors, brands could lose performance and spend valuable time troubleshooting an issue they didn’t cause. It also highlights the need for closer monitoring and faster escalation when platform glitches happen.

What advertisers are seeing:

  • DNS errors, even when internal IT teams found no website issue.
  • HTTP 500 errors, despite landing pages loading normally.
  • Repeated disapprovals across multiple accounts.

Google Ads trainer Charlotte Osborne said she saw two separate cases this week — one tied to a DNS error and another to a 500 error — with no issues found on the client side.

Google Advertising specialist Joshua Barr said he received “lots of emails overnight” about disapproved ads and has been dealing with similar problems for weeks.

Several paid search experts also said they were seeing the same issue across accounts.

What’s likely happening. Google’s ad review systems use automated crawlers to test landing pages. If the review crawler encounters temporary server issues, DNS lookup failures, redirect problems, or timeouts, ads can be automatically disapproved under the platform’s “destination not working” policy.

That means advertisers can be penalized even if:

  • their site is live for users,
  • the issue is temporary,
  • or the problem is on Google’s crawler side.

What to do now:

  • Check Google Ads policy manager for exact disapproval reasons.
  • Test landing pages from multiple locations and devices (a quick DNS and HTTP check is sketched after this list).
  • Review DNS uptime, redirects, and CDN/firewall settings.
  • Submit appeals for clearly incorrect disapprovals.
  • Document account-level impacts in case the issue proves platform-wide.
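
For a fast first pass on the DNS and HTTP items above, a short script like the following runs the same lookups a crawler would, from your own network. It’s a minimal sketch — the URLs are placeholders, and because Google’s review crawler may reach your site from different locations and trip firewall or CDN rules you won’t see locally, a clean result here doesn’t rule out a crawler-side block.

```python
import socket
import urllib.request
from urllib.parse import urlparse

# Placeholder landing page URLs pulled from the disapproved ads.
LANDING_PAGES = [
    "https://www.example.com/services/plumbing",
    "https://www.example.com/contact",
]

def check_landing_page(url, timeout=10):
    """Resolve DNS and fetch the page, recording what a crawler might see."""
    host = urlparse(url).hostname
    result = {"url": url}
    try:
        result["resolved_ip"] = socket.gethostbyname(host)  # DNS lookup
    except socket.gaierror as exc:
        result["dns_error"] = str(exc)
        return result
    try:
        req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            result["status"] = resp.status
            result["final_url"] = resp.geturl()  # surfaces unexpected redirects
    except Exception as exc:  # HTTP 5xx, timeouts, TLS errors, etc.
        result["http_error"] = str(exc)
    return result

for page in LANDING_PAGES:
    print(check_landing_page(page))
```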

The bottom line. For advertisers, this is a reminder that campaign performance can be derailed by platform glitches as much as by strategy — and when Google’s systems misfire, spend and leads can disappear fast.

First spotted. The errors were first flagged by Ryan Berry in the UK, and founder Anthony Higman also spotted issues in the US.

Advertisers are gearing up to hit Google with mass arbitration claims worth billions

14 April 2026 at 20:14
Google Search court

Google’s legal troubles over its search and ad tech businesses are entering a new phase — one that could expose the company to billions in payouts from advertisers seeking damages after U.S. courts found it illegally monopolized key digital ad markets.

Driving the news. A growing group of advertisers is preparing to file mass arbitration claims against Google, according to attorney Ashley Keller, who said the first filings are expected this week.

  • Keller says he has already signed up a “significant number” of advertisers.
  • He estimates potential claims tied to online search and display advertising could exceed $218 billion, based on economic analysis his firm commissioned.
  • Similar mass arbitration cases typically take 12 to 24 months to resolve.

Catch up quick. Courts in 2024 dealt Google major antitrust blows.

Why we care. This case could open a path to recover money advertisers believe they overpaid for search and display ads due to Google’s alleged monopoly power. Mass arbitration may give businesses more leverage than individual claims and could pressure Google into settlements.

It also signals growing legal scrutiny of the digital ad market, which could eventually lead to more competition and lower costs.

Why arbitration matters. Most advertisers can’t simply sue Google in court because their contracts require disputes to go through arbitration.

That usually favors large companies when claims are handled one by one. But mass arbitration — which bundles 25 or more similar claims — can shift leverage back toward claimants.

  • It increases pressure to settle.
  • It can lower legal costs for smaller businesses.
  • It allows companies with relatively modest individual claims to pursue damages collectively.

What’s new. This case could break new ground because most mass arbitrations to date have involved consumers or workers — not corporate plaintiffs.

A large-scale advertiser action against Google would be among the first major efforts to use the strategy for business-to-business claims.

What Google says. In a recent filing, Google said it faces private damages claims tied to global antitrust cases but cannot yet estimate potential losses.

The company said it believes it has “strong arguments” and plans to defend itself aggressively.

The bottom line. Google’s antitrust losses are no longer just a regulatory problem — they are becoming a direct financial threat, with advertisers now testing whether mass arbitration can turn monopoly rulings into real payouts.

Why topical authority isn’t enough for AI search

14 April 2026 at 19:00
Why topical authority isn’t enough for AI search

Topical authority is a key concept in SEO, but it doesn’t account for how search and AI systems choose between competing sources.

The missing layer isn’t in content or structure. It’s in the signals that determine selection once a topic is understood — the difference between being eligible and being chosen.

Topical authority explains content, not selection

Topical authority is foundational for SEO and now AEO and AAO. But the framework the industry calls topical authority is incomplete. It covers semantics, content, and structure, but that’s just one part of a three-row, nine-cell model that defines topical ownership.

Topical authority describes what you’ve built. Topical ownership describes whether the system picks you.

Search and AI systems don’t reward content for existing. They reward content for winning a selection process. At Recruitment (Gate 6 in the AI engine pipeline), the system selects candidate answers from everything it has indexed.

Topical ownership has three layers: coverage, architecture, and position.

Everything in this article builds on Koray Tuğberk GÜBÜR’s foundation. He has engineered a rigorous methodology for building content architecture that signals genuine expertise to search engines, and his case studies prove it produces measurable results.

He coined “topical map” as a standard SEO deliverable, engineered the semantic content network methodology, and brought mathematical rigor to what had been vague advice about writing comprehensively. 

His own formula (topical authority equals topical coverage plus historical data) already acknowledges the temporal dimension I’ll expand on below. He’s the authority on this subject. The expanded framework names the cells he already recognized and adds the one row he hasn’t yet formalized.

Topical ownership: The nine-cell matrix

Topical authority, fully defined, is a three-by-three matrix.

As with everything in this series, the “straight C” principle applies. To compete in any algorithmic selection process, you can’t afford a failing grade in any of the criteria that are being evaluated. 

Excellence in some dimensions doesn’t compensate for absence in others. The system requires a passing grade for each criterion. The three rows aren’t equally weighted above that floor, and position is the dominant row, as we’ll see.

Row 1: Coverage is the entry ticket, not the destination

Coverage in one sentence: Go deep enough that nothing’s left to add, cover every adjacent angle, and bring a perspective nobody else has.

Coverage describes the content itself. 

  • Depth is vertical exhaustiveness and is often underestimated. 
  • Breadth is the horizontal range across subtopics and adjacent areas. GÜBÜR’s topical map concept is the engineering discipline that makes breadth systematic rather than accidental.
  • Original thought is the dimension that is almost always overlooked. Pushing the boundaries of a topic is what makes your coverage non-interchangeable.

An entity that covers a topic with perfect depth and breadth but says nothing new is an encyclopedia: comprehensive, correct, and structurally identical to any other comprehensive source. That’s an advantage that you will lose over time since it will become prior knowledge in the training data of the AI sooner or later. You’re no longer needed and won’t be cited.

Original thought is the key to retaining the attention of the AI — a new framework, a novel angle, or a perspective no one else has articulated gives it a reason to come back again and again, and ultimately to cite you.

Importantly, original thought doesn’t require being revolutionary, nor do you need to be original on every page. Often it will be as simple as a fresh way of framing a familiar concept.

Define your brand’s specific perspective on specific vocabulary. When done properly, that’s enough.

There are two kinds of original thought, and they carry different risk profiles. 

  • Reframing connects two existing validated truths that nobody has explicitly joined before. Both components are already corroborated; the system can verify them independently, and the originality lives in the framing.
  • True invention is different. There’s nothing for the system to cross-reference and nothing that’s already established to anchor the new claim. The result is that you look fringe until the world catches up.

The window between being right and being recognized can be long and uncomfortable, and to take that risk credibly, you need absolute conviction not only that you’re right, but that you’ll be proven right, and the patience to survive looking wrong in the meantime.

The reframe carries a fraction of that risk: the source truths are already verifiable, so the connection is credible from the moment it’s published.

Row 2: All architecture decisions begin with source context

Architecture in one sentence: Write sentences clearly, make your content flow in a logical manner, and link intelligently.

The three cells in the architecture row are GÜBÜR’s terms, and I’m using them as he defined them.

Source context determines everything that follows:

  • The publisher’s angle.
  • The identity and purpose that shapes what the topical map should contain. 
  • How the semantic network should be constructed. 

GÜBÜR’s insight that a casino affiliate and a casino technology provider need fundamentally different topical maps for the same subject captures the principle: structure follows identity.

Topical map is the structural design of the content: core sections and outer sections, which attributes become standalone pages and which merge together, the direction of internal linking, and the identification and elimination of information gaps.

Semantic network is the interconnected execution that makes the structure machine-readable: contextual flow between sentences and paragraphs, semantic distance minimized between related concepts, and cost of retrieval optimized so that the system can extract facts without unnecessary computational effort.

Good architecture makes coverage legible to the system. You can have thorough coverage that the algorithm can’t parse, and the result is the same as not having the content at all. Architecture is the bridge between what exists and what the system understands.

Where architecture falls short as a complete model is that it’s entirely within what you control. It describes how to organize your own house. It doesn’t address who the neighborhood knows you as.

Row 3: Position is why two equally thorough sources produce different results

Position in one sentence: Be first to stake the claim, be recognized by others as the best at what you do, and do things that ensure you are the person everyone refers to when they talk about your topic.

Position is the competitive layer. It’s the only row that describes the entity rather than the content. That distinction makes it the dominant row, for the same structural reason links were the dominant signal in traditional SEO: external validation at the entity level breaks ties that content quality alone can’t.

Because you’re building entity reputation, the position row requires the greatest investment of resources and must be maintained over time. Because most brands are looking for quick, easy wins and are unwilling to commit to long-term investment in their position, this is where your competitive advantage lies and where you’ll see a real difference.

Two entities can have identical coverage and architecture, and yet one will be treated as the authority and the other won’t. The current definition of topical authority can’t explain why. Position is the huge missing piece.

Position: earned, not claimed

Temporal position is about when you said it. The source that established a claim, coined a term, or described a mechanism before anyone else has a structurally different relationship to that topic than a source that repeated it later. 

GÜBÜR’s formula already acknowledges this: “Historical data” in his equation is the accumulated proof of chronological priority. First-mover advantage in knowledge graphs is an architectural phenomenon we see over and over in our data.

Hierarchical position is about dominance: being recognized by others as the top voice on the topic. Primary sources, practitioners who work in the field, researchers who run studies, and experts who generate knowledge. This isn’t self-declared. Others assign it. When Matt Diggity describes GÜBÜR as “one of the most knowledgeable people” in semantic SEO, that’s a hierarchical position being conferred by a peer.

Narrative position is about centrality: being the person everyone refers to when they talk about the topic. The journalist credits you, the researcher cites you, and the conference features you as the reference voice. 

All roads lead to Rome, and you’re Rome. The system reads these co-citation patterns and builds a picture of where you sit in the source landscape. 

Narrative position can’t be manufactured with first-party content. It’s earned by doing things in the world that others find worth referencing.

Topical authority, N-E-E-A-T-T, and topical ownership

N-E-E-A-T-T — Google’s experience, expertise, authoritativeness, and trustworthiness (E-E-A-T) framework, extended with notability and transparency — describes the credibility signals that drive algorithmic confidence and are rightly a huge focus of the industry.

N-E-E-A-T-T describes inputs, not structure. Those signals don’t exist in a vacuum. They attach to an entity that the system has already understood.

I made this argument in a Semrush webinar with Lily Ray, Nik Ranger, and Andrea Volpini in 2020, when we were still talking about E-A-T: entity understanding is a prerequisite to leveraging credibility signals, not an optional layer on top.

The nine-cell matrix shows where each signal lands.

  • The coverage row provides the source material for AI to evaluate your knowledge on your claimed topic. 
  • The architecture row is where your content gets classified and positioned relative to a topic. 
  • The position row is where strong N-E-E-A-T-T signals translate into a competitive advantage because N-E-E-A-T-T is an entity framework: it measures the publisher and author, not the content. Position is the entity row.

Note on the diagram: It could be argued that the four gaps in the diagram are partially covered by inference. 

  • Expertise implies the knowledge to build a topical map and the depth that produces original thought.
  • Experience implies the first-hand involvement that creates temporal priority.
  • Transparency implies the clear structural identity that shapes a semantic network. 

Those arguments aren’t wrong. N-E-E-A-T-T evaluates the person primarily — what they built is an indirect signal.

Where N-E-E-A-T-T signals land

N-E-E-A-T-T maps onto two of the three position dimensions. 

  • Hierarchical position is, in structural terms, what authoritativeness and expertise measure — your level of knowledge and peer recognition of your standing on a topic. 
  • Narrative position is what notability captures. The co-citation patterns that tell the system you’re the reference voice.

Temporal position sits outside N-E-E-A-T-T. No credibility signal changes just because you said something first. 

Original thought sits outside it, too. The framework that’s supposed to reward quality has no mechanism for recognizing originality — at least not in the short term. It can reward reframing immediately, because both source truths are already verifiable. 

True invention only registers retroactively, once corroboration has accumulated to the point where assertion becomes position.

That structural gap points to a practical problem. Most practitioners build N-E-E-A-T-T credibility as a general brand exercise — demonstrate expertise, earn trust, and accumulate signals. However, credibility without topical position is a credential without context. The fix is to audit all nine cells and focus your N-E-E-A-T-T work on the ones where you’re weakest.

My own situation is a good example of the difficulties of original thought:

  • Temporal position is well-documented. Brand SERP in 2012, entity home in 2015, answer engine optimization in 2017, the algorithmic trinity and untrained salesforce in 2024, and now assistive agent optimization in 2025. The chronological priority is established and verifiable. 
  • Hierarchical position has partial coverage. I’m recognized within specific circles as the reference voice on brand SERPs and algorithmic brand optimization, but not yet broadly enough to call it dominance.
  • Narrative position is the biggest gap. Many people use the terms I coined, but few third-party sources cite me unprompted, and more articles on my own properties won’t change that. The fix I am implementing is doing things in the world that others find worth referencing: keynotes, independent collaborations, corroboration with partners, and articles like this one.

This is why crediting GÜBÜR for source context, topical map, and semantic network is intentional. Accurate attribution from a credible source builds the narrative position of the person being credited (GÜBÜR), and giving credit accurately signals to the system that my own claims are likely to be equally well-founded. 

Crediting well is a position signal, and it’s one most practitioners consistently underuse. My take is that citing the original source is the same as linking out. People resisted for years to protect the mysterious “link juice,” but it’s now accepted that linking out to provide supporting evidence is worth more than the PageRank cost. The same logic applies to citations: the value it brings you is greater than the loss.

This article is itself a demonstration. 

  • GÜBÜR’s architecture framework is validated and extensively corroborated.
  • The AI engine pipeline argument runs across the previous eight articles in this series.
  • The nine-cell connection is new. 

For the original thought in this article, I’m using the safer form of original thought: the reframe-cite-and-add technique. I invite you to do the same.

Recruitment (Gate 6) is where position determines the winner

Article 8 in this series covered annotation (Gate 5) — the gate where you’re alone with the machine, where the system classifies your content based on your signals alone, and with no competitor in the frame. Annotation is the last absolute gate. From recruitment onward, you’re always being compared with your competition.

So, recruitment (Gate 6) is where the game changes. Every source that reaches recruitment has cleared the infrastructure gates and survived annotation (hopefully in a healthy, competition-ready state). Now the system is selecting between candidates, and it’s selecting based on relative standing, not absolute quality.

This is the moment the entire matrix resolves into a single question: when the algorithm culls candidates at the recruitment gate, is your entity’s position strong enough to be one of the survivors in that selection? 

In my three-by-three topical ownership grid, coverage gets you into the candidate pool, architecture makes the system confident it understands your content, and position determines whether it picks you ahead of the competition.

Coverage and architecture are content rows. They describe what you published. Position is the entity row. It describes who published it.

At recruitment, the system evaluates the content, and selection is heavily influenced by its assessment of the entity in the context of the topic. You can rewrite the content, but you can’t quickly rewrite who you are.

Darwin described natural selection as the mechanism by which organisms best adapted to their environment survive. An entity that occupies a strong position is an entity best adapted to the system’s selection criteria: temporal priority, hierarchical standing, and narrative centrality.

 The system isn’t being arbitrary when it selects one well-structured, comprehensive source over another equally well-structured, equally comprehensive one. It’s selecting the entity best adapted to the query’s requirements, and best adapted means best positioned, not best written.

The signals behind each row have never been equally weighted, and entity is the clearest illustration of that. In traditional SEO, inbound links were the dominant signal. They could sometimes overcome very weak criteria and were almost a guarantee of victory when all other signals were roughly equal.

That dominance gradually diminished as links became one signal among many, table stakes rather than differentiator. Entity has followed the inverse trajectory. It began as a minor signal with the introduction of the knowledge graph and knowledge panels, and has grown steadily in structural importance ever since. 

N-E-E-A-T-T attaches to an entity. Topical ownership attaches to an entity. Agential behavior requires a resolvable entity to function. Co-citation and co-occurrence patterns are only meaningful when the system has an entity to attach them to. 

The AI engine pipeline stalls at the annotation stage (Gate 5) without a resolved entity. That gate is entity classification, and everything downstream depends on it. Brand SERPs, Knowledge panels, and AI résumés are entity constructs. Without a resolved entity, they don’t exist in a meaningful way. 

The future will be more entity-dependent, not less, and the gap between brands that have invested in their entity and those that haven’t will compound. Entity is no longer simply a signal. It’s the substrate that other signals require to operate, and the most important single investment you can make in your long-term search and AI strategy.

To update a common saying: the best time to start was 10 years ago, the next best time is today, and the time it won’t be worth starting is tomorrow.

Topical ownership requires all nine cells, all three rows

Topical ownership is the state where an entity dominates all nine cells of the matrix for a given topic. Not just comprehensive, not just well-structured, but the entity others reference when they write about the subject — ideally the one that got there first, and the one peers defer to by name.

  • Coverage tells the system you’re eligible.
  • Architecture tells the system you’re legible.
  • Position tells the system you’re the right answer.

The industry has been actively optimizing for six of those nine cells. 

Understandability work builds the entity. N-E-E-A-T-T builds credibility. But the position row — the one that determines who wins at recruitment — has been built largely without intent. Practitioners accumulate N-E-E-A-T-T signals as a general credibility exercise and assume that covers the entity layer. 

Position requires deliberate engineering of temporal, hierarchical, and narrative standing on specific topics. Being intentional about all nine, knowing which row each piece of work serves and why, is where the competitive advantage lives now. 

Simply becoming conscious of the grid and the three rows will make your topical ownership, SEO, and N-E-E-A-T-T work more purposeful across all nine cells, because you will implement each signal with specific intent rather than general ambition.

The brands AI consistently recommends aren’t just covering their topics well. They own them.


This is the ninth piece in my AI authority series. 

Claude Skills for PPC: How to turn one-off prompts into scalable systems

14 April 2026 at 18:00
Claude Skills for PPC: How to turn one-off prompts into scalable systems

Despite all the shiny new capabilities at our disposal, many professionals seem stuck in a cycle of “AI Groundhog Day.” 

You open a chat window, carefully craft a prompt, paste in your context, and get a great result. An hour later, you do it all over again. If this is how you use AI to automate, you’re still doing manual work — you’re just doing it in a chat box.

To move from using AI to building with it, you need to shift from a human doer to a true human orchestrator. That means stopping one-off prompts and starting to build systems. In this new phase of AI automation, what you really need are AI skills.

I explore this shift in my new book, “The AI Amplified Marketer,” where I look at how the human element of marketing remains vital even as new AI tools and shifting expectations evolve at a breakneck pace.

Below, I’ll show how to use Skills, a newer AI capability, to make you more efficient when managing PPC.

What’s a Claude Skill?

While many marketers have used ChatGPT’s Custom Instructions to set a general approach for how their AI works, a Skill is a more rigorous definition of how the AI needs to do things. These instructions can help it deliver more predictable outcomes that fit your expectations.

For example, I recently used a standard chat to rate search terms. While the AI’s logic was sound, the output was inconsistent: one session returned letter grades, another gave a percentage out of 100, and a third used a 1-10 scale.

In a professional setting, this inconsistency is a problem. It makes it difficult to integrate that prompt into a larger workflow where unpredictable grading might confuse other tools or team members.

A Skill solves this by providing a reusable set of instructions. It defines which tools and logic to use for a complex task and ensures the results are formatted exactly the same way every time.

It’s what turns the AI from a temperamental assistant into a reliable professional teammate.

And thanks to more recent agentic capabilities in Claude, a Skill is like turning your best multi-step PPC playbook into something an AI can execute on demand by delegating the various tasks to the right tools and subagents.

Whether it’s your agency’s proprietary account audit checklist or your framework for mining search query reports, a Skill encodes that process. It turns your PPC expertise into a scalable system that anyone on your team can use with their AI.

Dig deeper: Agentic AI and vibe coding: The next evolution of PPC management

How to build your first AI Skill

Creating a Skill is more straightforward than it might sound, and you can do it through a simple chat session with your AI. Provide an account audit checklist, a standard operating procedure (SOP) from your team, or a blueprint to Claude. You can then ask it to convert that process into the formal structure of a Skill.

Interestingly, when you ask Claude to help build a Skill, it uses a specialized Skill-building protocol. This ensures your final output is structured correctly, follows best practices, and remains consistent with Anthropic’s underlying architecture.

Technically, a Skill is saved as a Markdown (.md) file that contains the playbook for the task at hand.

This file can be stored locally on your computer if you’re concerned about data privacy. Alternatively, you can share it in a central cloud repository. This makes it easy for your team to update and deploy best practices across your entire organization.
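
For a concrete sense of what lives inside that file, here is a minimal sketch that scaffolds a hypothetical search-term grading Skill — the kind that would have prevented the inconsistent grading described earlier. The skill name, frontmatter fields, folder layout, and playbook steps are all illustrative assumptions; check Anthropic’s current Skills documentation for the exact structure it expects.

```python
from pathlib import Path

# Illustrative SKILL.md content for a hypothetical search-term grading skill.
# Frontmatter fields and layout are assumptions; verify against current docs.
SKILL_MD = """\
---
name: search-term-grader
description: Grade Google Ads search terms for relevance on a consistent A-F scale.
---

# Search term grading playbook

1. For each search term, compare it against the ad group's theme and landing page offer.
2. Assign a single letter grade, A (clearly relevant) through F (irrelevant).
3. Always return a table with columns: search term, grade, one-sentence rationale.
4. Never switch to percentages or 1-10 scores; the letter scale is the contract.
"""

skill_dir = Path("skills/search-term-grader")
skill_dir.mkdir(parents=True, exist_ok=True)
(skill_dir / "SKILL.md").write_text(SKILL_MD)
print(f"Wrote {skill_dir / 'SKILL.md'}")
```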

You don’t have to start from zero. Many pre-built Skills are available on platforms like GitHub. You can find examples for various marketing tasks, download them, and adapt them to fit your specific needs and workflows.

How to use a Skill in PPC

To use a Skill, first make sure there are some available in your account.

Then, just tell the AI the task you want to do.

The AI will look through connected Skills and, if it finds one that matches the task, it will use those instructions to perform the work.

Sidenote: This means it’s important not to have competing Skills in your account. If two Skills both perform Google Ads audits, the AI may pick a different one each time and do the work in different ways, and you lose the predictability a Skill was supposed to give you in the first place.

Dig deeper: Agentic PPC: What performance marketing could look like in 2030

PPC Skills need real-time data

A Skill provides powerful logic, but without access to live account data, it remains theoretical.

A Skill can define an analysis, such as “review search terms from the last 14 days with costs over $50 and zero conversions.” However, it doesn’t know how to pull that data from Google Ads on its own.

In the past, the workaround was to manually download static data, like a CSV from the Google Ads interface or a Google Ads Editor file. You would then feed this file to the AI as context. This works, but it’s slow, manual, and the data is outdated the moment you download it.

A more modern approach uses a Model Context Protocol (MCP) to connect your AI and its Skills to other systems, such as live data sources. For example, using the Optmyzr MCP, your Skill can dynamically pull the exact Google Ads data it needs, when it needs it. This connection turns a static set of instructions into a living, responsive tool. (Disclosure: I’m the cofounder and CEO of Optmyzr.)

How Skills tell AI how to do things, and how tools and MCP enable it to do those things more reliably

Dig deeper: From scripts to agents: OpenAI’s new tools unlock the next phase of automation

From grunt work to system oversight

Combining a Skill with a tool like an MCP is where the real transformation happens. Your AI moves from being an assistant that requires constant direction to a system that can manage a process. It transitions from giving you ideas to executing your vision.

Let’s look at a common PPC task:

  • Task: Search Term Analysis to Eliminate Irrelevant Clicks
  • A Skill without tools is a task-oriented assistant: It might instruct you: “Paste in your search term report as a CSV, and I will identify potential negative keywords.” You’re still the one doing the grunt work of retrieving data and implementing the findings.
  • A Skill with tools acts as a junior manager for that specific process: It can be configured to: “Pull the search term report for the last 7 days via the MCP, identify terms with high spend and no conversions, and apply them as exact match negatives to the appropriate campaign.” The entire workflow is handled, and your role shifts to one of oversight.

When you combine structured logic (Skills) with live data and execution capabilities (tools), you’re building more than a chatbot; you’re building a reliable teammate. It’s a grounded, practical system that handles defined tasks, freeing you up to be the orchestrator of your strategy.
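
Here is a minimal sketch of what the logic behind that search term workflow might look like once a Skill’s rules are paired with tool calls. The fetch_search_terms and add_negative_keyword functions are hypothetical stand-ins for whatever your MCP connection or API integration actually exposes — the point is the division of labor: the Skill supplies the thresholds, the tools supply data and execution.

```python
from datetime import date, timedelta

# Hypothetical stand-ins for the tool calls an MCP connection might expose;
# swap in whatever your own integration actually provides.
def fetch_search_terms(start: date, end: date) -> list[dict]:
    """Pretend data pull: each row has the term, spend, conversions, ad group."""
    return [
        {"term": "free plumbing advice", "cost": 82.40, "conversions": 0,
         "ad_group": "Plumbing - Repair"},
        {"term": "emergency plumber near me", "cost": 240.10, "conversions": 6,
         "ad_group": "Plumbing - Repair"},
    ]

def add_negative_keyword(term: str, ad_group: str) -> None:
    print(f"Would add exact-match negative [{term}] to '{ad_group}'")

# The Skill's rules, encoded as plain thresholds:
# last 7 days, more than $50 of spend, zero conversions.
LOOKBACK_DAYS = 7
MIN_WASTED_SPEND = 50.0

def mine_wasted_spend() -> None:
    end = date.today()
    start = end - timedelta(days=LOOKBACK_DAYS)
    for row in fetch_search_terms(start, end):
        if row["cost"] >= MIN_WASTED_SPEND and row["conversions"] == 0:
            add_negative_keyword(row["term"], row["ad_group"])

mine_wasted_spend()
```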

Dig deeper: Scaling PPC with AI automation: Scripts, data, and custom tools

4 PPC Skills you can build today

To move from theory to practice, let’s look at four concrete examples of PPC Skills. In each case, notice how connecting these Skills to live tools transforms the AI from a passive analyst into an active participant.

1. Search term mining

This Skill’s logic guides the AI to analyze a search query report to find wasted spend and opportunities.

  • Without tools: You provide a CSV. The Skill returns a structured list of recommended negative keywords and new keyword ideas. You have to implement them manually.
  • With tools (MCP): The Skill automatically pulls the latest search query report data, identifies the negative keywords, and uses a tool function to apply them directly to your Google Ads account.

2. Ad copy generation

This Skill takes a landing page URL and target keywords to generate ad copy variations based on value propositions and user intent.

  • Without tools: The Skill produces headlines and descriptions in a text format. You copy and paste them into Google Ads.
  • With tools (MCP): The Skill finds underperforming ad assets in your account, and then generates the ad copy and pushes the new ads directly into the correct ad groups, potentially even setting up a new ad experiment.

3. Account auditing

This Skill runs a predefined checklist against an account, looking for issues like missing ad extensions, campaigns limited by budget, or ad groups with low CTR.

  • Without tools: The Skill generates a report that lists all the problems it found. You then have to log in to the account and fix each one.
  • With tools (MCP): The Skill not only identifies that an ad group is missing a callout extension but can also apply a relevant, pre-approved extension from extensions used elsewhere in the account. It doesn’t just report the problem; it fixes it.

4. Budget reallocation

This Skill analyzes campaign performance data to find opportunities to shift budget from underperforming campaigns to those with higher potential returns.

  • Without tools: The Skill provides a recommendation, such as: “Decrease Campaign A’s budget by 20% and increase Campaign B’s budget by 15%.”
  • With tools (MCP): The Skill performs a dynamic analysis, pulling in exactly the right data with the appropriate lookback and time segmentation, and then executes the budget change directly, ensuring budgets are optimized as soon as the opportunity is identified.

The future of your role: From PPC doer to PPC designer

The combination of Skills and tools enables you to move from playing with AI to having AI do meaningful work. For years, AI has been good at generating ideas but weak at executing them inside the ad platforms. This solves the “last mile problem” by giving AI the logic, data, and permissions to act.

This also signals a change in the role of the PPC professional. Your job will shift from doing the repetitive work to designing the systems that do the work. Instead of manually analyzing reports and making changes, you will spend more time designing Skills, defining the rules and guardrails for automation, and reviewing the outcomes.

We’re at a point where the large language models are capable, the tools for connecting them to platforms are available, and the interfaces make it possible for non-developers to build. It’s time to rethink your processes and get AI to be a real teammate.

Dig deeper: AI tools for PPC, AI search, and social campaigns: What’s worth using now

The end of endless prompting

The cycle of endless prompting is a dead end. It keeps you in the role of a manual operator when you should be a systems designer. By embracing Claude Skills, you’re doing more than just working faster; you’re changing the very nature of your job. You’re moving from “doing PPC work” to “designing the PPC systems” that perform that work with predictability and at scale.

This is the ultimate expression of the AI-amplified marketer: building a true partner that codifies your expertise into a reliable, efficient engine.

The first step is to look at your daily tasks through the lens of a designer. What repetitive process is ready to be turned into your first Skill?

Google Ask Maps is moving from listings to recommendations

14 April 2026 at 17:00
Google Ask Maps is moving from listings to recommendations

Google’s Ask Maps feature does more than help users find nearby businesses.

Based on hands-on testing of local service queries for plumbers, electricians, and HVAC companies, Ask Maps often narrows the field, interprets user intent, and frames businesses around qualities such as responsiveness, specialization, honesty, and repair-first thinking.

In more complex prompts, it sometimes provides guidance before recommending businesses. This shows Google Maps moving beyond simple local retrieval and toward a more recommendation-driven experience.

To evaluate that shift, we tested Ask Maps across five levels of local intent — starting with simple category searches and progressing toward conversational prompts involving uncertainty, trust, and decision-making.

A clear pattern emerged. As query nuance increased, Ask Maps shifted from listing businesses to interpreting which businesses fit and why.

This article draws from hands-on testing across a limited set of local service queries in one geographic area. Treat these findings as an early directional view, not a comprehensive representation across all markets or query types.

The testing framework

To evaluate progression, we built a five-level intent model based on how homeowners and local service customers actually search. Instead of organizing around traditional keyword categories, we structured the framework from simple retrieval toward conversational decision-making.

  • Level 1 focused on basic requests with minimal context.
    • Example: “Looking for an HVAC company near me.” 
  • Level 2 introduced more service specificity.
    • Example: “I need an electrician to upgrade my panel in an older home.” 
  • Level 3 moved into situational queries, where the user described a problem.
    • Example: “My furnace is making a loud banging noise and I’m not sure if it needs to be replaced or repaired.” 
  • Level 4 introduced trust and decision concerns.
    • Example: “I think my furnace might need to be replaced, but I don’t want to get overcharged. Who is honest about that?” 
  • Level 5 combined those elements into fully conversational prompts asking for guidance, validation, and recommendations in the same search.
    • Example: “I was told I need a full furnace replacement, but it feels expensive. How do I know if that’s actually necessary, and who should I call for a second opinion in my area?”

This framework allowed us to evaluate:

  • Which businesses appeared.
  • How Ask Maps interpreted prompts.
  • What attributes it emphasized.
  • When results started to resemble guided recommendations rather than search results.

Ask Maps narrows the field and adds interpretation

One of the clearest patterns across the testing was that Ask Maps consistently returned a relatively small set of businesses while increasing the amount of interpretation as the user’s search intent became more complex.

At Level 1, the average number of businesses shown was 3.6. Level 2 rose to 4.3. Level 3 dropped slightly to 3.3. Level 4 averaged 5, and Level 5 averaged 4.6. Across the full set, the range remained fairly tight, generally between three and eight businesses.

That’s a different experience from traditional Maps, where a user can scroll through a much broader set of options and do more of the evaluation work themselves.

Ask Maps narrows choices early and spends more effort explaining why those businesses fit the prompt, but stops short of being fully action-oriented. Even when a phone number is shown, there’s no clickable call button directly in the Ask Maps response. 

To call or access the full set of contact options, the user still has to click into the business’s Google Business Profile. That matters because while Ask Maps is becoming more interpretive, the underlying GBP is still where action happens.

As prompts become more nuanced, uncertain, or trust-sensitive, Ask Maps draws on a broader range of sources. It shows fewer businesses, replacing breadth with interpretation.

Dig deeper: How to build FAQs that power AI-driven local search

Basic queries already go beyond simple listings

Even the simplest queries don’t behave like a traditional Maps result.

At the baseline level, Ask Maps still relies heavily on Google Business Profile data, including: 

  • Business descriptions.
  • Review content.
  • Ratings.
  • Hours.
  • In some cases, posts. 

Website influence is minimal here, and there’s little evidence of outside sourcing. But even within that mostly closed ecosystem, it goes beyond listing nearby businesses.

Instead of just showing names, ratings, and locations, Ask Maps:

  • Generates narrative summaries based on information in the Google Business Profile. 
  • Describes businesses in terms of responsiveness, experience, specialization, or the kinds of situations they seem well-suited for. 
  • Draws on reviews when framing businesses.

Even at the most basic level, Ask Maps isn’t neutral. It’s beginning to interpret businesses for the user.

As queries become more specific, Ask Maps starts matching capability

Once the prompt shifts from a general service search to a specific type of job, Ask Maps becomes more selective in how it matches businesses to the request.

  • A query about an electrical panel upgrade doesn’t behave the same way as a query about urgent AC repair. 
  • Replacement-oriented prompts emphasize installation and system expertise. 
  • Repair-oriented prompts emphasize speed, availability, and responsiveness. 
  • Queries tied to older homes or higher-risk work call for more evidence of specialization.

At this level, Google Business Profile and reviews still carry much of the weight, but websites matter more when the job is more complex or costly. A panel upgrade query produces stronger external link usage than a more straightforward AC repair prompt.

That doesn’t mean websites are always heavily used. It shows more selectivity. As decisions become more complex, Google looks for more supporting evidence before recommending businesses.

Situational queries push Ask Maps toward interpretation

The more noticeable shift begins once the prompts move from service categories to real-world scenarios.

At Level 3, the user is no longer looking for a plumber, electrician, or HVAC company. Instead, they’re describing a problem, such as a loud banging furnace, outdated electrical in an older home, or an AC unit that has stopped working during extreme heat. In those cases, Ask Maps increasingly interprets the problem before introducing businesses.

Some responses provide guidance or context first. Others identify the provider and clarify the work before making recommendations. The businesses that follow aren’t framed as generic providers. They’re framed as possible solutions to the situation.

Review content becomes important here. Rather than simply supporting a business’s credibility, reviews act as evidence that the company has handled similar situations before. Fast arrival times, experience with older homes, communication during stressful repairs, and problem-solving ability all become more meaningful when describing businesses.

This is the point where Ask Maps moves more clearly from retrieval to interpretation.

Dig deeper: 7 local SEO wins you get from keyword-rich Google reviews

Trust-oriented queries change what gets emphasized

When the prompts introduce fear, skepticism, or concern about making the wrong decision, Ask Maps changes again.

At Level 4, the focus is less on the service need itself and more on the emotional context around it. The user is worried about being overcharged, being pushed into unnecessary replacement, or hiring someone who would cut corners. 

Ask Maps doesn’t just return businesses capable of doing the work. It organizes businesses around trust-related qualities such as honesty, transparency, careful workmanship, fairness, and second-opinion value.

This is one of the strongest patterns in the research. At this stage, review language is the primary signal shaping how businesses are framed. Specific phrases and anecdotes matter, elevating businesses that explain options clearly, don’t upsell, offer honest assessments, or deliver careful, professional work.

External sources become more relevant here. In addition to GBP information and reviews, Ask Maps shows more willingness to pull from company websites, testimonials, third-party platforms, and educational resources when the user’s concern involves decision risk rather than just service need.

Once the query becomes trust-driven, the recommendation no longer appears to be based only on who can do the job. It reflects who is most likely to handle the situation in a way that the user feels good about.

Advisory queries show the clearest shift

The strongest example of this progression came at Level 5. These are prompts where the user combines a problem, uncertainty, and a request for recommendations in a single query. 

For example, someone might say they were told they needed a full furnace replacement but were unsure whether that was really necessary and wanted to know who to call for a second opinion. In these cases, Ask Maps moves most clearly into a decision-support role.

Instead of leading with local businesses, it often starts with an explanation, introducing frameworks, safety context, or ways to think about the decision. 

Only after that does it recommend businesses, and those businesses are often grouped not just by rating or proximity, but by approach. Some are framed as repair-first options. Others are framed as second-opinion experts or safety-focused specialists.

This is where Ask Maps feels least like a directory and most like an advisor. The structure of the response looks more like a guided decision process than a traditional local search result.

That doesn’t mean the system is flawless or that every answer is equally strong. But it does suggest that when a prompt includes uncertainty and a need for validation, Ask Maps is trying to do more than match a category. It’s trying to help the user think through what to do next.

Dig deeper: New Google Maps features: Local Guides redesign, AI captions, photo sharing

Where Ask Maps gets its information

Across the testing, several source patterns appear repeatedly, and the mix appears to shift depending on the type of query.

At the foundation, Google Business Profile does much of the early work. Business categories, service descriptions, hours, ratings, and review counts help determine which businesses are eligible to appear and how they are initially framed. In some cases, Ask Maps also pulls from GBP services and products, business descriptions, and occasionally posts when those help reinforce what the business does.

Reviews seem to be one of the most important inputs across nearly every query type. Not just in ratings, but in how review language shapes the summary. 

Ask Maps often draws on review themes tied to:

  • Responsiveness.
  • Honesty.
  • Professionalism.
  • Fast arrival times.
  • Work on older homes.
  • Repair-versus-replace situations.
  • Whether customers feel the company explains options clearly or avoids unnecessary upselling.

In other words, reviews support reputation and help define how a business is positioned in the response.

Business websites matter more once the query becomes more specific, higher-stakes, or more tied to decision-making. In those cases, Ask Maps seems more likely to pull in service pages, testimonial pages, or other on-site business information that helps reinforce specialization, repair-first positioning, second-opinion value, or experience with a particular type of job. 

That’s more noticeable in queries tied to things like panel upgrades, replacement decisions, or older-home electrical concerns than in simpler “near me” searches.

External sources are the most selective layer, but they become more visible when the query involves safety, diagnosis, pricing uncertainty, or broader decision support. 

In those cases, Ask Maps pulls in:

  • Educational content around issues like repair-versus-replace decisions, quote validation, and electrical safety. 
  • Third-party review and directory platforms such as Angi, HomeAdvisor, YouTube, and Facebook.
  • Other publicly available business information, when it helps reinforce trust, workmanship, or reputation. 

In some of the trust-oriented electrician queries in particular, this outside sourcing is more prominent than in simpler local lookups, suggesting Google may broaden its evidence base when evaluating how a business is likely to operate, not just what services it offers.

How Ask Maps mixes sources based on query

Ask Maps isn’t relying on a single source of truth. It appears to be constructing an answer from a mix of Google Business Profile data, review language, business website content, and selectively chosen outside sources, with the balance shifting based on what the user is actually asking.

What this may mean for local visibility

If Ask Maps continues to develop in this direction, it could have meaningful implications for local visibility in Google Maps.

  • Inclusion alone may matter less than interpretation. If Ask Maps is consistently showing a smaller set of businesses and adding more explanation around them, the question is no longer just whether a business appears. It’s also how that business is framed and whether Google has enough confidence to position it as a good fit for the situation.
  • Review content is becoming more important than many businesses realize. The language within reviews appears to influence not just credibility, but the actual way a business is described and recommended.
  • Website content plays a more targeted role than many local businesses assume. It may not be equally important for every prompt, but it matters more when the service is complex, expensive, or tied to greater uncertainty.

More broadly, Ask Maps points toward a version of local search in which retrieval, evaluation, and decision support occur much more closely together. Instead of searching, comparing, researching, and then deciding across several steps, the user may increasingly be guided through much of that process within a single AI-mediated Maps experience.

What businesses and SEOs should tighten up now

If Ask Maps continues moving in this direction, the practical response isn’t to chase a new tactic or treat it like a separate channel. It’s to make the business easier for Google to understand and easier for customers to trust.

Keep the Google Business Profile current and specific

A Google Business Profile may play a bigger role when Ask Maps is trying to decide what a business does, what kinds of jobs it handles, and whether it fits a more nuanced prompt.

  • Review primary and secondary categories to make sure they reflect the core work accurately.
  • Tighten the business description so it clearly explains the services offered, the types of jobs handled, and any specialties or areas of focus.
  • Make sure hours, service areas, and contact details are complete and current.
  • Add photos that reinforce the kinds of jobs the business wants to be associated with.
  • Treat posts and profile updates as another way to reinforce services and activity, not just as optional extras.
  • Use the Services and Products sections fully, adding clear descriptions that reflect the specific jobs, specialties, and situations the business wants to be known for.

Pay closer attention to review language

If Ask Maps uses review language to shape how businesses are positioned, then the wording in reviews may matter more than many businesses realize.

  • Look beyond review volume and average rating.
  • Pay attention to whether reviews naturally mention specific jobs, customer concerns, and outcomes.
  • Watch for language around responsiveness, honesty, professionalism, repair-first thinking, and clear communication.
  • Encourage reviews that reflect real experiences rather than generic praise.
  • Use review trends to understand how the business is likely being framed by Google.

Revisit website content for higher-consideration services

Website content appears more likely to matter when the query is more complex, more expensive, or tied to more uncertainty.

  • Strengthen service pages for the higher-value or higher-risk work the business wants to be known for.
  • Add FAQs that address real decision points, not just basic definitions.
  • Include examples of the kinds of jobs handled, especially where context matters.
  • Reinforce trust signals such as experience, process, reviews, and proof of work.
  • Use language that helps explain situations like repair versus replace, older-home work, or second-opinion scenarios.

Think beyond ranking for a phrase

There’s a broader strategic shift here for local SEO. The question may no longer be only whether a business can rank for a phrase. It may also be whether Google has enough evidence to recommend that business in response to a real-world question.

  • Evaluate whether the business is easy to understand across GBP, reviews, website content, and broader digital mentions.
  • Look at whether the business is clearly associated with the jobs and situations it wants to win.
  • Think about trust and decision support, not just service relevance.
  • Focus on making the business more legible to both Google and potential customers.
  • Treat local optimization less like keyword matching alone and more like building a clear, consistent business profile across sources.

Dig deeper: If your local rankings are off, your map pin may be the reason

The direction of Ask Maps is becoming clearer

The main question behind this research was when Ask Maps stops behaving like a directory and starts behaving more like a recommendation engine. Based on this testing, that shift starts earlier than many might expect.

Even at the most basic level, Ask Maps narrows, summarizes, and interprets. As prompts become more specific, situational, and trust-driven, its responses move further toward guided recommendations. At the highest level of complexity, it begins to look less like traditional local search and more like a system designed to help users make decisions.

That doesn’t mean Google Maps has fully changed into something else. But it does suggest the direction is becoming clearer. For local businesses and the people who support them, that makes this worth watching closely. Visibility inside Maps may increasingly depend not just on being present, but on being understood well enough for Google to explain why the business fits the user’s needs.

Google Ads MCC hacked? Here’s what to do immediately

14 April 2026 at 16:00
Google Ads MCC hacked? Here’s what to do immediately

At midnight on Jan. 5, hackers took over our Google Ads Manager Account (MCC). We weren’t alone. While it’s hard to get an exact count, hundreds, if not thousands, of agencies have been affected by the hacks, in turn affecting tens of thousands of accounts.

While I wouldn’t wish this experience on our worst enemy, having been through it, I have some insights that I hope can help you prevent the same experience from happening to your MCC account.

How we were hacked

Despite having two-factor authentication (2FA) and allowed domains enabled, the hackers were able to get into our account via an employee’s email address. It was clearly a targeted hack: the night of the hack, the hackers tried to get in via two other email accounts at our company before they succeeded with the third.

While phishing or compromised passwords may have originally gotten them into the system — we still don’t know which — we later learned that the account the hackers used had been compromised for months and that they had created their own 2FA that they had been using all along.

Once they gained access to our account, the hackers removed everyone else’s access to the MCC. They then changed the allowed domain to Gmail and granted access to over a dozen people. The hackers then created a new MCC in our company’s name and invited most of our clients. Luckily, none of them accepted.

In the few hours they were in the MCC, the hackers proceeded to create chaos. They removed all the users from some accounts and changed the payment method in others. They launched new campaigns on only a few accounts, yet somehow also attempted half-million-dollar credit card charges on two others (despite not running any ads in those accounts).


What happened after the hack

We were very lucky. The hackers were locked out within eight hours, and we regained access in just over a week. They spent only about $100 across the MCC. Neither crazy credit card charge went through. We were fully recovered from the hack within two weeks. How did we do this? Let’s take a look at the steps we took.

Step 1: We contacted Google

When we were hacked, we immediately contacted our reps at Google. We’re incredibly lucky to have wonderful Google reps with whom we’ve built longstanding relationships, including one we’ve worked with for over three years. 

These long-term relationships helped, and our reps went to bat for us. They continued to put pressure on the support cases until they were resolved and helped connect us to the resources we needed. Not everyone has their own reps, but you can also take these steps on your own.

Step 2: Fill out the forms

Our Google reps immediately directed us to their “What to do if your account is compromised” resource. From there, we filed Account Takeover Forms, alerting Google to the hack. We were directed to file a form for each of our accounts that had been hacked.

We first filed one for our MCC, even though the form, at the time, said not to use it for MCCs. It looks like that language has since been changed, which is great — don’t skip this step. Getting back into the MCC makes it easier to resolve all issues, rather than having to file tickets and coordinate access for each account.

Step 3: Contact clients

At the same time, we directed any clients who still had access to their accounts to disconnect them from our MCC, and to grant access to a non-compromised email account. That way we were able to secure the accounts, work on them, and mitigate any damages immediately. We were also able to triage our accounts to figure out which we were still able to access, and which had no admins left with access.

Step 4: Reset billing

Disconnecting from our MCC wound up being a very important step. That’s because when our accounts were disconnected from the MCC, we were easily able to reset the billing by editing the payment manager and undoing all of the payment chaos that the hackers had created. We were then able to reconnect them without issue.

Step 5: Check change history

When we eventually did get back into the accounts, we immediately checked the change history, which we were able to do at the MCC level for additional speed. All the changes the hackers made during that time were there with time stamps, allowing us to put together a timeline of the hack and remediate any remaining issues.

Best practices for recovering from a hack

During all this activity, a few things were especially critical to our success in recovering the account and mitigating damage. Here’s a quick rundown of best practices to keep in mind.

Make sure clients have access

This isn’t just a best practice, but something we believe should always be the case for ethical reasons. Having additional admins in the account let us regain access immediately, despite being locked out of the MCC, and remediate issues without losing time or momentum. 

Google also pushed back on any access or billing changes that didn’t have approval from an existing admin, so having people still in the accounts was critical.

Keep your MCC clean

Remove old clients, and any other MCCs for tools you’re no longer using. We didn’t do this, and wish we had. We’ve made it a best practice for our accounts moving forward.

Limit team access

Make sure your team only has the minimum access they need. Standard access is great. Admin access should be reserved for as few people as possible. The compromised account belonged to a junior team member who didn’t need admin-level access. 

This isn’t to say they wouldn’t have gotten in through a more senior team member’s account — as mentioned, they did try to get in through several before succeeding — but it would have mitigated risk.

Use credit cards or invoices

Never connect your bank accounts to your MCC. We’ve heard of companies that have lost hundreds of thousands of dollars with this same kind of hack. Because our clients were all either on invoice or credit cards, the hackers couldn’t quickly spend money in a way that hit their accounts. 

As noted earlier, the credit card companies rejected the very suspicious half-million-dollar charges the hackers attempted to make, and notified the credit card holders. The clients we were invoicing were never charged, and everything was captured on the invoices before billing.

Invest in relationships

It’s important to invest in your relationships with your Google reps, and fellow agency owners. We remain incredibly grateful to all of the people who helped us, or even just commiserated with us along the way. This experience would’ve been even more painful if we’d had to go through it alone.

How to prevent being hacked

For those who have yet to be hacked, congratulations! Let’s try to keep it that way. Here are some things you can do to make it much less likely that this will ever happen to your accounts.

Start with a clean reset

Begin by kicking every single user out of your account, and have everybody on the accounts reset their passwords. Make sure you log everyone out of every session they were in on every device. 

Our hackers were sitting around auto-logging in and keeping their sessions open for over two months prior to the night they took over the MCC. If we’d forced a reset and logged everyone off, we would’ve removed their access without even realizing it.

Enable 2FA and allowed domains

Make sure there’s only one 2FA method per person. 2FA that uses an authenticator app or a physical key is better than pinging a device. The hackers had created their own 2FA to get into our employees’ accounts, and we had no idea it was happening.

Audit and limit access

Make sure the minimum number of people have the minimum access they need to the MCC. This reduces your risk.

Enable multi-party approval

Google rolled out this feature recently to help prevent account takeovers. Essentially, it requires a second admin to verify any big changes before they happen. If you’d like to read up on this feature, here’s a great guide introducing multi-party approval.

Back up your accounts

You can copy and paste your accounts into your preferred spreadsheet app via Google Ads Editor. Make a habit of doing this periodically so you always have a record of how the accounts were set up in case of a hack. With backups, you can easily revert if you need to.
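
If you want to go one step further than copy-and-paste, a small script can archive each Google Ads Editor export with a timestamp so older snapshots never get overwritten. This is a minimal sketch in TypeScript for Node, assuming you have already exported the account to a CSV; the file paths are placeholders you would adjust to your own setup.

```typescript
// Minimal sketch: archive a Google Ads Editor CSV export with a timestamp.
// Paths are placeholders; run after each manual export from Ads Editor.
import { copyFileSync, mkdirSync } from "node:fs";
import { join } from "node:path";

function snapshotExport(exportPath: string, backupDir: string): string {
  // Keep every snapshot; the timestamp in the filename tells you exactly
  // how the account looked on a given date if you ever need to revert.
  mkdirSync(backupDir, { recursive: true });
  const stamp = new Date().toISOString().replace(/[:.]/g, "-");
  const dest = join(backupDir, `ads-editor-export-${stamp}.csv`);
  copyFileSync(exportPath, dest);
  return dest;
}

console.log("Backup written to:", snapshotExport("./ads-editor-export.csv", "./ads-backups"));
```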

Use strong passwords

It’s important to use unique passwords that aren’t being used anywhere else. That way, if one site gets hacked, your MCC is still not at risk. We’re still not sure how the hackers passed the initial password stage to be able to create their own 2FA.

Invest in security monitoring

If you want to be extra careful, invest in security software and/or a cybersecurity expert to monitor your system. We have now done this, and it’s been amazing (and scary) to see how many phishing attempts have already been caught in the six weeks since we did it.

A note for clients: If you’re a client and another team is managing your Google Ads, do not accept any Google Ads MCC access requests that you aren’t expecting. Please make sure you always know who and what you’re giving access to. When in doubt, double-check with the team that is managing your account. A little caution can go a long way.


Stay safe out there

The good news is that Google knows about these issues and is actively finding ways to tighten its systems to prevent hacks. In the meantime, I hope this article has helped make our loss your gain. An ounce of prevention can save you a pound of pain.

How Google’s removal tools work for SEO and reputation management by Erase Technologies

14 April 2026 at 15:00

When a client calls about a damaging search result, you might typically default to one of two responses: “we can suppress it” or “there’s nothing we can do.” Both skip the middle ground — where Google’s removal tools live.

Google provides tools to remove or deindex content from search results. They’re underused, frequently misunderstood, and often conflated.

This guide breaks down what each tool does, when to use it, and what it can’t do — so you can triage client situations accurately and set expectations that hold.

The distinction that changes everything: removal vs. deindexing

Before you use any tool, get one thing right with clients: the difference between two outcomes that look the same but aren’t.

  • Removal at source: The content is deleted from the site where it lives. Once removed, Google will drop it from its index as it re-crawls the page. This is the cleanest outcome — but it requires the site owner to act. Google’s tools can’t force it.
  • Deindexing: Google removes the URL from its index, so it won’t appear in search results — even if the page still exists. Anyone with the direct URL can still access it. This is what most of Google’s self-service tools do.

The practical implication: deindexing fixes a search problem, not a content problem. If the content is the liability — a news article, court record, or damaging forum post — deindexing reduces risk but doesn’t eliminate it. That context matters when you advise clients.

Google’s removal tools, explained one by one

1. The URL removal tool (Search Console)

In Google Search Console under Index > Removals, this tool lets you temporarily hide a URL or directory from search results. Removal lasts about six months. If the URL still exists, it may reappear.

  • Who it’s for: You, if you control the site in Search Console. You can’t use it to remove someone else’s content.
  • Common use case: Your site has an outdated page you don’t want surfacing — old press releases, deprecated product pages, or pages you’ve updated or removed.
  • What it won’t do: Remove content from a site you don’t control. This misconception causes significant client frustration.

2. The outdated content removal tool

This is the public tool to request deindexing of pages already removed or significantly changed at the source.

  • When it works: The content is gone (the page 404s or the content is removed), but Google still shows a cached version. You submit the URL, Google recrawls it, and if the content is gone, it removes the result and cached snippet.
  • When it doesn’t: The page still exists and the content is live. Google will verify it and reject the request.
  • Practical use: After you’ve removed content at the source, use this to speed up deindexing instead of waiting for the next crawl. It’s not a removal tool — it triggers a recrawl.

For a more technical breakdown, see this step-by-step guide to Google’s removal tools.
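
If you handle these submissions at scale, a quick pre-check can confirm a URL really is gone before you file the request. Here is a rough sketch in TypeScript (Node 18+ for the built-in fetch); the URL list is a placeholder, and it only flags pages that return 404 or 410, so pages where content was merely changed still need a manual look.

```typescript
// Minimal sketch: check whether URLs are actually gone before submitting
// them to the outdated content tool. Replace the placeholder URL list.
const urlsToCheck: string[] = ["https://example.com/removed-page"];

async function checkUrls(urls: string[]): Promise<void> {
  for (const url of urls) {
    try {
      // HEAD request, no redirect following: a 301/302 is not "gone".
      const res = await fetch(url, { method: "HEAD", redirect: "manual" });
      const gone = res.status === 404 || res.status === 410;
      console.log(`${res.status}\t${gone ? "likely safe to submit" : "still live or redirecting"}\t${url}`);
    } catch (err) {
      console.log(`unreachable\t${url} (${(err as Error).message})`);
    }
  }
}

void checkUrls(urlsToCheck);
```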

3. The Results About You tool

Launched in 2022, the Results About You tool lets you request the removal of specific categories of personal information from Google Search. An August 2023 update added proactive alerts and broader coverage, and the tool expanded again in early 2026 to include government-issued IDs, passport data, Social Security numbers, and improved reporting for non-consensual explicit imagery, including AI-generated deepfakes.

  • What it can remove:
    • Home addresses and precise location data
    • Phone numbers
    • Email addresses
    • Login credentials and passwords
    • Credit card and bank account numbers
    • Images of handwritten signatures
    • Medical records
    • Personal identification documents (passports, driver’s licenses)
    • Explicit or intimate images shared without consent
  • What it can’t remove: General information that falls outside these categories — news articles, reviews, social posts, court records, or professional information. Those require different paths.
  • Why it matters: If you’re dealing with doxxing, data broker sites, or exposed sensitive data, you now have a self-service path. Managing this tool is increasingly part of ORM work.

4. Legal removal requests

For content outside self-service categories, you can submit legal removal requests to Google:

  • Defamation: False statements of fact about an identifiable person.
  • Copyright (DMCA): Unauthorized use of copyrighted material.
  • Court orders: Legally binding orders requiring removal.
  • Right to be Forgotten (EU/UK): Requests under GDPR and UK law, based on the 2014 Google Spain v. AEPD ruling.
  • Other legal grounds: Harassment, illegal imagery, or other violations.

Google’s legal team reviews these requests; they aren’t automatic, and approval isn’t guaranteed. Defamation has a high bar: the content must be false, not just negative. A bad review isn’t defamation; an inaccurate factual claim may be.

Right to be Forgotten applies only if you’re in the EU or UK. It allows deindexing from Google’s European search properties. It doesn’t remove content globally or impact U.S. search.

5. The personal content removal form

Separate from Results About You, this Google form handles requests to remove non-consensual explicit images, doxxing content, and certain sensitive information on other sites.

This process is more manual. Google reviews the external site content rather than just deindexing a URL. Approval rates are higher for explicit imagery than for other categories, but the process is slower and less predictable.

What none of these tools do

Understanding the limits matters as much as knowing the tools. None of Google’s removal tools will:

  • Force a third-party site to delete content.
  • Remove content from other search engines (Bing, Yahoo, DuckDuckGo).
  • Remove content from Google Images, News, or Maps without separate requests.
  • Permanently fix the underlying content problem.
  • Remove results that are accurate, lawful, and in the public interest.

That’s why suppression remains core to reputation management: when you can’t remove content, you push it down with authoritative, well-optimized content.

How to triage a client removal situation

A practical decision flow for incoming removal requests:

Step 1: Can the client control the source site? 

If yes, remove it at the source, then use the outdated content tool to speed up deindexing.

Step 2: Is it personal information in Google’s covered categories? 

Use Results About You.

Step 3: Is there a legal basis? 

Defamation, copyright, court order, or GDPR right to be forgotten. If yes, file the appropriate request and set realistic timelines (weeks to months, not days).

Step 4: Is it none of the above? 

Suppression is likely the primary path. Build a content and link strategy around the branded SERP to displace the result over time. 

For high-stakes cases — like non-consensual content or permanent court records — firms like Erase.com handle direct outreach and legal escalation on a pay-for-success basis, bridging the gap between DIY tools and litigation.
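
For teams that field a steady stream of these requests, the same triage can be written down as a simple checklist in code. This is a rough sketch only; the field names and output strings are illustrative, and the judgment calls behind each flag still belong to a human.

```typescript
// Rough sketch encoding the four-step triage above as a single function.
type RemovalCase = {
  controlsSourceSite: boolean;    // Step 1: client controls the source site
  isCoveredPersonalInfo: boolean; // Step 2: Results About You categories
  hasLegalBasis: boolean;         // Step 3: defamation, DMCA, court order, RTBF
};

function triage(c: RemovalCase): string {
  if (c.controlsSourceSite) return "Remove at the source, then submit the URL to the outdated content tool.";
  if (c.isCoveredPersonalInfo) return "File a Results About You request.";
  if (c.hasLegalBasis) return "File the appropriate legal removal request; plan for weeks to months.";
  return "Default to suppression: build content and links around the branded SERP.";
}

console.log(triage({ controlsSourceSite: false, isCoveredPersonalInfo: true, hasLegalBasis: false }));
```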

Setting realistic client expectations

The most common client mistake is expecting Google to act like a content moderator. It isn’t. 

Google’s removal tools cover specific, narrow categories. Outside them, Google defaults to indexing what exists on the web.

Set this expectation upfront to protect the client relationship. It also positions suppression not as a fallback, but as the right tool for most ORM situations.

When removal is viable, these tools have improved over the past two years. Results About You has expanded and should be included in your standard ORM audit. The outdated content tool remains underused and is a quick win when source removal has already happened.

Know the tools. Use them where they apply. Suppress where they don’t.

Google simplifies Analytics and Ads consent rules

13 April 2026 at 22:43
How to use Performance Planner and Reach Planner in Google Ads

Google is changing how Google Analytics and Google Ads share consent signals — a shift that could have major implications for marketers’ tracking setups starting this summer.

What’s happening. Beginning June 15th, Google Ads data collection will rely solely on the ad_storage consent setting, removing a layer of complexity that previously came from linked Google Analytics configurations.

Until now, ad data flows between Analytics and Ads were influenced by both Consent Mode and Google Signals settings inside GA. That created confusion for marketers, especially because some of the controls were buried in Analytics settings rather than clearly surfaced in ad consent banners or tag implementations.

Starting in June, Google is simplifying that structure. Google Analytics data collection will still be governed by Google Signals, but Google Ads will look only at whether users have granted ad_storage consent.

That means a linked Google Analytics tag will no longer affect whether Google Ads can collect or use advertising identifiers.

What changes. For many advertisers, the update will effectively create a cleaner — but more rigid — consent framework.

If ad_storage is granted, Google Ads may use all available advertising signals, including linking activity to a user’s signed-in Google account when possible. If ad_storage is denied, Google will be limited to less persistent signals, such as URL parameters like gclid.

There appears to be little middle ground. Marketers will have less ambiguity about what drives ads data collection, but they will also have fewer ways to fine-tune what gets shared.

Why we care. This change makes consent settings much more consequential for measurement, attribution and audience targeting. From June, whether Google Ads can use identifiers will depend almost entirely on the ad_storage signal, so any gaps or errors in consent mode setup could directly affect campaign performance data.

It also removes some hidden complexity from linked Google Analytics settings, giving advertisers clearer rules — but less flexibility.

Between the lines. The move reflects Google’s broader push to make consent systems easier to understand for advertisers and regulators.

A single source of truth for ad consent could reduce implementation errors and make compliance easier to explain. But it also puts more pressure on brands to ensure their Consent Mode setup is working properly.

If consent updates are delayed, misconfigured or incomplete, marketers could see gaps in measurement, attribution and audience targeting.

What marketers should do now. Audit your consent implementation before the June deadline.

Teams should confirm that Consent Mode update calls are firing correctly and that ad_storage settings accurately reflect user choices. Brands with Google Signals turned off should pay particular attention: under the new setup, they could see more Ads-linked data than before if users grant ad consent.
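
To make that audit concrete, here is a minimal sketch of what a correct default/update pair looks like in a standard gtag.js installation. The onConsentChoice helper and its wiring to a consent banner are assumptions for illustration; only the gtag('consent', ...) calls and the ad_storage and analytics_storage keys reflect Google's documented Consent Mode setup.

```typescript
// Minimal browser-side sketch of Consent Mode defaults and updates.
declare global {
  interface Window {
    dataLayer: unknown[];
  }
}

window.dataLayer = window.dataLayer || [];
function gtag(..._args: unknown[]): void {
  // Google's snippet pushes the raw arguments object, not a plain array.
  // eslint-disable-next-line prefer-rest-params
  window.dataLayer.push(arguments);
}

// Set defaults before any Google tags load: deny storage until the user chooses.
gtag("consent", "default", {
  ad_storage: "denied",
  analytics_storage: "denied",
});

// Call this from your consent banner once the user has made a choice.
// From June, the ad_storage value here is what determines what Google Ads can collect.
export function onConsentChoice(adsAllowed: boolean, analyticsAllowed: boolean): void {
  gtag("consent", "update", {
    ad_storage: adsAllowed ? "granted" : "denied",
    analytics_storage: analyticsAllowed ? "granted" : "denied",
  });
}
```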

For marketers, the takeaway is simple: cleaner rules are coming, but getting consent right will matter more than ever.

Dig deeper. Updates to Google Analytics Data Controls

Google is bringing back a familiar name: Data Studio

13 April 2026 at 20:20

In an AI-driven economy, companies have more data than ever but still struggle to turn it into useful daily decisions. Google is betting that a revamped Data Studio can become the place where users quickly explore, organize and act on data across its ecosystem.

Why the switch back. Google says the new Data Studio will serve as a central hub for a range of assets, from traditional reports and dashboards to data apps built in Colab and BigQuery conversational agents. The idea is to give users one place to work with the tools and information that shape their business each day.

Flashback. Three years ago, Google folded Data Studio into its broader analytics push by rebranding it as Looker Studio. Now, it is separating the products again as customer needs evolve.

Two versions. Google is launching two versions of the product.

  • Data Studio will remain free for individuals and small teams that need quick analysis and visualization.
  • Data Studio Pro, meanwhile, is aimed at larger organizations that need stronger security, compliance, management controls and AI capabilities, with licenses sold through the Google Cloud and Workspace admin consoles.

Why we care. The (kind of) new Data Studio could make it much easier to pull together campaign, audience and performance data from across Google’s ecosystem in one place. That means faster reporting, easier ad hoc analysis and quicker answers without relying as heavily on analysts or engineering teams. For brands already using Google Ads, BigQuery or Sheets, it could streamline how teams track performance and make day-to-day budget and creative decisions.

Where Looker fits in. Under the new structure, Looker will remain Google Cloud’s enterprise business intelligence platform, focused on governed data, semantic modeling and large-scale analytics. Data Studio, by contrast, is being positioned as the faster, more flexible option for personal exploration, ad hoc reporting and lightweight dashboards across services like BigQuery, Google Sheets and Ads.

What’s next. For existing users, Google says the transition should be seamless. Current reports, data sources and assets will carry over automatically, with no action required.

Google plans to share more about the relaunch and its broader analytics strategy at Google Cloud Next ’26 later this month.

Dig deeper. Data Studio returns as new home for Data Cloud assets
