
Chloe Varnfield talks sneaky Google Ads settings and tanking performance

Chloe Varnfield, a digital marketing specialist at Atelier Studios with nearly eight years in PPC, joined me to share the mistakes that shaped her career — and the lessons every advertiser should take from them.

When Google sneaks settings past you

Chloe’s first story centers on Google’s account-level automated assets setting — a feature so well hidden that many advertisers don’t know it exists until a client sends a screenshot asking why their headline looks completely wrong. The setting, buried behind a three-dot menu, defaults to “on”, meaning Google can automatically generate and serve headlines advertisers never wrote or approved. The takeaway: always audit your account-level settings, and treat every Google update as a potential default you’ll need to turn off.

Why you should never make changes on a Friday

A client asked Chloe to narrow their campaign’s location targeting mid-call. She made the change quickly — and accidentally excluded the UK entirely while targeting only the desired regions. Campaigns stopped delivering. It took three days of head-scratching before she audited the full campaign and found the culprit. The lesson she now swears by: never make significant changes on a Friday, and when something stops working, go straight to a full audit rather than waiting for the algorithm to “fix itself.”

The time she listened to a Google rep — and tanked performance for two months

Chloe’s most costly story involves a campaign that was performing at its best in years. A Google rep recommended switching bid strategy from Maximise Conversions to Maximise Conversion Value. She made the switch — and performance collapsed. For small to medium-sized businesses that already struggle to hit the conversion volume thresholds needed for smart bidding to work effectively, changing bid strategy is a high-stakes decision that shouldn’t be made on the spot. It took two months to recover, with the pressure of a major seasonal sale looming. She fixed it — but the lesson stuck: don’t let enthusiasm or a rep’s insistence override your judgment. Sit on big decisions. Trust your gut.

The account mistakes that still happen in 2026

When auditing inherited accounts, Chloe consistently sees the same three problems: broken or absent conversion tracking (sometimes still pulling from Universal Analytics), broad match applied to brand campaigns — which makes it impossible to know whether results are genuinely driven by non-brand keywords — and accounts with zero negative keywords. These aren’t minor structural issues. They directly distort performance data and waste budget.

On honesty, client relationships, and not spiralling

Across all three of her own stories, Chloe’s client relationships survived because she communicated transparently — explaining what had gone wrong, what she was doing to fix it, and what the next step would be if that didn’t work. Her advice to anyone mid-crisis: breathe, be kind to yourself, stay calm, and remember that no one has died. The ability to fix problems under pressure is what builds expertise — and fixing something difficult often becomes your proudest professional moment.

The AI mistake too many marketers are making

On AI, Chloe is clear: using it to generate ad copy or proposals without reviewing or editing the output is lazy and obvious. AI should make you faster, not replace your judgment. Always put your own voice and review back into whatever it produces.


SerpApi asks court to throw out Reddit scraping complaint


SerpApi is asking a federal court to dismiss Reddit’s lawsuit over alleged scraping of Reddit content from Google Search, saying Reddit is trying to use copyright law to control user posts and public search results.

  • The motion follows Reddit’s amended complaint filed in February.
  • SerpApi says the filing still fails to show copyright ownership, circumvention of technical protections, or concrete harm.

SerpApi’s argument. SerpApi CEO Julien Khaleghy, in a blog post today, argued the lawsuit fails for several reasons:

  • Reddit doesn’t own most of the content at issue. Its user agreement states that users retain ownership.
  • Reddit holds only a non-exclusive license to user posts.
  • The snippets cited in the complaint (e.g., dates, addresses, short fragments) aren’t copyrightable.
  • SerpApi accessed Google Search pages, not Reddit itself.

DMCA. Khaleghy said Reddit claims SerpApi violated the Digital Millennium Copyright Act (DMCA) by circumventing technical protections. SerpApi disputes that claim, saying it retrieves the same search results visible to anyone who enters a query in Google. Khaleghy argued that:

  • SerpApi doesn’t break encryption or bypass authentication.
  • Accessing public webpages isn’t “circumvention” under the DMCA.
  • Reddit is trying to enforce copyright protections it doesn’t own.
  • Reddit’s privacy policy states that public posts may appear in search results.

Catch up quick. Legal fights over search scraping and AI data have intensified in recent months.

Why we care. The case tests whether companies can extract information from Google’s search results without violating copyright or the DMCA. The outcome could affect SEO tools and AI training data.

What’s next. The court must decide whether Reddit’s amended complaint can proceed. If the judge dismisses the case with prejudice, Reddit’s claims against SerpApi in this lawsuit would end.

SerpApi’s blog post. Reddit’s Lawsuit is a Dangerous Attempt to Expand Platform Power

Beyond keywords: Mastering AI-driven campaigns


The days of building campaigns around long lists of keywords are fading. Today, AI-powered Google campaigns and features like Performance Max (PMax) and AI Max are changing the rules.

These keywordless campaigns lean on automation, audience signals, and machine learning to find new opportunities, often faster and at greater scale than humans can.

At SMX Next, three PPC pros — Nikki Kuhlman, VP of search at Jumpfly; Brad Geddes, founder of Adalysis; and Christine Zirnheld, director of lead gen at Cypress North — explained where PMax and AI Max fit into your broader campaign strategy, where humans still make the difference, and how to strike the right balance between automation and control.

AI Max for Search: Best practices and what not to do

AI Max for Search is not a new campaign type. It’s a one-click opt-in setting within existing Search campaigns.

Without requiring you to switch to broad match, it expands your keywords — similar to broad match or Dynamic Search Ads — using your landing pages and other site assets. It then personalizes the ad copy and landing page the searcher sees.

The evolution from traditional setup

In the old setup, you might have used a keyword like “skincare for dry sensitive skin” that sent users to a moisturizer page with generic ad copy because you couldn’t capture every variation. With Google’s current matching, a specific ad group no longer guarantees that keyword will trigger that ad group.

AI Max for Search addresses this by generating ad copy based on the search query, making it more relevant and directing users to a landing page that better matches their needs.

Success with blog content

One area where AI Max for Search is seeing success beyond the norm is blog content. While DSA campaigns traditionally excluded blogs, AI Max for Search can now serve blogs as landing pages—and they’re converting. The key is that these blogs guide readers to specific products, not just general content.

The generated headlines are compelling and longer than what traditional RSAs allow, creating a more engaging user experience.

Best Practices for AI Max for Search

Do:

  • Use it on existing campaigns with history and data, not brand new campaigns
  • Test it as a 50/50 experiment instead of an outright change
  • Use it on brand campaigns with brand inclusion capabilities
  • Apply it to campaigns not hitting budget that could use more volume
  • Review landing pages and utilize URL exclusions (individual or rule-based)
  • Use landing page inclusions at the ad group level
  • Review search queries regularly and add negative search terms
  • Enable both text customization and final URL expansion for maximum value
  • Turn off AI Max at the ad group level when specific ad groups drive poor traffic

Don’t:

  • Use it on brand new campaigns without data
  • Change all campaigns at once without testing
  • Use it on brand campaigns without name recognition or brand inclusion ability
  • Apply it to budget-constrained campaigns
  • Turn off both URL expansion and text customization — if you’re not using both features, stick with broad match and smart bidding
  • Assume it works universally — test on individual campaigns

Your action plan

Week 1: Pick a search campaign to test (brand with brand inclusion, with budget capability, needing more volume). Review landing page URLs and add inclusions or exclusions.

Week 2: Review search queries and add negatives.

Week 3: Continue optimization and turn off AI Max at the ad group level as needed.

Experiment checklist:

  • Ensure enough volume for a 50/50 experiment
  • Give the experiment 6 weeks to 2 months
  • Set up a custom experiment if you need to enable brand inclusion or update settings
  • For one-click experiments, change confidence level to medium and turn off auto-apply

Match type performance: What the data shows

A comprehensive study analyzing over 16,000 campaigns revealed surprising insights about match type performance across different bidding strategies.

Match type basics

  • Exact match: Should match only when the search term has the same intent as your keyword. Misspellings and word order haven’t mattered for years — focus on user intent.
  • Phrase match: The search intent should match your keyword, but could have additional information around it, whether modifiers, phone numbers, or websites.
  • Broad match: Shows for anything related to the search intent. The key difference is that broad match uses additional signals that exact and phrase don’t, such as content on the landing page, other keywords in the ad group, and, most powerfully, previous search history for that user.

Performance by bidding strategy

Max Conversion strategies (Max Conversions, Max Conversion Value):

Most campaigns using max bid strategies have under 30 conversions per month, giving machines limited data to work with. The findings:

  • Exact match has the best click-through rates and conversion rates
  • Broad match had the worst conversion rates but the best return on ad spend
  • Broad match also had lower CPA than phrase match
  • Phrase match performed worst overall

Recommendation: Start with exact match, then skip phrase match entirely and layer in broad match if you have more budget to spend.

Target Bid Strategies (Target CPA, Target ROAS):

Most campaigns using these strategies have over 30 conversions per month, with many at 50 or 100+, giving machines substantially more data. The findings:

  • Exact match is again the best match type
  • Phrase match comes second
  • Broad match is third
  • Phrase match performs better with more data

Recommendation: Start with exact match, layer in phrase match with more budget, then add broad match if additional budget is available.
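The two recommendations above amount to a simple decision rule keyed off your bidding strategy. Here is a minimal illustrative sketch of that logic; the function and the strategy labels are mine, not anything from the Google Ads API:

```python
def match_type_rollout(bid_strategy: str) -> list[str]:
    """Suggested order for layering in match types, per the study's findings.

    bid_strategy: one of "max_conversions", "max_conversion_value",
    "target_cpa", or "target_roas" (hypothetical labels).
    """
    if bid_strategy in ("max_conversions", "max_conversion_value"):
        # Limited conversion data (typically under 30/month):
        # start with exact, skip phrase, layer in broad with extra budget.
        return ["exact", "broad"]
    if bid_strategy in ("target_cpa", "target_roas"):
        # More conversion data (30-100+/month): phrase match becomes
        # viable as the second layer before broad.
        return ["exact", "phrase", "broad"]
    raise ValueError(f"unknown bid strategy: {bid_strategy}")
```

For example, `match_type_rollout("target_cpa")` returns `["exact", "phrase", "broad"]`, matching the Target CPA/ROAS recommendation above.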

The phrase match puzzle

Why does phrase match perform poorly with limited data but better with more data?

Broad match uses additional signals, particularly previous search queries, to determine bids. When conversion data is limited (under 30 conversions monthly), broad match’s ability to leverage previous search history makes it much stronger than phrase match.

However, with sufficient data (50–100+ conversions), Google can properly match phrase match keywords using machine-learning pattern matching.

Brand vs. non-brand considerations

When you combine brand and non-brand data, exact match becomes even more powerful, delivering significantly higher click-through rates, higher conversion rates, lower CPAs, and much higher return on ad spend. That’s why segmenting keywords by brand and non-brand is crucial when determining your match type strategy.

Ecommerce exception

For ecommerce companies, broad match (and sometimes phrase match) can produce higher average order values than exact match. When someone searches for a specific product, and you carry that exact item, conversion rates are high, but they’re usually buying a single product with a lower checkout value.

When shoppers haven’t decided on a product, they tend to match broader keywords and build larger carts — resulting in lower conversion rates but higher order values.

Performance Max for lead generation

There’s a common misconception that Performance Max only works for ecommerce and is too difficult for lead generation. That couldn’t be further from the truth.

The critical success factor

The biggest mistake you can make—one you should avoid entirely—is optimizing campaigns for form submissions alone. If you treat every form submission as your campaign goal, you’ll end up with spammy submissions and frustrated sales teams.

The solution: integrate your Google Ads account with your CRM and import bottom-of-funnel leads—sales-qualified leads (SQLs), marketing-qualified leads (MQLs), opportunities, or even customers if the sales cycle is short.

When you tell Google Ads what you actually want and set it as your campaign goal, Performance Max can cast a wide net while still bringing in qualified prospects.

Available controls for regulated industries

Performance Max has significantly more controls now than at launch, making it viable for highly regulated industries:

  • Brand exclusions: Exclude all brand traffic from Performance Max campaigns
  • Campaign-level negative keywords: Exclude unwanted search terms directly
  • Search term reports: See what’s triggering your ads and exclude accordingly
  • Channel reporting: View spending and performance across different networks
  • Page feeds: Control where you send traffic on your site
  • Final URL expansion toggle: Turn it off completely if needed
  • Text enhancement controls: Optional feature that can be disabled entirely
  • Text guidelines: Specify words to avoid (e.g., “discount” or “directory”)

Device control: The secret weapon for B2B

One of the most underutilized levers for B2B and regulated industries is device control, introduced at the beginning of 2025. You can turn off any device from your Performance Max campaign.

A B2B SaaS example demonstrates the impact: Before device segmentation in January, the account had 224 SQLs from desktop at an acceptable CPA, but 33 from mobile at $319 CPA (above goal). After creating separate mobile campaigns with more aggressive target CPAs, they achieved 190 desktop SQLs and 37 mobile SQLs in a shorter month, with mobile CPA dropping to $204 and overall Performance Max CPA declining from $238 to $204.
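The blended CPA figures in this example follow from simple weighted math: total spend divided by total conversions across device segments. A quick sketch to make the relationship explicit; the helper is mine, and only the mobile figures below come from the example:

```python
def blended_cpa(segments: list[tuple[int, float]]) -> float:
    """Blended CPA across segments: total spend / total conversions.

    Each segment is (conversions, cpa); spend per segment is
    reconstructed as conversions * cpa.
    """
    total_spend = sum(conv * cpa for conv, cpa in segments)
    total_conversions = sum(conv for conv, _ in segments)
    return total_spend / total_conversions

# Implied mobile spend from the example above:
mobile_spend_before = 33 * 319.0  # before segmentation
mobile_spend_after = 37 * 204.0   # after separate mobile campaigns
```

Splitting mobile into its own campaign lets you push its CPA down independently, which is what pulls the blended figure from $238 toward $204.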

Real Performance Max results for B2B SaaS

Despite lower conversion rates from Performance Max compared to search campaigns (due to broader reach), the results speak for themselves. In September 2025, one B2B SaaS account achieved:

  • Search Campaigns: 150 SQLs at $237 CPA
  • Performance Max: 204 SQLs at $220 CPA

Performance Max cast a wider net with cheaper CPCs, bringing in not just more leads but more sales-qualified leads at a lower cost.

How they did it:

  • Optimized for SQLs, not form submissions
  • Set lower target CPAs in Performance Max than search (to control spend while casting wider net)
  • Created separate campaigns for off-hours to control weekend spending
  • Turned off final URL expansion and text enhancements (client preference)
  • Implemented separate mobile and tablet campaigns with aggressive target CPAs

AI Max for Search in lead generation

AI Max for Search brings the power of Performance Max to the search network, where bottom-of-funnel intent is strongest. This is especially valuable for lead generation accounts that spend on other networks in Performance Max but don’t generate leads from them.

Early results: Higher ed financial services

A higher education financial client (loan products) showed promising early results:

Approved applications (primary KPI):

  • Standard Search: 86 approved applications at $660 CPA
  • AI Max: 70 approved applications at $579 CPA

AI Max brought in qualified leads cheaper despite the highly competitive keyword environment.

Down-funnel performance

Beyond the initial conversion action (soft credit check), AI Max showed superior performance throughout the funnel:

  • 42% of AI Max form submissions resulted in soft pulls vs. 36% for standard search
  • 9.9% of AI Max form submissions resulted in bookings vs. 5.58% for standard search

AI Max isn’t just bringing more qualified prospects at the top—lead quality remains higher throughout the entire funnel.

How they did it:

  • Optimized for approved applications, not form submissions
  • Set lower target CPAs in AI Max than standard search
  • Used high-performing bottom-of-funnel keywords with broad match types
  • Kept final URL expansion and text enhancements disabled (still worked well without them)

Win with AI without losing control

PPC success requires embracing AI-driven campaigns while maintaining strategic human oversight. Whether you use AI Max for Search, Performance Max for lead generation, or adjust match types based on bidding methods and data volume, the key is understanding how these tools work and applying best practices aligned with your business goals.

The data is clear: exact match remains powerful across scenarios, but phrase and broad match perform differently depending on bidding strategy and data volume. For lead generation, the game changer is optimizing for true bottom-of-funnel conversions rather than form submissions, combined with strategic device controls and proper campaign segmentation.

The future of PPC depends on knowing when — and how — to apply automation and control for maximum impact.


Why surface-level SEO tactics won’t build lasting AI search visibility


A recent Harvard Business Review piece echoes the shift we’re seeing in the SEO industry: at a macro level, LLMs and Google’s AI-powered SERP features, such as AI Overviews, aren’t just creating a zero-click environment, but also changing user journeys and behavior.

They’re collapsing what used to be multi-touch customer journeys into a single synthesized answer.

To put it in a more visual, emphatic metaphor: the monolith of “Search” is crumbling.

When that happens, brands lose many of the touchpoints they once owned, and your marketing strategy must change accordingly. HBR captures this moment well, arguing that marketing now has a new audience and that algorithms increasingly shape first impressions.

That said, while the article points in the right direction on the broader trend, its tactical advice is generic and falls back on shallow tactics.

Much of the guidance returns to familiar marketing playbook ideas that sound strategic and innovative but lack real operational depth. That gap matters for the longevity and sustainability of visibility.

The narrative may be easy for you to understand and repeat at the executive level, but it glosses over the deeper structural changes you must actually make to adapt to the new search ecosystem.

The problem with flock tactics

The HBR article centers on schema, authorship signals, and branded concepts. These recommendations risk becoming what I call “flock tactics.”

These ideas spread quickly because they’re easy to explain, but they offer little lasting competitive advantage once everyone adopts them.

Schema 

Schema has been one of the most debated topics in LLM and AI optimization. Microsoft Bing confirmed it uses schema for its LLMs, but the relationship between Google’s models and third-party LLMs isn’t as straightforward.

While it isn’t necessarily wrong to recommend schema as part of your overall search optimization activities (SEO and AI), positioning it as a table-stakes tactic ignores diminishing returns once competitors implement similar markup and it becomes standard.

Another gap is the role of external knowledge systems, such as Wikidata or authoritative publishers. Much of the information LLMs rely on comes from those sources rather than a single company’s website.

This is harder to understand, explain, and demonstrate as a single line item on an activity tracker, but these are nuances you now have to deal with, whether you like it or not.

What’s also missing is any exploration of — or even a nod to — how models ingest and prioritize structured data compared with the many unstructured signals they rely on.


E-E-A-T — shallow authorship signals

Attaching the names, credentials, and biographies of real experts follows familiar E-E-A-T logic and represents reasonable hygiene.

The problem is that the treatment remains superficial. It risks pushing you to focus on cosmetic signals such as bios, headshots, and credential lists without strengthening the underlying expertise pipeline.

There is a meaningful difference between placing an author bio on a page and cultivating a genuine expert entity whose work appears in conferences, third-party publications, standards committees, or academic collaborations.

Only the latter produces signals that models are more likely to recognize and trust.

Vanity concepts

The article also suggests creating branded frameworks or concepts — for example, something like “The Acme Index” — to help models associate ideas with your company. In theory this sounds appealing, but in practice it’s extremely difficult to execute.

Unless those ideas spread into the trusted datasets LLMs tend to prioritize, they rarely gain traction.

You need those concepts and frameworks adopted and discussed by entities other than yourself, including academic journals, technical standards, widely used software ecosystems, and other prominent entities in your category.

What often results instead is a proliferation of branded labels that remain largely invisible to the models they were meant to influence.

The structural blind spots

Beyond these tactical issues, the analysis overlooks deeper structural challenges. It treats AI primarily as an external platform shift.

The implication is that you must simply adapt to it rather than actively shaping your own environment.

Internalizing AI infrastructure

HBR never seriously considers the possibility of building AI into your own infrastructure. You can deploy assistants, RAG systems, and domain-specific agents within your own products and customer experiences.

These systems operate in logged-in, transactional contexts where first-party data and controlled interfaces still matter enormously.

In those environments, traditional concerns such as site architecture, structured data, and product design remain deeply relevant, though they operate differently from public search optimization.

It’s not just SEO

The discussion also frames SEO primarily as a page-ranking problem tied to discovery.

That perspective misses the broader shift toward entity-level knowledge management (things, not strings).

Visibility within LLMs increasingly depends on how well you structure entities, taxonomies, and knowledge graphs, and on how those systems connect with external data sources.

Most LLMs don’t process data at the petabyte scale Google uses to understand entity relationships. There is a strong correlation here: when something ranks well on Google, third-party LLMs tend to follow and effectively “trust” Google’s guidance on which brands to show, for what, and when.

HBR’s phrase “engineering recall” points directly to this deeper data engineering work, yet the implications aren’t expanded.

LLM model heterogeneity

Another major omission is the diversity of AI systems themselves.

Different AI assistants and models rely on different training datasets, refresh cycles, retrieval mechanisms, and safety layers.

That heterogeneity means you can’t assume a single optimization strategy will work across all AI surfaces.

It also doesn’t explore the risk of broad-stroke approaches. If you try to increase visibility within AI models without accounting for safety filters, attribution errors, or hallucinations, you may gain visibility in ways that are inaccurate or reputationally damaging.


Surface-level tactics won’t build AI visibility

HBR’s article works well as a high-level explanation of how AI is changing marketing. It helps you understand that traditional SEO alone is no longer enough and that you must consider how AI systems see and describe your brand.

As a practical guide, however, the advice is thin. Most recommendations focus on surface-level tactics that many companies will quickly copy, reinforcing the echo chamber of flock tactics that are easy to sell and quantify, but risk narrowing your focus to short-term wins at the expense of longer-term strategy.

The real challenge is deeper. You need clear entity definitions, structured knowledge systems, reliable data in trusted sources AI models use, testing across how different models represent you, and AI-powered experiences within your own products.

“Winning” in the AI era will depend less on cosmetic SEO improvements and more on the harder structural work behind the scenes.

Only 15% of pages retrieved by ChatGPT appear in final answers: Report


ChatGPT retrieves far more webpages than it cites. A new AirOps analysis found that 85% of discovered sources never appear in the final answer.

Why we care. If you want your content cited in AI-generated answers, discovery isn’t enough. Most retrieved pages never become visible to users.

Key finding. In AI answers, retrieval doesn’t equal citation. Your page can rank and be retrieved yet still lose the citation to a source that better matches the prompt or supporting context.

  • This shifts optimization toward earning selection inside the AI synthesis process—not just appearing in search results, per the report.

By the numbers:

  • 82,108 citations appeared in final responses.
  • Only 15% of retrieved pages were cited.
  • 85% of pages surfaced during research never appeared in answers.

Citation rates also varied by query type:

  • 18.3% for product discovery queries
  • 16.9% for how-to queries
  • 11.3% for validation searches

Fan-out queries. ChatGPT often expands prompts with additional internal searches while generating an answer, creating what the report calls a “second citation surface.” Across the dataset:

  • 89.6% of prompts triggered two or more follow-up searches.
  • Fan-out searches expanded 15,000 prompts into 43,233 queries.
  • 32.9% of cited pages appeared only in fan-out results—not the original prompt.
  • 95% of fan-out queries had zero traditional search volume.
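The headline figures are internally consistent and easy to verify: citations divided by retrieved pages gives the roughly 15% citation rate, and fan-out queries divided by prompts gives the average expansion factor. A quick arithmetic check (the variable names are mine):

```python
# Figures from the AirOps report
retrieved_pages = 548_534   # pages surfaced during research
cited = 82_108              # citations in final responses
prompts = 15_000
fanout_queries = 43_233

citation_rate = cited / retrieved_pages        # ~0.15, i.e. ~15%
queries_per_prompt = fanout_queries / prompts  # ~2.88 searches per prompt

print(f"{citation_rate:.1%}")       # → 15.0%
print(f"{queries_per_prompt:.2f}")  # → 2.88
```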

Google ranking correlation. High Google rankings strongly correlated with citations:

  • 55.8% of cited pages ranked in Google’s top 20.
  • Pages ranking in Position 1 were cited 3.5 times more often than pages outside the top 20.

About the data. AirOps analyzed 548,534 pages retrieved across 15,000 prompts to examine how ChatGPT expands queries and selects citations.

The study. The Influence of Retrieval, Fan-out, and Google SERPs on ChatGPT Citations

Stop paying for traffic: The enterprise CMO’s guide to ROI-driven SEO


The standard agency reporting call is broken. Budgets are under extreme scrutiny, yet you still invest in vendors that celebrate arbitrary traffic gains while your sales pipeline stays flat.

Optimizing for raw traffic volume is a legacy mindset that hides real commercial performance. The new mandate is to build an acquisition engine that influences buyers and protects your profit and loss (P&L) long before the transaction.

To survive as a marketing leader today, you must ruthlessly challenge your internal teams and external agencies. Stop accepting reports on operational output and demand hard financial accountability: pipeline contribution, customer lifetime value (LTV) to customer acquisition cost (CAC) ratios, and reduced paid media dependency.

The new path to purchase: Why traffic is bleeding your budget

Chasing top-of-funnel informational traffic is a trap. If the users clicking your links aren’t actively buying, you’re paying for vanity metrics, not business outcomes.

This happens because many buyers now use large language models (LLMs) to conduct deep research before they reach a search engine’s transactional layer. If you aren’t the cited authority during that AI-driven research phase, you’re invisible by the time buyers finalize their purchase decisions.

The 7.48% reality: The power of the educated buyer

The contrast in traffic quality is staggering when you look at the data. Across our enterprise client base, traditional organic search converts at 2.75%, while AI search converts at 7.48%.

LLMs function as the ultimate trust proxy for today’s consumers. When tools like Gemini, ChatGPT, or Perplexity synthesize dozens of reviews, whitepapers, and Reddit threads to recommend your enterprise software, users trust the LLM’s consensus more than a branded blog post.

AI engines arm consumers with comprehensive data, comparisons, and consensus. By the time a user clicks your AI citation, they’ve already made their decision based on your authority and are prepared to transact.


From found to cited: Architecting the default recommendation

Want to capture this 7.48% conversion rate? Your entire approach to digital asset creation must evolve. The strategy no longer centers on ranking among a list of links, but on being cited as the definitive option.

To win the AI consensus, you must translate your marketing strategy into structured capital management.

  • The old way: Publishing a 2,000-word blog post on top supply chain trends that generates 5,000 monthly visitors who bounce after reading and add zero value to your pipeline.
  • The new way: Build a generative engine optimization (GEO) hub—a dedicated supply chain cost calculator page with proprietary data tables, expert author schema tagging your lead engineers, and strict answer-first formatting.

LLMs require consensus and verifiable facts to generate confident answers. By structuring your digital assets with proprietary data and verifiable entities, you become the default recommendation.

This approach may yield only 500 highly qualified visitors, but it gives LLMs what they need to cite you in vendor comparison prompts and captures buyers at the exact moment of commercial evaluation.

Strategic ROI: Using citation authority to reduce ad spend

It’s time to stop viewing SEO as a siloed traffic generator. You must treat organic citation authority as a strategic financial lever to reduce overall CAC.

Align your organic assets with your highest-CAC paid campaigns. When organic search owns the AI Overview, your paid team can confidently pull back defensive ad spend.

Here’s how to leverage paid and AI search:

  • IF your brand becomes the default AI recommendation for a high-cost commercial category, THEN your paid team must aggressively reduce defensive brand bidding to slash overall cost per acquisition (CPA).
  • IF paid search identifies a highly profitable long-tail query, THEN SEO must prioritize building a structured asset to organically capture that exact demand in the future.
  • IF an LLM cites your competitor as the superior enterprise solution, THEN your paid team must immediately deploy targeted, bottom-of-funnel conquesting ads to intercept that user before the transaction, while the organic team rapidly engineers a proprietary data asset to win back the consensus.

The monthly cannibalization review: Your immediate action item

If your Head of Search and Head of Paid Media aren’t in the same room once a month mapping organic citations against paid brand bidding, you’re burning capital.

Align your teams and channels. Routinely audit where you’re paying for clicks on terms where you already own the AI citation and the top organic spot.

Treat this cannibalization review as a strict financial audit. Identify wasted defensive ad spend and immediately reallocate those dollars toward net-new market expansion.
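The review above can be sketched as a simple script. This is a hypothetical illustration, not a tool the article describes: the column names (`organic_rank`, `has_ai_citation`, `paid_spend`) and the sample terms are assumptions.

```python
# Hypothetical sketch of the monthly cannibalization review: flag terms
# where you already own the AI citation and the top organic spot but are
# still paying for defensive clicks. Field names and figures are invented.

def flag_cannibalized_terms(rows):
    """Return terms whose paid spend is a reallocation candidate."""
    flagged = []
    for r in rows:
        owns_serp = r["organic_rank"] == 1 and r["has_ai_citation"]
        if owns_serp and r["paid_spend"] > 0:
            flagged.append({"term": r["term"], "reclaimable_spend": r["paid_spend"]})
    return flagged

terms = [
    {"term": "acme crm", "organic_rank": 1, "has_ai_citation": True, "paid_spend": 12000},
    {"term": "best crm software", "organic_rank": 4, "has_ai_citation": False, "paid_spend": 30000},
]

for f in flag_cannibalized_terms(terms):
    print(f"{f['term']}: ${f['reclaimable_spend']:,} reclaimable")
```

In practice, the inputs would come from joining Search Console (or rank-tracking) data with paid search term reports; the logic above is just the audit rule made explicit.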

The enterprise scorecard: 3 questions to ask your agency tomorrow

To regain control of your P&L, you must challenge your vendors to step up. Ask your agency these three questions tomorrow morning to see if they’re true business partners or order-takers.

1. What’s our citation share of voice for our highest-margin categories?

Challenge your team to map their organic efforts directly to the AI research phase of your most profitable products.

The answer you should hear: “We’ve mapped your 50 highest-margin queries. By securing the primary AI citation for these, we’ve generated $1.2 million in pipeline this quarter at a 3:1 LTV:CAC ratio.”

2. How is our citation strategy directly reducing our paid media CAC?

Require teams to prove how their organic authority captures demand that would otherwise require paid ad spend.

The answer you should hear: “By capturing the definitive AI citation for [category], we paused paid bidding on those terms. This reduced our blended CAC by 18% and saved $45,000 in defensive ad spend — which we’ve immediately reallocated to net-new market expansion.”

3. Are our digital assets structured for LLM extraction?

Push your teams to explain their strategy for AI-driven search models. It’s no longer enough to publish standard web pages.

The answer you should hear: “We’ve restructured your core commercial pages away from standard marketing copy, deploying ‘answer-first’ frameworks, proprietary data tables, and expert author entities to ensure LLMs confidently extract and recommend your brand. This structural shift has increased our inclusion in commercial AI Overviews by 40% this quarter, directly feeding our bottom-of-funnel pipeline.”


Demand commercial outcomes, not operational output

In a tough economy, SEO is a measurable business unit that must defend its budget with revenue data. Don’t accept operational output as proof of commercial success.

Audit your reporting frameworks immediately. Stop accepting vanity metrics as evidence of success. Demand pipeline impact, LTV:CAC ratios, and a resilient acquisition engine.

Any agency or internal team unwilling to tie its work directly to your P&L will become obsolete. Your job as an enterprise leader is to ensure your brand is cited as the authority long before the transaction begins.

Google Search Ads in 2026 require a different kind of audit

Google Search Ads value redistribution

Brandon Ervin, Director of Product Management for Google Search Ads, recently discussed campaign consolidation, AI Max, and what advertiser control looks like in 2026 on Google’s Ads Decoded podcast. The conversation was serious and informed, and reflected a product team that understands advertiser concerns and is actively working to address them.

But the podcast is also incomplete. The gap between what Google said and what advertisers actually experience from their sales organization is large enough to warrant a direct response.

Ervin’s team is doing genuinely good work, but the platform’s structural incentives haven’t changed. Google’s evolving product is creating problems faster than it can solve them. Performance must now be measured against economic standards, and that reshapes how a search ads audit is performed.

Recent improvements to Google Search Ads

The recent improvements are genuine:

  • Brand exclusions in Performance Max and Demand Gen.
  • Site visitor and customer exclusions from PMax campaigns.
  • Network-level reporting within bundled campaigns.
  • Improved search term visibility.
  • Brand and geo controls inside AI Max at the ad group level.
  • Semantic modeling that doesn’t anchor on campaign or ad group IDs, reducing learning period risk during consolidation.

These are meaningful. They are also solutions to issues introduced by bundling, opacity, and aggressive automation rollout.

These products have been mercilessly shopped to advertisers since 2021, and the controls that make them usable arrived years after the sales push began.

The ability to separate brand from non-brand traffic inside PMax/AI Max should not be framed as innovation. It restores a fundamental distinction that previously existed by default. The ability to see network performance inside a bundled campaign is not an expansion of control. It restores visibility that was removed.

An audit must ask whether new tools are genuinely expanding control or merely reintroducing baseline transparency.

Table stakes: What everyone agrees on

Before the real audit begins, the fundamentals. These are uncontroversial and should already be in place:

  • Run full ad extensions (sitelinks, callouts, structured snippets, image, call).
  • Use automated bidding with intentional target-setting and conversion action selection (I recognize there are still holdouts here, but that seems crazy to me).
  • Maintain negative keyword lists.
  • Write ads relevant to the queries they serve.
  • Audit automatically created assets for accuracy and brand safety.
  • Cut Search Partners and Display expansion from Search campaigns.
  • Separate brand and generic campaigns using brand controls.
  • Exclude site visitors and past customers from prospecting campaigns where appropriate.
  • Import offline conversion data (MQLs, SQLs, revenue, CLV, repeat rate) to feed the algorithm downstream signals.
  • Weight conversion values by actual downstream conversion rates.
  • Account for mobile vs. desktop performance gaps.

Those are table stakes. The real audit begins after that.

What a 2026 search audit must focus on

With the prevalence of AI, advertisers need to focus on reconstructing economic visibility in systems designed around aggregation and automation. 

Signal architecture

In the podcast, Ervin says “control still exists, it just looks different.” Ad controls — where, when, and to whom ads appear — still matter, and they are changing (some would argue for the worse).

The old ad controls — exact match, manual bids, network selection, and device modifiers — gave advertisers direct influence over where ads appeared and what they paid. 

However, the new controls are indirect. Control now lives in data quality, density, and selectivity. These levers influence the algorithm, but the algorithm makes the final call.

An audit should focus on three questions:

  • Quality: Are you importing revenue, pipeline stage, or qualified lead status, or only surface conversions?
  • Density: Is there enough high-quality data for the model to learn from, or is it sparse and noisy?
  • Selectivity: Are you intentionally limiting what Google can see, or are you passing everything indiscriminately?

Low prediction, high density

With these tactics, you might pass only net-new or high-value customers. Most of the time, though, it is better to simply pass the densest and most predictive conversion set.

Incrementality

Google optimizes toward reported conversions, not incremental conversions. Brand search often captures existing demand. Retargeting often captures users already in motion. PMax/AI Max frequently blends these signals.

Ervin was asked: Are AI-driven campaigns over-indexing on warm brand traffic to inflate blended ROAS (return on ad spend)?

He doesn’t dispute the problem, but points to partial solutions: using brand controls, theming your account better, and running multi-campaign A/B tests.

If incrementality is not measured, automation amplifies non-incremental signals.

Marginal returns

Google uses a blended cost-per-action (CPA). For example, the first $50K of spend might return a $30 CPA, while the next $50K might return a $120 CPA.

With automation, money is spent until the blended metric falls within tolerance, meaning the last dollar is not spent efficiently. The vast majority of advertisers are bidding far beyond what they should be and have no idea it is happening.

An audit must:

  • Plot spend against incremental conversions.
  • Estimate marginal CPA at each spend tier.
  • Identify diminishing return curves.
  • Compare marginal CPA to lifetime value.

A lower target makes the algorithm more selective, competing in fewer high-value auctions. Google doesn’t suggest this because it would mean less spend, and because lower bids are generally less effective for Google.

Query resolution and ability to lower targets

On the podcast, Ervin acknowledges that some AI Max matches can “look a little wonky” and says his team is working on exposing the model’s reasoning. 

Query mapping has gotten meaningfully worse over the past several years: queries landing in the wrong ad groups, matching to keywords with different intent, and broad match pulling in traffic unrelated to the keyword.

AI Max has accelerated this — there’s been an increase in the volume of irrelevant queries flowing through AI Max campaigns, with no connection to the advertiser’s business or keywords in the account. 

Meanwhile, Google’s recommendations consistently push toward broad matching and large themed ad groups.  

The issue is not whether broad match works, but whether high-value intent is being diluted in larger, broader ad groups. Fewer ad groups mean we cannot effectively or meaningfully lower targets without a massive structural negative schema, so performance differences have to be large enough to validate the new structure.

An audit should:

  • Extract full search term reports.
  • Classify queries by intent tier.
  • Compare CPA and lifetime value by query type.
  • Quantify irrelevant or weakly related matches.
  • Measure performance drift across match types.
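The classification and comparison steps above can be sketched in a few lines. The tier rules, queries, and numbers here are invented for illustration; a real audit would work from a full search term export.

```python
# Hypothetical search-term audit: bucket queries into intent tiers
# and compare CPA per tier. Tier keywords and figures are assumptions.

from collections import defaultdict

def intent_tier(query):
    q = query.lower()
    if any(w in q for w in ("buy", "pricing", "demo")):
        return "high"
    if any(w in q for w in ("best", "vs", "compare")):
        return "mid"
    return "low"

def cpa_by_tier(terms):
    cost = defaultdict(float)
    conv = defaultdict(int)
    for t in terms:
        tier = intent_tier(t["query"])
        cost[tier] += t["cost"]
        conv[tier] += t["conversions"]
    return {k: (cost[k] / conv[k] if conv[k] else None) for k in cost}

terms = [
    {"query": "acme analytics pricing", "cost": 400.0, "conversions": 10},
    {"query": "best analytics tools", "cost": 900.0, "conversions": 6},
    {"query": "what is data analytics", "cost": 300.0, "conversions": 0},
]
print(cpa_by_tier(terms))
```

A spend-heavy “low” tier with no conversions is exactly the irrelevant or weakly related matching the audit is meant to quantify.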

Network economics

Performance Max and Demand Gen bundle multiple networks into single campaigns, but offer limited visibility into which networks drive results. This makes it hard to cut the underperforming ones. The slow rollout of network-level controls systematically benefits Google’s less competitive inventory.

An audit must:

  • Break out performance by network.
  • Compare CPA and lifetime value by placement.
  • Identify cross-subsidization.
  • Determine whether weaker networks are relying on surplus from strong search inventory.

Value redistribution

Combining these elements in your audit will help you succeed in this new world of search advertising:

  • Non-incremental traffic inflates conversion counts, making performance look better than it is.
  • Looser match types expand where ads appear, diluting intent precision and pushing accounts toward fewer ad groups and blanket-level targets and bids.
  • No clean marginal return visibility makes it much more difficult to find the point of negative return.
  • Network bundling hides which channels actually perform.

The cumulative effect is that the surplus value generated by your best inventory and high-intent, high-converting search queries gets redistributed across Google’s weaker inventory (i.e., Display, YouTube, Discover, Gmail, crazy tail queries).  

This is how a dwindling supply of valuable search queries ends up inflating the cost-per-clicks (CPCs) of low-quality inventory.

The Ads Decoded episode: Is your campaign structure holding you back in the era of AI?

 


Google leaves door open to ads in Gemini


Google is leaving the door open to advertising in its Gemini AI app, with a senior executive telling WIRED the company is “not ruling them out” — a notable shift from the flat denials made just months ago.

What’s changed: In January, Google DeepMind CEO Demis Hassabis told reporters at Davos that Google had no plans to put ads in Gemini. Now, SVP Nick Fox is saying otherwise — noting that learnings from ads in AI Mode will “likely carry over” to Gemini down the road.

The current strategy. Rather than rushing into Gemini, Google is using AI Mode — its Gemini-powered Search product — as a testing ground for ad formats in AI experiences.

  • Ads are kept separate from organic results and clearly labeled
  • Google says it only shows ads when they’re relevant — if nothing fits, nothing runs
  • The company is drawing on 20-plus years of Search ad experience to inform the approach

Why we care. Google’s entire business is built on advertising. Whether and how Google brings ads into AI products will shape the future of the industry — and set the tone for every AI company trying to figure out how to monetize free users. The brands that figure out how to show up relevantly in conversational AI environments now — before the auction gets competitive — will have a significant first-mover advantage.

The bigger picture. Google is in a stronger position than its rivals to take its time. The company crossed $400 billion in revenue in 2025, giving it the luxury of patience. OpenAI, by contrast, is under pressure to more than double its $30 billion in revenue this year — and has already started testing ads in ChatGPT’s free tier.

Between the lines: Fox’s framing is careful but revealing. By positioning Gemini ads as a “prioritization question” rather than a values question, Google is signaling it’s a matter of when — not if.

What to watch: Personal Intelligence — Gemini’s feature that pulls from a user’s Gmail, Photos, and Calendar — is the sleeper story here. Fox called personalization his “holy grail” for Search, and hinted it could eventually roll into the broader Search experience. If it does, advertisers would gain access to an entirely new layer of contextual targeting — though Fox was quick to add that user data will not be sold or shared.

What’s next. Advertisers should start preparing now. As Google refines its AI ad formats in AI Mode, those learnings will eventually migrate to Gemini. Brands that understand how to show up relevantly in conversational, context-rich AI environments will have a significant head start when the floodgates open.

Dig deeper. Google Is Not Ruling Out Ads in Gemini (registration needed)

Google AI Overviews cut search clicks 42%: Report

Google traffic redistribution

Google’s AI Overviews may be reducing traditional search clicks, but publishers still have meaningful growth opportunities in breaking news and Google Discover, according to new data from Define Media Group.

  • Organic search clicks have fallen 42% since AI Overviews began expanding in Google Search, according to Define Media Group’s analysis of Google Search Console data across its portfolio of 64 sites.

Why we care. AI-generated answers are reshaping search traffic. Evergreen content is losing clicks, while real-time news coverage and Discover distribution are emerging as stronger traffic channels for publishers.

By the numbers. Across Google Search, Discover, and Google News, breaking news traffic grew 103% from November 2024 through early 2026 in the company’s dataset. Losses were concentrated in informational and evergreen content:

  • Organic search traffic averaged 1.7 billion clicks per quarter from Q1 2023 through Q1 2024.
  • After AI Overviews launched, traffic fell 16% immediately and never recovered.
  • As Google expanded AI Overviews in May 2025, declines accelerated.
  • By Q4 2025, search traffic was down 42% from the pre-AI Overviews baseline.

Discover’s role: Google Discover, which grew 30% across the portfolio, is now the main growth engine for breaking news distribution. Discover traffic rose steadily as web search traffic fell. For the first time in the dataset, Discover and web search now drive roughly equal traffic.

Why is this happening? AI Overviews appear less often for news queries than for other topics. AI Overviews appeared for about 15% of news queries — roughly one-third as often as in categories such as health and science — according to Ahrefs data cited in the report.

  • News queries often trigger the Top Stories carousel, which links directly to publisher articles. Searches for major developing events, such as international conflicts, typically show Top Stories rather than AI summaries.
  • Define Media Group suggests Google may be avoiding AI-generated summaries for breaking news because events change rapidly, accuracy stakes are high, and generative systems can still hallucinate.

The report. BREAKING! News Thrives in the Age of AI

Google Maps turns exploration into a conversation with Ask Maps

Google is launching Ask Maps, a conversational AI feature powered by Gemini that lets you ask Google Maps complex, real-world questions and get personalized, actionable answers.

What’s new. You can now ask Maps questions like “Is there a public tennis court with lights where I can play tonight?” or “My phone is dying — where can I charge it without a long wait?” and get a conversational answer with a customized map view.

Key capabilities:

  • Personalized recommendations: Results are tailored based on your search and save history, so Maps already knows, for example, that you prefer vegan restaurants before you ask.
  • Trip planning: Ask for recommended stops along a route and get directions, ETAs, and insider tips sourced from over 500 million community contributors across 300 million places.
  • Direct action: Book reservations, save places, or share them with friends directly from the response.

On ads. Ask Maps doesn’t include ads yet, but Google isn’t ruling them out, the Gemini team told SEO consultant Glenn Gabe. Because ads are already common in local search, it wouldn’t be surprising to see them appear here eventually.

  • Ask Maps is intent-rich and planning-focused. You’re deciding where to go and what to do — exactly the moment advertisers pay a premium to reach.

Why we care. Ask Maps changes how you find places, shifting discovery from keyword searches to AI-generated recommendations. The businesses that get picked will have rich, accurate, up-to-date Maps profiles and strong community engagement, because that’s the data Google’s AI uses to make its picks.

Availability. Ask Maps is rolling out now in the U.S. and India on Android and iOS, with desktop coming soon.

What’s next. Advertisers and local businesses should pay close attention. When AI mediates how people discover places, visibility in Maps becomes more critical than ever. Keep your business listings accurate, complete, and review-rich as Gemini draws from that data to power recommendations.

The announcement. How we’re reimagining Maps with Gemini


Google Ads refreshes Asset Optimization layout for Demand Gen


Google redesigned the Asset Optimization section in Google Ads for Demand Gen campaigns, consolidating AI-powered creative controls into a single, cleaner interface.

Why we care. Advertisers managing creative at scale now have a centralized panel to toggle automated features on or off — making the process less manual and time-consuming.

What’s new. The redesigned layout groups three key automation capabilities together:

  • Auto-generated shorter videos — AI trims existing video assets into shorter cuts to qualify for additional placements.
  • Automatic video resizing — Videos are adapted across multiple aspect ratios to maximize inventory coverage.
  • Landing page image pulls — Images are sourced directly from an advertiser’s landing page to generate additional creative variations.

How it works. The new panel surfaces simple toggles for features like Resized videos and Image assets, letting advertisers quickly enable or disable each automation without digging through multiple menus.

Bottom line. Advertisers running Demand Gen campaigns should head into the Asset Optimization panel now and audit which automations are enabled. Turn on video resizing and landing page image pulls if you haven’t already — these are low-effort wins that can meaningfully expand reach without additional creative production.

Also make sure your landing pages are clean and visually strong, since Google will be pulling from them directly. And as Google continues rolling out more AI-driven creative tools, start shifting your workflow toward providing high-quality source assets and letting the platform handle format and placement optimization from there.

The marketing measurement flywheel: A 4-step framework for proving impact


With AI-driven search and hyper-fragmented media channels reshaping how people discover brands, the “set it and forget it” approach to marketing measurement is officially dead. 

Measuring impact isn’t a static check of dashboard data. Used strategically, measurement is a virtuous cycle where data informs your ad platform settings and those settings, in turn, generate better data (and business outcomes).

Here’s how to build a measurement flywheel that keeps your growth efficient.

The 4-step measurement cycle

Imagine a Bay Area SaaS company, PowerLoop, selling an AI-powered analytics platform. They’re investing heavily in Google Search, LinkedIn, and some emerging AI publication sponsorships.

Their problem? Google Ads is reporting fantastic ROAS, but their internal CRM shows a significant number of leads and opportunities that can’t be directly attributed to any specific ad campaign, making it hard to prove marketing’s true impact to the board.

1. Platform ROAS

This is your in-engine reality. Whether it’s Google Ads or Meta, platform ROAS uses pixel and conversion API data to tell you what the platform thinks happened. This might go without saying, but platforms don’t have a habit of underestimating their own impact.

The ideal: Use this for real-time optimization. These signals feed your tCPA (target cost per acquisition) or tROAS (target return on ad spend) bidding strategies.

The limitation: It’s the fastest feedback loop you have, but it’s rarely the full truth. This leads us to…

What it looks like in practice (example): PowerLoop’s Google Ads account is configured with a tCPA bid strategy for “Free trial sign-ups.”

Google Ads reports a healthy $50 CPA, well within their target. LinkedIn also shows strong engagement and click-through rates. This looks great on paper, but the unattributed leads are a nagging concern.

Dig deeper: How to avoid marketing mix modeling mistakes that derail results

2. Back-end ROAS

Platform data is optimistic. Your bank account is realistic.

Back-end ROAS, coming from your CRM of choice (Salesforce, Shopify, HubSpot, etc.), connects your ad spend to your actual CRM or internal database. It’ll likely require some data engineering work to properly map back-end performance against ad platform spend, but the effort is well worth it.

The ideal: Clean out the “noise” (refunds, fake leads, or credit card declines), and evaluate marketing efficiency based on your own first-party data.

The benefit: You can use back-end ROAS to validate your account structure. If the platform says a campaign is winning but the back end shows low-quality leads, it’s time to restructure your targeting or creative.

What it looks like in practice (example): When PowerLoop connects their ad spend to Salesforce, they find that many of the “Free trial sign-ups” from Google Ads are either incomplete profiles or come from IP addresses outside their target market and never convert to qualified sales opportunities.

LinkedIn, while showing engagement, has a lower conversion rate than expected. This insight leads them to refine their Google Ads audience targeting and adjust LinkedIn campaign objectives to focus more on high-intent lead forms.


3. Incremental ROAS (iROAS)

This is the “So what?” metric. iROAS answers the question: How many of these sales would have happened even if we didn’t show the ad? This is where marketing mix modeling (MMM) and incrementality testing (geo-lift tests or holdout tests) come into play.

The goal: Identify true value and “halo effects” across channels.

The action: MMM insights tell you where to double down and where you’re just paying for customers who would have converted anyway. Use these insights to prioritize your next round of incrementality tests.

What it looks like in practice (example): PowerLoop conducts a geo-lift test by pausing Google Ads in select non-core markets for a few weeks and measuring the difference in sign-ups between dark areas and similar areas where ads are still running. They discover that while Google Ads drives some incremental sign-ups, a significant portion of those attributed by Google would have signed up organically anyway, through direct traffic or referrals. 

Conversely, their MMM suggests that the AI publication sponsorships, while not driving direct “last-click” conversions, are significantly contributing to brand awareness and reducing the overall CPA across all digital channels by driving more organic searches for their brand. This reveals that the sponsorships have a higher iROAS than initially thought.

Here’s an example of overvalued and undervalued channels:

Channel incrementality multiplier example

The greater the incrementality factor, the more undervalued this channel has been, such as YouTube and podcasts in this example. The lower the incrementality factor, the more overvalued these channels have been, such as paid review sites in this case.
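The adjustment described above can be expressed as a simple multiplication. This is a hypothetical sketch: the channel names, platform ROAS figures, and incrementality factors are invented to mirror the over/undervalued pattern in the example.

```python
# Sketch of applying incrementality multipliers to platform-reported ROAS.
# A factor > 1 means the platform under-credits the channel (undervalued);
# a factor < 1 means it over-credits it (overvalued). All values invented.

def incremental_roas(platform_roas, factor):
    """Adjust platform-reported ROAS by the channel's incrementality factor."""
    return platform_roas * factor

channels = {
    "youtube":           {"platform_roas": 1.2, "incrementality": 2.0},
    "podcasts":          {"platform_roas": 0.9, "incrementality": 1.8},
    "paid_review_sites": {"platform_roas": 4.0, "incrementality": 0.3},
}

for name, c in channels.items():
    adjusted = incremental_roas(c["platform_roas"], c["incrementality"])
    print(f"{name}: reported {c['platform_roas']:.1f} -> incremental {adjusted:.2f}")
```

Under these assumed factors, the paid review sites that looked like the top performer in-platform drop below break-even once incrementality is applied, while YouTube and podcasts improve.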

Dig deeper: Why incrementality is the only metric that proves marketing’s real impact

4. Marginal ROAS (mROAS)

The final frontier is understanding where to spend the next dollar. Every channel eventually hits a plateau where efficiency craters. This truism is called the law of diminishing returns. Understanding when you hit that mark is key to efficient budgeting.

The goal: Estimate the “room for growth” before hitting a performance ceiling.

The benefit: By monitoring mROAS, you know when to pull back on a saturated channel and reallocate that budget into emerging spaces.

What it looks like in practice (example): PowerLoop’s analysis shows that after spending $100,000/month on Google Ads, another $10,000 yields a marginal return of $0.80 for every dollar spent – meaning they’re essentially breaking even or losing money on additional spend. 

However, for their AI publication sponsorships, every additional dollar spent is still returning $2.50 in incremental value, indicating significant room for growth. They decide to reallocate 15% of their Google Ads budget to expand their sponsorship program.
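PowerLoop’s reallocation decision can be sketched as a comparison of marginal returns against break-even. The function name, channel names, and 15% shift are taken from or invented around the example above; this is an illustration, not a budgeting tool.

```python
# Hypothetical sketch of the mROAS-driven reallocation: move a fixed
# share of budget from channels whose next dollar returns less than
# break-even ($1.00) toward channels still returning more than that.

def reallocation_plan(channels, shift_pct=0.15):
    """channels: {name: marginal ROAS of the next dollar spent}."""
    saturated = [n for n, m in channels.items() if m < 1.0]   # losing money at the margin
    growing = [n for n, m in channels.items() if m >= 1.0]    # still profitable at the margin
    plan = []
    for src in saturated:
        for dst in growing:
            plan.append((src, dst, shift_pct))
    return plan

# Mirrors the example: Google Ads returns $0.80 on the next dollar,
# the sponsorships return $2.50.
channels = {"google_ads": 0.80, "ai_sponsorships": 2.50}
print(reallocation_plan(channels))  # [('google_ads', 'ai_sponsorships', 0.15)]
```

The break-even threshold of $1.00 is itself a simplification; in practice you would compare marginal ROAS against your margin-adjusted break-even, not gross revenue.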

Why the cycle never ends

Marketing measurement is a work in progress because the landscape is constantly shifting. Today, you might be perfecting your Google Search strategy. Tomorrow, you’re figuring out how to measure the impact of a mention in a ChatGPT or Perplexity response.

The hypothetical PowerLoop team understands this. They’re constantly evaluating new AI-driven channels and planning how to integrate them into their measurement cycle. They know that what worked last quarter might not work this quarter and that relying solely on platform data is a recipe for wasted spend.

The goal isn’t to find a “perfect” number that stays set in stone. The goal is to use this cycle to stay agile. When your iROAS reveals that a channel is more incremental than you thought, you push your tROAS targets in the platform (Step 1) more aggressively. When mROAS shows you’re hitting a plateau, you start testing new, unproven channels to find different audiences.

Dig deeper: Break down data silos: How integrated analytics reveals marketing impact

Prompt research: The next layer of SEO and GEO strategy


A growing share of search interactions now begins inside generative systems. Users open AI tools and ask questions the same way they’d ask a colleague: in full sentences, with context, and often across multiple follow-up prompts.

Generative systems synthesize answers from sources they interpret as credible and relevant to the prompt. Visibility increasingly depends on whether a brand’s content aligns with the questions people ask AI systems, not just the keywords they type into search engines.

Traditional search results haven’t disappeared. Today’s discovery environment blends ranked results, AI-generated summaries, and conversational assistants.

This shift introduces a new research layer: prompt research. It’s quickly becoming a foundational practice for SEO and generative engine optimization (GEO).

Here’s how prompt research works, why it matters, and how to incorporate it into content planning.

How prompt-based search is reshaping discovery

Search queries are becoming more context-rich as generative AI platforms encourage users to ask questions in natural language and refine them through follow-up prompts.

Many searches now unfold as a sequence rather than a single query. A user asks an initial question, reviews the generated response, then adds clarifying prompts with new constraints, comparisons, or context.

In these environments, search behaves more like a conversation than a lookup. Each prompt builds on the previous response, creating a chain that gradually clarifies intent.

Several shifts reinforce this pattern:

  • AI assistants and voice interfaces encourage natural phrasing.
  • Follow-up prompts allow search sessions to evolve conversationally.
  • Multimodal inputs combine text, images, and contextual signals.

As a result, the unit of search interaction is shifting. Instead of optimizing for isolated queries, you increasingly need to understand how prompts are phrased, sequenced and refined within AI-driven search sessions.

Understanding those prompt patterns is the goal of prompt research.

Dig deeper: A smarter way to approach AI prompting

What is prompt research?

Prompt research analyzes the questions people ask generative AI systems and how those prompts shape the answers those systems produce.

In practice, it functions as the AI-era extension of keyword research:

  • Traditional keyword research analyzes search queries, ranking opportunities, and competition within the results page.
  • Prompt research focuses on the prompts that lead AI systems to explain topics, compare options, or recommend specific tools, products, or brands.

This changes the research process. Instead of mapping keyword variations alone, teams need to:

  • Identify recurring prompt patterns.
  • Cluster related questions around a topic.
  • Anticipate how a user’s inquiry expands through follow-up prompts.

For example, someone researching email marketing software might begin with a prompt like:

  • “What are the best email marketing tools for small businesses?”

Follow-up prompts extend the conversation:

  • “Which email marketing tools are easiest for beginners?”
  • “How does Mailchimp compare to ConvertKit?”
  • “What features should small businesses look for in email marketing software?”

Prompt research identifies these patterns so you can structure content around how users explore topics through AI search.

Why prompt research changes SEO and GEO content strategy

Prompt research expands the scope of content strategy beyond ranking individual pages to clusters of related questions.

For SEO, that means ensuring content covers the full topic landscape rather than a single query. For GEO, it means ensuring content provides the context generative systems need to synthesize answers.

Several strategic priorities follow.

Topical authority

Prompt clusters reveal the full range of questions users ask about a topic. Content that addresses those related questions is more likely to rank in traditional search and surface in AI-generated answers.

Clear entity relationships

Search engines and generative systems rely on entities to understand context. Clearly referencing relevant companies, products, technologies, and concepts helps them interpret how information fits together.

Structured information

Well-organized content is easier for systems to work with. Clear headings, concise explanations, and logical sections help search engines index pages and help generative systems extract key points.

Conversational formatting

Prompt research often shows that users ask questions in natural language. Content that answers those questions directly — through explanations, comparisons, and FAQs — aligns better with search queries and AI prompts.

Together, these practices help content perform across the modern search environment.

Dig deeper: How generative engines define and rank trustworthy content

Get the newsletter search marketers rely on.


A practical framework for prompt research

Organizations can integrate prompt research into their SEO and GEO workflows through four stages.

1. Prompt discovery

Prompt discovery focuses on identifying the questions users ask across generative platforms and AI-assisted search.

Useful sources include:

  • AI chat logs and internal user research.
  • Community discussions and forums.
  • Customer support and sales questions.
  • AI-assisted search experiences.

The goal is to surface prompts with clear intent — especially questions that require explanations, comparisons, or recommendations.

2. Prompt clustering

Once prompts are collected, they can be grouped into intent-based clusters. These clusters reveal how users explore a topic across multiple questions.

Common prompt clusters include:

Informational prompts

  • “What is customer lifecycle marketing?”
  • “How does lifecycle marketing work?”

Comparative prompts

  • “Lifecycle marketing vs traditional email campaigns: what’s the difference?”
  • “Klaviyo vs. HubSpot for lifecycle marketing?”

Transactional prompts

  • “What tools support lifecycle marketing automation?”
  • “Which lifecycle marketing platforms are best for ecommerce?”

Strategic or multi-step prompts

  • “How should an ecommerce brand build a lifecycle marketing strategy?”
  • “What lifecycle emails should an ecommerce company send after purchase?”

Prompt clustering helps identify patterns and prioritize content topics.
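As a rough illustration, the clustering step can be sketched with simple keyword heuristics. Everything below is an assumption for illustration only: the intent markers, bucket names, and sample prompts are invented, and a production workflow would more likely use embeddings or a trained classifier.

```python
from collections import defaultdict

# Hypothetical intent markers; a real taxonomy would be richer and data-driven.
INTENT_MARKERS = {
    "comparative": ["vs", "compare", "difference"],
    "transactional": ["tools", "platforms", "best", "pricing"],
    "strategic": ["how should", "strategy", "plan"],
}

def classify_prompt(prompt: str) -> str:
    """Assign a prompt to the first intent whose marker appears in it."""
    text = prompt.lower()
    for intent, markers in INTENT_MARKERS.items():
        if any(marker in text for marker in markers):
            return intent
    return "informational"  # default bucket for definitional questions

def cluster_prompts(prompts: list[str]) -> dict[str, list[str]]:
    """Group prompts into intent-based clusters."""
    clusters = defaultdict(list)
    for prompt in prompts:
        clusters[classify_prompt(prompt)].append(prompt)
    return dict(clusters)

prompts = [
    "What is customer lifecycle marketing?",
    "Klaviyo vs. HubSpot for lifecycle marketing?",
    "Which lifecycle marketing platforms are best for ecommerce?",
    "How should an ecommerce brand build a lifecycle marketing strategy?",
]
print(cluster_prompts(prompts))
```

The substring matching is deliberately naive; it stands in for whatever classification method a team actually uses.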

3. Prompt mapping

Prompt mapping connects prompt clusters to content strategy.

This typically involves:

  • Aligning prompts with existing content.
  • Identifying new content opportunities.
  • Flagging gaps in topic coverage.

For SEO, this helps expand coverage across related queries. For GEO, it helps ensure content addresses the types of prompts that trigger AI-generated answers.
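A minimal sketch of the mapping step, with hypothetical cluster names and page slugs: once prompts are clustered, flagging coverage gaps reduces to finding clusters with no supporting content.

```python
# Hypothetical mapping of prompt clusters to existing pages; slugs are invented.
content_map = {
    "informational": ["guide-to-lifecycle-marketing"],
    "comparative": [],  # no comparison pages yet
    "transactional": ["lifecycle-tools-roundup"],
    "strategic": [],
}

# Clusters with no supporting content are the coverage gaps to prioritize.
gaps = [cluster for cluster, pages in content_map.items() if not pages]
print(gaps)
```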

4. Response optimization

The final step focuses on structuring content so search engines and generative systems can interpret it clearly.

Effective response optimization often includes:

  • Concise explanations near the top of sections.
  • FAQ sections that mirror real prompts.
  • Supporting data, examples, or expert insights.
  • Reinforcing related concepts across content.

Clear, structured answers improve reader usability while increasing the likelihood that content surfaces in search results and AI-generated responses.

Dig deeper: How to use AI response patterns to build better content

Risks and challenges in the new search environment

Prompt research introduces new complexities for teams working across SEO and GEO:

  • Limited algorithm transparency: Generative systems provide little visibility into how sources are selected or weighted in AI-generated answers. This makes it difficult to predict which content will surface in response to specific prompts.
  • Attribution complexity: Tracking traffic from AI assistants and generative search interfaces remains inconsistent. Referral data is often incomplete, which complicates measurement for SEO and GEO performance.
  • Misinformation risks: Generative systems can occasionally surface inaccurate or outdated information, even when credible sources exist. This places greater emphasis on publishing clear, well-supported content that AI systems can reliably interpret.
  • Strategic balance: Content strategies still need to prioritize human readers. Information should remain clear, trustworthy, and genuinely useful — regardless of whether it appears in traditional search results or AI-generated responses.

Despite these challenges, the underlying opportunity remains clear: understanding prompt patterns helps you anticipate how AI systems assemble answers.

The example below illustrates how that process can shape a content strategy.

Case example: Optimizing for prompt clusters

Consider a hypothetical SaaS analytics company looking to expand its visibility across AI-generated answers and traditional search.

Initial prompt research reveals several clusters around predictive analytics:

  • “What is predictive analytics?”
  • “How does predictive analytics improve marketing ROI?”
  • “What are the best predictive analytics tools for ecommerce?”

Rather than targeting these prompts with isolated pages, the company builds a content structure around the broader topic.

  • A foundational guide: Explains predictive analytics, how it works, and why companies use it.
  • Supporting articles: Explore specific applications, such as marketing attribution, customer segmentation, or demand forecasting.
  • Comparison pages: Evaluate leading predictive analytics tools and platforms.

Each article includes structured explanations, FAQs that mirror common prompts, and citations from industry research.

This structure supports SEO and GEO. The foundational guide captures informational search demand, while supporting and comparison content addresses follow-up prompts users ask as they explore the topic.

Over time, the content appears in both traditional search results and AI-generated answers, expanding visibility in the new search environment.

Dig deeper: Advanced AI prompt engineering strategies for SEO

Putting prompt research in your search strategy

Brands that begin analyzing prompt patterns today will gain insight into emerging discovery behaviors. A practical starting point involves auditing existing content through a new lens:

  • Which prompts does this content answer clearly?
  • What follow-up questions might users ask?
  • How easily can generative systems interpret and synthesize the information?

Search visibility increasingly depends on how well content participates in AI-generated knowledge systems.

Prompt research helps ensure that participation happens by design rather than by chance.

Defensive SEO: How to protect your brand narrative in AI search

Imagine your ideal customer going to ChatGPT and asking, “Is [BRAND] worth it?”

They’re not getting a vetted list of links in response. They’re getting a synthesized answer, most likely summarizing who you are, what you’re known for, and whether you’re credible. They’ll get a confident answer to the nebulous question of assigning worth.

You don’t control that summary. But it will shape their decision before they convert, possibly before they ever visit your site.

This is the new reality of search. SEO has traditionally been a discovery channel: higher rankings led to more traffic, which led to more conversions. But AI-powered search experiences, from AI Overviews to ChatGPT, Gemini, and beyond, are changing the game.

Narrative is now the goal. Brands have to actively monitor and shape how they’re described, evaluated, and synthesized in AI-powered search experiences.

SEO has officially entered its defensive era. Protecting brand narrative in the new search landscape is quickly becoming table stakes.

What is defensive SEO?

You’re probably asking: Isn’t this just reputation management? Or isn’t this what good SEO has always done? Not exactly.

Traditional SEO has focused on visibility: earning rankings, driving traffic, and increasing conversions. Defensive SEO focuses on something slightly different: how your brand is perceived once it’s visible.

Today, perception matters as much as placement. Defensive SEO is the practice of shaping that narrative. It means paying close attention to how AI tools describe your brand and where evaluation-based queries influence buying decisions.

In practice, defensive SEO is:

  • Monitoring how AI responses synthesize your brand.
  • Protecting against negative, incomplete, or outdated information.
  • Addressing evaluation-driven queries before third parties define them for you.
  • Managing the sentiment signals that influence how algorithms interpret your reputation.

Just as importantly, defensive SEO is not:

  • Crisis PR deployed after something goes wrong.
  • An attempt to suppress legitimate criticism.
  • Spin or manipulation.

It’s not about hiding weaknesses. It’s about reducing ambiguity.

When your positioning is unclear, AI fills in the gaps with whatever signals are readily available: reviews, old content, aggregator summaries, and competitor comparisons. Defensive SEO ensures the strongest and most accurate version of your brand gets reinforced.

At its core, defensive SEO is structured, proactive brand narrative management across the modern search landscape.

Dig deeper: Why SEO is your best defense against declining organic traffic

Why this shift is happening now

Several forces are converging to make defensive SEO necessary today.

1. AI summaries compress complex stories

Traditional search results allowed users to explore multiple perspectives. Someone researching a brand could read reviews, scan articles, and evaluate different viewpoints before forming an opinion.

AI-generated answers compress that process. Nuanced positioning, evolving messaging, and subtle differentiation can all be condensed into just a few sentences. Those sentences become a prospect’s first impression of your brand — a simplified version of your reputation.

2. Evaluation queries are becoming the default

Search behavior is shifting toward evaluation-driven questions. Users are increasingly searching for things like “Is [BRAND] worth it?” or “[BRAND] reviews and complaints.”

These are high-intent, high-impact queries. They signal real conversion consideration.

If brands avoid these topics, outside sources step in to answer them. Review sites, forums, and aggregator pages become the dominant narrative. Ignoring these evaluation queries doesn’t prevent them from shaping perception. It simply removes your voice from the conversation.

3. AI systems reinforce existing narratives

Generative engines don’t invent brand reputations. They amplify patterns that already exist.

They rely heavily on reviews and ratings, authoritative third-party mentions, and frequently cited claims or descriptions. Over time, this creates a feedback loop. The most commonly cited narrative gains weight and visibility, while alternative or evolving positioning becomes less prominent.

Dig deeper: Is SEO a brand channel or a performance channel? Now it’s both

Defensive SEO in practice

Defensive SEO isn’t a single tactic. Like all SEO efforts, it’s an ongoing process focused on understanding and shaping how search engines interpret your brand.

Conduct AI visibility audits

The first step in your defensive SEO tactical plan should be an AI visibility audit.

Auditing AI-generated responses for brand consistency helps ensure that LLMs accurately and positively reflect your brand.

Start by querying AI tools the way real users would. Identify a standard set of questions that someone may realistically ask about your brand.

  • “What does [BRAND] do?”
  • “What services does [BRAND] offer?”
  • “Is [BRAND] good?”
  • “How does [BRAND] compare to other [INDUSTRY] competitors?”
  • “Pros and cons of [BRAND].”
  • “What is [BRAND]’s mission or values?”
  • “What are the reviews or feedback about [BRAND]’s customer experience?”
  • “Best alternatives to [BRAND].”

The goal is to test how the AI agents describe your company across different themes, such as brand overview, services, culture, reputation, and positioning.

Use the same question set across multiple AI tools and LLMs — ChatGPT, Gemini, Copilot, and Claude. Don’t forget to ask for citations, especially if the response is unexpected.

Now that you have all of this data, it’s time to analyze the responses for consistency, accuracy, and opportunity. Look for patterns.

  • Which adjectives appear repeatedly? 
  • What themes dominate the explanation? 
  • Is everything accurate? 
  • Is anything important missing entirely?

This audit should be done regularly. These patterns reveal how your brand narrative exists within AI-driven search, and how it evolves.
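The pattern analysis above can be partly automated. A minimal Python sketch, using made-up audit responses and a hand-picked descriptor list (both are assumptions, not real data):

```python
from collections import Counter

# Hypothetical AI responses collected during an audit; in practice these would
# come from running the same question set across ChatGPT, Gemini, Copilot, etc.
responses = [
    "Acme is an innovative analytics platform trusted by mid-market teams.",
    "Acme is known as an affordable, innovative option for analytics.",
    "Reviewers describe Acme as innovative but note limited integrations.",
]

# Descriptors worth tracking; teams would build this list from their own audits.
DESCRIPTORS = {"innovative", "trusted", "affordable", "basic", "outdated"}

# Count how often each tracked descriptor appears across all responses.
counts = Counter(
    word.strip(".,").lower()
    for response in responses
    for word in response.split()
    if word.strip(".,").lower() in DESCRIPTORS
)
print(counts.most_common())
```

Repeating this over time turns the audit from a one-off snapshot into a trend line of how the brand narrative evolves.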

Dig deeper: 200+ AI audits reveal why some industries struggle in AI search

Improve the source material

The next step in your defensive SEO tactical plan: update the source material these LLMs are drawing from. While you may not be able to log into ChatGPT and “fix” an answer, you can influence how your brand is portrayed.

Own the evaluation content

Many brands avoid creating content that acknowledges trade-offs or criticisms. In the past, that instinct may have made sense. But today, avoidance can often backfire.

AI systems tend to trust content that provides balanced explanations and transparent comparisons. Ultimately, this type of comparison content is an age-old SEO tactic.

If you’re not creating content that addresses it, chances are your competition is. Clear answers to common concerns signal credibility to your audience and search engines alike.

Instead of ignoring evaluation queries, we should address them head-on. The goal isn't to eliminate criticism; it's to ensure the context around it is accurate and fair.

Strengthen third-party authority signals

We know that generative AI relies heavily on independent sources such as indexed content in traditional search engines, media mentions, reviews, and forum commentary. These third-party sources are influencing how your brand is described just as much, if not more than, owned content.

This means defensive SEO can’t exist in isolation. It requires alignment across multiple disciplines, including PR, social media, and customer experience.

SEO can influence visibility, but SEO alone can’t fix narrative gaps.

Leverage PR in coordination with off-page SEO to earn media coverage and mentions from authoritative third-party sources. Consider Reddit to engage with your audience and share content. Monitor and update social profiles, review aggregators, directory listings, and partner sites.

Update and clarify legacy content

Many brands evolve faster than their content does. Pricing models change, product offerings expand, and messaging shifts to reflect new positioning. Yet older pages with outdated information often remain.

AI systems pull from everything available and fill ambiguity with whatever is most prominent. That’s why outdated content can shape a brand’s AI output long after it’s relevant.

Regularly reviewing and updating legacy content on your website ensures the signals being used by generative AI reflect the brand you are today.

Use structured data and schema markup to clarify information. Ensure your About pages, service pages, and leadership bios are up to date and comprehensive. Publish well-optimized blog posts and press releases that reinforce your positioning.

If the web is your brand’s resume, make sure it reflects your strongest work, not an outdated version of who you used to be.

Dig deeper: How to use AI response patterns to build better content

Measuring success with defensive SEO

Traditional SEO metrics like rankings and sessions still matter, but they’re no longer sufficient on their own.

Defensive SEO introduces a new set of signals to monitor:

  • Sentiment alignment across search results.
  • Consistency in AI-generated content about your brand.
  • Visibility across evaluation-based queries.
  • Recurring descriptors associated with your brand.

Taken together, these indicators help reveal something traditional SEO dashboards rarely capture: how your brand is being interpreted across the search landscape.

Organic share of voice measures how often your brand appears, but in AI-powered search, presence alone no longer tells the whole story. What matters just as much is how your brand is described once it shows up.

This is where the broader idea of “description share of voice” becomes useful. Instead of measuring pure visibility, description share of voice looks at the language and framing associated with your brand relative to competitors.

For example, imagine two companies appearing equally often across AI-generated summaries and search results. One is consistently described as “innovative,” “trusted,” or “customer-focused.” The other is described as “affordable,” “basic,” or “consistent.” Both brands may technically have the same share of voice. However, the narrative attached to that visibility is completely different.

Description share of voice captures that distinction. It reflects the themes and positioning that AI is repeatedly associating with your brand relative to others in the category. And over time, patterns will emerge. Certain descriptors get reinforced, while others may disappear from the conversation entirely.

Tracking these patterns and adjectives provides a clearer understanding of how your brand is being framed and characterized when it does appear.
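As a toy illustration of description share of voice, assume two hypothetical brands and descriptor counts tallied from AI-generated summaries (all names and numbers below are invented):

```python
from collections import Counter

# Hypothetical descriptor tallies pulled from AI-generated summaries;
# in practice these would come from the visibility audits described earlier.
brand_descriptors = {
    "BrandA": Counter({"innovative": 9, "trusted": 6, "customer-focused": 5}),
    "BrandB": Counter({"affordable": 8, "basic": 7, "consistent": 5}),
}

def description_share(brand: str, descriptor: str) -> float:
    """Share of all mentions of a descriptor attributed to one brand."""
    total = sum(counts[descriptor] for counts in brand_descriptors.values())
    return brand_descriptors[brand][descriptor] / total if total else 0.0

print(description_share("BrandA", "innovative"))  # BrandA owns this descriptor
```

Two brands can have identical overall mention counts while owning completely different descriptors, which is exactly the distinction the metric is meant to capture.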

Defensive SEO is strategic

Despite the name, defensive SEO isn’t about reacting to threats. It’s about strengthening clarity and trust.

When brands actively manage their narrative across the modern search landscape, they reduce misinformation, support informed decision-making, and create a more consistent brand experience. Ultimately, defensive SEO ensures that when someone asks AI about your brand, the answer reflects who you actually are.

This shift isn’t just an evolution for SEO. It’s an organizational one.

Shaping how a brand is understood in AI-driven search queries forces collaboration between teams that too often operate in silos. PR influences the narratives circulating in the media. Customer experience teams hold the signals that shape reviews and sentiment. Social media can surface emerging perceptions long before they appear in search results.

All of those signals increasingly feed the systems that summarize and interpret brands for users.

The future of SEO is narrative ownership

Most SEOs agree that search has evolved beyond just a discovery channel. It’s now a reputation and perception engine, and often the first filter through which customers understand your brand.

In this multimodal, multichannel world shaped by AI, visibility alone isn’t enough.

Ranking without narrative alignment is fragile. Ranking without context leaves interpretation to systems you don’t control.

The brands that succeed will rank well, shape how they’re understood, and make sure the right story is told.

What 23 tests reveal about Google AI Max performance

We’ve tested Google AI Max over the past nine months, analyzing 23 individual tests across 16 already mature advertisers operating within a range of verticals. This article reveals what we did to maximize success with this campaign type.

Your experiments and observations may vary. If so, we’d welcome the debate.

This is intended to be just one voice among many in the conversation around AI Max. All the analyses we discuss are replicable within your own accounts, so you can ratify or dispute the findings based on your own data.

The ground rules for AI Max

Before launching an AI Max test, consider several factors. Two are particularly significant:

  • Your campaigns should bid on a conversion action that’s meaningful for your business. Aim to get your conversion hygiene in as good a place as possible through tools like Enhanced Conversions and Google Tag Gateway. Value-based bidding is also ideal, although it’s not essential. Any automated targeting functionality can work.
  • Your campaigns shouldn’t be budget-constrained. This advice is true in many situations, but it’s particularly relevant with AI Max. What’s the point of opening up your targeting if your budget prevents you from entering those auctions anyway? If your campaign is limited by budget, then either increase your daily budget headroom or set more conservative bid strategy targets.

With those prerequisites satisfied, we can now cover some of the juicier findings we’ve uncovered from our AI Max tests.

Learning 1: Go all in with AI Max

AI Max performs best when you enable all three core features simultaneously:

  • Search term matching.
  • Text customization.
  • URL optimization.

Overall, campaigns that used all three features had a 40% higher test success rate than those that opted in only to the baseline search term matching functionality.

Text customization drives stronger performance

Google has been pushing the text customization concept in various guises for a few years. However, earlier versions, like auto-applied recommendations, have had limited uptake. So, we were keen to finally assess the impact this would have.

Using the Added by segment in the assets report, you can compare how text customization performs compared to standard advertiser-provided assets.

We found that AI-edited assets delivered an improved return on ad spend (ROAS) and helped extract more value per impression. Put simply, clients were better off when text customization was activated than when it wasn’t.

This trend was consistent across both headline and description assets, even though we found that text customization modified headlines far more often than descriptions.

Text customization skews the auction in your favor

Strong performance is the ultimate objective for AI Max campaigns. But from a search geek’s perspective, the arguably more tantalizing result is that text customization demonstrably improved Quality Score.

We assessed historical Quality Scores for clients who activated text customization before and after the test launch. This analysis is valid because the Google Ads interface reports Quality Score only when the search query syntax exactly matches the keyword. This methodology provides a like-for-like comparison across a group of queries that were targeted both before and after switching on AI Max.

We saw a topline improvement in weighted Quality Score, from 6.8 to 7.3. This upward trend repeated across the three components of the Quality Score, with ad relevance showing the most notable uplift.

Impact on Quality Score, pre- and post-text customization
*Quality Score components evaluated as below average = 1, average = 2, above average = 3
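For readers who want to replicate this kind of topline figure, an impression-weighted Quality Score is just a weighted average across keywords. The per-keyword numbers below are invented for illustration:

```python
# Hypothetical per-keyword data; real values come from the Google Ads interface.
keywords = [
    {"impressions": 12000, "quality_score": 7},
    {"impressions": 3000, "quality_score": 5},
    {"impressions": 5000, "quality_score": 8},
]

# Weight each keyword's Quality Score by its share of total impressions.
total_impressions = sum(k["impressions"] for k in keywords)
weighted_qs = sum(
    k["impressions"] * k["quality_score"] for k in keywords
) / total_impressions
print(round(weighted_qs, 2))
```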

Logically, this shouldn’t be a surprise. After all, the premise of text customization is that Google shows the best possible ad to each individual user. Nonetheless, it’s satisfying to see this story unfold in our analysis.

At the same time, this finding is noteworthy because advertisers have generally been reluctant to use the full AI Max suite. Across all our test cases, only 50% used text customization, and even fewer (44%) enabled URL optimization.

Some brands will need to adhere to compliance guidelines that outright prohibit the use of these features. But our results suggest that if you have any wiggle room at all, you’d be well served by running a test with all three features.

Google is constantly rolling out additional guardrail features to clarify what is and isn’t off-limits from a brand messaging perspective. Marketers in more risk-averse organizations would be well-advised to keep a close eye on these releases.

Dig deeper: Google expands AI Max text guidelines globally

Learning 2: Take an account-wide approach with AI Max

This next suggestion might seem counterintuitive, but hear me out.

If you’re testing out AI Max for the first time, you might be better off enabling the feature across your entire account right from the start, rather than following a step-by-step approach. There are a few reasons for this.

Not all AI Max traffic is net-new

With AI Max enabled, you can target more queries and users than before. And of those queries, many will genuinely be net-new to your account.

However, it’s also common for queries that another campaign in your account once reached to get pulled into your AI Max campaign.

When we assessed performance at the campaign level, we saw an average +7% increase in conversion value, directly generated by queries the campaign had never targeted before.

When we zoomed out to an account-level view, however, only 46% of those queries were actually new to the account. The remaining 54% had previously been captured elsewhere in the account.

That still isn’t a bad result. An approximately 3% incremental uplift in conversion value, especially for accounts already running with high broad match adoption, is great.
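The arithmetic behind that figure is straightforward: scale the campaign-level uplift by the share of queries that were genuinely new to the account. A quick reconstruction using the numbers stated above:

```python
# Reconstructing the campaign-vs-account view from the figures cited above.
campaign_level_uplift = 0.07  # +7% conversion value from "new" queries
share_truly_new = 0.46        # fraction of those queries new to the account

# Only genuinely net-new queries count as incremental at the account level.
account_level_uplift = campaign_level_uplift * share_truly_new
print(f"{account_level_uplift:.1%}")  # roughly the ~3% figure
```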

But this finding does have two key implications:

  • If you care about your search term hygiene, enabling AI Max in only a subset of your campaigns could disrupt your search term-to-campaign funnel. Because brand inclusion lists are now exclusively available for AI Max-enabled campaigns, enabling AI Max account-wide can help you maintain a cleaner search term-to-campaign funneling system.
  • Single campaign adoption muddies the water when assessing the success of your test. You care about net-new conversions, not reorganizing existing traffic within your account. When testing AI Max, make sure you assess the full account-wide impact.

How not to evaluate AI Max

Don’t rely on a cost per acquisition (CPA) by match type analysis to assess AI Max’s efficacy. This approach reveals attribution data within your campaign. But what you really want to know is whether AI Max has improved your overall ability to generate returns at an incremental investment that you’re comfortable with.

There are examples of advertisers trialing AI Max and achieving account-wide efficiency improvements. But you should identify those cases by reflecting on macro, account-wide performance — not by looking at your match type CPAs.

Why you should monitor campaign types

Consider how AI Max interacts with your other campaign types and targeting methods. Let’s call out one particularly glaring example: Dynamic Search Ads (DSA). In our own analysis, every successful AI Max test occurred in an account with low-to-no adoption of DSA campaigns.

This is understandable. Almost every single capability of DSA campaigns is now available in AI Max. So, it shouldn’t be surprising that having both campaign types running in parallel doesn’t improve performance.

It’s plausible that we may not be that far away from Google announcing another round of campaign streamlining initiatives, similar to those for Smart Shopping and Discovery campaigns in previous years. But until then, it’s on marketers to put some thought into the role you intend each campaign type to play within your overall account plan.

Dig deeper: AI Max in action: What early case studies and a new analysis script reveal

Learning 3: Think beyond AI Max

If you’re already comfortable with AI Max and you’re ready to push onto the next step, there’s a wealth of new testing opportunities to think about.

Search Bidding Exploration (SBE) is the first major user-facing change to Google’s bidding technology in five years. Yet there’s been remarkably little industry chatter about the feature so far. SBE feels like a natural partner for AI Max, given that both tools are designed to reach incremental and previously inaccessible customers.

AI Max also gives you the chance to evolve your thinking around account structure. In an AI Max world, the optimal balance between segmentation and consolidation may lie elsewhere than before.

We’re already starting to see some green shoots of successful hyper-consolidation approaches. But it’s still too early to decisively comment one way or another.

Dig deeper: AI Max increases revenue 13% but drives higher CPA: Study

Putting AI Max to the test in your own account

It’s an intriguing time to be working in paid search, and AI Max has already sparked significant debate and experimentation within the industry. If you’re a later adopter or if you’re looking to improve on a previously unsuccessful foray into AI Max, then consider the following:

  • Implement key ground rules: Ensure that you have objective-oriented bid strategies in place, powered by strong conversion hygiene. Remove campaign budget constraints once and for all.
  • Adopt an all-in approach: Text customization and URL expansion may not be as popular as search term matching. But we’ve observed that using the full package can actually improve the likelihood of success — by up to 40% in our experiments.
  • Prioritize an account-wide impact: Consider the interplays between AI Max, your regular keyword campaigns, and DSA. It might be that an AI Max everywhere approach is preferable. When judging results, look beyond campaign-level tests where possible, and block out the CPA-by-match-type brigade.
  • Get creative: Think about the more innovative ways you can integrate AI Max with other facets of your account.

Eight out of ten PMax advertisers are now running CTV ads

Eight in ten Performance Max advertisers are receiving connected TV (CTV) impressions via YouTube, as reported by Smarter Ecommerce’s Mike Ryan. Google has expanded the channel’s reach over the past year — and the trajectory is only accelerating.

The timeline of how we got here:

  • Q2 2025: Google began serving CTV ads using standard product feed images, meaning advertisers with no video assets were suddenly generating TV impressions from their existing catalog photos.
  • January 2026: Google announced shoppable CTV ads — letting viewers browse products and scan QR codes to purchase directly from their TV screen, pulling directly from Google Merchant Center product feeds.

Why we care. CTV is no longer a specialist buy. If you’re running PMax, you’re almost certainly already on the big screen — and Google has been steadily upgrading what that means for commerce. Google is automatically turning your product feed images into TV ads and allocating budget to CTV impressions, with no action required on your part.

Without actively checking your channel performance breakdown, you have no visibility into where your spend is going or whether auto-generated creative is actually fit for a 65-inch screen.

What advertisers should do right now:

  1. Pull your Channel Performance report — Google’s native channel breakdown will show you exactly how much of your PMax spend and impressions are going to CTV. If you haven’t looked, you may be surprised.
  2. Audit your feed images — since Q2 2025, those product photos are being used to generate CTV ads automatically. Low-quality images that worked fine in Shopping are now appearing on 65-inch TV screens. Clean them up.
  3. Check if shoppable CTV applies to you — if you’re running PMax with a Merchant Center feed, your campaigns may already be eligible for shoppable CTV formats. Google reports that Demand Gen campaigns including TV screens drive 7% incremental conversions at the same ROI. Understand whether that inventory is working for you — or being wasted.
  4. Think about creative — feed images as CTV ads is a floor, not a ceiling. Advertisers who invest in purpose-built video assets optimized for the TV screen will outperform those relying on auto-generated formats.

The big picture: YouTube CEO Neal Mohan confirmed that TV has surpassed mobile as the primary device for YouTube viewing in the U.S. by watch time, and YouTube has been the #1 streaming platform in the U.S. for two consecutive years. PMax advertisers are already there — the question is whether they’re managing it intentionally or just along for the ride.

Dig Deeper. YouTube Viewing on TV Now Surpasses Mobile, Desktop in U.S.

Yahoo adds personalized homepage to its Scout AI search engine

Yahoo MyScout

Yahoo today introduced MyScout, a customizable homepage inside Yahoo Scout, its beta AI answer engine.

How MyScout works. Logged-in users can customize the homepage with tiles that pull information from Yahoo properties (e.g., Mail, News, Sports, Finance, Games). Examples include:

  • Inbox previews from Yahoo Mail.
  • Stock updates from Yahoo Finance watchlists.
  • News topics and trending stories.
  • Scores and schedules for favorite teams.
  • Weather, shopping comparisons, and games.

Users can add, remove, reorder, or create tiles based on topics or queries they want to follow.

  • Some tiles update in real time, such as stock prices.
  • Other tiles refresh throughout the day with updates like email, sports scores, and breaking news.
  • The experience will become more “agentic and personalized” as the system learns from user activity, Yahoo said.

New publisher features. Yahoo says Scout supports the open web by linking users directly to original sources used in its AI answers. To support that goal, Yahoo News is also launching new publisher features designed to help you grow recurring audiences on its platform:

  • Publisher brand pages that aggregate your articles, videos, and social feeds on Yahoo.
  • A follow feature that lets users subscribe to your content and receive curated newsletters in their inbox.

Availability: Yahoo Scout — including MyScout — is available in beta for U.S. users at Scout.com and through the Yahoo Search app on iOS and Android.

Yahoo’s announcement. Yahoo Introduces MyScout, the First Personalized Homepage for AI Answers

The latest jobs in search marketing

Search marketing jobs

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • Job Description Salary: $55k-$65k Digital Marketing Specialist Location: Oberlin, Ohio Full-Time About AdeptAg AdeptAg LLC is a North American leader in controlled environment agriculture, integrating innovative growing, automation, and irrigation solutions for customers both domestic and international. We support today’s growers with forward-thinking, cost-efficient systems designed to meet the evolving challenges of modern agriculture. Our […]
  • Benefits: 401(k) matching Dental insurance Health insurance Vision insurance Digital Marketing & Listing SpecialistPosition Title: Digital Marketing & Listing Specialist Department: Marketing & Revenue Reports To: General Manager (Victoria Swinford) Location: Santa Rosa Beach, FL (On-site preferred; hybrid considered) Employment Type: Full-Time, Salaried Position Summary Southern Holiday Homes is seeking a highly creative, detail-oriented Digital […]
  • Job Description Biointron is a global antibody services CRO seeking a client-facing, detail-oriented, and self-starting Marketing Associate to join our fast-growing team. Reporting directly to the Marketing Manager, this role is responsible for implementing marketing initiatives and collaborating with the global Biointron marketing team and regional business development teams to support company objectives. The ideal […]
  • Job Description Hi, we’re TechnologyAdvice. At TechnologyAdvice, we pride ourselves on helping B2B tech buyers manage the complexity and risk of the buying process. We are a trusted source of information for tech buyers, delivering advice and facilitating connections between our buyers and the world’s leading sellers of business technology. Headquartered in Nashville, Tennessee, we […]
  • About Haven Services Haven Services LLC is a $100MM residential and commercial plumbing, HVAC, and electrical services and contracting company. Haven Services is executing a growth strategy targeting $200MM in revenue by 2031. We are committed to delivering exceptional service to the homeowners and businesses we serve, and we’re looking for a results-driven digital marketing […]
  • Who We Are iPullRank is an eleven-year-old digital marketing remote agency based in New York City, founded by industry trailblazer Michael King. We’re not here to follow trends—we set them. Our team blends technical expertise with creativity to deliver SEO, Content, and Generative AI services that drive results. We work with some of the biggest […]
  • Job Description MetTel is a global communications solutions provider with the most complete suite of fully managed services that focus on secure connectivity, and network and mobility services. We simplify communications and networking for business and government agencies. Our customers include many of the Fortune 500, and Gartner recognizes us as an industry leader. We […]
  • Job Description Salary: $105k – $115k/yr. Company: IPS Group is a design, engineering and manufacturing company focused on low power wireless telecommunications and parking technologies. IPS manufactures its products in the United States of America and has been delivering world-class solutions to the telecommunications and parking industries for over 25 years. The company is best […]
  • About Definitive Healthcare: At Definitive Healthcare (NASDAQ: DH), we’re passionate about turning data, analytics, and expertise into meaningful intelligence that helps our customers achieve success and shape the future of healthcare. We empower them to uncover the right markets, opportunities, and people—paving the way for smarter decisions and greater impact. Headquartered just outside of Boston, […]
  • The best organic search strategies don’t just move rankings—they move businesses. As Director of Organic Search, you’ll lead a team of specialists in building strategies that tie SEO and AEO performance directly to the outcomes clients care about: revenue, market share, and competitive advantage. This is a role for someone who thinks like a business […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • Tombras, a 450+ person, full-service, national advertising agency with a digital mindset, is seeking a Paid Social Strategist. You’ll be joining one of the top independent agencies in North America. Connecting Data & Creativity for Business Results® is working for our clients and creating a flywheel effect fueling both client and agency growth. You’ll be a […]
  • Do you love writing and designing compelling ads and landing pages that convert? Are you proven at building PPC strategies, launching campaigns, testing ideas, and continuously improving results … not just talking about performance, but driving it? Are you both strategic and hands‑on… someone who doesn’t just plan campaigns, but builds, optimizes, and owns them […]
  • Join the Blacksmith Team! Blacksmith Agency is a boutique digital agency based out of Phoenix, AZ, specializing in top‑of‑the‑line, custom website design and development. By forging digital products and online experiences rooted in user expectations and data, Blacksmith helps partners grow, innovate, and exceed their business objectives. Top clients include Google, General Electric, Voss Water, […]
  • About Delve Deeper Delve Deeper is a Performance Media Agency focused on helping clients grow their customer base by integrating the power of Tech & Data in Media. We act as one highly functioning team that is powered by our professional “Fire in the Belly”, with a passion for creating exceptional value by delighting our […]
  • McGarrah Jessee is looking for a Paid Social Manager — someone who doesn’t just launch campaigns, but architects how social drives full-funnel growth for a major retail business. This role sits at the intersection of social strategy, performance marketing, creative collaboration and retail commerce — translating business ambition into orchestrated social ecosystems that drive traffic, […]

Other roles you may be interested in

Search Engine Optimization Manager, Colling Media (Hybrid, Phoenix, Arizona)

  • Salary: $73,000 – $83,000
  • Develop and maintain strategic keyword and topic targeting plans for client campaigns
  • Monitor keyword rankings, search visibility, and performance trends to inform optimization strategies

Paid Ads/Growth Manager, Robert Half (Hybrid, Atlanta Metropolitan Area)

  • Salary: $65,000 – $85,000
  • Manage, optimize, and scale paid campaigns across Google Ads (Search, Display, YouTube) and Meta Ads (Facebook/Instagram).
  • Continuously refine targeting, bidding strategies, and creative to improve CPL, conversion rates, and overall ROAS.

SEO Manager, Clutch (Remote)

  • Salary: $60,000 – $75,000
  • Execute day-to-day SEO tactics across multiple client accounts, ensuring alignment with predefined campaign objectives.
  • Implement optimization strategies, including technical SEO audits and recommendations.

Marketing Manager – SEO & GEO, Care.com (Hybrid, Austin, Texas)

  • Salary: $85,000 – $95,000
  • Organic Growth: Build and execute the SEO roadmap across technical, content, and off-page. Own the numbers: traffic, rankings, conversions. No handoffs, no excuses.
  • AI-Optimized Search (AIO): Define and drive Care.com’s strategy for visibility in AI-generated results — Google AI Overviews, ChatGPT, Perplexity, and whatever comes next. Optimize entity coverage, content structure, and schema to ensure we’re the answer, not just a result.

Digital Marketplace Manager, Venchi (Hybrid, New York, NY)

  • Salary: $120,000 – $130,000
  • Define and execute channel-specific and cross-marketplace strategies, balancing brand positioning, commercial performance, and operational efficiency.
  • Manage Amazon advertising across Sponsored Products, Brands, and Display campaigns.

Advertising Media Manager, Vetoquinol USA (Remote)

  • Salary: $100,000 – $110,000
  • Develop and implement strategic advertising plans for Etail (Ecomm/Retail) accounts.
  • Analyze advertising performance data with related ROAS & TACoS evaluations.

Programmatic Advertising Manager, We Are Stellar (Remote)

  • Salary: $75,000
  • Manage the day-to-day programmatic campaign approach, execution, trafficking optimization, and reporting across the relevant DSPs for your clients.
  • Build and present directly to client stakeholders programmatic campaign performance, analysis, and insights.

Marketing Manager, Backstage (Remote)

  • Salary: $100,000 – $140,000
  • Manage and optimize campaigns daily across Meta Ads, Google Ads, and other key partners
  • Own forecasting, pacing, budget allocation, and optimization for high-scale monthly budgets.

Demand Generation Manager, Shoplift (Remote)

  • Salary: $100,000 – $110,000
  • Design and execute inbound-led outbound campaigns—reaching prospects who’ve shown intent (visited pricing page, downloaded resources, engaged with content) at precisely the right moment
  • Build and optimize Apollo sequences, LinkedIn outreach, and multi-touch campaigns that book qualified demos for AEs

Search Engine Optimization Manager, Confidential (Hybrid, Miami-Fort Lauderdale Area)

  • Salary: $75,000 – $105,000
  • Serve as a strategic SEO partner for client accounts, translating business goals into actionable search initiatives
  • Communicate SEO insights, priorities, and performance clearly to clients and internal stakeholders

Meta Ads Manager, Cardone Ventures (Scottsdale, AZ)

  • Salary: $85,000 – $100,000
  • Develop, execute, and optimize cutting-edge digital campaigns from conception to launch
  • Provide ongoing actionable insights into campaign performance to relevant stakeholders

Paid Search Marketing Manager, LawnStarter (Remote)

  • Salary: $90,000 – $125,000
  • Manage and optimize large-scale, complex SEM campaigns across Google Ads, Bing Ads, Meta Ads and other search platforms
  • Activate, optimize and make efficient Local Services Ads (LSA) at scale

Note: We update this post weekly. So make sure to bookmark this page and check back.
