
How to use broad match without losing control in a Smart Bidding world

19 December 2025 at 18:00

Broad match used to mean “more reach, less relevance.”

Now it means more reach, with a machine learning layer deciding what relevance looks like.

Google has been steadily steering advertisers toward fewer moving parts – fewer match types, fewer manual levers, and more automation. 

Making broad match the default for new Search campaigns in July 2024 was the clearest signal yet that this is the direction of travel.

If you still think of broad match as “the loosest match type,” you will manage it like it is 2016. 

That is where the pain comes from: CPC inflation, irrelevant search terms, and leads that look fine in Google Ads but do not survive contact with sales.

Today’s broad match is designed to work as part of a system, including query matching, Smart Bidding, and conversion signals, with optional guardrails such as audiences, negatives, and brand controls. 

Google positions broad match as a growth lever for Smart Bidding campaigns, not a standalone reach tactic.

This article breaks down what changed, why Google wants you using it, and how to run it safely without giving up standards.

The real risk with broad match isn’t relevance, it’s direction

Broad match rarely fails all at once. Instead, it drifts.

If your optimization goal is shallow, broad match combined with Smart Bidding will find the fastest way to hit it at scale. That can mean:

  • Informational queries that trigger cheap form fills.
  • Users who convert easily but never buy.
  • Lead types that make CPA look great and pipeline look weak.

Nothing is technically “wrong” in the interface. Spend is efficient. Conversions are happening.

But the account is optimizing away from commercial intent.

That is why the conversation about broad match has to start with how it actually behaves today.

What broad match actually is now

Broad match no longer operates as a standalone keyword setting. 

It functions as part of a larger optimization system.

It’s built to work with Smart Bidding

Google is explicit that broad match is intended to run alongside Smart Bidding, because bidding decisions now happen at auction time using signals like:

  • Device.
  • Location.
  • Time of day.
  • Query context.
  • User behavior.

Broad match expands the pool of eligible queries. Smart Bidding decides which of those queries are worth paying for and how much.

Running broad match without Smart Bidding is no longer how the product is designed to work.

Google has materially improved broad match matching

In its 2024 updates, Google said AI improvements to quality, relevance, and language understanding led to a 10% performance uplift for broad match campaigns using Smart Bidding.

That does not mean broad match is safe by default. 

It means Google believes the matching layer is now strong enough to justify wider adoption.

It’s no longer positioned as optional

From July 2024, new Search campaigns launch with broad match enabled by default.

There is also a campaign-level setting that enforces broad match usage and is available only when conversion-based Smart Bidding is active.

This is not a quiet test. It is a directional shift.

Why Google wants advertisers to adopt broad match

Google’s reasoning is consistent across documentation and announcements:

  • Search behavior is increasingly long-tail and unpredictable.
  • Manual keyword lists cannot keep up with language and intent shifts.
  • Machine learning can interpret intent at auction time more effectively than rigid match logic.

Google frames broad match as a growth lever for Smart Bidding campaigns, giving algorithms access to more auctions and then optimizing toward conversion goals.

You do not have to agree with the philosophy. But if you are advertising on Google Search, you are operating inside it.

A framework for using broad match without losing control

Broad match increases surface area. Control comes from the constraints you apply beneath it.

Conversion goals that reflect quality, not convenience

Smart Bidding optimizes exactly to the conversion actions and values you define.

If your primary conversion is low intent, broad match will scale low intent.

Safer setups usually include:

  • Optimizing for deeper-funnel actions where possible.
  • Using conversion values to differentiate lead quality tiers.
  • Importing offline conversions, such as qualified leads or revenue.

This prevents the system from learning that cheap volume equals success.
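For lead gen, that differentiation can be as simple as a value map applied before offline conversions are imported. A minimal sketch – the tier names and dollar values are hypothetical, not a Google spec:

```python
# Minimal sketch: mapping lead-quality tiers to conversion values before an
# offline conversion import. Tier names and dollar values are hypothetical --
# use your own CRM stages and expected pipeline values.

TIER_VALUES = {
    "mql": 10,          # marketing-qualified lead: low confidence
    "sql": 75,          # sales-qualified lead: real pipeline signal
    "closed_won": 500,  # actual revenue event
}

def conversion_value(lead_tier: str) -> int:
    """Return the value to report for a lead, defaulting to 0 for unknown tiers."""
    return TIER_VALUES.get(lead_tier, 0)

print(conversion_value("sql"))  # 75
```

With values like these flowing in, Smart Bidding learns that a closed deal is worth far more than a form fill, instead of treating every conversion as equal.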

Intent filters through audience signals

Broad match decides which queries to match. Audience signals influence who sees the ad when those queries occur.

Use audiences to add context, not just for reporting:

  • Customer lists to bias optimization toward known buyers.
  • Remarketing lists for controlled expansion.
  • Audience insights to identify which segments correlate with quality.

Even in observation mode, these signals help diagnose whether broad match growth is happening in the right places.

Negative keyword structures that scale

With broad match, negative keywords stop being clean-up and start being infrastructure.

Effective accounts usually have:

  • Account-level shared negative lists, such as jobs, free, definition, training, and template terms.
  • Campaign-level exclusions tied to intent boundaries.
  • A consistent cadence for search terms reviews, especially early on.

Broad match explores by design. Negatives define where exploration stops.

Brand controls to protect intent

Google has introduced brand controls that can materially reduce unwanted broad match behavior.

You can apply:

  • Brand inclusions, which restrict matching so ads show only when specified brands appear in the query.
  • Brand exclusions, which prevent ads from showing on queries that include certain brand names.

These controls are especially useful when broad match starts bleeding into competitor brand intent or misaligned brand searches.

How broad match succeeds – and where it breaks

A low-risk rollout usually looks like this:

  • Choose one campaign with reliable tracking and sufficient conversion volume.
  • Use Smart Bidding aligned to meaningful outcomes.
  • Launch with shared negatives already in place.
  • Review search terms frequently in the first month.
  • Validate lead quality outside Google Ads before scaling.

Broad match can work. 

Google’s improvements are real, and the default shift reflects confidence in the system. But it is not a shortcut.

When broad match fails, it is usually because of one of three avoidable mistakes:

  • Optimizing to the wrong conversion: The algorithm will do exactly what you asked.
  • No negative keyword system: Exploration without boundaries always turns expensive.
  • Judging success using platform metrics alone: CPC and CPA can improve while revenue quality declines.

Broad match is a system, not a setting

Broad match is becoming the default because Google wants Search to run on systems, not keyword spreadsheets.

That does not mean control disappears. It just moves.

Broad match rewards accounts that:

  • Define quality clearly.
  • Constrain intent deliberately.
  • Measure success beyond the interface.

Used properly, it can unlock incremental demand.

Used casually, it will optimize you into a corner.

How Bayesian testing lets Google measure incrementality with $5,000

18 December 2025 at 19:00

Incrementality testing in Google Ads is suddenly within reach for far more advertisers than before.

Google has lowered the barriers to running these tests, making lift measurement possible even without enterprise-level budgets, as recently reported in Search Engine Land.

That shift naturally raises a question: How is Google able to measure incrementality with so much less data?

For years, reliable lift measurement was assumed to require large budgets, long test windows, and a tolerance for inconclusive results. 

So when Google claims it can now deliver more accurate results with as little as $5,000 in media spend, it understandably sounds like marketing spin.

But it’s not. It’s math.

Behind this change is a fundamentally different testing methodology that prioritizes probability over certainty and learning over rigid proof. 

Understanding how this approach works is essential to interpreting these new incrementality results correctly – and turning them into smarter PPC decisions.

Glossary: Bayesian terms for search marketers

Before we dive in, here are some definitions to refresh your memory from Stats 101. 

  • Prior: What the system believes before the test.
  • Posterior: Updated belief after observing data.
  • Credible interval: Where the result likely falls (Bayesian).
  • P-value: Probability of observing this result if nothing changed (Frequentist).

Why traditional A/B testing fails modern marketers

Most PPC advertisers are already familiar with frequentist statistics, even if they’ve never heard the term.

Any classic A/B test that asks “Did this change reach statistical significance?” and relies on p-values and fixed sample sizes to answer that question is using a frequentist framework. 

It’s the model that underpins most experimentation platforms and has shaped how marketers have been taught to evaluate tests for decades.

Let’s look at what that means for a realistic, smaller-budget test. 

For simplicity, assume a click-based experiment with equal exposure to both variants.

  • Total test budget: $5,000.
  • Split: 50/50 → $2,500 per variant.
  • Average CPC: $2.
  • Clicks per variant: 1,250.
  • CPA target: ~$100.

Observed results

  • Control: 1,250 clicks → 25 conversions → 2.00% conversion rate.
  • Treatment: 1,250 clicks → 30 conversions → 2.40% conversion rate.
  • Observed lift: 20% more conversions, ~16.7% lower CPA.

On paper, that looks promising: better conversion rate and lower CPA for the treatment.

But when you run a standard two-proportion z-test on those rates, the result tells a very different story.

The standard two-proportion z-test statistic is:

z = (p̂₂ − p̂₁) / √( p̂(1 − p̂) × (1/n₁ + 1/n₂) )

where p̂ is the pooled conversion rate across both variants. Plugging in the numbers above, the output looks like this:

  • Z ≈ 0.68
  • One-tailed p ≈ 0.25
  • Two-tailed p ≈ 0.50
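For the skeptical, here is a minimal sketch that reproduces those figures using only the Python standard library:

```python
# Two-proportion z-test on the example above, using only the standard library.
from math import sqrt
from statistics import NormalDist

n1, x1 = 1250, 25   # control: clicks, conversions
n2, x2 = 1250, 30   # treatment: clicks, conversions

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                      # pooled conversion rate
se = sqrt(p_pool * (1 - p_pool) * (1/n1 + 1/n2))    # standard error under H0
z = (p2 - p1) / se

one_tailed = 1 - NormalDist().cdf(z)
print(f"z = {z:.2f}, one-tailed p = {one_tailed:.2f}, two-tailed p = {2*one_tailed:.2f}")
# z = 0.68, one-tailed p = 0.25, two-tailed p = 0.50
```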

In other words, under a traditional frequentist framework, this test is not statistically significant. 

A 20% lift and a visibly better CPA are still treated as “could easily be noise.”

The advertiser has spent $5,000, seen encouraging numbers, but can’t claim a clear winner.

At the budget levels many advertisers can realistically afford, the old-style incrementality tests, which are frequentist in nature, often fail to produce conclusive results.

That’s the gap Google is trying to close with its newer, Bayesian-style incrementality methods: keeping tests useful even when the budget is closer to $5,000 than $100,000.

Here’s why a different approach to the test significantly reduces the required budget.

Dig deeper: Why incrementality is the only metric that proves marketing’s real impact

Bayesian testing: What matters is likelihood, not certainty

Bayesian models ask different – and often more decision-useful – questions. 

Instead of asking whether a result is statistically significant, they ask a more practical question: 

  • Given what we already know, how likely is this to be true?

Now let’s apply that framing to the same $5,000 budget example that produced an inconclusive frequentist result.

Using a simple Bayesian model with flat priors (Beta(1,1)):

  • Control: 25 conversions out of 1,250 clicks → Beta(26, 1226)
  • Treatment: 30 conversions out of 1,250 clicks → Beta(31, 1221)

From these posterior distributions, we can compute:

  • Mean lift: ~18–20%
  • 95% credible interval: roughly spans negative to positive lift (wide, as expected with small data)
  • Probability that lift > 0: ~75–80%
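Those numbers can be reproduced with a few lines of code. A minimal Monte Carlo sketch, assuming the same flat Beta(1, 1) priors:

```python
# Monte Carlo read of the same data under flat Beta(1, 1) priors.
import numpy as np

rng = np.random.default_rng(42)
draws = 200_000

# Posterior for each arm: Beta(1 + conversions, 1 + clicks - conversions).
control = rng.beta(1 + 25, 1 + 1250 - 25, draws)      # Beta(26, 1226)
treatment = rng.beta(1 + 30, 1 + 1250 - 30, draws)    # Beta(31, 1221)

lift = treatment / control - 1
lo, hi = np.percentile(lift, [2.5, 97.5])

print(f"median lift: {np.median(lift):.0%}")            # ~19%
print(f"95% credible interval: ({lo:.0%}, {hi:.0%})")   # wide, spans negative to positive
print(f"P(lift > 0): {(lift > 0).mean():.0%}")          # ~75-80%
```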

A traditional A/B test looked at the same data and said:

  • “Inconclusive. Could be noise. Come back with a bigger budget.”

But a Bayesian read says something more nuanced and infinitely more practical:

  • “There’s about an 80% chance the treatment really is better.”

It’s not proof, but it may be enough to guide the next step, like extending the test, replicating it, or making a small allocation shift.

Bayesian methods don’t magically create signal where none exists. So what is the magic then, and why does this work? 

So, how does Google make $5,000 tests work?

Short answer: priors + scale.

Frequentist methods only look at observed test data. 

Bayesian models allow you to bring prior knowledge to the table. 

And guess which company has a ton of data about online ad campaigns? This, indeed, is Google’s advantage. 

Google doesn’t evaluate your test entirely in isolation. Instead, it draws on:

  • Informative priors (large volumes of historical campaign data).
  • Hierarchical modeling (grouping your test with similar campaigns).
  • Probabilistic outputs (replacing p-values with likelihoods).

Google explains these concepts in their Meridian MMM documentation.

Here’s an example:

| Test type         | Posterior lift | Prob(lift > 0) | Interpretation          |
|-------------------|----------------|----------------|-------------------------|
| No prior          | +0.7%          | 54%            | Inconclusive            |
| Prior (~10% lift) | +20.5%         | 76%            | Directionally confident |

The prior belief in the example above – that similar campaigns often see ~10% lift – stabilizes the result enough to support real decisions.
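One common way to encode that kind of prior is with Beta pseudo-counts. The sketch below is illustrative only – the prior strength and values are invented, and Google's actual priors come from hierarchical models that are not public:

```python
# Sketch: encoding an informative prior as Beta pseudo-counts. The prior here
# ("similar campaigns convert ~2%, treatment sees ~10% lift, worth ~1,000
# prior clicks per arm") is invented for illustration.
import numpy as np

rng = np.random.default_rng(7)
draws = 200_000

prior_control = (20, 980)     # 2.0% baseline conversion rate
prior_treatment = (22, 978)   # 2.2% = baseline plus ~10% expected lift

control = rng.beta(prior_control[0] + 25, prior_control[1] + 1225, draws)
treatment = rng.beta(prior_treatment[0] + 30, prior_treatment[1] + 1220, draws)

lift = treatment / control - 1
lo, hi = np.percentile(lift, [2.5, 97.5])
print(f"P(lift > 0): {(treatment > control).mean():.0%}")   # ~76%
print(f"95% credible interval: ({lo:.0%}, {hi:.0%})")       # narrower than with flat priors
```

The prior's main contribution is stability: the credible interval tightens, so small tests stop swinging wildly on a handful of conversions.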

Dig deeper: Exploring Meridian, Google’s new open-source marketing mix model

Smart Bidding already works this way

Should we trust this new approach that uses prior knowledge? 

We should, because the same principle underpins another Google Ads system that advertisers are happy with – Smart Bidding.

Consider how Smart Bidding establishes expectations for a new campaign. It doesn’t start from scratch.

It uses device-level, location-level, time-of-day, vertical, and historical performance data to form an initial expectation and updates those expectations as new data arrives.

Google applies the same principle to incrementality testing.

Your $5,000 test inherits learnings from campaigns similar to yours, and that’s what makes insight possible before spending six figures.

That’s the “memory” behind the math.

Why frequentist thinking leaves marketers stuck

Let’s put Bayesian and frequentist methods side by side:

| Aspect               | Frequentist | Bayesian                     |
|----------------------|-------------|------------------------------|
| Output               | P-value     | Probability of lift          |
| Sample size          | Large       | Smaller if priors are strong |
| Flexibility          | Binary      | Probabilistic                |
| Real-world relevance | Limited     | High                         |
| Handles uncertainty  | Poorly      | Explicitly                   |

Marketers don’t make decisions in black-and-white terms. 

Bayesian outputs speak the language of uncertainty, risk, and trade-offs, which is how budget decisions are actually made.

Google’s data advantage

Google doesn’t guess at priors. They’re informed by:

  • Historical campaign performance.
  • Cross-campaign learning.
  • Attribution modeling (including data-driven attribution and modeled conversions).

Then priors are downweighted as test data accumulates, a core principle of Bayesian statistics and one that’s especially relevant for advertisers concerned about bias or “baked-in” assumptions.

Prior, data, posterior

At the start of a test, when data is sparse and noisy, prior information plays an important stabilizing role. 

It provides a reasonable starting point based on how similar campaigns have performed in the past, preventing early results from swinging wildly based on a handful of conversions.

But as more data is observed, something important happens. 

The information coming from the test itself – the likelihood – becomes sharper and more informative.

Each additional conversion adds clarity, narrowing the range of plausible outcomes. 

Over time, that growing body of evidence naturally outweighs the influence of the prior.

In practical terms, this means Bayesian tests don’t stay anchored to their starting assumptions. They evolve. 

Initially, the model relies on historical patterns to interpret limited data. 

Later, it increasingly trusts what actually happened in your campaign. 

Eventually, with enough volume, the results are driven almost entirely by the observed data, much like a traditional experiment.
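The arithmetic behind that convergence is simple. With a Beta(a₀, b₀) prior, the posterior mean is a weighted blend of prior and data, and the prior's weight shrinks as clicks accumulate. A quick sketch, where the prior strength (~1,000 pseudo-clicks) is an illustrative assumption:

```python
# With a Beta(a0, b0) prior and x conversions in n clicks, the posterior mean
# is (a0 + x) / (a0 + b0 + n): a blend that tilts toward the observed rate
# x/n as n grows. Prior strength (~1,000 pseudo-clicks) is illustrative.
a0, b0 = 20, 980        # prior belief: ~2% conversion rate
true_rate = 0.03        # suppose the campaign actually converts at 3%

for n in (100, 1_000, 10_000, 100_000):
    x = true_rate * n
    posterior_mean = (a0 + x) / (a0 + b0 + n)
    prior_weight = (a0 + b0) / (a0 + b0 + n)
    print(f"n={n:>7,}  posterior mean={posterior_mean:.4f}  prior weight={prior_weight:.1%}")
# As n grows, the posterior mean approaches 0.03 and the prior's weight fades.
```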

This dynamic is what makes Google’s approach viable at both ends of the spectrum. 

It allows small tests to produce usable directional insight without overreacting to noise, while still ensuring that large, data-rich tests converge on conclusions driven by real performance rather than inherited assumptions.

What advertisers should watch for

The system is powerful, but not perfectly transparent. Important open questions remain:

  • Are priors fully removed once enough test data exists?
  • Can advertisers inspect or validate priors?
  • What safeguards prevent irrelevant priors from influencing results?

Google has indicated that priors diminish as data grows, but advertisers still need to apply judgment when interpreting results.

Dig deeper: How causal impact studies work and when to use them in PPC

Stop chasing significance, start reducing uncertainty

Statistical significance is a blunt instrument in a world that demands nuance. 

Bayesian testing offers a more practical way to measure impact, especially when budgets are limited and decisions can’t wait.

The next time Google shows you a lift estimate from a $5,000 test, don’t dismiss it.

It’s not smoke and mirrors. 

It’s math with all the benefits of Google’s massive knowledge about the performance of ad campaigns that have come before yours. 

And it’s a welcome new capability from Google Ads for all advertisers who want to make better data-driven optimization decisions.

Google adds animation and image editing tools to Merchant Center’s Product Studio

17 December 2025 at 21:04

Google has expanded Product Studio inside Merchant Center, rolling out three new creative features that go beyond its original image generation tool.

What’s new. In addition to image generation, Product Studio now lets merchants animate static product images into short videos using suggested text prompts, a move aimed squarely at short-form ads and social-style creative.

Google has also added one-click background removal to help isolate products and create cleaner, more consistent Shopping visuals.

The third update increases image resolution, allowing advertisers to upscale older or lower-quality assets to meet modern visual standards.

Why we care. Product imagery plays a major role in Shopping performance, but creating and refreshing assets is often slow and resource-heavy. These updates give merchants more ways to produce high-quality visuals quickly — without leaving Merchant Center or relying on design teams.

The big picture. Google continues to embed AI-powered creative tools directly into commerce workflows. By housing animation, editing, and enhancement inside Merchant Center, Google is lowering the barrier to frequent creative testing — a key lever for Shopping and Performance Max campaigns.

What to watch. These tools could significantly speed up asset iteration for advertisers with limited creative resources, especially as Google pushes more video-forward and visually rich ad formats across Search, Shopping, and YouTube.

First seen. This update was spotted by Senior PPC Specialist Vojtěch Audy.

How to use Google’s Channel Performance report for PMax campaigns

17 December 2025 at 18:00

For years, PPC advertisers have considered Performance Max (and Smart Shopping before it) to be a black box, even a black hole.

While its powerful automation drives convincing results, the lack of transparency into channel performance has been a persistent frustration. 

Now, Google is beginning to provide some answers. 

The rollout of the new Channel Performance report marks a significant step toward the transparency advertisers have been demanding. 

This guide explains what the report is, highlights its strengths and weaknesses, and shows you how to use it.

What is the Channel Performance report – and why is it a big deal?

The Channel Performance report is essentially a pre-built network report (we can discuss the semantics of channel versus network another day), which can be found under Campaigns > Insights and Reports > Channel Performance (beta).

It offers tabular network data and an interactive flow diagram from impressions down through conversions. 

The Channel Performance report only works for Performance Max campaigns. However, credible clues suggest that this report may support additional campaign types in the future.

This is important because, while Performance Max is (in)famously a “channel soup,” all campaign types are capable of serving across different ad networks within Google’s grasp, and many of them do so by default.

Previously, untangling this mix to see which channels were actually performing was a task left to manual reports or, in the case of PMax, third-party scripts based on guesswork.

The Channel Performance report is Google’s native solution. 

A tour of the Channel Performance report

The report is composed of two main elements: 

  • An account-level view that offers a compact summary of each campaign’s channel data (plus some hidden features).
  • A campaign-level view that offers a neat but, in my opinion, deeply flawed Sankey diagram, and another data table, more detailed than at the account level. 

Furthermore, there are various customization options, which can be saved as preferred views, and multiple export options.

1. The account-level overview: Channel data in the palm of your hand

The account view is a newer addition to the Channel Performance report, and in some ways my favorite view. 

Previously, when you accessed this report, you’d land on a blank page prompting you to select an individual Performance Max campaign. 

Now, this handy table is the first thing you’ll see.

It has a series of rows for each campaign, nested rows for each channel, and columns for the performance metrics. 

One thing I love is that each nested row has the channel icon next to it. 

Tabular data can sometimes make my eyes cross, but this simple visual aid makes the data much easier to skim.

By default, the campaign rows are sorted alphabetically, and you’ll likely want to sort by something more practical, like impressions, costs, revenue, etc.

After that, you can really leap down the page easily, comparing the distribution of your key campaigns.

But that’s the obvious part.

My top tip for this view is that you can change your segment, and among the options, two really stand out for me: 

  • Ads using product data.
  • Ad event type (under Segment > Conversions).

The first allows you to see the volume and performance of “ads using product data” (feed-based ads) versus “ads not using product data” (asset-based ads).

Yes, that’s right, finally a simple comparison of feed ads and asset ads. Besides network performance, this has been one of the most contentious and least transparent areas in PMax, prompting numerous advertisers to run so-called “feed-only” PMax campaigns.

Now you can easily see what’s going on with this performance facet across all your PMax campaigns, plus an account-level summary row at the bottom. 

Whether you like or dislike what you’re seeing, you can head over to your asset-group-level and asset-level reporting to dig deeper. 

Be cautious when judging the performance of asset-based ads. They should not be held to the same efficiency standards.

The second segment, ad event type, might sound non-descript, but it’s really important.

It lets you easily understand the volume and performance of your click-through versus view-through conversions. 

This has been (yet another) divisive topic in PMax: 

  • Do view-based conversions belong mixed together with standard conversions? 
  • Does this inflate performance? 

Now you can answer these questions per campaign and also at the account view in the summary row.

But what if you want even more detail? 

What if, for example, you want to learn your feed versus asset share in, say, YouTube specifically? 

That’s not possible at the account level, but it certainly is at the campaign level.

Just click on any campaign and it will load a new page drilling down to the next reporting level. 

2. The campaign-level view: Data visualization and detailed analysis

The first thing you’ll notice on this page is the large Sankey diagram. 

It’s visually striking and has become a signature of the Channel Performance report.

That said, we need to set it aside for now. Scroll down to the data table below, which is similar to the one you just saw.

The campaign data table: A deeper dive

While the Sankey diagram gives a high-level view, the table below is where real analysis happens. 

It’s more reliable for decision-making because it shows the raw numbers without visual distortion.

The table breaks performance down by channel and ad type – the feed-based versus asset-based split we discussed earlier. 

For each segment, you can review multiple metrics by default, but my top tip is to go to Columns > Conversions.

There, you can select Conv. value / Cost (a.k.a. ROAS) and Cost / Conv. (a.k.a. CPA). 

These are hidden by default, but you can indeed surface them, and I don’t think I have to tell you why they matter.

Crucially, the table also includes an export function, plus scheduling options, allowing you to pull the raw data for deeper analysis in a spreadsheet.

The Sankey diagram: Visualizing the flow

As noted earlier, this visualization – officially called the Channels-to-Goals chart – is visually striking, but it has limitations. 

Before addressing those issues, let’s clarify its purpose and what it can tell us.

The Sankey diagram presents a visual breakdown of performance across the channels within your PMax campaign. 

It maps the customer journey within your campaign – how users move from seeing an ad (impressions) to clicking or engaging with it (interactions), and, ultimately, to converting (results or conversions).

This is great. For the first time, advertisers can see the flow of core funnel metrics right in Google Ads, all segmented by the specific channel driving the traffic. 

This allows you to understand how PMax allocates your budget and which parts of its vast inventory are actually working for you.

Decoding the channels

People often look at the Sankey and get stuck. “Where’s my Shopping data?” is probably the single biggest example of this. 

As we’ve discussed, a key feature of the report is how it segments ads into feed-based and asset-based ads.

When we combine that dimension with the network or “channel” dimension, we can translate the labels into more familiar terms:

  • Search
    • Ads using product data: These are your Shopping ads.
    • Other ads: This represents your Dynamic Search Ads (DSA) and Responsive Search Ads (RSA) traffic.
  • Display
    • Ads using product data: These are Dynamic Product Ads, which in my assessment is likely a lot of Dynamic Remarketing and some Dynamic Prospecting.
    • Other ads: These are your standard Responsive Display ads.

These are my interpretations of the data, which might not be perfect. 

It would be extremely helpful if Google offered more detailed documentation on what’s included.

For example, feed-based YouTube ads can comprise a variety of formats and placements, some of which, such as “GMC Image Shorts,” are not documented anywhere.

Google’s guidance is quite vague.

The limitations of the native report

While a welcome addition, the report has some shortcomings.

The misleading Sankey diagram 

The visual proportions of the diagram are not based on volume, which makes it extremely misleading at a glance. 

A channel that appears to drive significant traffic may actually account for only a tiny share of your impressions.

In the example below, the asset-based Search ads segment appears to have a couple hundred thousand impressions, but in reality only has 4,500 impressions. 

This makes the chart almost useless for quick, accurate analysis, which is the entire point of data visualization.

The lack of ratios in the data table 

The data table provides useful raw data, but it lacks key calculated metrics needed for analysis, such as conversion rate and cost per click.

To see the full picture, you must export the data and do your own calculations.

This feels, to be honest, a bit petty of Google. 

They could easily add these columns, but it seems they would prefer not to. Grab your calculator.
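In the meantime, a few lines of pandas will add the missing ratios to an export. The column names below are assumptions – adjust them to match your actual export headers:

```python
# Sketch: adding the missing ratio columns to an exported Channel Performance
# CSV. Column names are assumptions -- match them to your actual export headers.
import pandas as pd

df = pd.read_csv("channel_performance.csv")

df["CVR"] = df["Conversions"] / df["Clicks"]
df["CPC"] = df["Cost"] / df["Clicks"]
df["CPM"] = df["Cost"] / df["Impressions"] * 1000
df["CPA"] = df["Cost"] / df["Conversions"]
df["ROAS"] = df["Conv. value"] / df["Cost"]

print(df.sort_values("Cost", ascending=False).head(10))
```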

How to make the most of the report

Despite its limitations, you can still extract valuable insights into which channels deliver what.

The key is to focus on asset quality and traffic quality, because direct channel control is limited.

Analyze placement data for quality control 

While the report doesn’t let you directly control channel mix, it helps you monitor traffic quality. 

Use the placement reports to see exactly where your Display and YouTube ads are showing.

  • Export this data into Google Sheets. Note that, frustratingly, it only contains impression data.
  • Use built-in functions like =GOOGLETRANSLATE() to understand foreign-language placements and the integrated =AI() function to help categorize domains and videos for brand safety.
  • Exclude low-quality or irrelevant placements or content at the account level, prioritizing bad placements that are higher in volume.

Build your own Sheets-based reporting or try scripts

Google has confirmed that API access and MCC-level reporting are coming to the Channel Performance report. I also expect this data to be supported in the Report Editor. 

In the meantime, you can export the report as a .csv or send it directly to Google Sheets.

With a smart setup, these exports enable you to calculate custom metrics, build charts, apply heatmaps, and reshape the data as needed.

To help the community, I helped build a script that enhances Google’s report in several practical ways:

  • Adds key metrics like conversion rate, CTR, CPC, CPM, and more.
  • Applies clear, common-sense labels such as “Shopping” and “Responsive Display.”
  • Includes charts with proportional visuals for more accurate interpretation.
  • Cleans and parses columns to remove friction.

The script works for individual PMax campaigns, not the account-level view. I’m waiting for Google’s feature set and scripting options to stabilize before expanding the script.

What’s next for PMax reporting?

We know Search Partner data is coming, along with API access, MCC-level reporting, and likely support for additional campaign types such as Demand Gen.

It’s encouraging to see Google share this level of detail, and there’s reason to believe this momentum will continue. 

The Channel Performance report already addresses one of the most persistent criticisms of Performance Max – that it operates as a black box. 

Three years ago, it would have been hard to imagine Google responding to advertiser feedback at this scale, particularly on transparency.

Still, better visibility doesn’t automatically translate into better decisions. 

Interpreting this data correctly takes time, context, and careful analysis – and that work remains firmly in the hands of advertisers.

Google: Exact match keywords won’t block broad match in AI Max

16 December 2025 at 23:54

Ginny Marvin, Google’s Ads Liaison, is clarifying how keyword match types interact with AI Overviews (AIO) and AI Mode ad placements — addressing ongoing confusion among advertisers testing AI Max and mixed match-type setups.

Why we care. As ads expand into AI-powered placements, advertisers need to understand which keywords are eligible to serve — and when — to avoid unintentionally blocking reach or misreading performance.

Back in May. Responding to questions from Marketing Director Yoav Eitani, Marvin confirmed that an ad can serve either above or below an AI Overview or within the AI Overview — but not both in the same auction:

  • “Your ad could trigger to show either above/below AIO or within AIO, but not both at this time,” Marvin confirmed.

While both exact and broad match keywords can be eligible to trigger ads above or below AIO, only broad match keywords (or keywordless targeting) are eligible to trigger ads within AI Overviews.

What’s changed. In a follow-up exchange with Paid Search specialist Toan Tran, Marvin clarified that Google has updated how eligibility works. Previously, the presence of an exact match keyword could prevent a broad match keyword from serving in AI Overviews. That is no longer the case.

  • “The presence of the same keyword in exact match will not prevent the broad match keyword from triggering an ad in an AI Overview, since the exact match keyword is not eligible to show Ads in AI Overviews and hence not competing with the broad match keyword,” Marvin said.

Since exact and phrase match keywords are not eligible for AI Overview placements, they do not compete with broad match keywords in that auction — meaning broad match can still trigger ads within AIO even when the same keyword exists as exact match.

The big picture. Google is reinforcing a clear separation between traditional keyword matching and AI-powered intent matching. Ads in AI Overviews rely on a deeper understanding of both the user query and the AI-generated content, which is why eligibility is limited to broader targeting signals.

The bottom line. Exact and phrase match keywords won’t show ads in AI Overviews — but they also won’t block broad match from doing so. For advertisers leaning into AI Max and AIO placements, broad match and keywordless strategies are now essential to unlocking reach in Google’s AI-driven surfaces.

When Google’s AI bidding breaks – and how to take control

16 December 2025 at 19:00

Google’s pitch for AI-powered bidding is seductive.

Feed the algorithm your conversion data, set a target, and let it optimize your campaigns while you focus on strategy. 

Machine learning will handle the rest.

What Google doesn’t emphasize is that its algorithms optimize for Google’s goals, not necessarily yours. 

In 2026, as Smart Bidding becomes more opaque and Performance Max absorbs more campaign types, knowing when to guide the algorithm – and when to override it – has become a defining skill that separates average PPC managers from exceptional ones.

AI bidding can deliver spectacular results, but it can also quietly destroy profitable campaigns by chasing volume at the expense of efficiency. 

The difference is not the technology. It is knowing when the algorithm needs direction, tighter constraints, or a full override.

This article explains:

  • How AI bidding actually works.
  • The warning signs that it is failing.
  • The strategic intervention points where human judgment still outperforms machine learning.

How AI bidding actually works – and what Google doesn’t tell you

Smart Bidding comes in several strategies, including:

  • Target CPA.
  • Target ROAS.
  • Maximize Conversions.
  • Maximize Conversion Value.

Each uses machine learning to predict the likelihood of a conversion and adjust bids in real time based on contextual signals.

The algorithm analyzes hundreds of signals at auction time, such as:

  • Device type.
  • Location.
  • Time of day.
  • Browser.
  • Operating system.
  • Audience membership.
  • Remarketing lists.
  • Past site interactions.
  • Search query.

It compares these signals with historical conversion data to calculate an optimal bid for each auction.
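Conceptually – and only conceptually, since Google's real models are far more complex and not public – the core bid logic reduces to something like this sketch, with all predicted rates and values invented for illustration:

```python
# Conceptual sketch only: auction-time bidding reduced to its core logic.
# Google's actual models use far more signals and are not public; the
# predicted rates and values below are invented for illustration.

def target_cpa_bid(predicted_cvr: float, target_cpa: float) -> float:
    """Bid up to the point where expected cost per conversion equals the target."""
    return predicted_cvr * target_cpa

def target_roas_bid(predicted_cvr: float, predicted_value: float, target_roas: float) -> float:
    """Bid up to expected conversion value divided by the ROAS target."""
    return predicted_cvr * predicted_value / target_roas

# A high-intent query at 4% predicted CVR and a $120 expected order value:
print(target_cpa_bid(0.04, 100))          # 4.0 -> willing to pay up to $4 per click
print(target_roas_bid(0.04, 120, 4.0))    # 1.2 -> up to $1.20 per click at 400% ROAS
```

The key point: every contextual signal feeds the predicted conversion rate and value, which is why the quality of your conversion data dictates the quality of every bid.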

During the “learning period,” typically seven to 14 days, the algorithm explores the bid landscape, testing bid levels to understand the conversion probability curve. 

Google recommends patience during this phase, and in general, that advice holds. The algorithm needs data.

The first problem is that learning periods are not always temporary. 

Some campaigns get stuck in perpetual learning and never achieve stable performance.

Dig deeper: When to trust Google Ads AI and when you shouldn’t

Google’s optimization goals vs. your business goals

The algorithm optimizes for metrics that drive Google’s revenue, not necessarily your profitability.

When a Target ROAS of 400% is set, the algorithm interprets that as “maximize total conversion value while maintaining a 400% average ROAS.” 

Notice the word “maximize.”

The system is designed to spend the full budget and, ideally, encourage increases over time. 

More spend means more revenue for Google.

Business goals are often different. 

You may want a 400% ROAS with a specific volume threshold. 

You may need to maintain margin requirements that vary by product line. 

Or you may prefer a 500% ROAS at lower volume because fulfillment capacity is constrained.

The algorithm does not understand this context. 

It sees a ROAS target and optimizes accordingly, often pushing volume at the expense of efficiency once the target is reached.

This pattern is common. An algorithm increases spend by 40% to deliver 15% more conversions at the target ROAS. Technically, it succeeds. 

In practice, cash flow cannot support the higher ad spend, even at the same efficiency. 

The algorithm does not account for working capital constraints.

Key signals the algorithm can’t understand

AI bidding works well, but it has limits. 

Without intervention, several factors can’t be fully accounted for.

Seasonal patterns not yet reflected in historical data

Launch a campaign in October, and the algorithm has no visibility into a December peak season.

It optimizes based on October performance until December data proves otherwise, often missing early seasonal demand.

Product margin differences

A $100 sale of Product A with a 60% margin and a $100 sale of Product B with a 15% margin look identical to the algorithm. 

Both register as $100 conversions. The business impact, however, is very different. 

This is where profit tracking, profit bidding, and margin-based segmentation matter.

Customer lifetime value variations

Unless lifetime value modeling is explicitly built into conversion values, the algorithm treats a first-time customer the same as a repeat buyer. 

In most accounts, that modeling does not exist.

Market and competitive changes

When a competitor launches an aggressive promotion or a new entrant appears, the algorithm continues bidding based on historical conditions until performance degrades enough to force adjustment. 

Market share is often lost during that lag.

Inventory and supply chain constraints

If a best-selling product is out of stock for two weeks, the algorithm may continue bidding aggressively on related searches because of past performance. 

The result is paid traffic that cannot convert.

This is not a criticism of the technology. It’s a reminder that the algorithm optimizes only within the data and parameters provided. 

When those inputs fail to reflect business reality, optimization may be mathematically correct but strategically wrong.

Warning signs your AI bidding strategy is failing

The perpetual learning phase

Learning periods are normal. Extended learning periods are red flags.

If your campaign shows a “Learning” status for more than two weeks, something is broken. 

Common causes include:

  • Insufficient conversion volume – the algorithm typically needs at least 30 to 50 conversions per month.
  • Frequent changes that reset the learning period.
  • Unstable performance with wide day-to-day fluctuations.

When to intervene

If learning extends beyond three weeks, consider one of the following:

  • Increase the budget to accelerate data collection.
  • Loosen the target to allow more conversions.
  • Switch to a less aggressive bid strategy like Enhanced CPC.

Sometimes the algorithm is simply telling you it does not have enough data to succeed.

Budget pacing issues

Healthy AI bidding campaigns show relatively smooth budget pacing. 

Daily spend fluctuates, but it stays within reasonable bounds. 

Problematic patterns include:

  • Front-loaded spending – 80% of the daily budget gone by 10 a.m.
  • Consistent underspending, such as averaging 60% of budget per day.
  • Volatile day-to-day swings, like spending $800 one day, $200 the next, then $650 after that.

Budget pacing is a proxy for algorithm confidence. 

Smooth pacing suggests the system understands your conversion landscape. 

Erratic pacing usually means it is guessing.
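If you want a quick diagnostic, a sketch like the following flags the two problem patterns above. The thresholds are arbitrary starting points, not Google guidance:

```python
# Sketch: flagging erratic budget pacing from a list of daily spend figures.
# The 40% coefficient-of-variation threshold is an arbitrary starting point.
from statistics import mean, stdev

daily_spend = [800, 200, 650, 720, 310, 590, 840]  # example figures from the text
daily_budget = 700

cv = stdev(daily_spend) / mean(daily_spend)          # volatility relative to average
utilization = mean(daily_spend) / daily_budget       # average share of budget spent

if cv > 0.40:
    print(f"Volatile pacing (CV = {cv:.0%}) -- the algorithm may be guessing.")
if utilization < 0.70:
    print(f"Consistent underspend ({utilization:.0%} of budget).")
```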

The efficiency cliff

This is the most dangerous pattern. Performance starts strong, then gradually or suddenly deteriorates.

This shows up often in Target ROAS campaigns. 

  • Month 1: 450% ROAS, excellent. 
  • Month 2: 420%, still good. 
  • Month 3: 380%, concerning. 
  • Month 4: 310%, alarm bells.

What happened? 

The algorithm exhausted the most efficient audience segments and search terms. 

To keep growing volume – because it is designed to maximize – it expanded into less qualified traffic. 

Broad match reached further. Audiences widened. Bid efficiency declined.

Traffic quality deterioration

Sometimes the numbers look fine, but qualitative signals tell a different story. 

  • Engagement declines – bounce rate rises, time on site falls, pages per session drop. 
  • Geographic shifts appear as the algorithm drives traffic from lower-value regions. 
  • Device mix changes, often skewing toward mobile because CPCs are cheaper, even when desktop converts better. 
  • Time-of-day misalignment can also emerge, with traffic arriving when sales teams are unavailable.

These quality signals do not directly influence optimization because they are not part of the conversion data. 

To address them, the algorithm needs constraints: bid adjustments, audience exclusions, or ad scheduling.

The search terms report reveals the truth

The search terms report is the truth serum for AI bidding performance. 

Export it regularly and look for:

  • Low-intent queries receiving aggressive bids.
  • Informational searches mixed with transactional ones.
  • Irrelevant expansions where the algorithm chased conversions into entirely different intent.
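A simple first-pass filter can triage an exported search terms report before manual review. A minimal sketch, with example intent terms and assumed column names:

```python
# Sketch: a first-pass filter for low-intent queries in an exported search
# terms report. The intent word list is an example -- tune it per account.
# Column names ("Search term", "Cost") are assumptions about the export.
import csv

LOW_INTENT = {"free", "jobs", "salary", "definition", "how to", "diy", "donation"}

def looks_low_intent(query: str) -> bool:
    return any(term in query.lower() for term in LOW_INTENT)

with open("search_terms.csv", newline="") as f:
    for row in csv.DictReader(f):
        if looks_low_intent(row["Search term"]):
            print(f'{row["Search term"]}: ${row["Cost"]} -- review as negative candidate')
```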

A high-end furniture retailer should not spend $8 per click on “free furniture donation pickup.” 

A B2B software company targeting “project management software” should not appear for “project manager jobs.” 

These situations occur when the algorithm operates without constraints. 

Keyword matching is also looser than it was in the past, which means even small gaps can allow the system to bid on queries you never intended to target.

Dig deeper: How to tell if Google Ads automation helps or hurts your campaigns

Strategic intervention points: When and how to take control

Segmentation for better control

One-size-fits-all AI bidding breaks down when a business has diverse economics. 

The solution is segmentation, so each algorithm optimizes toward a clear, coherent goal.

Separate high-margin products – 40%+ margin – into one campaign with more aggressive ROAS targets, and low-margin products – 10% to 15% margin – into another with more conservative targets. 

If the Northeast region delivers 450% ROAS while the Southeast delivers 250%, separate them. 

Brand campaigns operate under fundamentally different economics than nonbrand campaigns, so optimizing both with the same algorithm and target rarely makes sense.

Segmentation gives each algorithm a clear mission. Better focus leads to better results.

Bid strategy layering

Pure automation is not always the answer. 

In many cases, hybrid approaches deliver better results.

  • Run Target ROAS at 400% under normal conditions, then manually lower it to 300% during peak season to capture more volume when demand is high. 
  • Use Maximize Conversion Value with a bid cap if unit economics cannot support bids above $12. 
  • Group related campaigns under a portfolio Target ROAS strategy so the algorithm can optimize across them. 
  • For campaigns with limited conversion data or volatile performance, Enhanced CPC offers algorithmic assistance without full black box automation.

The hybrid approach

The most effective setups combine AI bidding with manual control campaigns.

Allocate 70% of the budget to AI bidding campaigns, such as Target ROAS or Maximize Conversion Value, and 30% to Enhanced CPC or manual CPC campaigns. 

Manual campaigns act as a baseline. If AI underperforms manual by more than 20% after 90 days, the algorithm is not working for the business.

Use tightly controlled manual campaigns to capture the most valuable traffic – brand terms and high-intent keywords – while AI campaigns handle broader prospecting and discovery. 

This approach protects the core business while still exploring growth opportunities.

COGS and cart data reporting (plus profit optimization beta)

Google now allows advertisers to report cost of goods sold, or COGS, and detailed cart data alongside conversions. 

This is not about bidding yet, but seeing true profitability inside Google Ads reporting.

Most accounts optimize for revenue, or ROAS, not profit. 

A $100 sale with $80 in COGS is very different from a $100 sale with $20 in COGS, but standard reporting treats them the same. 

With COGS reporting in place, actual profit becomes visible, dramatically improving the quality of performance analysis.

To set it up, conversions must include cart-level parameters added to existing tracking. 

These typically include item ID, item name, quantity, price, and, critically, the cost_of_goods_sold parameter for each product.
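To see why those parameters matter, here is a minimal sketch of the profit math they enable. The field names mirror the parameters above but are illustrative – your tagging and feed setup determine the real schema:

```python
# Sketch: computing per-order profit from cart-level data. Field names mirror
# the parameters listed above but are illustrative -- your tagging setup and
# feed configuration determine the real schema.
cart = [
    {"item_id": "sku-123", "item_name": "Desk lamp", "quantity": 2,
     "price": 40.0, "cost_of_goods_sold": 14.0},
    {"item_id": "sku-456", "item_name": "Bookshelf", "quantity": 1,
     "price": 120.0, "cost_of_goods_sold": 85.0},
]

revenue = sum(item["price"] * item["quantity"] for item in cart)
cogs = sum(item["cost_of_goods_sold"] * item["quantity"] for item in cart)
profit = revenue - cogs

print(f"revenue ${revenue:.0f}, COGS ${cogs:.0f}, profit ${profit:.0f}")
# Two orders with identical revenue can carry very different profit.
```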

Google is testing a bid strategy that optimizes for profit instead of revenue. 

Access is limited, but advertisers with clean COGS data flowing into Google Ads can request entry. 

In this model, bids are optimized around actual profit margins rather than raw conversion value. 

This is especially powerful for retailers with wide margin variation across products.

For advertisers without access to the beta, a custom margin-tracking pixel can be implemented manually. It is more technical to set up, but it achieves the same outcome.

Dig deeper: Margin-based tracking: 3 advanced strategies for Google Shopping profitability

When AI bidding actually works

AI bidding works best when the fundamentals are in place: 

  • Sufficient conversion volume.
  • A stable business model with consistent margins and predictable seasonality.
  • Clean conversion tracking.
  • Enough historical data to support learning.

In these conditions, AI bidding often outperforms manual management by processing more signals and making more granular optimizations than humans can execute at scale.

This tends to be true in:

  • Mature ecommerce accounts.
  • Lead generation programs with consistent lead values.
  • SaaS models with predictable trial-to-paid conversion paths.

When those conditions hold, the role shifts.

Bid management gives way to strategic oversight – monitoring trends, identifying expansion opportunities, and testing new structures.

The algorithm then handles tactical optimization.

Preparing for AI-first advertising

Google is steadily reducing advertiser control under the banner of automation. 

  • Performance Max has absorbed Smart Shopping and Local campaigns. 
  • Asset groups replace ad groups. 
  • Broad match becomes mandatory in more contexts. 
  • Negative keywords increasingly function as suggestions the system may or may not honor.

For advertisers with complex business models or specific strategic goals, this loss of granularity creates tension. 

You are often asked to trust the algorithm even when business context suggests a different decision.

That shift changes the role. You are no longer a bid manager. 

You are an AI strategy director who:

  • Defines objectives.
  • Provides business context.
  • Sets constraints.
  • Monitors outcomes.
  • Intervenes when the system drifts away from strategic intent.

No matter how advanced AI bidding becomes, certain decisions still require human judgment. 

Strategic positioning – which markets to enter and which product lines to emphasize – cannot be automated. 

Neither can creative testing, competitive intelligence, or operational realities like inventory constraints, margin requirements, and broader business priorities.

This is not a story of humans versus AI. It is humans directing AI.

Dig deeper: 4 times PPC automation still needs a human touch

Master the algorithm, don’t serve it

AI-powered bidding is the most powerful optimization tool paid media has ever had. 

When conditions are right – sufficient data, a stable business model, and clean tracking – it delivers results manual management cannot match.

But it is not magic.

The algorithm optimizes for mathematical targets within the data you provide. 

If business context is missing from that data, optimization can be technically correct and strategically wrong. 

If markets change faster than the system adapts, performance erodes. 

If your goals diverge from Google’s revenue incentives, the algorithm will pull in directions that do not serve the business.

The job in 2026 is not to blindly trust automation or stubbornly resist it. 

It is to master the algorithm – knowing when to let it run, when to guide it with constraints, and when to override it entirely.

The strongest PPC leaders are AI directors. They do not manage bids. They manage the system that manages bids.
