Looking to take the next step in your search marketing career?
Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.
(Provided to Search Engine Land by SEOjobs.com)
(Provided to Search Engine Land by PPCjobs.com)
Advertising Media Manager, Vetoquinol USA (Remote)
Programmatic Advertising Manager, We Are Stellar (Remote)
Marketing Manager, Backstage (Remote)
Demand Generation Manager, Shoplift (Remote)
Search Engine Optimization Manager, Confidential (Hybrid, Miami-Fort Lauderdale Area)
Meta Ads Manager, Cardone Ventures (Scottsdale, AZ)
Senior Manager of Marketing (Paid, SEO, Affiliate), What Goes Around Comes Around (Jersey City, NJ)
Paid Search Marketing Manager, LawnStarter (Remote)
Senior Manager, SEO, Turo (Hybrid, San Francisco, CA)
Search Engine Optimization Manager, NoGood (Remote)
Note: We update this post weekly. So make sure to bookmark this page and check back.
Google is reaching out directly to advertisers via email, requiring them to confirm whether their campaigns contain EU political ads — with a hard deadline of March 31st.
Why we care. This isn’t optional. EU regulation now requires Google to verify political ad status across all active campaigns, and advertisers who don’t act before the deadline could face compliance issues.
What’s happening. Google is asking every advertiser to declare whether their existing campaigns include EU political ads. The requirement applies to all current campaigns and must be completed by March 31, 2026.

How to comply: Google has outlined three ways to submit the confirmation:
Between the lines. The account-level option is the most efficient route for most advertisers who are confident none of their campaigns fall under the EU political ads definition. Google has made it straightforward to reverse or adjust the selection at any point, so there’s no risk in acting early.
The bottom line. Check your inbox — Google is contacting advertisers directly. If you run campaigns targeting EU audiences, log in and complete the confirmation before March 31st to stay compliant.
First seen. This update was spotted by paid search expert Arpan Banerjee, who shared details of the email on LinkedIn.
Do you think you’re able to answer the question every marketing leader dreads hearing from leadership: “Why isn’t our marketing effort doing more?”
How do you even go about answering that?
Let’s look at what I mean using a fictional location analytics company we’ll call Acme Area Analytics.
The Acme team reviews its reports. Nothing appears broken. Campaigns are running, leads are still coming in, and performance metrics are mostly stable. Yet sales momentum isn’t clearly accelerating, and it’s hard to pinpoint why.
Insights are scattered across site analytics, brand monitoring and SEO tools, CRM systems, and paid media dashboards. Each platform reflects part of the story, but none shows the full picture.
That fragmentation is exactly how well-intentioned “data-driven decisions” can go wrong. Let’s look at how that happens and how Acme, and you, can fix it.
In global, multi-channel campaigns like Acme Area Analytics’, the hardest moments are when nothing is obviously underperforming. Digital channels are running. Leads are coming in, and metrics are mostly stable, yet sales momentum is stalled and it’s unclear which lever to pull next.
At the same time, subtle signals raise concerns. Non-brand CPCs are creeping upward, and a competitor — Spotter Intelligence — is suddenly appearing more frequently in branded search.
Let’s say you’re part of the Acme marketing team. You go back to your reports and ask the question most marketers ask in this situation: Which tactic is underperforming?
When diving into the platform data, you uncover what looks like a clear answer: remarketing performance for your API has softened, conversion rates have dipped slightly, and efficiency has begun to decline.
On the surface, you have your answer. Spend should be pulled back to match demand because audiences have likely seen the creative too many times.
That decision could certainly make sense, and it’s what many teams actually end up doing. But it’s also often wrong. Why? Because you haven’t yet asked the right question.
The more useful question is harder to answer: “Is demand actually declining, or are we failing to create new interest upstream?”
Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts
The real issue becomes clear when you look beyond a single channel. The location analytics market still has strong growth potential, but your product is encountering a shortage of engaged audiences receptive to its message. That disconnect only surfaces once you look past paid media alone.
Site engagement trends in analytics and brand search behavior in Search Console suggested interest in your type of location AI wasn’t disappearing. It just wasn’t converting yet.
The focus had shifted from reach to engaged awareness, with a priority on attention and engagement, not just exposure. So your Acme team decided to introduce additional campaign layers, including new content designed to build relevance and trust.
Crucially, you didn’t see any improvement right away. Cost-per-lead efficiency continued to decline, and it looked worse after increased upper-funnel investment. From a platform-only view, this looked like the time to pull back.
But looking across systems changed how performance was interpreted. Engagement from awareness activity began feeding remarketing pools, but the impact wouldn’t surface immediately for a product with long sales cycles like your API.
During that gap, the Acme team maintained confidence in its strategy by sharing early signs of upstream momentum. Only later did results begin to show up: remarketing efficiency improved, and integrated CRM data confirmed higher sales volumes for the API.
The takeaway for the Acme Area Analytics marketing team wasn’t just that “remarketing worked again,” or that upper funnel activity drives demand. It’s that the hardest marketing decisions are the ones you have to make — and hold — before success shows up in the metrics leadership typically trusts.
In our Acme example, each dashboard told a technically accurate story, but no single dashboard showed the whole picture. Viewed in silos, none of them could tell Acme’s marketing team what was actually happening.
The insight didn’t live in any single view. When the team shifted its question to whether demand was moving effectively through the funnel, and evaluated the dashboards together in context, the decision changed.
This is what unsiloed analytics looks like in practice. It’s not about teams fighting over which touch led to the result, but recognizing that each part of a marketing plan plays a distinct and important role in creating momentum that grows demand and lifts sales.
Leadership wants proof, and pipeline and revenue might feel like the safest validation. But in complex, multi-channel programs, those are often lagging indicators.
By the time pipeline clearly reflects demand creation, teams have often already pulled back awareness investment, cut channels that looked inefficient in isolation, and shifted budget toward short-term demand capture.
In the example above, waiting for proof would have meant that Acme reduced awareness and remarketing spend and possibly exited a market that would later show great promise.
Integrated data didn’t eliminate the risk of shifting investment from lead generation to awareness-building in a market that had declining metrics. Instead, it added credibility to the case for doing so.
Dig deeper: The end of SEO-PPC silos: Building a unified search strategy for the AI era
This dynamic isn’t limited to complex, multi-channel programs. You can see it even within a single platform when multiple tactics work together.
Let’s look at a scenario where Acme’s brand search impression volume increased by roughly 50% year over year while Share of Voice remained flat. That means more people have been searching for Acme as the company has invested across out-of-home and other digital campaigns. Acme’s Google campaign then harvested the demand created by other channels.
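The arithmetic behind that inference is worth spelling out. With illustrative numbers (all hypothetical, including the share-of-voice figure), a flat share of voice combined with 50% more brand impressions implies the total pool of brand searches grew by the same 50%:

```python
# Hypothetical Acme figures. Share of voice = our impressions / total searches.
share_of_voice = 0.20            # flat year over year (assumed value)
impressions_last_year = 100_000
impressions_this_year = 150_000  # +50% YoY, as in the scenario above

# Back out the size of the total brand-search pool in each period.
total_last_year = impressions_last_year / share_of_voice
total_this_year = impressions_this_year / share_of_voice

growth = total_this_year / total_last_year - 1
print(f"Implied growth in total brand searches: {growth:.0%}")  # 50%
```

In other words, the demand pool itself expanded, which is the signal a media-plan-only view would miss.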
If Acme’s brand search had been evaluated only in terms of its media plan efficiency, this signal of growing demand would have been easy to miss. In context, it confirmed that Acme’s awareness efforts were working, even though attribution couldn’t perfectly assign credit to individual channels.
In these examples, integrated data — unsiloed data — shifted the conversation.
Instead of Acme’s marketing teams debating budget cuts, they could monitor signs of early momentum, including longer time on site and rising brand search volume. Over time, that interest could be seen in the CRM as higher-quality leads that converted more frequently into closed deals.
The good news is that this doesn’t require new tools or perfectly stitched together data. It simply requires stepping back during planning and asking better questions about how potential customers signal interest as they consider your product.
Dig deeper: SEO vs. PPC vs. AI: The visibility dilemma
In my experience, the most valuable marketing insights come from understanding how different data points relate.
Unsiloing your data isn’t about proving causality or winning attribution debates. Instead, it’s about recognizing opportunity early enough to act on it and identifying which metrics suggest that demand is quietly being built in the background.
The teams that win aren’t only better at reporting results. They’re better at seeing momentum while it’s still forming and acting on it early.
If I hear “always be testing” one more time, I might scream. It was great advice in 2016. In 2026, it’s a great way to light your budget on fire.
That mantra made sense when budgets were loose and platforms forgave a lot of chaos. Launch five audience tests simultaneously? Sure, why not! Swap out three creative variables at once? Go for it!
But the rules have changed. Our new reality has tighter budgets, longer learning phases, and signal fragmentation everywhere. One poorly structured test can distort your performance for weeks, not days. That performance hit compounds fast.
Modern experimentation is expensive and risky. Why pay that price when we have the power of agentic AI to help? And by help, I don’t mean slapping AI onto our existing process and asking it to generate more ad variants. That would just be an expedient way to light our budgets on fire.
Instead, it’s time to use agentic AI to design smarter experimentation systems.
In the “always be testing” era, it was all too easy to launch tests at the scale of Oprah giving away cars or Taylor Swift filling stadiums. That often led to unstructured testing, where we launched ideas on a Monday and checked results on Friday, hoping for a lift. There was nary a risk model, overlap detection, or strategic sequencing in sight.
The costs of that approach are now exponentially higher. Take platform disruption. Algorithms crave stability. Industry benchmarks show ad sets stuck in learning phases often see CPAs 20-40% higher than stable sets.
Every time you significantly change creative, audience, or budget, you risk resetting that learning. If you’re running three overlapping tests that each trigger resets, you’re voluntarily paying a volatility tax on your entire media spend.
Then there’s waste. The majority of A/B tests deliver no statistically significant lift. If you aren’t ruthless about what deserves to run, you’re burning budget to prove most ideas don’t matter. “Always be testing” without guardrails turns into “always be destabilizing.”
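Being ruthless starts with checking significance before a test is allowed to keep burning budget. A minimal two-proportion z-test sketch (the traffic and conversion numbers below are hypothetical) shows how a "promising" 10% relative lift can still fail the bar:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for variant conversion rates.
    |z| >= 1.96 is significant at roughly the 95% level."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 2.0% vs. 2.2% CVR on 10,000 users per arm: a 10% relative lift...
z = two_proportion_z(200, 10_000, 220, 10_000)
print(round(z, 2))  # ~0.99, well below the 1.96 threshold
```

At that traffic level the lift is indistinguishable from noise, which is exactly why untested ideas shouldn't get full-budget launches.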
The shift looks like this. Old approach: “AI, write me 10 new headlines.” New approach: “AI, design the smartest next experiment within our budget, risk tolerance, and current learning state.”
The reframe from creative generation to experimentation architecture is where real leverage lives.
Here’s a practical seven-step framework to turn testing from a tactical habit into strategic infrastructure.
Before you let any AI near your experiments, lock in constraints. Without them, AI lacks proper context. With them, AI becomes a disciplined strategic partner.
Define and document five hard boundaries.
Document this in a single file (e.g., experimentation-guardrails.md) to teach AI the constraints that make ideas viable. Your AI agent must reference this before proposing any test.
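As one illustrative sketch of such a file (the specific boundaries and values here are my own assumptions, not a prescribed rubric), it might look like:

```markdown
# experimentation-guardrails.md — illustrative sketch; all values are examples

## Hard boundaries (reference before proposing any test)
1. **Budget cap:** no single test may exceed 10% of monthly media spend.
2. **Learning-phase protection:** no edits to ad sets that exited learning
   fewer than 14 days ago.
3. **Overlap:** at most one active test per campaign/audience combination.
4. **Minimum runtime:** 14 days, or until statistical significance is reached.
5. **Kill criteria:** pause any test whose CPA runs 30%+ over target after
   half the test budget is spent.
```

Whatever the exact boundaries, the point is that they live in one file the agent must consult, not in anyone's head.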
Most teams have the data sitting in spreadsheets, but never extract the lessons. Feed your last six months of test results into an AI agent and have it analyze variables changed, duration, performance delta, statistical confidence, and platform resets.
Ask it to find patterns, such as:
This is how AI becomes a true analytical partner.
Rather than jumping straight from idea to launch, use AI to help you enforce hypothesis discipline.
Structured hypotheses create institutional memory. Six months later, when someone suggests testing “speed messaging” again, you’ll know exactly who it worked for and why. Yes, it feels like paperwork, but this discipline can protect your budget from algorithm chaos.
Budget isn’t infinite and neither is algorithm stability. Your AI agent should evaluate each proposed test across five dimensions and assign a risk score.
High risk + low learning = Kill it. Low risk + high insight = Green light.
Example: Testing a radical new enterprise positioning statement is high risk in a paid conversion campaign. Instead, your AI agent might suggest validating it first via organic LinkedIn content or low-budget audience polling. Low risk. High signal.
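The gate described above can be sketched as a tiny scoring function. The dimension names, weights, and thresholds here are assumptions for illustration, not the author's exact rubric:

```python
# Five assumed risk dimensions, each rated 1 (low) to 5 (high).
RISK_DIMENSIONS = ("budget_exposure", "learning_phase_reset",
                   "audience_overlap", "brand_risk", "rollback_difficulty")

def risk_score(ratings):
    """Average the 1-5 ratings across the five risk dimensions."""
    return sum(ratings[d] for d in RISK_DIMENSIONS) / len(RISK_DIMENSIONS)

def verdict(risk, learning_value):
    """High risk + low learning = kill. Low risk + high insight = green light."""
    if risk >= 3.5 and learning_value <= 2.5:
        return "kill"
    if risk <= 2.5 and learning_value >= 3.5:
        return "green-light"
    return "review"

# A cheap, low-risk probe that promises real insight gets the go-ahead.
safe_probe = {d: 2 for d in RISK_DIMENSIONS}
print(verdict(risk_score(safe_probe), learning_value=4))  # green-light
```

Anything that lands in "review" is where human judgment, not the agent, makes the call.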
This is one of the most underused applications of AI in experimentation. Synthetic testing means simulating how different personas may react to messaging before spending media dollars, and the data backs it up.
A study involving researchers from Stanford and Google DeepMind found that digital agents trained on interview data matched human survey responses with 85% accuracy and mimicked social behavior with 98% correlation.
This makes synthetic audiences surprisingly useful for early-stage signal gathering. While they don’t replace real-world data (at least not yet), they can act as creative QA.
Here’s how it works. Define psychographic archetypes.
Feed your proposed messaging into your AI system and ask, “How would the Skeptical CMO react to this?”
You might get feedback like: “The phrase ‘All-in-One’ triggers skepticism. It signals feature bloat. Consider reframing as ‘Integrated’ or ‘Modular.’”
That kind of signal costs pennies in API calls instead of thousands in paid testing.
Changing audience, creative, and landing page in the same week teaches you almost nothing. Your AI agent should act like air traffic control: scan active campaigns, flag conflicts, and recommend sequencing.
A better flow:
If overlap is unavoidable, enforce clean holdout groups so you always have a source of truth.
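A clean holdout stays clean only if assignment is deterministic. One common approach (the function name and 10% threshold are my own choices) is to hash the user ID so the same user always lands in the same group, across sessions and across tests:

```python
import hashlib

def in_holdout(user_id: str, holdout_pct: float = 0.10) -> bool:
    """Deterministically assign ~holdout_pct of users to a holdout group
    that never receives any test variant."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 10_000  # stable bucket in [0, 10000)
    return bucket < holdout_pct * 10_000

# Same user, same answer, every time — no session-level randomness.
print(in_holdout("user-123") == in_holdout("user-123"))  # True
```

Because assignment never drifts, the holdout remains a valid baseline no matter how many overlapping tests run against the rest of the audience.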
Treat tests like disposable experiments and you lose the compounding value. Have your AI auto-summarize every completed test:
Over time, this database becomes your moat. Everyone can buy the same targeting. Few teams have 100+ validated customer truths at their fingertips.
“Always be testing” was a growth-era mindset. In 2026, the winning mindset is “always be compounding intelligence.”
Rather than more tests, build your competitive advantage through structured, risk-aware, insight-driven experimentation that protects algorithm stability and ties experimentation directly to revenue.
The next time your stakeholder asks why you aren’t testing more, show them your experimentation architecture and say, “We’re not just running experiments. We’re building an intelligence engine.”
Because intelligence compounds.
Video advertising has never been easier to distribute. Platforms can deliver impressions and views at an enormous scale across YouTube, paid social, short-form video, and connected TV.
But distribution isn’t the same as effectiveness. Many campaigns generate impressive platform metrics while producing little measurable business impact.
The problem usually isn’t targeting, budget, or platform choice. It’s a deeper strategic issue: campaigns are optimized for outputs like views and impressions rather than outcomes like attention, persuasion, and action.
Compounding that, many video ads are still produced as if they’re television commercials.
In the early days of online video, distribution was the challenge. Getting a video seen at all felt like a win. Today, distribution is abundant. Attention isn’t.
Every major platform — YouTube, paid social, short-form video, connected TV — competes for fragments of cognitive bandwidth. Users arrive with intent, habits, and expectations that have nothing to do with your campaign. We plan for reach, while viewers respond to relevance.
I’ve sat in many meetings where success was defined by impressions delivered or views accrued. But when you look downstream — search lift, site engagement, conversion — the connection often disappears.
Platforms will reliably deliver impressions. Turning those impressions into memory, persuasion, or action requires a fundamentally different mindset.
Dig deeper: From Video Action to Demand Gen: What’s new in YouTube Ads and how to win
Skippable formats changed video advertising permanently, but many advertisers still haven’t adjusted creatively.
Early in my career, I believed strongly in branding up front. Logos, product shots, music cues — everything that signaled professionalism. Those ads looked great in presentations. They underperformed in market.
A clear pattern emerged over time. Ads that opened with a recognizable problem, a provocative statement, or an unexpected visual held attention longer — even when branding appeared later. Ads that opened with branding signals were skipped almost reflexively.
View-through rate isn’t persuasion. A “view” simply means the platform’s minimum threshold was met. It doesn’t mean the message landed, the brand registered, or the viewer cared.
In multiple brand lift analyses, most measurable impact occurred before the skip button appeared. If the opening didn’t earn attention, the rest of the ad didn’t matter.
What works: treat the opening frame like a headline, not a preamble. Lead with tension, a question, or a familiar problem. Design for sound-off environments. If the first frame wouldn’t stop a scroll, nothing that follows will matter.
One of the most counterintuitive lessons in modern video advertising: polished ads frequently underperform scrappier ones.
I’ve seen simple, phone-shot videos outperform meticulously produced studio spots across YouTube, paid social, and short-form platforms. Not because quality doesn’t matter — but because perceived authenticity matters more.
Audiences are exceptionally good at identifying advertising. When something looks like an ad, they disengage. When it looks like content, they give it a chance.
Algorithms reinforce this: they reward watch time, retention, rewatches, and shares. They do not reward lighting setups or production budgets.
I’ve seen brands “upgrade” social video to look more premium, only to watch performance decline. The creative looked better. The results were worse.
The goal isn’t to look amateurish. It’s to look like you belong.
Match the platform’s visual grammar. Prioritize clarity over polish. Use real people and authentic voices whenever possible.
Ads that feel native get watched. Ads that feel inserted get skipped.
Dig deeper: How to get better results from Meta ads with vertical video formats
“Shorter is better” is one of the most persistent — and misleading — rules in video advertising.
Six-second ads can work. So can 60-second ads. I’ve seen both exceed expectations, and I’ve seen both fail badly. The difference was never duration — it was justification.
Some messages can be delivered instantly. Others require context, proof, or emotional buildup. Forcing every idea into the same runtime produces predictable results: safe, bland, forgettable ads.
I’ve reviewed retention graphs where a 45-second ad held viewers longer than a 15-second version, because the story justified its length. I’ve also seen six-second ads lose half their audience in the first two seconds because they wasted the opening.
Test multiple edits, not just multiple lengths. Watch retention curves, not averages. Build modular narratives: hook, then value, then proof, then action.
The “right” length is however long it takes to make the viewer feel their time was respected.
Platforms provide more data than ever. The problem isn’t a lack of metrics. It’s confusing metrics with outcomes.
I’ve seen campaigns praised for high completion rates that produced no measurable business impact. Strong engagement coexisting with low conversion. Impressive view counts that delivered zero lift.
This happens because platforms optimize for their success metrics, not yours. If your goal is to maximize views, the platform can do that easily. If your goal is to influence consideration, preference, or action, things get more complicated.
One uncomfortable question I’ve learned to ask early: what would failure look like here? If the answer is vague, the campaign is already at risk.
Define success in business terms before launch. Tie video metrics to downstream behavior wherever possible. Use lift studies, holdouts, or assisted conversions when they’re available. If you’re running a brand-building campaign, measure brand lift. If you’re running a performance campaign, measure conversions.
Dig deeper: AI for video advertising: 5 best practices for PPC campaigns
Creative is often blamed when video ads underperform. In reality, creative usually does exactly what it was asked to do. The problem is the brief.
Vague objectives produce generic ads. “Brand awareness” without context leads to unfocused messaging. “Make it engaging” isn’t a strategy.
Strong video ads almost always begin with clear answers to three questions:
When those answers are clear, creative decisions become easier. When they aren’t, the work is compromised before production begins.
The deeper diagnostic questions are worth keeping close:
I’ve seen entire campaigns improve simply because the brief forced alignment around audience insight rather than assumptions.
Another common mistake is treating creative and distribution as separate decisions. They aren’t.
The way an ad is consumed — fullscreen versus feed, sound-on versus sound-off, lean-back versus lean-forward — should shape how it’s made.
A video designed for connected TV shouldn’t simply be resized for mobile. A short-form ad shouldn’t be a truncated long-form story without rethinking the hook entirely.
I’ve seen strong ideas underperform because the creative didn’t match the placement. The concept wasn’t wrong. The context was.
Design with placement in mind from the start. Create platform-specific versions, not one-size-fits-all assets.
Accept that “reuse” often means “rethink,” not “repurpose.” Distribution constraints aren’t limitations — they’re creative inputs.
Dig deeper: How to dominate video-driven SERPs
Testing is indispensable. It’s also frequently misunderstood.
Running endless A/B tests without a hypothesis rarely produces insight. It produces noise.
The most effective testing focuses on variables that materially affect attention and comprehension: opening frames, narrative structure, on-screen text versus voiceover, proof points versus emotional appeals.
It’s also important to recognize what testing can’t do. Algorithms are excellent at optimizing toward measurable signals. They don’t understand brand equity, long-term memory, or cumulative effect. Testing should inform judgment — not replace it.
Ultimately, the only thing that matters for creative effectiveness tools is whether their predictions actually correlate to real media and sales outcomes — reliably enough to inform strategy and media decisions.
The question worth asking of any such tool is simple: How often does what it predicts will happen actually happen?
For example, I frequently cite data from DAIVID, an AI-driven creative effectiveness platform. Why? Because in independent testing, DAIVID’s predictions aligned with real-world outcomes more than 80% of the time — a meaningful foundation for making creative decisions with greater confidence before a campaign goes live.
Platforms will change. Formats will evolve. Algorithms will shift in opaque and sometimes frustrating ways. But attention, curiosity, and trust remain stubbornly human.
The best video ads I’ve worked on weren’t optimized for view counts or completion rates. They were optimized for relevance. They respected the viewer’s time. They said something worth hearing.
Video ads don’t succeed because they follow platform rules. They succeed because they understand people. And that principle outlasts every algorithm update.
We are not only seeing RDNA 4 GPUs gradually dropping in price in some regions, but retailers are now also providing hefty discounts on some models. Japanese retailer Ark PC lists the Radeon RX 9060 XT 16 GB for just $379 and the RX 9070 XT for $632 in its spring sale deals. After a continuous price increase over several weeks, the Japanese market saw some relief as demand for the overpriced GPUs dropped. RX 9000 prices had been climbing quickly in the last few months due […]
Read full article at https://wccftech.com/japanese-retailer-ark-pc-launches-spring-special-discounts-for-rx-9000-series/

Google AI Max drives revenue but at a higher cost, according to Smarter Ecommerce’s Mike Ryan, who analyzed 250+ campaigns. Outcomes vary, and much more testing is still needed.
Why we care. AI Max isn’t a minor update. It’s Google’s most significant reimagining of Search campaigns in years, shifting away from keyword syntax toward pure intent matching. For you, that’s both an opportunity (possible growth) and a risk (an efficiency tradeoff).
By the numbers. The results of the analysis:

Advertisers who activate AI Max typically see 14% more conversions or conversion value at a similar CPA or ROAS, rising to 27% for campaigns still relying on exact and phrase match keywords, Google says.
Turning on AI Max is essentially a coin toss: you may see a lift, but efficiency likely won’t follow, Ryan concluded.
What AI Max actually is. Rather than forcing Search campaigns into Performance Max, Google went the other direction — bringing PMax-style automation into classic Search. The result is three core features:

Four pitfalls Smarter Ecommerce identified:
Between the lines. Google’s 14% uplift stat conspicuously excludes retail — an omission Ryan flags as significant for ecommerce advertisers. There’s also a deeper irony: you’re most likely to adopt AI Max if you’re already running Broad Match, DSA, and PMax — yet Google says those accounts will see the lowest incremental benefit.
What’s next. In a conversation with Ryan, Google Ads Liaison Ginny Marvin confirmed that Google plans to deprecate Dynamic Search Ads and migrate the technology into AI Max for Search. No firm timeline was given, though past Google deprecations often run about a year from announcement.

Ryan recommends activating AI Max’s keywordless features in your existing Search campaigns now and beginning to wind down DSA — not migrating it to PMax.
Ryan’s verdict is cautious optimism. About 16% of advertisers are testing AI Max, and few have gone all in. Start small, audit aggressively, and don’t let FOMO around AI Overviews drive your decision.

The report. The Ultimate Guide to AI Max for Google Search
New SMEC study analyzes AI Max in Google Ads Search campaigns, showing a 13% conversion value lift but higher CPA and unpredictable ROAS results.
The post What SMEC’s Data Reveals About AI Max Performance appeared first on Search Engine Journal.
Sony reportedly halts PC porting efforts for single-player PlayStation 5 games Recent years have seen Sony bring more and more of its classic PlayStation titles to PC, to the point that it created its PlayStation PC publishing unit in 2021 to support these efforts. Now, it looks like Sony has fallen out of love with […]
The post Sony shifts PC strategy towards console exclusivity appeared first on OC3D.
Rambus has announced the development of its fastest HBM controller yet, based on the HBM4E standard, offering up to 16 Gbps transfer speeds per pin. Ready for next-gen AI data center superchips, the new HBM4E memory controller delivers a 60% boost over the company's HBM4 controller, with up to 16 Gbps pin speeds (vs 10 Gbps on HBM4) and up to 4.1 TB/s of total bandwidth per module (vs 2.56 TB/s on HBM4). The HBM4E standard will be utilized by NVIDIA's Rubin Ultra GPUs and AMD's MI500 series accelerators. Press […]
Read full article at https://wccftech.com/rambus-hbm4e-memory-controller-60-percent-faster-vs-hbm4-at-4-1-tbps/

Google is investigating a disruption affecting Google Ad Manager, according to an update posted on the Google Ads Status Dashboard.
The incident began at 13:49 UTC on March 4. By 13:54 UTC, Google said it was reviewing reports that some users could access Ad Manager but weren’t seeing the most up-to-date data.
What’s happening. The issue appears to impact reporting consistency. Specifically, Ad Exchange match rate and Ad Exchange request values are not aligning between Ad Manager’s interactive reports and the legacy reporting query tool (now deprecated).
Why we care. Reporting discrepancies in Google Ad Manager can directly impact how you evaluate performance and optimize campaigns. If Ad Exchange match rates and request data don’t align across reporting tools, it becomes harder to trust the numbers driving pacing, forecasting and revenue decisions.
What it means. Users can still log into Ad Manager, but reporting discrepancies may affect data accuracy — at least temporarily. There’s no indication yet of a full outage, but for publishers and advertisers relying on real-time reporting, mismatched metrics could complicate performance monitoring and optimization decisions.
What’s next. Google says it’s actively investigating and will provide further updates. In the meantime, affected users are advised to monitor the status dashboard and contact support if they’re experiencing issues not listed there.
Google introduced a new availability value in Google Merchant Center — built specifically for vehicle sellers who don’t carry every model on the lot. The new attribute, “build to order,” lets dealers flag vehicles that aren’t physically in inventory but can be customized and ordered by customers.
What needs to change. Sellers must update two areas: their structured data (set availability to BuildToOrder) and their Merchant Center feed (set availability to build to order). Consistency between structured data and feed submissions is critical to avoid disapprovals.
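As a sketch, the structured data side might look like the JSON-LD below. The schema.org-style URL for the new availability value is my assumption, and the vehicle details are placeholders; the exact property values should be taken from Google's help doc:

```json
{
  "@context": "https://schema.org/",
  "@type": "Car",
  "name": "Placeholder Model, Custom Build",
  "offers": {
    "@type": "Offer",
    "price": "54990",
    "priceCurrency": "USD",
    "availability": "https://schema.org/BuildToOrder",
    "itemCondition": "https://schema.org/NewCondition"
  }
}
```

The feed side would then carry the matching `build to order` availability value so the two sources agree.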

[availability] attribute in GMC

Why we care. Until now, sellers had limited ways to signal that a vehicle wasn’t available for immediate pickup. The new value better reflects how many modern automakers operate, especially direct-to-consumer brands like Tesla and Rivian, where buyers configure features before production. For dealers offering factory orders or custom builds, this means clearer expectations for shoppers and cleaner data for Google.
The fine print. Vehicles marked “build to order” must have the condition attribute set to “new.” If a listing is marked “used,” it will be disapproved, as Google considers build-to-order vehicles to be newly configured, not pre-owned.
Bottom line. If you sell customizable or factory-order vehicles, this update gives you a more accurate way to reflect availability, but only if your feed, structured data, and condition fields are properly aligned.
First spotted. This update was flagged by Google Shopping specialist Emmanuel Flossie, who explained how to implement it on his blog.
Dig deeper. “Availability [availability]” Google Merchant Center help doc
PPC platforms are asset-hungry. What began as simple text ads and keyword bidding has evolved into an AI-driven ecosystem.
Tools inside Google Ads can now remove backgrounds, generate lifestyle scenes, and even create synthetic humans in minutes. But just because the technology allows it doesn’t mean every brand should use it.
That shift forces PPC advertisers to confront difficult questions:
A brand integrity hierarchy offers a way to navigate those decisions — a four-level framework that helps determine how much AI manipulation your brand, industry, and audience can tolerate.
Generic AI ethics guidelines don’t account for the operational realities of paid search. PPC isn’t a brand storytelling channel. It’s a high-volume, high-velocity system that demands constant image production across dozens of audiences, formats, and placements.
You must generate fresh lifestyle imagery at a pace traditional creative workflows can’t sustain.
At the same time, Google and Bing enforce strict policies around accurate product representation, especially in Merchant Center, where even minor visual inaccuracies can trigger disapprovals or account risk.
Layer on top of that the platform pressure. Google Ads added Nano Banana Pro, turning Asset Studio into an AI co-creation environment. Performance Max actively pushes you toward AI-generated backgrounds, variations, and lifestyle images to improve performance. Demand Gen and Merchant Center also now have capabilities to change product images at scale.
Most brands can’t afford the photoshoots required to keep up with this demand, yet the volume and placement of images across channels make them unavoidable if you want to compete.
This combination of policy risk, creative pressure, and platform-promoted tools is unique to PPC — which is exactly why the industry needs its own AI ethics framework.
Dig deeper: What’s next for PPC: AI, visual creative and new ad surfaces
Definition: The product and the human exactly as they exist in reality.
Permitted activities:
PPC context: This level is fully compliant with Google and Microsoft’s “accurate representation” policies. Merchant Center explicitly permits technical edits that don’t alter the product itself. This is the safest zone for regulated industries such as finance, healthcare, legal services, and brands with strict authenticity standards.
Client talk-track: “We’re using AI to make your reality look its best on every screen size. We aren’t changing what the product is, only how it’s displayed.”
Risk assessment: Zero brand risk. Zero policy risk. Maximum consumer trust.
I think about Level 1 the same way I think about working with a graphic designer in Photoshop. You’re not changing the product, the setting, or the truth — you’re simply cleaning up what already exists.
This level is about technical refinement, not creative invention. It’s the equivalent of adjusting lighting, removing dust, fixing a crooked crop, or correcting color balance. Nothing about the image becomes “untrue.” You’re enhancing reality, not altering it.
Definition: AI-generated environment, not AI-generated product.
Permitted activities:
Google Ads context: Performance Max’s AI background generation is designed for this level. Google allows contextual enhancement as long as the product remains unchanged. This approach is useful for scaling creative variations without expensive location shoots or studio rentals.
Risks:
Client talk-track: “We’re using AI to build a world for your product to live in. The product the customer receives is identical to the one in the ad.”
Risk assessment: Low brand risk. Low policy risk. Maintains consumer trust if executed thoughtfully.
Level 2 sits in an odd psychological space. The manipulations themselves are still low-risk. You’re creating scenes, composites, or enhanced environments the same way a graphic designer would in Photoshop.
Brands have been doing this manually for decades. But the moment AI performs the same task, something shifts. To customers, and even to some advertisers, the exact same edit can feel more artificial simply because an algorithm did it instead of a human.
That perception gap matters.
Even when the output is identical, AI-assisted scene creation can trigger a sense of “this looks fake” that traditional Photoshop work never did. It’s irrational, but it’s real and worth acknowledging at this second tier. The actual risk is still low, but the emotional risk is higher than Level 1.
Dig deeper: AI tools for PPC, AI search, and social campaigns: What’s worth using now
Definition: Altering the “hero” — the product or the person.
Activities:
PPC industry context: The platforms prohibit misleading or manipulated product imagery. Merchant Center disapprovals often occur at this level. High sensitivity exists in beauty, apparel, food, and health categories, where consumer expectations are tied directly to visual accuracy.
Recent consumer trust studies show that users feel deceived when they discover product images have been significantly altered. This is not just a policy concern; it is a brand reputation issue.
Half of U.S. adults (51%) believe AI-generated and edited content needs better labeling, CNET reports. One in five (21%) believe AI content should be prohibited on social media with no exceptions.
Risks:
Client talk-track: “This is where we risk the ‘press call-out.’ If we remove a model’s birthmark or make a burger look like a 3D render, we aren’t optimizing — we’re fabricating.”
Risk assessment: High brand risk. High policy risk. Potential for long-term damage to consumer trust.
Level 3 moves into territory where the image no longer reflects the real person or product. And yes, brands have been doing this in Photoshop for years, and they’ve been called out for it just as long. There’s precedent, and there’s backlash.
What changes at Level 3 is scale. AI lets you make edits instantly, repeatedly, and across entire product catalogs or campaigns. The ethical risk isn’t new, but the volume and speed at which AI enables these distortions make the consequences far bigger. A single questionable Photoshop edit is one thing. Hundreds of AI-altered images pushed across every channel is something else entirely.
This is where the risk stops being theoretical and starts becoming reputational — and where paid search teams need a clarified stance.
Definition: Synthetic humans, synthetic products, or fully AI-generated scenes.
Activities:
PPC context: Synthetic humans are allowed in some formats with proper disclosure, but Merchant Center prohibits listing products that don’t exist. There is a high risk of disapproval for “inaccurate representation.” This level may be acceptable for creative testing or conceptual campaigns, but it’s dangerous as a primary brand identity.
Legal precedents regarding copyright protection for non-human-authored creative works remain murky. Using fully synthetic assets may cause challenges if ownership disputes arise or if synthetic models are mistaken for real individuals without proper disclosure.
Risks:
Client talk-track: “This is for high-speed testing or fringe creative. If we use this for our main brand identity, we must be prepared for the ‘inauthentic’ label.”
Risk assessment: Critical brand risk. Critical policy risk. Use with extreme caution and full disclosure.
Level 4 is where AI stops enhancing reality and starts inventing it. The image becomes a construction. While I haven’t personally worked with brands operating at this tier, it’s absolutely where the industry could be headed, and it deserves serious consideration.
Fully fabricated imagery can mislead customers, violate platform policies, and erode trust at scale. When AI creates people, products, or environments from scratch, the line between creative expression and consumer deception becomes razor-thin. The reputational fallout from getting this wrong is far greater than anything in Levels 1 through 3.
This is the highest-risk tier because it asks a fundamental question: Are you still advertising your product or an AI-generated fiction of it?
Not every brand should operate at the same level of the brand integrity scale. Your acceptable AI usage depends on four factors.
Every brand must choose its acceptable level(s) on the scale and document it in a brand AI manifesto for PPC.
Examples:
Action: Create a PPC brand AI manifesto in collaboration with creative, legal, and executive leadership.
Two critical questions should guide every AI decision:
The press test is the real guardrail. Google’s policies change. Public perception is permanent.
Every AI-assisted asset must be checked for:
Automated AI generation should never bypass human review, especially in regulated verticals.
Different audiences have different tolerances for AI manipulation:
Dig deeper: Why creative, not bidding, is limiting PPC performance
Implement a pre-flight checklist for AI-generated assets:
Safe placements for AI-generated assets
Unsafe placements
Legal teams should:
Industry standards and emerging frameworks, such as the Coalition for Content Provenance and Authenticity (C2PA), are establishing transparency protocols for AI-generated media. Monitor these developments and align your practices accordingly.
Some PPC professionals are already experimenting with the tools discussed in this framework.
Ameet Khabra, owner of Hop Skip Media, tested Nano Banana when it first appeared inside the Google Ads interface. She found the tool useful for ideation and quick edits, but noted that strong results often required highly specific prompts.
That level of prompt detail may be realistic for experienced advertisers, but it’s less likely for many SMBs experimenting with AI-generated assets.
Even when AI imagery is available, some advertisers remain skeptical of how it appears to audiences.
Julie Friedman Bacchini, owner of Neptune Moon, says AI-generated images often look noticeably artificial.
To understand how people outside the industry view these changes, I also polled the community on Threads.

The sentiment was strikingly consistent: while the industry focuses on efficiency, the public is increasingly wary of fantasy versus reality.
One commenter wrote:
Another described the issue more bluntly:
AI isn’t inherently deceptive. Nor is it inherently transparent. It’s a tool. Like all tools, its ethical impact depends on how it’s used. As PPC experts with access to these technologies and advisory roles with brands, we need a clear point of view to guide these decisions.
The brand integrity scale outlined above provides a structured approach to AI use in PPC, helping you navigate the tension between automation and authenticity. By defining your brand’s position on this spectrum today, you ensure tomorrow’s campaigns are remembered for their resonance.
Adopt ethical AI standards — define your brand AI manifesto, implement the press test, and ensure every AI-generated asset passes human review before it reaches your audience. Your brand’s integrity depends on it.
In a message sent to API developers, Google is communicating that starting April 1, Customer Match uploads through the Google Ads API will stop working for certain users.
Specifically, developers who haven’t uploaded Customer Match data in the past 180 days using their developer token will no longer be able to do so via the Ads API.
What’s changing. If you fall into that inactive bucket, any attempt to upload Customer Match lists through the Google Ads API after April 1 will fail. Instead, Google wants you to move those workflows to the Data Manager API. The change applies only to Customer Match uploads — all other campaign management and reporting tasks should continue as normal in the Google Ads API.

Why Google says it’s doing this. Google positions the Data Manager API as a more modern, unified data ingestion solution across its platforms, with stronger security protocols. It also includes features not available in the Ads API, such as confidential matching and enhanced encryption — signaling a push to centralize and better secure audience data handling.
Why we care. If you or your developers haven’t touched Customer Match uploads in the last six months, this could catch you off guard. After April 1, 2026, the old workflow simply won’t work — and errors will replace uploads.
The takeaway. Check whether your developer token has been used for Customer Match recently and plan a migration to the Data Manager API now, before Google flips the switch.
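If you track upload activity internally, the inactivity test is straightforward date math. A minimal sketch, assuming you can supply the date of your token's last Customer Match upload from your own logs (this is not a value I can confirm the Ads API exposes directly):

```python
from datetime import date, timedelta
from typing import Optional

INACTIVITY_WINDOW = timedelta(days=180)
CUTOVER = date(2026, 4, 1)  # deadline from Google's notice

def must_migrate(last_upload: Optional[date], today: date = CUTOVER) -> bool:
    """True if the developer token falls into the inactive bucket:
    no Customer Match upload via the Ads API in the past 180 days."""
    if last_upload is None:
        return True
    return (today - last_upload) > INACTIVITY_WINDOW

print(must_migrate(date(2025, 6, 1)))   # True: last upload ~10 months before the cutover
print(must_migrate(date(2026, 2, 1)))   # False: used within the 180-day window
```

Tokens that come back `True` are the ones whose Customer Match workflows need moving to the Data Manager API before the switch flips.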
First spotted. This announcement was surfaced by Paid Search specialist Arpan Banerjee, who posted the message he received from Google on LinkedIn.
Google has long been considered the gold standard for ad spend safety compared to social platforms. But scale doesn’t equal immunity. Click fraud remains a persistent risk, and the safety of your budget depends entirely on where your ads are running.
While Google Ads offers immense reach, its campaigns aren’t created equal. Some are significantly more exposed to malicious activity than others. To protect your margins, you must understand what constitutes click fraud, where it originates, and how to shield your campaigns.
Invalid clicks are interactions that lack legitimate consumer intent. Because they aren’t driven by real human interest, they skew performance data and drain budgets without any possibility of conversion. These clicks generally originate from four primary sources:
Dig deeper: Own your branded search: Building a competitive PPC defense
The average invalid click rate across Google Ads is 11.4%, a recent study by Fraud Blocker found. The figure is climbing.
That upward trend becomes clearer over time. In 2010, the average invalid click rate sat at 5.9%. By 2024, that number jumped to 12.3%. This doubling of fraud is likely driven by the increased sophistication of AI-powered bots and malware that can more effectively bypass basic security filters.

Invalid click rates fluctuate based on your campaign setup. Three key factors typically drive these numbers:
Not all Google Ads inventory carries the same level of risk. Here’s how campaign types stack up from highest to lowest exposure.
Across the diverse range of industries my clients serve, I’ve identified specific patterns in how fraud manifests across different sectors. As a result, the best prescription is proactive. Address these vulnerabilities by shifting from broad, automated settings to a more refined, high-intent strategy.
The following table highlights the specific patterns we monitor to lower invalid click rates:
| Factor | Higher risk (Aggressive) | Lower risk (Strict) |
| --- | --- | --- |
| Location | Global or “Presence or Interest” | “Presence Only” (User is physically there) |
| Keywords | Broad match / Generic terms | Exact match / Long-tail phrases |
| Networks | Including “Search Partners” and “Display” | Google Search Network only |
| Exclusions | No negative keywords or placement lists | Robust negative lists and app exclusions |
| Scheduling | 24/7 (Bots often spike at night) | Custom schedules aligned with business hours |
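One way to operationalize the table is a quick self-audit of a campaign's settings. This is a rough Python sketch with illustrative setting names and equal weights, not a published scoring model:

```python
# Settings from the higher-risk column of the table above.
# Names and weights are illustrative, not an industry standard.
RISKY_SETTINGS = {
    "location": "presence_or_interest",     # vs. "presence_only"
    "keywords": "broad_match",              # vs. "exact_match"
    "networks": "search_partners_display",  # vs. "search_only"
    "exclusions": "none",                   # vs. "robust_negative_lists"
    "scheduling": "24_7",                   # vs. "business_hours"
}

def risk_score(campaign: dict) -> int:
    """Count how many settings sit in the higher-risk column."""
    return sum(1 for factor, risky in RISKY_SETTINGS.items()
               if campaign.get(factor) == risky)

campaign = {
    "location": "presence_only",
    "keywords": "broad_match",
    "networks": "search_partners_display",
    "exclusions": "none",
    "scheduling": "business_hours",
}
print(f"{risk_score(campaign)}/5 settings in the higher-risk column")  # 3/5
```

A campaign scoring high on a check like this is a candidate for the tightening steps described below.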
Here are proactive steps you can take to reduce your exposure to fraud.
Dig deeper: PPC in the age of zero-click search: How to stay profitable
Google is far from a uniform entity. It’s a diverse ecosystem of distinct environments where risk levels can vary by as much as 400%.
Prioritizing high-quality traffic results in superior data integrity, more precise optimization, and reduced acquisition costs. In today’s market, the strategic structure of your campaigns is just as vital to your success as the size of your budget.

One of the most profitable Google Ads targeting tactics is retargeting: showing ads to people who are already familiar with your business. But if you still think that “retargeting” means a Display campaign chasing users around the web with banner ads, you’re missing out on how “Your data segments” actually function today.
Let’s explore how you can leverage your proprietary audience data in new ways, and what mistakes to avoid in 2026 and beyond.
Retargeting means showing ads to people who are already familiar with your business. Google uses the euphemistic name “Your data segments” to refer to all the retargeting lists in your account.
A variety of different retargeting methods are available in Google Ads. They mirror what you’ll find on other ad platforms like Meta, LinkedIn, or TikTok. I find it helpful to group them into four categories:
Many practitioners overlook this detail: your data segments aren’t just about ad targeting.
Even if you don’t have a single retargeting campaign running, the mere existence of these lists in your account provides a vital signal for Smart Bidding and Optimized Targeting.
For example, when you upload a customer list, you’re telling Google, “These are the people who actually buy from me.” Even if you never add that list to your audience signal in Performance Max, Google will still use it to understand likely converters and adjust bidding/targeting accordingly.
Similarly, let’s say you only run Search and Shopping campaigns, and you use Target ROAS bidding. When Google is trying to set the right bid for the right user at the right time, their presence (or lack thereof) on a “your data segment” list is one of many signals incorporated into that bidding calculation.
Different campaign types handle audience data differently. It’s important to know the distinction so you can plan your targeting strategy accordingly.
If you’re new to retargeting, I find Demand Gen the best place to start. It’s built for visual storytelling and works well with the Google Engaged Audience or basic website visitor lists.
If you have some experience with retargeting campaigns, you might want to try New Customer Acquisition or Customer Retention mode in PMax or Shopping, as these are powered by Your data segments.
Over-segmenting. I know it can be tempting to create 50 different lists: “People who visited the cart on a Tuesday,” or “People who looked at three pages but didn’t click the ‘About’ section.”
Unless you’re spending six figures or more every month, this level of granularity doesn’t help, and may actually hurt your campaigns. Google’s AI needs data density to learn. When you slice your audience into tiny slivers, you don’t have enough “matched records” for the system to optimize.
Upload your unique data to Google Ads, keep your strategy simple, and let the bidding algorithms do the heavy lifting in driving returning customers for you.
This article is part of our ongoing Search Engine Land series, Everything you need to know about Google Ads in less than 3 minutes. In each edition, Jyll highlights a different Google Ads feature, and what you need to know to get the best results from it – all in a quick 3-minute read.

Google will begin enforcing a minimum daily budget for Demand Gen campaigns starting April 1, 2026.
What’s happening: The Google Ads API will require a minimum daily budget of $5 USD (or local equivalent) for all Demand Gen campaigns. The change is designed to help campaigns move through the “cold start” phase with enough spend for Google’s models to learn and optimize effectively. The update will roll out as an unversioned API change, applying across all buying paths.
Technical details:
- `BUDGET_BELOW_DAILY_MINIMUM` error, with additional details available in the error metadata.
- `UNKNOWN` error, with the specific validation failure referenced in the unpublished error code field.
- The rule applies when modifying budgets, start dates, or end dates in ways that push daily spend below the $5 floor — covering both daily and flighted budgets.
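For teams that manage budgets through the API, a local pre-flight check can catch violations before a request is sent. This Python sketch is illustrative: the micros unit matches the Google Ads API convention ($1 = 1,000,000 micros), but the exception class and helper function are hypothetical stand-ins, not part of the client library.

```python
MIN_DAILY_MICROS = 5_000_000  # $5 in the micros unit the Google Ads API uses

class BudgetBelowDailyMinimum(Exception):
    """Local stand-in for the API's BUDGET_BELOW_DAILY_MINIMUM error."""

def validate_demand_gen_budget(amount_micros: int, flight_days: int = 1) -> None:
    """Raise locally before sending a mutate request the API would reject.

    For flighted budgets the total is spread over the flight, so the
    effective daily spend must still clear the $5 floor.
    """
    daily = amount_micros / flight_days
    if daily < MIN_DAILY_MICROS:
        raise BudgetBelowDailyMinimum(
            f"effective daily budget {daily / 1e6:.2f} USD is below the $5 minimum"
        )

validate_demand_gen_budget(10_000_000)                  # $10/day: passes
validate_demand_gen_budget(70_000_000, flight_days=14)  # $5/day: passes exactly
```

Running a check like this before every budget or scheduling mutation keeps the new validation error out of production logs.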
Impact on existing campaigns. Current Demand Gen campaigns running below the minimum will continue serving. However, any future edits to budgets or scheduling will require compliance with the new floor.
Why we care. For advertisers and developers, this adds a new compliance layer to campaign management workflows. Systems will need updating to catch and handle the new validation errors before deployment.
The bottom line. Google is standardizing a minimum investment threshold for Demand Gen — prioritizing performance stability, while requiring advertisers to adjust budgets and automation accordingly.
Every year, Google suspends tens of millions of Google Ads accounts for advertising policy violations. One specific policy area that confuses many legitimate advertisers is Google’s “three-strikes” system.
Essentially, if Google decides your account has repeatedly violated any of 15 specific Google advertising policies, you’re at risk for temporary (and potentially permanent) suspension of your Google Ads account.
To help you prevent a single policy issue from snowballing into a full account suspension, here’s how Google’s three-strike system works and what you should do at every stage to keep your ads running.
Over the past 10+ years, I’ve helped thousands of advertisers identify and resolve Google’s policy concerns so that their businesses can resume running ads. One such situation involved helping a business that sells ceremonial swords for military dress uniforms.
Google’s Other Weapons policy prohibits advertising swords intended for combat. However, that same policy permits the advertising of non-sharpened, ceremonial swords, which is what this business sells. Even though this business was properly advertising its products within Google’s ad policy parameters, Google issued them a warning for violating the Other Weapons policy.

After the warning, we documented for Google that the business wasn’t violating Google’s policy. We also added specific disclaimers to the business’s sword product pages, noting that the swords were only ceremonial. Frustratingly, Google decided to issue a first strike to the business anyway.
We appealed the strike because the business wasn’t violating Google’s policy. But Google quickly denied that appeal. We tried appealing again, and Google denied the second appeal. The ad account remained on hold with no ads serving, and the business was losing revenue.
Ultimately, we had to “acknowledge” the strike to Google (I’ll explain what that means later) so that the ads would resume serving. We then worked with Google to craft more precise disclaimer language, stating that the swords for sale were ceremonial blades and not sharpened for use as weapons. This disclaimer was added to the business’s website footer so that both Google’s robots and human reviewers could see it on every single page (regardless of whether swords were for sale on a particular page).
Because of all these changes, Google’s concerns were satisfied and the business has never received any subsequent warnings or strikes. The end result was a success, even though technically there should never have been a warning or strike issued because an actual policy violation never occurred.
Key takeaway: Google will sometimes incorrectly issue warnings and strikes, and even reject appeals, and will often require excessive website disclaimers to convince them that all is well.
Understanding Google’s strikes system can save your ads account from suspension. The search giant adheres to a system that begins with an initial warning and is followed by a “three strikes and you’re out” protocol.
Before issuing your ad account an initial strike, Google will first send you a warning notification.

This warning informs you that there’s a problem and allows you to address and resolve Google’s concern before your account is penalized with an official strike.
Treat warnings seriously — ignoring them likely ensures your account will begin receiving strikes.
If Google decides that the same policy violation still exists after a warning was issued, your ad account will receive its first official strike.

Acknowledge the strike
This is your fastest path back to serving ads. But Google counts strikes as cumulative over a 90-day period.
If you acknowledge the strike rather than successfully appeal it, you’ve started the clock on the possibility of three strikes and a permanent suspension. Deciding which approach is best is a case-by-case determination.
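The 90-day accumulation logic can be sketched as simple date math. This is an illustrative Python model of the window as described above, not an official Google calculation, and the case studies later in this piece show Google's own application of it can vary.

```python
from datetime import date, timedelta

STRIKE_WINDOW = timedelta(days=90)

def strikes_in_window(prior_strikes: list, new_strike: date) -> int:
    """Count how many strikes are 'live' when a new strike lands,
    assuming each prior strike counts for 90 days from issuance."""
    live_prior = sum(1 for d in prior_strikes
                     if new_strike - d <= STRIKE_WINDOW)
    return live_prior + 1  # include the new strike itself

# First strike June 25, second strike July 26: both inside one window.
print(strikes_in_window([date(2025, 6, 25)], date(2025, 7, 26)))   # 2
# A strike on October 16 lands 113 days after June 25, outside the window,
# so on paper it should count as a fresh first strike.
print(strikes_in_window([date(2025, 6, 25)], date(2025, 10, 16)))  # 1
```

Tracking your own dates this way makes it easier to spot when Google's counting diverges from its stated policy.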
To acknowledge the strike, you must:

After you acknowledge the strike and the three-day hold ends, your ads will resume serving.
Appeal the strike
Submit this appeal form and explain why your ads aren’t violating Google’s policy. Keep in mind:
Appealing is often justified, but it costs time and success isn’t guaranteed (even if you’re in the right, as the earlier case study shows).
If Google decides there’s been another policy violation within 90 days of resolving your first strike, or if your original violation was unresolved during those 90 days, your account will receive a second strike.

If Google decides there’s been another policy violation within 90 days of resolving your second strike, or if your previous violation was unresolved during those 90 days, your account will receive a third strike.

Successfully appealing a suspension is definitely possible. But the process is often a nightmare, and the results are never guaranteed.
Important: Once suspended, you’re unable to make any changes to your ad account.
Dig deeper: Dealing with Google Ads frustrations: Poor support, suspensions, rising costs
Google is sometimes inconsistent at following their own rules. Here are two examples I’ve seen first-hand.
I have a client who acknowledged a first strike on June 25. They received a second strike on July 26, which they successfully appealed. You would think that should reset the 90-day counter back to June 25.
However, Google gave them another second strike on October 16, far beyond 90 days from the date of the first strike, but within 90 days from the date of the “first” second strike, which they successfully appealed.

I have a client who received a warning on August 7, followed by a first strike on September 7. They acknowledged the first strike, and that strike expired on December 6, 90 days after it was issued.
However, the account immediately reentered “warning” status, with a new 90-day clock starting from when the first strike expired. There was no new email notification about this warning, and the warning didn’t appear on the Strike history tab.





Important: If you violate one of Google’s many other policies not listed above, you could find your ad account suspended immediately, with no warning or three-strikes system.
Dig deeper: Google Ads boosts accuracy in advertiser account suspensions
Follow these best practices and tips to minimize the chances of receiving a Google Ads strike:
Google understandably cares deeply about its reputation and the safety of its users. That’s why Google’s policy team often strictly enforces its advertising policies, and why they’re sometimes over-aggressive when interpreting and applying their own policy language.
To keep our Google Ads accounts in good health and our ads running, the best thing we can do as advertisers is to deeply understand Google’s advertising policies and requirements.
Always be ready to jump through hoops to explain your unique situations, and over-comply with Google’s edicts whenever feasible.
Here’s hoping you never see a third strike!

Meta is updating its ad measurement framework, aiming to simplify attribution in what it calls a “social-first” advertising world.
What’s happening. Meta is narrowing its definition of click-through attribution for website and in-store conversions. Going forward, only link clicks — not likes, shares, saves or other interactions — will count toward click-through attribution. The change is designed to reduce discrepancies between Meta Ads Manager and third-party tools like Google Analytics.

Between the lines. Social media has overtaken search as the world’s largest ad channel, according to WARC, but many attribution systems were built for search-era behaviors. On social platforms, engagement extends beyond link clicks. Historically, Meta counted all click types toward click-through conversions, while many third-party tools only counted link clicks — creating reporting misalignment.
What’s changing. Conversions previously attributed to non-link interactions will now fall under a renamed “engage-through attribution” (formerly engaged-view attribution). Meta is also shortening the video engaged-view window from 10 seconds to 5 seconds, reflecting faster conversion behavior — particularly on Reels. The company says 46% of Reels purchase conversions happen within the first two seconds of attention.
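For teams reconciling their own logs against Ads Manager, the narrowed definitions can be modeled as a simple classifier. The interaction names and thresholds below are illustrative, based only on the behavior described above:

```python
ENGAGED_VIEW_SECONDS = 5  # shortened from 10 under the new rules

def attribution_bucket(interaction: str, video_watch_seconds: float = 0) -> str:
    """Classify an ad interaction under the narrowed definitions:
    only link clicks remain click-through; likes, shares, saves, and
    qualifying video views fall under engage-through attribution."""
    if interaction == "link_click":
        return "click-through"
    if interaction == "video_view" and video_watch_seconds >= ENGAGED_VIEW_SECONDS:
        return "engage-through"
    if interaction in {"like", "share", "save"}:
        return "engage-through"
    return "unattributed"

print(attribution_bucket("link_click"))      # click-through
print(attribution_bucket("save"))            # engage-through
print(attribution_bucket("video_view", 7))   # engage-through
print(attribution_bucket("video_view", 2))   # unattributed
```

Splitting conversions this way should bring internal numbers closer to what link-click-only tools like Google Analytics report.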
Why we care. This update makes it easier to see which actions actually drive conversions, reducing confusion between Meta reporting and third-party analytics like Google Analytics. By separating link clicks from other social interactions, marketers get a clearer view of campaign performance, while the new engage-through attribution captures the value of likes, shares, and saves.
This gives advertisers more confidence in their data and helps them make smarter, more impactful decisions.
Third-party tie-ins. Meta is partnering with analytics providers like Northbeam and Triple Whale to incorporate both clicks and views into attribution models, aiming to give advertisers a more complete performance picture.
The rollout. Changes will begin later this month for campaigns optimizing toward website or in-store conversions. Billing will not change, but reporting inside Ads Manager may shift as attribution definitions update.
The bottom line. Meta is attempting to balance clearer, search-aligned click reporting with better visibility into uniquely social interactions — giving advertisers cleaner comparisons across platforms while still capturing the incremental impact of engagement-driven conversions.
Dig deeper. Simplifying Ad Measurement for a Social-First World
What do conversion rate optimization (CRO) and findability look like for an AI agent versus a human, and how different do your strategies really need to be?
More and more marketers are embracing the agentic web, and discovery increasingly happens through AI-powered experiences. That raises a fair question: what does CRO and findability look like for an AI agent compared with a human?
Several considerations matter, but the core takeaway is clear: serving people supports AI findability. AI systems are designed to surface useful, grounded information for people. Technical mechanics still matter, but you don’t need entirely different strategies to be findable or to improve CRO for AI versus humans.
If a consumer does business directly through an agent or an AI assistant, your business needs to make the right information available in a way that can be understood and used. Your products or services need to be represented through clean, well-structured data, with information formatted in ways that downstream systems can process reliably.
As more people explore doing business with AI assistants, part of the work involves making sure your products and services can connect cleanly. Standards, such as Model Context Protocol (MCP), can help by enabling agents to interact with shared sources of information.
In many cases, a human may still decide to engage directly on a brand’s site. In that context, content and formatting choices matter. Whether you focus on paid media or organic, ensuring your humans can take desired actions — and will want to — is important.
Dig deeper: Are we ready for the agentic web?
The SEO toolkit you know, plus the AI visibility data you need.
Old-school SEO encouraged the idea that more keywords and larger walls of text would perform better. That approach no longer holds.

Both humans and AI systems tend to work better with clearly structured, modular content. Large blocks of uninterrupted text can be harder for people to scan and understand. Clear sections, spacing, layout, and visual hierarchy help users quickly understand what they can do and how to accomplish the goal that brought them to the page.
There’s no fixed minimum or maximum amount of text that works best. You should use the amount of content needed to clearly explain what you offer, why it’s useful, and what sets it apart.

A technical topic will need more text, broken into smaller paragraphs, paired with clear calls to action throughout.
Visual components can be helpful when paired with useful alt text. Lead gen forms should be easy for humans to complete and regularly audited for spam or friction. Content that’s hard for people to use is also harder for automated systems to interpret as helpful or relevant.
Dig deeper: Lead gen PPC: How to optimize for conversions and drive results
One of the best ways to communicate clearly to systems is to communicate clearly to people. Lean into what makes you an expert, but avoid unnecessary jargon or overly complex language. Descriptions should stay specific, accurate, and on-brand.
A simple gut check: if a 10-year-old couldn’t broadly understand what you do, why it matters, and how to engage with you, you’re probably making things harder than necessary. Even though AI systems are sophisticated, clarity still matters because the goal is ultimately to support a human outcome.
If you’re unsure, try putting your positioning copy into an AI assistant and asking it to critique its clarity. Ask for simplification and clearer explanations, not for new claims or embellishment.
Visual components matter here as well. Comparison tables can help when they genuinely support understanding, but they can hurt when they’re used as a gimmick rather than a guide. Accessibility principles matter, too. Color contrast, readable font sizes, and restrained font choices reduce the risk that someone can’t process your site.

Images should be easy to understand and clearly connected to the surrounding text. Alt text helps people using assistive technologies and reinforces the relationship between visuals and written content.
A user comes to your site to do something. They might want to buy, request a quote, or speak with your team. That action should be clear.
When the intended action is unclear, it becomes harder for both people and automated systems to understand what your site enables.

Shopping experiences tend to surface in conversations with shopping intent because assistants are trying to complete the task they were given. If it’s unclear how to add an item to a cart or complete a purchase, you make it harder for a human to do business with you. You also make it harder for systems to understand that you’re a transactional site rather than a catalog of items without a clear path forward.
Lead generation requires similar clarity. If the goal is to talk to your team, include a phone number that can be clicked to call. You might also include a form that submits directly into your lead system or a flow that opens an email client. Forcing users through multiple form pages often frustrates people and adds unnecessary complexity to the experience.
Dig deeper: 6 SEO tests to help improve traffic, engagement, and conversions
I cover technical considerations last for a reason. The most important work you can do is support the humans you serve. Technical improvements help, but they rarely succeed on their own.

Tips from the Microsoft AI guidebook. (Disclosure: I’m the Ads Liaison at Microsoft Advertising.)
Excessive imagery, low contrast between text and background, or unstable layouts can create challenges.
Make sure your site renders consistently and meaningfully. Large layout shifts after load, measured in cumulative layout shift (CLS), can frustrate users. Pages overloaded with ads or pop-ups can distract from the reason someone arrived in the first place and may introduce trust concerns.
Security matters as well. Malware warnings, broken rendering, or incomplete page loads can raise red flags for both users and automated systems.

Tools like IndexNow can help notify search systems of content changes more quickly. Microsoft Clarity is a free tool that shows how users behave on your site, surfacing friction you might otherwise miss; it also includes Brand Agents that help your humans have more meaningful chatbot experiences.

One useful check is to review how your site appears when used as input for ad platforms or auto-generated creative tools, such as Performance Max campaigns or audience ads.

These can provide a helpful lens into how platforms interpret your content. When the resulting positioning and creative align with what you intend, you’re usually doing a good job serving both crawlers and people. When they don’t, it’s often a signal to revisit clarity, structure, or user flow.
Dig deeper: CRO for PPC: Key areas to optimize beyond landing pages
Track, optimize, and win in Google and AI search from one platform.
Humans and AI systems need many of the same things when it comes to CRO:
Remember these CRO fundamentals that carry over:
When those fundamentals are in place, you’re supporting both human outcomes and AI-driven discovery.
Google is rolling out Video Reach Campaign (VRC) Non-Skip ads, expanding how brands reach connected TV audiences on YouTube.
What’s happening. VRC Non-Skips are now live globally in Google Ads and Display & Video 360. Built for the living room experience, they run as non-skippable placements optimized for connected TV (CTV) screens.
Why we care. YouTube has been the No. 1 streaming platform in the U.S. for three straight years, making the TV screen a critical battleground for your brand budget. With guaranteed, non-skippable delivery, you can ensure your full message reaches viewers in premium, lean-back environments.
AI in the mix. Google AI dynamically optimizes across 6-second bumper ads, 15-second standard spots, and 30-second CTV-only non-skippable formats. Instead of manually splitting your budget by format, you can rely on AI to allocate impressions for maximum reach and efficiency.
Bottom line. Advertisers now have a simpler way to secure guaranteed, full-message delivery on the biggest screen in the house — using AI to maximize reach and efficiency across non-skippable formats without manually managing the mix.
Google’s announcement. VRC Non-Skip ads are now generally available, allowing brands to reach TV audiences with Google AI.
Google is expanding its recurring billing policy to allow certified U.S. online pharmacies to promote prescription drugs with subscriptions and bundled services.
What’s happening. Certified merchants can now offer:
Requirements for eligibility. Merchants must maintain certified status, submit subscription costs in Merchant Center using the [subscription_cost] attribute, include clear terms and transparent fees on landing pages, and comply with all existing Healthcare & Medicine and recurring billing policies. Accounts previously disapproved can request a review once requirements are met.
Why we care. The update opens new revenue opportunities for online pharmacies, letting them leverage recurring models and bundled services while staying compliant with Google policies.
The bottom line. Certified U.S. online pharmacies can now run recurring prescription and bundled offers, giving them more flexibility to reach patients and scale subscription-based services.
Dig deeper. Recurring billing policy expansion: Prescription drugs
If you’re not actively managing your branded search campaigns, you’re leaving money on the table and your reputation in the hands of competitors, review aggregators, and affiliate marketers.
Brand protection through PPC isn’t just about bidding on your own name. It’s a strategy that spans defensive bidding, query monitoring, ad copy testing, and reputation management across the entire customer research journey.
Most PPC managers treat brand campaigns as an afterthought. Set up a campaign, bid on the exact brand name, maybe add some close variants, and call it done.
But the reality is far more complex, especially when we’re talking about bigger, well-known brands. Your brand exists across dozens of query contexts, each representing a different stage of the customer journey and requiring a different strategic approach.
Consider what happens when someone searches for your brand. They're not just typing your company name; they're asking questions, seeking validation, comparing alternatives, and researching specific features.
If you’re only covering exact-match brand terms, you’re missing the majority of brand-related searches and leaving those high-intent users exposed to competitor messaging.
Third-party sites like review aggregators and affiliate comparison websites actively bid on your brand terms to capture traffic and redirect it to their comparison pages, where your competitors pay for prominence.
The cost? Your brand equity, customer trust, and ultimately, conversion rates.
The SEO toolkit you know, plus the AI visibility data you need.
Based on user intent and competitive vulnerability, branded searches fall into four strategic categories. Each requires different bid strategies, ad copy approaches, and landing page experiences.
Let’s break down each category and the specific PPC tactics that can work.
These searchers are in the validation phase. They’ve heard of your brand but want social proof before committing.
The competitive threat here comes from review aggregators and affiliate sites that will happily show your reviews alongside competitor CTAs.
PPC strategy
Users searching for feature-specific information are evaluating whether your solution meets their requirements. Competitors often bid on these queries with ads suggesting they offer superior features.
PPC strategy
This is the most competitive category. Users are actively comparing you to alternatives, and both direct competitors and third-party comparison sites are bidding heavily. This is where you’re most vulnerable to losing customers who were already considering you.
PPC strategy
These queries reveal specific concerns or evaluation criteria. They’re often low-volume but extremely high-intent because they represent genuine decision-making criteria.
PPC strategy
Dig deeper: How to benchmark PPC competitors: The definitive guide
The traditional single-brand campaign approach doesn’t give you enough control or insight at scale. Instead, structure your brand defense across four specialized campaigns, each targeting different intent signals and requiring distinct bid strategies.
This covers exact-match brand terms and common misspellings with aggressive bidding to maintain 95%+ impression share and top positions. Never let this campaign be budget-limited.
Use multiple RSAs to test different value propositions. Monitor lost impression share due to rank as your primary competitive threat indicator.
Capture phrase-match queries like “[Brand] CRM” or “[Brand] for [use case],” where users are researching you within a specific product context.
Bid slightly lower than core brand terms, but ensure ad copy acknowledges the category and emphasizes your category leadership. Test whether category-specific landing pages outperform your homepage for these queries.
These intercept validation-phase users searching “[Brand] reviews,” “[Brand] ratings,” or “is [Brand] good” before they click through to third-party aggregators. Bid aggressively here — these comparison-shopping clicks are worth more than core brand searches.
Use review extensions prominently, include specific social proof metrics in ad copy (4.8 stars, 10,000+ reviews), and send traffic to dedicated testimonial pages rather than your homepage. Test video testimonials on landing pages.
Control the narrative for queries like “[Brand] vs [Competitor],” “[Brand] alternative,” or “better than [Brand].” These are users you’re at risk of losing, so pay up to your maximum acceptable CPA.
Create unique landing pages for each major competitor with honest comparisons that emphasize your advantages, include side-by-side feature tables, and offer special conversion incentives like extended trials or migration assistance.
Sites like G2, Capterra, and other affiliate comparison sites actively bid on your brand terms without violating trademark policy because they legitimately have content about your brand.
But they’re siphoning off your traffic and often presenting biased or incomplete information. Your defense requires three coordinated approaches.
Review aggregators bid heavily on “[Brand] reviews” and “[Brand] ratings” because these are their money keywords, so you need to bid even higher.
Run the math: if a review aggregator pays $3 for a click on your brand term, then sends that user to a comparison page where your competitor pays $50 for placement, paying $10 per click to keep that user on your own review keywords is a bargain.
Calculate the lifetime value of a customer versus the cost of letting them click to a third-party site where competitors can advertise. Also, keep in mind it’s cheaper for you to bid on your own brand than for competitors to outbid you.
Even if you can’t prevent them from bidding on your brand, ensure that when users click through, they see optimized content, strong ratings, and an active presence with responses to reviews.
Many review platforms offer advertising options — test running ads on your own profile pages to capture users who arrive via organic search or competitor ads.
Make yours more compelling than third-party review aggregators. Include video testimonials, detailed case studies with metrics, filterable reviews by industry or use case, and verified customer badges.
Then use your PPC ads to drive users to these owned properties instead of letting them discover review aggregators organically.
Dig deeper: When to use branded and competitor keywords in PPC
Your brand campaign ad copy needs to do more than confirm your brand name. It needs to preempt objections, differentiate from competitors, and provide compelling reasons to click your ad instead of a competitor’s or third-party site. Three frameworks deliver results.
Identify the top 3-5 objections that come up in your sales process and address them directly in your ad copy before users encounter them on competitor or review sites.
Don’t just state features; state the features your competitors don’t have or can’t match. This is especially critical for comparison queries where you know competitors are showing ads. Examples include:
If you can’t identify any unique features or USPs, that’s a signal to improve your product positioning or capabilities. Without clear differentiation, PPC alone won’t drive sustainable conversions.
Combine multiple types of social proof to build credibility quickly. Don’t just pick one element; stack them. Try:
Dig deeper: How to write paid search ads that outperform your competitors
Sending all brand traffic to your homepage is a missed opportunity. Different branded queries represent different user intents and concerns, and your landing pages should address those specific intents.
When users search “[Brand] + [feature],” send them to dedicated pages that explain the feature in detail, show it in action, and provide clear next steps.
Include a hero section explaining the feature in one sentence, a video demo or animated screenshot, technical specifications for enterprise buyers, integration details if relevant, and customer examples using this specific feature.
Create dedicated comparison landing pages for each major competitor. Be honest about differences while emphasizing your advantages. Include side-by-side feature tables, pricing comparisons if advantageous, and customer testimonials from switchers.
Acknowledge competitor strengths without being dismissive, highlight 3-5 key differentiators where you excel, and offer migration assistance or switch incentives. Make your CTA clear and prominent, offering a trial or demo.
For review and reputation queries, create dedicated pages that aggregate social proof rather than linking to your G2 profile or hoping users browse scattered testimonials.
Display aggregate ratings prominently (average of G2, Capterra, etc.), place video testimonials above the fold, show recent reviews with verified badges, make reviews filterable by industry, company size, and use case, include case studies with concrete metrics, and highlight third-party awards and recognition.
Brand protection isn’t a set-it-and-forget-it strategy. The competitive landscape constantly evolves, new competitors emerge, third-party sites adjust their strategies, and user search behavior shifts. You need systematic monitoring and rapid response capabilities across three time horizons.
Review:
For validation queries like “is [Brand] good” or “does [Brand] work,” use dynamic keyword insertion to echo the user’s specific question in your ad copy, creating higher relevance and click-through rates. Try headlines like “Yes, {KeyWord:[Brand]} Is Excellent” or “Absolutely, {KeyWord:[Brand]} Works.”
If you have location-specific offerings or competitors vary by geography, create geo-modified brand campaigns. Users searching “[Brand] New York” or “[Brand] enterprise” may have different needs than general brand searchers.
Apply audience segments to brand campaigns to adjust bids based on user quality. Users who’ve visited your pricing page before should get higher bids on brand searches than first-time visitors. Similarly, prioritize users who match your ideal customer profile demographics.
While Google generally allows competitors to bid on your brand terms, using your trademarked brand name in their ad copy is often prohibited.
Monitor competitor ads and file trademark complaints when they use your brand name in headlines or descriptions. This is particularly effective against smaller competitors and affiliates who may not realize they’re violating policy.
Capture queries where users are researching whether your solution addresses a specific problem. These are often high-intent and represent clear use case alignment.
Target queries like:
How much should you invest in brand protection versus acquisition campaigns? The answer depends on three factors:
If you operate in a highly competitive category where multiple well-funded competitors actively bid on your brand terms, invest more in brand protection. Run auction insights weekly to monthly to quantify competitive presence.
If competitors show in 40% or more of your brand auctions, this is a high-threat environment requiring aggressive defense. Stronger brands with dominant organic presence can afford to spend less on core brand defense because their organic listings provide natural protection. This doesn’t apply to reputation and comparison queries where third-party sites rank organically.
High LTV businesses should invest more aggressively in brand protection because the cost of losing a customer to a competitor or having them influenced by negative review sites is substantial. If your average customer is worth $50,000 over their lifetime, paying $50 per click to defend against comparison queries is economically rational.
For most B2B SaaS and high-consideration products, allocate approximately 15-25% of total paid search budget to comprehensive brand protection. Within that allocation, dedicate 40% to core brand defense (exact match), 25% to competitive comparison defense, 20% to reputation and review queries, and 15% to feature and niche question queries.
Track, optimize, and win in Google and AI search from one platform.
Brand protection through PPC isn’t just defensive marketing. It’s a competitive moat. When you control the narrative across branded search contexts, you ensure high-intent users see accurate information instead of competitor ads or third-party pages monetizing your brand equity.
The brands that win treat this as strategy, not maintenance. They segment branded queries by intent, build landing pages to match, monitor threats continuously, and defend high-value search real estate aggressively.
Start with an audit using the four-category framework. Close coverage gaps, align campaigns and landing pages to intent, and commit to weekly monitoring, monthly optimization, and quarterly strategic reviews.
If you don’t own your branded searches, someone else will.
Google Ads PMax placement reporting is now populating with data for more accounts, revealing Search Partner domains and impression counts for brand safety review.
The post Google Ads Surfaces PMax Search Partner Domains In Placement Report appeared first on Search Engine Journal.