Search Engine Land — 9 March 2026

Google Ads adds AI voice-over to Performance Max video ads

9 March 2026 at 21:34

Google Ads is rolling out a new asset optimization feature for Performance Max video ads. Leveraging AI voice models, the update adds realistic voice-overs to video ads, aiming to boost user engagement and ad performance.

Why we care. Advertisers who don’t actively opt out by March 20 will have their video ads automatically enhanced with Google’s AI voice models, changing how their ads sound to viewers without requiring any creative production work.

How it works.

  • The feature only activates on videos that don’t already contain a voice track
  • Google’s AI selects text from advertiser-provided headlines and descriptions, then generates a realistic voice-over from that copy
  • The voice-over is layered onto the existing base video and saved as a new video asset

The catch. This is opt-out, not opt-in. The default setting means ads will be automatically eligible for voice enhancement unless advertisers proactively disable it.

Key dates. Advertisers can exclude their ads from this feature until March 20 by opting out of the video enhancement control. After the opt-out period, all ads with the video enhancement control enabled will automatically be eligible for voice-enhanced versions.

Action steps for advertisers. Advertisers can adjust their video settings directly in Google Ads.

First seen. Paid search expert Arpan Banerjee shared the update on LinkedIn.

OpenAI updates privacy policy as ads expand in ChatGPT

9 March 2026 at 21:00

OpenAI is updating its privacy policy with new details on ads, data usage and upcoming features across its products, including ChatGPT.

The update was shared with ChatGPT users and outlines how advertising will work inside ChatGPT — and what data advertisers can and cannot access.

Why we care. OpenAI’s update makes it clear that user privacy is a top priority: personal chats, histories, and details are never shared with advertisers. Ads can still be personalized using anonymized engagement signals, meaning brands can reach relevant audiences without compromising sensitive data.

This approach lets advertisers measure performance safely while building trust with users in a privacy-conscious environment.

Ads in ChatGPT. Ads may appear for users on Free and Go plans, while paid tiers — Plus, Pro, Enterprise, Business and Education — will remain ad-free. OpenAI says ads will always be clearly labeled as sponsored and visually separated from chatbot responses.

The company also stresses that advertising will not influence answers generated by ChatGPT.

How ad targeting works. OpenAI says ads may be personalized using signals that stay within ChatGPT, such as ad interactions or the context of a user’s chat. However, the company says advertisers will not have access to conversations, chat history, personal details or user memories.

Instead, advertisers will only receive aggregated performance metrics such as total views or clicks.

Other privacy updates. The revised policy also introduces optional contact syncing to help users find friends who use OpenAI services. Users can choose whether to enable this feature.

OpenAI also added new transparency around how long data is stored, how it is processed and what controls users have over it.

Safety and product changes. The policy update also references new tools and safeguards, including age prediction systems designed to create safer experiences for teens. OpenAI also added documentation for newer features and projects such as Atlas, Sora 2 and parental controls for teen accounts.

Bottom line. As OpenAI expands advertising in ChatGPT, the company is emphasizing strict boundaries around user privacy — promising advertisers performance insights without access to personal conversations or user data.

First seen. Paid media expert Arpan Banerjee first shared this update on LinkedIn.

Google Marketing Live 2026 set for May 20

9 March 2026 at 20:40

Google has confirmed that Google Marketing Live 2026 will take place on May 20, when the company is expected to unveil its latest updates across advertising, AI, measurement and campaign automation.

The date surfaced in an email received by PPC News Feed owner Hana Kobzová from the Accelerate with Google program, which invited participants to submit entries for the Google Ads Impact Awards.

  • According to the message, winners of the awards will be announced during Google Marketing Live 2026.

Why we care. The annual event has become one of the biggest announcement days for advertisers using Google Ads. Google Marketing Live is where Google typically announces its biggest changes to Google Ads — including new AI features, campaign types and measurement tools that can directly impact how campaigns are built and optimized.

Many of Google’s most significant advertising updates each year are first revealed at this event, meaning it often shows where the platform — and advertisers’ strategies — are heading next.

The bigger picture. The event will land during the same window as Google I/O 2026, scheduled for May 19–20. While I/O focuses on Google’s broader ecosystem — including AI, Search and developer technologies — announcements there often influence the direction of advertising products.

What to watch. Expect updates tied to AI-driven advertising, automation and new ways to measure performance across Google’s platforms. For marketers, the event often sets the tone for where Google’s ad strategy is heading for the rest of the year.

First spotted. Kobzová shared the update on PPC News Feed.

Dig deeper. Google Marketing Live 2025.

AI assistants now equal 56% of global search engine volume: Study

9 March 2026 at 20:08

AI tools now generate 45 billion monthly sessions worldwide — about 56% of search engine volume, according to a study by Graphite.io CEO Ethan Smith.

  • The analysis combines web traffic and mobile app usage across major AI tools and estimates AI activity equals 56% of global search usage and 34% in the U.S.
  • Much of this growth is occurring in mobile apps such as ChatGPT, Gemini, Perplexity, Grok, and Claude.

Why we care. AI is expanding discovery, not shrinking search demand. Total usage across search engines and AI assistants has grown 26% globally since 2023. In other words, it’s not SEO vs. GEO — you need both LLM visibility and traditional rankings.

The details. The report analyzed usage across the five largest LLM products — ChatGPT, Gemini, Perplexity, Grok, and Claude — and compared them with the six largest search engines. Key findings:

  • AI platforms generate 45 billion monthly sessions worldwide.
  • In the U.S., AI accounts for 5.4 billion monthly sessions.
  • 83% of global AI usage occurs inside mobile apps (75% in the U.S.).
  • ChatGPT dominates AI usage, representing 89% of global AI sessions.
  • When isolating search-like prompts (“asking”), AI usage equals 28% of search worldwide and 17% in the U.S.

The report excludes prompts categorized as “doing” or “expressing.” According to OpenAI research, about 52% of prompts are information-seeking, the closest equivalent to traditional search queries.

Between the lines. Most projections comparing AI to search use web traffic alone, typically comparing Google.com visits with ChatGPT website traffic. That misses most AI usage.

  • The analysis argues these comparisons underestimate AI activity by 4–5x because most usage occurs in mobile apps.
  • It also includes multiple LLMs and multiple search engines rather than comparing only Google and ChatGPT.

What to watch. Google still dominates discovery, but its share of search-related activity fell from 89% in 2023 to 71% in Q4 2025, the report estimates.

  • Global AI usage appears to have plateaued since July 2025, while U.S. usage continues to grow rapidly — up roughly 300% year over year by December 2025.

The report. AI Is Much Bigger Than You Think

Why PPC teams are becoming data teams

9 March 2026 at 18:00

Like many people, you’re worried about losing your job to AI.

Where do your “old school” PPC skills fit as AI agents take over more of the work?

Relax. It’s not that binary. The focus is shifting toward data and strategy.

From the outside, it looks like media buying is being automated away. But let’s set the record straight: it isn’t. The role is shifting (again).

I’ve been working in PPC for over 15 years, and there’s nothing to be afraid of. The real question is: are you riding the wave or being left behind?

Let’s map the current PPC landscape: ad network automation and, most importantly, where PPC teams create value today — the critical skill sets and team structure required to compete.

The return of the technical PPC team

A decade ago, technical PPC agencies differentiated through developing scripts, handling data at scale, and managing complex structures. Then automation matured. Everybody started leveraging Performance Max or Advantage+ campaigns because they’re much easier to set up and run.

As a result, many teams shifted toward strategy and creative.

With AI, though, it’s easier than ever to produce good-enough creatives or analyze massive datasets and output what looks like a good strategy. Now, don’t get me wrong: those outputs won’t be perfect, but:

  • They’re free (sort of) and fast.
  • The quality isn’t bad at all (not great either).

From a client perspective, this means the average creative-focused or strategy-nerd agency is out of the game. Those teams need skills AI can’t replace.

So rejoice, PPC people: the technical edge is back. It has morphed into something different for sure. But it’s time to bring back the spreadsheet junkies from the 2010s. They’re the right ones to drive PPC again.

Doubting that? Let’s rewind a little bit and look at the necessary skill set.

The PPC edge: From spreadsheet skills to data nerds

What successful PPC agencies sell today is dramatically different from a decade ago. But the same core mindset has resurfaced.

Why?

Let’s look at the core performance drivers these days:

  • Integrating down-funnel data into strategy.
  • Building a data infrastructure to support said strategy.
  • Feeding the right signals to ad algorithms.
  • Building systems to operate at scale, including creatives.

See the pattern? You can’t prompt your way out of a broken data model. This is where your edge remains and what clients value.

The good news is that automation increases the value of technical literacy. It doesn’t reduce it.

Who do you call to handle technical literacy? The old PPC marketers. The ones who loved manipulating paid search ads using custom Excel macros they built, or managing hundreds of thousands of product feed items. They have the right mindset: they love automation, data, and math — and they love PPC.

Dig deeper: How to build a paid media team in the AI age

So who should be on your team, whether in-house or agency-side? Here are four essential roles. No single person can cover the entire scope — you need a team.

1. Data engineer

This role builds and maintains the infrastructure. Although it sits after the tracking specialist in the data supply chain, it’s the most central role. That’s why it comes first.

We operate in a complex, multi-platform world: think CRM integration with Google Ads. Or merging online and offline datasets to map the customer journey and drive strategy.

Without a complete data model, your strategy becomes a vague gut feeling that often needs a reality check. The role of the data engineer is to lay the foundation to avoid this situation whenever possible.

Conversely, without this role on your team, you’ll perform repetitive manual exports, get inconsistent numbers across teams, and end up with slow decision cycles.

What is the data engineer’s scope?

Building a data infrastructure follows an ETL process: extract data, transform it, and load it where a reporting tool (think Looker Studio, Power BI, or Tableau) can use it.

Here are a few tasks that illustrate that overarching goal:

  • Build data pipelines from ad platforms, analytics or CRM tools to the data warehouse (to get spend, revenue and other data into the warehouse).
  • Structure tables for those sources and “join” (merge) them to answer specific use cases.
  • Maintain those datasets and create automated QAs, including refresh schedules.

What skill sets and tools does the data engineer use?

Generally speaking, since we live in a Google-first world, we hear a lot about BigQuery, Google’s data warehousing solution. There are other solutions, such as Microsoft’s Azure Synapse. However, the main skill set you’re looking for is coding — more specifically, SQL and Python.

The goal here is to use those languages to structure tables within the data warehouse (using SQL) and create data pipelines (using Python).
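To make those two halves concrete, here is a minimal sketch of the “join” step, using Python’s built-in sqlite3 as a stand-in for a warehouse like BigQuery. All table names, campaigns, and figures are made up for illustration:

```python
import sqlite3

# In-memory database standing in for the warehouse; the pipeline step
# (pulling spend from ad platforms and revenue from the CRM) is faked
# with INSERTs. Table and column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ad_spend (campaign TEXT, spend REAL);
CREATE TABLE crm_revenue (campaign TEXT, revenue REAL);
INSERT INTO ad_spend VALUES ('brand', 1000.0), ('generic', 4000.0);
INSERT INTO crm_revenue VALUES ('brand', 5000.0), ('generic', 6000.0);
""")

# The "join" the article describes: merge platform spend with
# down-funnel CRM revenue to answer a specific use case (true ROAS).
rows = conn.execute("""
SELECT s.campaign, s.spend, r.revenue, r.revenue / s.spend AS roas
FROM ad_spend s
JOIN crm_revenue r ON r.campaign = s.campaign
ORDER BY roas DESC
""").fetchall()

for campaign, spend, revenue, roas in rows:
    print(f"{campaign}: spend={spend:.0f} revenue={revenue:.0f} roas={roas:.2f}")
```

In a real stack, the INSERTs would be replaced by scheduled pipelines from the ad platforms and CRM, and the SQL would live as a view or scheduled query in the warehouse.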

2. Tracking and measurement architect

Some people consider this to be the same role as data engineers. I strongly disagree.

To me, this role’s sole focus is to protect signal quality. It’s the one person who faces very tight deadlines when things go wrong: you can’t afford to lose conversion data for more than a couple of days. And it’s not retroactive: when tracking is down, conversions are lost forever.

Ad platforms’ performance stands on the shoulders of conversion data. If you don’t get enough of those quality events, you’ll be at a serious competitive disadvantage.

You typically notice this when CPAs fluctuate without explanation or when your in-platform data diverges drastically from your “source of truth” (GA, CRM and other systems). Tracking and measurement architects stabilize bidding, increase event match quality and get more data into Google Ads.

What is the tracking architect’s scope?

They design data collection mechanisms that are both complete and regulation-compliant (hello, GDPR):

  • Align tracking with privacy compliance.
  • Design client- and server-side tracking.
  • Implement GTM and server containers.
  • Co-manage Conversions API integrations with the data engineer.
  • Co-ensure deduplication logic with the media buyer.

What skill sets and tools does the tracking architect use?

Although most PPCs have dabbled with Google Tag Manager, very few have actually set up server-side tagging infrastructure. That’s an easy way to distinguish “regular” PPCs from tracking specialists. They should also be comfortable with Consent Mode frameworks, CAPI, and related tools.
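The deduplication logic mentioned above boils down to one idea: the server-side event must carry the same event ID as its browser-pixel twin so the platform can drop the duplicate. A minimal sketch, with field names following Meta’s Conversions API conventions but a made-up email and order ID (no network call is made):

```python
import hashlib
import json
import time

def normalize_and_hash(email: str) -> str:
    # Match-key hygiene: trim and lowercase before hashing, so the
    # server-side hash matches what the browser pixel would send.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

def build_capi_event(email: str, event_id: str) -> dict:
    # Field names follow Meta's Conversions API conventions. The
    # event_id must equal the browser pixel's eventID so the platform
    # can deduplicate the two copies of the same purchase.
    return {
        "event_name": "Purchase",
        "event_time": int(time.time()),
        "action_source": "website",
        "event_id": event_id,  # shared with the pixel for dedup
        "user_data": {"em": [normalize_and_hash(email)]},
    }

event = build_capi_event("  Jane.Doe@Example.com ", "order-1042")
print(json.dumps(event, indent=2))
```

Sending the payload (endpoint, pixel ID, access token) and the matching browser-side eventID are left out here; the point is the shared identifier and the normalized hash.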

Dig deeper: AI tools for PPC, AI search, and social campaigns: What’s worth using now

3. Data analyst

If the data engineer builds the pipes and the tracking architect protects the signal, the data analyst decides what the data means.

It’s the role most impacted by AI. Granted, you can do a lot with AI, but don’t underestimate how impactful a great data analyst is.

The wrong interpretation can waste millions of dollars in the blink of an eye. Fully replacing data analysts with AI would be a gross mistake.

For example, ROAS in Google Ads doesn’t equal contribution margin. Meta Ads CPA doesn’t equal customer lifetime value.

Without a strong data analyst, you risk misinterpreting data and going down the wrong rabbit hole. Think cutting campaigns that look inefficient short-term but drive long-term value. Or reporting different “truths” to marketing and finance — you don’t want that.
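A quick sketch with purely illustrative numbers shows how platform ROAS and contribution margin can diverge:

```python
# Illustrative numbers only: a campaign can clear a 4x ROAS target in
# Google Ads and still lose money once product margin and fulfillment
# costs are counted.
ad_spend = 10_000.0
revenue = 40_000.0    # platform-reported ROAS = 4.0
cogs = 26_000.0       # cost of goods sold
fulfillment = 6_000.0 # shipping, payment fees, returns

roas = revenue / ad_spend
contribution = revenue - cogs - fulfillment - ad_spend

print(f"ROAS: {roas:.1f}x")                          # looks healthy
print(f"Contribution margin: {contribution:,.0f}")   # negative
```

The analyst’s job is exactly this translation: restating an in-platform metric in the business’s own economics before anyone scales the budget.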

What is the data analyst’s scope?

People outside the field think data analysts just build Power BI or Looker Studio dashboards. That’s just the tip of the iceberg. Data analysts also:

  • Design data models aligned with business KPIs (this step kind of overlaps with data engineers at times).
  • Run analysis — think cohort performance, churn rates, profitability, and diminishing returns.
  • Challenge platform narratives.

What skill sets and tools does the data analyst use?

I tend to think of data analysts as translators: you can speak another language somewhat fluently, but that doesn’t make you qualified to interpret at scale. Same with data: you may understand the numbers to an extent, but you probably still need an analyst.

SQL literacy is often required to query the warehouse directly. Spreadsheet modeling also remains critical for scenario planning. The key skill is statistical reasoning. Understanding sample size, variance, and bias prevents false conclusions.

4. CRO and experimentation lead

Once all that data is clean, available, and analyzed, CROs leverage it to improve the economics of every visitor. Improving conversion rate, lead quality, and the overall customer journey creates a compound effect.

The simple way to prove CROs’ worth: a landing page that converts at 1.5% instead of 3% means you’ve doubled your CPA. Nobody wants that. You want to scale efficiently, not push more money toward a leaky bucket, and that’s where CROs come in.
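The doubling is mechanical, since CPA is just cost-per-click divided by conversion rate. A tiny sketch with assumed numbers:

```python
def cpa(cpc: float, conversion_rate: float) -> float:
    # CPA = cost-per-click / conversion rate: the cost of one click,
    # spread over how often a click actually converts.
    return cpc / conversion_rate

# Illustrative: same $2 CPC, landing page converting at 3% vs. 1.5%.
print(f"CPA at 3.0% CR: ${cpa(2.0, 0.03):.2f}")
print(f"CPA at 1.5% CR: ${cpa(2.0, 0.015):.2f}")
```

Halve the conversion rate and, at the same click cost, the CPA exactly doubles.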

From a PPC standpoint, CROs strengthen both performance (better conversion rate) and signal quality (more conversions), which helps smart bidding.

What is the CRO’s scope?

Contrary to common belief, CRO doesn’t (solely) mean landing page. This role operates across the full funnel:

  • Mapping the journey from impression to revenue.
  • Identifying online friction points using heat maps and session recordings.
  • Structuring testing roadmaps instead of random experiments.
  • Collaborating with creative and product teams on offer positioning.

What skill sets and tools does the CRO lead use?

The entry stack I see most often is GA4 and a heatmap tool such as Hotjar. However, it can get much pricier with tools such as ContentSquare. The stack scales depending on the client’s needs and budget.

The skills that matter most are:

  • Just like data analysts, a deep understanding of math and statistical reasoning (think pre-calculated sample sizes).
  • A structured mindset, clear hypotheses, and business-level success metrics.
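The “pre-calculated sample sizes” point can be sketched with a standard normal-approximation formula for a two-proportion test; the baseline rate and target lift below are assumptions, not figures from the article:

```python
import math

def sample_size_per_variant(p_base: float, p_target: float) -> int:
    # Normal-approximation sample size for a two-proportion test.
    # z-values are hardcoded for a two-sided alpha of 0.05 and 80%
    # power. The point: n is fixed BEFORE the test runs, not decided
    # after peeking at early results.
    z_alpha, z_beta = 1.96, 0.84
    p_bar = (p_base + p_target) / 2
    effect = abs(p_target - p_base)
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / effect ** 2
    return math.ceil(n)

# Assumed scenario: detect a lift from a 3.0% to a 3.6% conversion rate.
n = sample_size_per_variant(0.03, 0.036)
print(f"~{n:,} visitors per variant")
```

Smaller expected lifts require dramatically more traffic, which is exactly why a structured testing roadmap beats random experiments: not every page gets enough visitors to detect a modest effect.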

Dig deeper: Agentic PPC: What performance marketing could look like in 2030

From media buyers to data teams

The modern PPC team looks less like media buyers and more like a hybrid between marketing, data, and product. The advantage goes to teams that structure these capabilities deliberately.

Winning PPC teams are the ones who understand algorithms, but more importantly, the data and economics behind them. If your team masters infrastructure, signal design, analysis, and experimentation, AI becomes leverage. If not, it becomes a liability.

ChatGPT ads are coming — here’s how you should prepare now

9 March 2026 at 17:00

OpenAI has begun testing ads in ChatGPT for a limited set of U.S. users, with placements clearly labeled as sponsored. The platform’s internal economics suggest it’ll be available to everyone sooner rather than later.

When it does, advertisers will have access to a rare new channel for demand capture. But advertisers should enter this space with their eyes wide open.

For ChatGPT advertising to be successful, consumer behavior will need to change. And even if it does, ChatGPT won’t expand the advertising market. It’ll redistribute it.

Why ChatGPT is moving into ads

The fact that ads have arrived on ChatGPT should come as no surprise. By some estimates, a large language model (LLM) query costs 10 times as much as a traditional search query. With 2.5 billion prompts every day, ChatGPT’s expenses add up quickly.

What’s different isn’t the business model shift itself. It’s the data environment. Users have spent years feeding personal information, questions, and ideas into ChatGPT. In many ways, the platform knows more about its users than any comparable advertising tool. The big question now is how ChatGPT will harness this data to target users.

ChatGPT could become a new demand-capture channel

Advertising historically relied on generating demand: repeating a message enough times that buyers eventually acted. Search changed that by meeting buyers at the moment of intent.

ChatGPT has the potential to follow the search model, but with more context. It’s easy to envision a scenario where someone asks which security camera will work with their existing system. The platform already knows everything about the user’s security system, so it delivers the correct answer and a link to purchase.

When this happens, ChatGPT will be the first new demand-capture channel to emerge since Google launched pay-per-click ads more than two decades ago.

But right now, there are a few significant barriers preventing this from happening.

For starters, most current AI queries lack purchase intent. Instead, they’re mostly informational: lists of Super Bowl halftime performers, storm-preparation tips, and workout routines. Compare that with existing platforms like Amazon and Google, which have spent decades training users to search with intent.

Even when users do shop through AI, there’s an attribution problem: consumers often use ChatGPT for research, then complete the purchase on Amazon, Google, or directly on a brand site. That breaks clean conversion tracking and makes “proof” harder than “impact.”

These challenges aren’t impossible to overcome. Google went through the same process early on as it transitioned from a homework tool to a shopping platform. But it took time.

ChatGPT will also need time to train consumers to use AI for shopping. So expect to see ChatGPT begin running commercials designed to train consumers to move from research queries to purchase-oriented ones.

While the possibility of a genuinely new demand-capture advertising platform is undeniably exciting, be realistic about its true potential.

Dig deeper: OpenAI quietly lays groundwork for ads in ChatGPT

Market share reality check

AI can do many things exceptionally well, but it won’t expand the advertising pie. ChatGPT ads won’t suddenly introduce a surge of new consumers into the market. Ecommerce purchases will continue to grow at the same rate regardless of which new advertising platforms come online.

Instead, ChatGPT will capture a portion of the existing advertising share from Google, Meta, and Amazon. Consequently, advertiser budgets will likely shift rather than grow significantly.

ChatGPT’s largest competitors won’t give up market share without a fight. Google, in particular, has its own AI platform, Gemini, and an existing group of active advertisers it can draw from. These are powerful competitive headwinds for ChatGPT, which is recruiting its first group of advertisers from scratch.

Competition will be fierce among AI platforms as they race to reach profitability, and market consolidation seems inevitable. But even in that environment, ChatGPT has an opportunity to do something other platforms can’t.

The differentiator: Hyper-personalization

AI queries already lean heavily toward information gathering. Users employ these tools to help them plan everything from vacations to workout routines to tough conversations with their bosses. Taken together, AI platforms can learn more about individual users’ tastes and preferences than any other tool.

This capability unlocks hyper-personalization at scale.

Knowing everything that it does, AI can return perfectly tailored results with a one-click purchase option. Google and Amazon can’t match this capability because they still rely on users searching for particular specs, product names, or model numbers to deliver results.

There’s risk here. Hyper-personalization can feel invasive.

Some users will opt out entirely, just as some consumers avoid always-on devices in their homes. Meta ran into this dynamic years ago as public backlash forced changes in targeting and data practices.

This is where the distinction between demand capture and demand generation matters. Demand capture advertising generally feels less intrusive because it’s tied to a user’s explicit request. Most consumers will appreciate getting exactly what they ask for when they want it. But they’ll likely revolt if highly personalized and unsolicited ads start following them around the web.

If AI platforms can maintain that boundary, the convenience of hyper-personalization will ultimately win out for most users.

Dig deeper: ChatGPT ads collapse the wall between SEO and paid media

What you should do now

While OpenAI has already begun reaching out to select advertisers, it could be a year before we begin seeing widespread advertising on ChatGPT or other AI platforms. However, you should be prepared to move whenever that moment arrives.

So watch for official communications from OpenAI about ChatGPT advertising and, when possible, sign up for platform notifications.

In the meantime, you can make these few practical moves:

  • Align internally on measurement expectations: If the channel starts as research-heavy, last-click ROAS may understate performance. Build room for assisted conversions and incrementality.
  • Pressure-test mobile UX and checkout friction: Demand capture punishes slow experiences. If AI shortens the path to purchase, your site has to close quickly.
  • Plan conservative early tests: Being an early adopter carries risk (immature controls, evolving placements), but it also creates an edge: faster learning on a genuinely new demand-capture surface.

New demand-capture channels don’t come along often. ChatGPT advertising could become one of them, but the winners won’t be the brands that rush in blindly. They’ll be the ones who enter with a clear thesis, realistic measurement, and a strategy built around trust.

The Digital Markets Act promised fairer search. It’s failing.

9 March 2026 at 16:00

SEO professionals don’t agree on much. But over the past decade, we’ve come together around the conviction that Google has abused its dominant position, that it systematically favors its own products over better alternatives, and that something must be done to create fairer competition in search. 

In 2022, the European Union passed the Digital Markets Act (DMA), a sweeping regulation designed to curb the power of tech giants. It came into force in March 2024.

Industry groups celebrated. Trade publications ran optimistic headlines about a new era of digital fairness.

In 2024, I wrote that it was “a much-needed piece of legislation.” Two years in, the evidence is clear: The DMA will do more harm than good.

Well-documented abuses 

The Digital Markets Act arose from understandable frustrations with well-documented abuses. 

Google spent years ranking its own shopping service at the top of search results while systematically burying competitors like Foundem and Kelkoo on page four, where nobody would ever find them. 

The company’s internal documents, uncovered by EU investigators, revealed that Google Shopping “simply doesn’t work” on its merits, so Google gave it an algorithmic boost unavailable to anyone else. 

The travel industry watched as Google Flights consumed the market share of innovative startups like Hipmunk, which had offered genuinely better user experiences by showing total trip costs, including baggage fees and connections. 

Hoteliers saw Google Hotels siphon away direct bookings. Local businesses watched as Google prioritized its local pack over organic results.

The pattern was unmistakable: Google identified lucrative verticals, launched competing products, then used its search monopoly to guarantee their success.

These weren’t competitive advantages but unfair tactics, and the EU was right to identify them as such. It took over 10 years to fine Google €2.4 billion for the shopping search abuse alone. The DMA was supposed to fix this by setting clear rules upfront, forcing gatekeepers to treat all services equally before abuses could take root.

For those of us who had watched clients lose traffic to Google’s vertical search engines despite having superior content, the promise was intoxicating: Finally, algorithmic neutrality. Finally, fair competition based on content quality rather than corporate ownership. Finally, a chance for the next generation of search-dependent businesses to compete.

Dig deeper: EU puts Google’s AI and search data under DMA spotlight

What users actually experience

Yet, two years into implementation, the reality looks nothing like the promise. The most comprehensive assessment comes from Nextrade Group, which surveyed 5,000 European consumers across 20 member states in mid-2025.

The findings? 

Two-thirds of respondents reported needing more clicks or more complex search queries to find what they need online. Among frequent searchers, precisely the users most valuable to our clients, 61% said searches now take up to 50% longer than before the DMA.

Forty-two percent of frequent travelers reported that flight and hotel searches had worsened significantly. More than 40% said they would actually pay to restore the functionality they had before March 2024. 

When users are willing to pay for something they previously received for free, regulation has failed catastrophically.

The European Centre for International Political Economy conducted a separate survey of 3,500 consumers across Central and Eastern Europe and found similar results. 

Eighty percent had never heard of the DMA (it solved problems they didn’t know existed), yet 39% reported that routine online tasks had become more cumbersome since early 2024.

Why does it matter? 

As SEO professionals, we must confront this truth: Users preferred the integrated Google experience we spent years complaining about.

Before the DMA, searching for “hotels in Paris” displayed an interactive map with photos, ratings, real-time availability, and prices — all accessible without leaving the search results page.

That integration has been dismantled because Google Search and Google Maps are designated as separate core platform services, and their seamless cooperation constitutes prohibited self-preferencing. 

Users must now click through to separate services, repeat their searches, and lose context. Regulators call this fair competition. Users call it a worse internet.

The business impact: Worse metrics across the board

The business metrics support what consumers report feeling. Following the DMA’s implementation, click-through rates on Google Hotel Ads decreased by 30% in affected European regions compared to unaffected markets. Direct bookings through Google Hotel Ads fell by 36%. This is all despite theoretically fairer visibility in search results. 

These are businesses losing revenue because the mechanism connecting searchers to services has been deliberately degraded. 

Meanwhile, Google’s search monopoly remains entirely intact. 

The company still processes over 90% of European search queries. The difference is that now the search experience delivers measurably worse results for users and measurably worse outcomes for businesses paying for visibility.

The enforcement problem: Fines don’t work

The DMA requires Google to treat competing vertical search services (flight comparison sites, hotel booking engines, shopping aggregators) with the same prominence as its own offerings. 

In response, Google tested a version of its hotel search that removed maps, removed structured listings with photos and availability, and displayed only 10 blue links. Users hated it. 

Hotels saw a traffic crater. Google documented the catastrophic user satisfaction scores and presented them to the Commission as evidence that integration serves user needs, not just Google’s interests. 

The Commission found itself in an impossible position: Force Google to maintain the worse experience in the name of fairness, or acknowledge that some integrations genuinely benefit users even when they advantage Google’s products.

Google responded to preliminary findings of non-compliance by making incremental adjustments that preserve the substance of its advantage, while creating just enough ambiguity about whether it’s following the rules. 

When the Commission objects to one implementation, Google proposes another that differs in form but not effect. This process can continue indefinitely because the underlying problem, Google’s monopoly in search, remains untouched. 

For a company with annual revenues exceeding $300 billion, regulatory fines are simply a cost of doing business. The Commission fined Google €2.4 billion for shopping search abuses and breaking antitrust rules. The company paid and continued operating largely as before. It will do the same with DMA fines. 

The uncomfortable reality is that you can’t regulate a monopoly into behaving competitively. You can only break the monopoly itself.

The speed problem: Regulation can’t keep pace

The European Commission must monitor 23 core platform services across seven gatekeepers, while each company releases updates continuously:

  • Algorithms change daily 
  • Features launch weekly
  • Product roadmaps evolve quarterly

By the time the Commission identifies a potential violation, conducts workshops with stakeholders, issues preliminary findings, allows the company to respond, and publishes a final decision (a process taking 12-18 months), the underlying technology and business models have moved on. 

Google launched AI Overviews in Europe one week after receiving preliminary findings of non-compliance for self-preferencing in traditional search. The company essentially announced that, while regulators debate whether Google Flights should rank above Kayak, Google is moving to a fundamentally different search results page where AI-generated summaries replace links entirely. 

The DMA contemplated regulating 2024’s search landscape. Google is already building 2027’s.

What should regulators do instead?

While I’m not a regulator, I have been doing SEO for 15 years. In my opinion, regulators should redouble efforts to address actual structural monopolies rather than impose rules on how platforms must operate. 

The DMA tries to regulate platform behavior while leaving monopoly power intact. This is like trying to stop water from flowing downhill by prescribing which route it must take. The water will find another path, and everyone gets wet in the process.

If Google’s dominance in search truly stifles competition, perhaps the solution isn’t to regulate how it displays results but to break its monopoly altogether. The United States has considered requiring Google to divest Chrome; such structural remedies might succeed where behavioral rules have failed.

If the concern is that Google leverages search dominance to advantage its advertising business, separate the two. 

If the worry is that controlling both the search algorithm and the content (YouTube, Google News, Google Shopping) creates irresolvable conflicts of interest, then require differentiation. 

These actions would be slower, more legally complex, and more politically difficult than passing the DMA. They would also actually work.

In short, regulators should focus on creating conditions for competition rather than micromanaging every product decision. That means enabling genuine data portability so users can switch services easily, taking their search history and preferences with them. 

This also means using traditional antitrust enforcement aggressively for the largest abuses, like Google systematically burying competitors on page four, exclusive deals that lock out rivals, and acquisitions designed to eliminate nascent threats. 

The geopolitical reality

The DMA’s first two years have demonstrated that ex-ante rules are no faster — investigations still take 12-18 months — and far less effective than traditional enforcement. 

The geopolitical consequences threaten to undermine European interests far beyond digital markets. In December 2025, the Trump administration threatened retaliation against the EU for what it characterized as discriminatory targeting of American technology companies. The Office of the United States Trade Representative explicitly named European companies, including Spotify, Siemens, SAP and DHL, as potential targets for new restrictions. 

From Washington’s perspective, the DMA looks less like competition policy and more like industrial policy disguised as regulation. 

Whether that characterization is fair matters less than the political reality: Brussels finds itself caught between domestic pressure to demonstrate tough enforcement and external pressure that threatens broader trade relationships.

Dig deeper: Google outlines risks of exposing its search index, rankings, and live results

The wrong solution to a real problem

The DMA promised to enable the next generation of search-dependent businesses. It promised to stop Google from using its search monopoly to advantage its vertical products. It promised fairer competition for hotels, airlines, ecommerce sites, and the entire ecosystem of businesses that depend on organic search traffic. 

Two years in, Google’s monopoly remains intact, user experience has measurably degraded, business metrics have worsened, and no meaningful new competition has emerged. 

For those of us who spent years documenting Google’s abuses and advocating for intervention, this failure is spectacular. 

If regulators can’t find ways to break up long-standing monopolies (now over two decades old for some platforms), what hope is there to address emerging challenges in AI search, voice search, or whatever comes next? 

Young companies have a right to compete in digital markets. Regulators must create conditions where genuine competition is possible, not regulate away the symptoms of monopoly while leaving its foundations untouched.

We were right about the problem. The DMA is simply the wrong solution.

Organic search is fundamentally disrupted. Here’s what to do about it. by Brightspot

9 March 2026 at 15:00

If your organic traffic is down but impressions are up, AI is likely citing your content without sending clicks. If both are down, you’re being ignored. Either way, the search behavior your marketing strategy was built on has changed, and waiting for traffic to rebound isn’t a strategy.

This is the reality you’re facing in 2026. According to KEO Marketing:

  • 73% of B2B websites saw significant traffic losses between 2024 and 2025, with an average 34% year-over-year decline. 
  • The impact isn’t evenly distributed. If your content is primarily informational, you’ve likely been hit harder, with some sectors seeing organic traffic drop 15% to 64% since AI Overviews launched. 
  • News publishers are especially exposed, with Google referrals down 33% globally in the 12 months ending November 2025.

These aren’t normal fluctuations. They reflect a structural shift in how people find information online, disrupting business models built on website traffic at the foundation.

What is driving the shift in organic discovery? 

Organic clicks are declining for two overlapping reasons. You need to understand both because each requires a different response:

  • Google has engineered zero-click behavior for years through featured snippets and knowledge panels. These SERP features answer queries directly on the results page, so you don’t need to click through to get an answer. Ten years ago, about 25% of searches ended without a click. Today, it’s more than 65%. AI Overviews — now appearing in ~16% of desktop searches and ~41% of mobile searches — have dramatically accelerated this trend.
  • A growing share of users is bypassing traditional search entirely. Nearly 52% of U.S. adults now use AI tools regularly, and about 28% of employed Americans use AI at work. When someone asks ChatGPT or another LLM a question, they usually get an answer without visiting any website. Your content may inform that answer, but you get no traffic and no attribution.

What metrics should I consider when measuring AEO?

Traditional content marketing KPIs (impressions, clicks, CTR, sessions, bounce rate, and page views) no longer show you how discoverable your brand is. They measure behavior on your site, not how you perform in AI answers that now intercept much of your traffic upstream.

Five metrics matter most for AI visibility:

  • Citations in AI responses measure how often your owned content is directly cited when an LLM answers a query. A citation signals three things: your content is relevant, it’s structured so LLMs can parse and retrieve it efficiently, and your domain has enough authority to be trusted.
  • Brand mentions are different from citations. LLMs often mention brands without citing owned content, pulling from review sites, forums, third-party articles, and competitor content. A mention without a citation means the broader web is talking about you, but your content isn’t the source. That distinction helps you decide where to invest.
  • Share of voice compares your citation and mention frequency against competitors across a defined set of category-relevant prompts.
  • Brand sentiment tracks whether AI responses frame you favorably, neutrally, or negatively.
  • AI-influenced traffic measures how much of your traffic comes from LLM referrals. Early data suggests this traffic converts three to five times higher than other sources, making it worth tracking even at low volume.

Several tools now let you track these metrics at scale without manually prompting LLMs. They’re worth exploring. 

But even a simple benchmark — prompting major LLMs with your target queries and tracking where and how you appear — is better than not measuring at all.
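As a minimal sketch of that kind of benchmark (the brand names, domains, and answer texts below are illustrative, and collecting the answers from your chosen LLMs is left to you), you could tally citations versus bare mentions per brand and compute share of voice:

```python
# Simple AEO benchmark sketch. All data below is illustrative.
# Given answer texts collected from LLMs for a set of target queries,
# count citations (links to an owned domain) and bare mentions (brand
# name in prose), then compute share of voice across brands.
import re
from collections import Counter

def classify_answers(answers, brand, domain):
    """Tally citations vs. bare mentions of one brand across answers."""
    stats = Counter()
    # A citation = a URL in the answer pointing at the owned domain.
    url_pattern = re.compile(r"https?://(?:www\.)?" + re.escape(domain))
    for text in answers:
        if url_pattern.search(text):
            stats["citations"] += 1
        elif brand.lower() in text.lower():
            stats["mentions_without_citation"] += 1
    return stats

def share_of_voice(per_brand_counts):
    """Each brand's share of all citations + mentions, as a fraction."""
    total = sum(sum(c.values()) for c in per_brand_counts.values())
    return {brand: (sum(c.values()) / total if total else 0.0)
            for brand, c in per_brand_counts.items()}

# Tiny illustrative run with two fake LLM answers:
answers = [
    "Acme is a popular option; see https://acme.example/pricing for details.",
    "Many teams compare Acme and Globex before deciding.",
]
acme = classify_answers(answers, "Acme", "acme.example")
globex = classify_answers(answers, "Globex", "globex.example")
print(acme)    # citation and bare-mention counts for Acme
print(share_of_voice({"Acme": acme, "Globex": globex}))
```

Even run by hand over a few dozen prompts per month, counts like these give you the citation, mention, and share-of-voice baseline the tools automate.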

How should I optimize my content for AEO?

Winning visibility in AI search doesn’t require an entirely new content playbook. But it requires retiring practices that no longer work and doubling down on principles that matter more than ever.

E-E-A-T remains the foundation

Experience, Expertise, Authoritativeness, and Trustworthiness were dominant signals in Google SEO before AI Overviews, and they remain dominant in AEO. LLMs prioritize sources that show real expertise and are trusted by other authoritative sources. 

If you earn citations from credible sites, publish content written by clear subject matter experts, and cover topics with depth and specificity, you’ll consistently outperform content that doesn’t — regardless of how well it’s optimized for other factors.

Structure and clarity have become non-negotiable

LLMs retrieve content by identifying passages that directly answer questions. If you organize content around clear questions and direct answers, use structured bullet summaries, and avoid dense paragraphs, you’re more retrievable than if you bury answers in narrative prose. 

This means making your information architecture legible to both human readers and LLM retrieval systems. Adding a Q&A section to existing content — or restructuring posts around clear question-and-answer pairs — is one of the highest-leverage updates you can make right now.
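If you restructure a post into explicit question-and-answer pairs, you can also make that structure machine-legible with schema.org FAQPage markup. The markup guarantees no particular treatment by any engine; it simply states the Q&A structure explicitly. A minimal sketch, with placeholder text drawn from this article:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What metrics matter for AEO?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Citations in AI responses, brand mentions, share of voice, brand sentiment, and AI-influenced traffic."
    }
  }]
}
```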

Human-written, human-led content has a measurable advantage

After Google’s latest core update, mass-produced AI content saw an 87% drop in rankings and citation frequency, and keyword-optimized content fell 63%. LLMs are getting better at detecting AI writing patterns and deprioritizing that content.

The pressure you felt in 2025 to produce volume with AI created a quality problem that’s now visible in performance data. The strongest strategy is quality over quantity. If you use AI, use it to draft and edit—not to generate final content. Add a review step to flag generic phrasing or a synthetic tone, whether through AI-detection tools or human editors.

Recency matters for AI citation

Answer engines look at publication and update dates when choosing sources. A well-structured, authoritative piece from 2022 can be overlooked in favor of an updated version from 2025. 

Audit your high-traffic pages and hero assets for outdated content, and refresh them with current data and examples. It’s a quick win many teams miss.

Pitchy language will not get cited

If your content reads as promotional — leading with product claims and brand-forward language — answer engines will often deprioritize it in favor of more objective sources. 

That doesn’t mean you can’t mention your product or brand. It means you should write about it the way a neutral third party would: acknowledge tradeoffs, provide context, and let the facts make the case. Listicles and comparison articles work especially well here. 

AI systems respond to structured, objective comparisons—even when one option is clearly favored.

Outside of my owned channels, what content performs well in AEO? 

One clear pattern in how LLMs decide which brands to mention: they look for consensus across multiple sources, not just your content. If you appear only on your own blog, you’ll lose to a brand with fewer owned assets but stronger third-party coverage.

That makes your external content ecosystem a strategic priority. Reviews on G2, Capterra, Google, and similar platforms are often used in AI training. User-generated content on Reddit and other forums is heavily indexed. Third-party articles, tutorials, YouTube videos, and newsletter mentions all build the multi-source consensus that gets you cited in AI answers.

Content partnerships deserve focused attention. When you sponsor articles or newsletter placements with relevant publications, you do two things: drive referral traffic outside search and earn trusted external citations that boost AI visibility. Newsletter readership is growing as audiences seek curated, human-authored content. YouTube citations are especially strong and increasing, and ChatGPT shows a documented preference for citing authoritative video creators.

The goal isn’t to manufacture mentions. It’s to tell a consistent story about your brand across credible external sources so LLMs encounter that story repeatedly. Consistency across partners, review platforms, and third-party content compounds your AI share of voice.

How do I build landing pages that convert traffic better? 

With organic traffic down 30% or more, the visitors who reach your site are more valuable and more intentional than in past years. That makes conversion optimization on key landing pages more important.

The principle is simple: one offer, one message, minimal copy. 

Each landing page should have a single call to action and a single argument. If you have multiple conversion goals, create multiple landing pages — not one page trying to do everything. 

Your header should capture the full value proposition. Supporting points should be brief. A visitor should understand the offer and act without scrolling.

This differs from blog and thought leadership content, which should be detailed, well sourced, and structured for LLM retrieval. The two serve different purposes and require different standards. Conversion-focused landing pages aren’t the place for nuance or extended prose.

The takeaway

The traffic decline isn’t a temporary setback that will correct itself. Users are getting answers from AI instead of clicking through to websites, and that behavior will intensify. A content strategy built only around ranking for clicks is no longer enough.

What replaces it is a dual mandate: optimize to be cited by answer engines and build the external brand presence that gives LLMs reason to mention you consistently. These goals align with what you should’ve been doing all along — publishing clear, authoritative, well-structured content grounded in real expertise.

The brands that will win in AI-driven discovery are the ones doing the fundamentals well: building real credibility, earning trusted external mentions, and writing for readers instead of algorithms. 

That was always the right approach. AI search has simply made it mandatory.


Written by Tim Burke and Lauren Yanez

Google’s undocumented method to disavow a whole TLD

9 March 2026 at 14:09

John Mueller from Google said you can block a complete TLD (top-level domain) using the link disavow tool. He said it is not something Google documents because “Given how big of a hammer it is, I don’t know if it’s something we should really suggest in the docs.”

How it works. All you need to do is use the syntax “domain:abc” in the disavow file. Mueller posted this on Bluesky, saying:

  • “If you’re sure that it’s what you want to do, you can use “domain:abc” in the disavow file. Keep in mind that you can’t carve out specific domains if you like some, but if you find the TLD is almost only annoying spammers, it’ll save you time.”

He later added:

  • “Given how big of a hammer it is, I don’t know if it’s something we should really suggest in the docs. I’m sure all TLDs have some good sites.”
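As a concrete illustration (the domains and TLD below are hypothetical), a disavow file can mix the documented per-domain syntax with the TLD-level syntax Mueller describes:

```
# Disavow individual spam domains (documented syntax)
domain:spam-site.example
# Disavow every site on a hypothetical TLD (undocumented, per Mueller)
domain:abc
```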

Why we care. If one TLD is a persistent source of spammy links, you can go ahead and disavow the whole TLD. But it is usually better to be selective about how you use the disavow file rather than blocking entire TLDs wholesale.

For more on the disavow link file, see this help document.


10 Reddit comment frameworks that drive engagement without sounding like ads

8 March 2026 at 20:00
Reddit logo displayed on smartphone screen

Most people fail on Reddit because they write comments like ads. Reddit eats ads for breakfast. It’s better to follow a comment framework proven thousands of times over to earn engagement and build visibility and awareness.

The winning move? Be useful first, be human, then casually exist as a company.

Below are 10 proven comment frameworks we see working every single day for our clients. These aren’t scripts: they’re thinking patterns. Follow the structure, swap in your context, and your comments will feel native instead of needy.

1. The ‘been there done that’ comment

When to use it: Someone is struggling or asking how to do something you already solved.

Framework:

  • Start with personal experience.
  • Share the mistake you made.
  • Share what finally worked.
  • Optional soft mention at the end.

Example:

  • “I ran into this exact issue last year. We tried brute forcing it at first and wasted a ton of time. What finally worked was narrowing the scope and fixing one variable at a time. Once that clicked, everything sped up. We ended up building a small internal tool for it, but honestly the mindset shift mattered more than the tool itself.”

Why it works: You’re relatable first, helpful second, promotional last. Reddit rewards vulnerability over authority.

2. The counterintuitive insight

When to use it: A thread where everyone is repeating the same advice.

Framework:

  • Acknowledge the common advice
  • Gently challenge it
  • Explain why it fails
  • Offer a smarter alternative

Example:

  • “A lot of people say to just throw more money at ads here, but that actually made things worse for us early on. The real unlock was fixing the messaging before scaling anything. Once we did that, even small campaigns started working. That lesson ended up shaping how we approach this for clients now.”

Why it works: Reddit loves contrarian thinking when it’s earned through experience, not just hot takes.

Dig deeper: How to build an organic Reddit strategy that drives SEO impact

3. The tactical mini playbook

When to use it: Someone asks how to do something step by step.

Framework:

  • Give a short numbered list (three to five steps maximum).
  • Keep it practical and actionable.
  • Stop before it turns into a course.
  • Mention your company as context, not pitch.

Example:

  • “What worked for us looked like this:
    1. We picked one channel instead of five.
    2. We tracked only one metric for thirty days.
    3. We documented what actually moved the needle.
  • After doing this a few times, we realized most people skip step two. That insight is basically why we built our process the way we did.”

Why it works: Clear value without overwhelming anyone. People can implement immediately.

4. The mistake warning

When to use it: Someone is about to make a common and expensive mistake.

Framework:

  • Validate their plan first.
  • Warn them about one specific pitfall.
  • Explain exactly how to avoid it.
  • Light credibility hint without bragging.

Example:

  • “This can work, but one thing to watch out for is scaling too early. We made that mistake and burned a few months before realizing it. If I were doing it again, I would test manually first before automating anything. That lesson came from doing this across a lot of campaigns.”

Why it works: You sound like a guide who’s walked the path, not a salesman with an agenda.

5. The data point drop

When to use it: A discussion that’s heavy on opinions and light on facts.

Framework:

  • Drop one real, specific data point.
  • Explain what it changed for you.
  • No links unless someone asks.
  • Keep the number believable, not boastful.

Example:

  • “One interesting data point from our side: When we switched from generic responses to context-specific replies, engagement nearly doubled. Same audience, same platform, different framing. That small change ended up influencing how we now coach others to comment.”

Why it works: Reddit respects numbers when they’re not flexy. Specific beats vague every time.

Dig deeper: A smarter Reddit strategy for organic and AI search visibility

6. The question flip

When to use it: You want to add value without preaching or taking over the conversation.

Framework:

  • Answer their question briefly.
  • Ask a smarter follow-up question.
  • Let the thread continue naturally.
  • Don’t hijack the conversation.

Example:

  • “This usually comes down to timing more than tools. Out of curiosity, are you trying to solve this for growth or retention? The advice changes a lot depending on that.”

Why it works: You move the conversation forward instead of hijacking it. Shows you’re thinking strategically.

7. The ‘I disagree but respectfully’ comment

When to use it: You genuinely disagree with the top comment or popular opinion.

Framework:

  • Acknowledge their point has merit.
  • Explain your different experience.
  • Offer an alternative perspective.
  • Stay humble and curious.

Example:

  • “I get why this approach works for some teams. We actually saw the opposite result when we tried it. In our case, simplifying the workflow beat adding more features. Might depend on team size, but worth testing both approaches.”

Why it works: You avoid Reddit flame wars while still standing out from the echo chamber.


8. The tool neutral recommendation

When to use it: Someone asks what tools or services to use.

Framework:

  • Mention multiple options first.
  • Explain when each makes sense.
  • Include yours as one of many choices.
  • Focus on fit, not superiority.

Example:

  • “There are a few ways to do this depending on your budget. Some people go fully manual, others use spreadsheets, and some use dedicated platforms. We landed on building our own because of volume, but for most people starting out, simplicity wins over features.”

Why it works: You don’t look biased even when you are involved. Builds trust through honesty.

Dig deeper: 4 ways to use Semrush to discover Reddit opportunities

9. The lessons learned summary

When to use it: Someone asks if something is worth trying or worth the investment.

Framework:

  • List two to three things that worked
  • List one to two things that didn’t work
  • End with a grounded, practical takeaway
  • Keep it balanced and realistic

Example:

  • “What worked for us was consistency and context-awareness. What didn’t work was blasting the same message everywhere. The biggest lesson was that Reddit rewards effort more than polish. Once we leaned into that philosophy, results followed naturally.”

Why it works: Balanced honesty builds trust fast. Shows you’ve done the work and learned from failures.

10. The quiet authority comment

When to use it: You want to establish credibility without saying exactly who you are.

Framework:

  • Speak calmly and confidently
  • Avoid hype words and superlatives
  • Reference patterns, not individual wins
  • Let experience speak through your perspective

Example:

  • “We see this question come up a lot in our work. Usually, the issue isn’t the platform but how people enter the conversation. Threads that already have momentum respond very differently than empty ones. Adjusting for that context alone fixes most engagement issues.”

Why it works: You sound like someone who has seen this movie before. Authority through pattern recognition, not bragging.

How to subtly recommend your company without getting banned

Here’s the golden rule: Your company is context, not the point.

  • Good: “We ended up building this internally, which changed how we approach it now”
  • Bad: “Check out our product, it does exactly this”

The magic happens in your profile. When your comment gets upvoted, people click through to see who you are. That’s where the real conversion happens: not in the comment itself.

The frameworks above work. But having someone implement them consistently? That’s what turns Reddit into a real growth channel.

Implemented consistently, these frameworks help brands earn visibility without ever reading like ads.

This article was originally published on LaunchClub (as 10 Reddit Comment Frameworks That Actually Win on You Visibility (Steal These for Your Brand)) and is republished with permission.

Google’s AI Mode is citing Google more than any other site: Study

7 March 2026 at 00:16
Google Search loop

Google’s AI Mode is increasingly citing Google itself — and often sending users back to another Google search, according to new SE Ranking research.

Why we care. AI search is meant to surface the best sources on the web. If Google increasingly cites itself, you may see fewer direct links and less traffic as more users stay inside Google.

The details. Google.com was the most cited source in AI Mode answers, accounting for 17.42% of all citations, SE Ranking found.

  • That makes Google.com the most referenced domain — more than the next six domains combined: YouTube, Facebook, Reddit, Amazon, Indeed, and Zillow.

Accelerating trend. In June 2025, Google cited itself in just 5.7% of AI Mode answers. That share has since roughly tripled.

  • Nearly one in five AI citations now comes from Google. Including YouTube, Google-controlled properties account for roughly 20% of sources.

Self-preferencing on steroids. AI Overviews already link heavily to Google properties like Maps, Images, and YouTube. AI Mode appears to extend that approach by pushing users deeper into Google’s ecosystem, often through additional search results rather than external sites.

  • This keeps users interacting with Google surfaces where ads, reviews, and other monetized content appear.

What changed. Earlier AI Mode research showed Google mainly citing Google Business Profiles. That’s no longer the case:

  • 59% of Google citations now point to traditional Google search results.
  • 36.1% still reference Google Business Profiles.
  • Smaller shares link to Google Support (1.7%), Google Flights (0.1%), and other Google properties.
  • In many cases, AI Mode citations now show a mini search results panel beside the answer — effectively turning the citation into another search experience.

Industry differences. Google dominates citations across most topics. Some niches rely on Google even more:

  • Travel: 53.18% of citations
  • Entertainment & hobbies: 48.74% of citations
  • Real estate: 30.54% of citations

The only category where Google wasn’t the top source was Careers and Jobs, where Indeed appeared 3.1x more often than Google.

About the data. SE Ranking analyzed 68,313 keywords across 20 industries and more than 1.3 million AI Mode citations to measure how often Google.com appears as a cited source.

The report. Is Google stealing your clicks in AI Mode? (1.3M+ citations analyzed)

The latest jobs in search marketing

6 March 2026 at 23:53
Search marketing jobs

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • Job Description Crusoe is on a mission to accelerate the abundance of energy and intelligence. As the only vertically integrated AI infrastructure company built from the ground up, we own and operate each layer of the stack — from electrons to tokens — to power the world’s most ambitious AI workloads. When you join Crusoe, […]
  • Job Description Direct Agents is on the search for an SEO Analyst in our NYC office, who will assess and develop SEO strategy for a variety of mid to large-sized clients, and act as SEO expert for both internal and external teams. Who We Are Direct Agents is not a traditional agency. We are an […]
  • At NerdWallet, we’re on a mission to bring clarity to all of life’s financial decisions and every great mission needs a team of exceptional Nerds. We’ve built an inclusive, flexible, and candid culture where you’re empowered to grow, take smart risks, and be unapologetically yourself (cape optional). Whether remote or in-office, we support how you […]
  • About J&Y Law J&Y Law is a leading California plaintiff’s personal injury and elder abuse law firm headquartered in Los Angeles with multiple offices statewide. We are dedicated to protecting the rights of those injured through negligence and delivering the highest quality legal representation and client service. Our culture emphasizes Client Service, Quality Work Product, […]
  • Job Description Content Marketing Manager Location: El Segundo, California, USA (HQ) Reports To: Director of Marketing Role Type: Full-Time, On-Site Compensation: $75,000-$90,000 annually About QuikStor QuikStor is the leading SaaS facility management platform for the self-storage industry, delivering a purpose-built, scalable system that serves as the foundation for intelligent automation and modern facility operations. We […]
  • (Hybrid) Description We are expanding our marketing team and seeking a Content Marketing Specialist to play a key role in executing our marketing strategy. You will own the creation and publication of content across blogs, social media, email campaigns, and the company website, helping bring DMC’s brand to life across digital channels. A portfolio of […]
  • Job Description Position Title: E-commerce and SEO Specialist Compensation Range: $55,000 – $75,000 Location: Hybrid / On-site – Englewood, CO About GOLFTEC Enterprises: GOLFTEC Enterprises is a dynamic, technology-driven leader in the golf industry, uniting two premier brands—GOLFTEC and SKYTRAK—with a shared mission: to help people play better golf. GOLFTEC, the world leader in golf […]
  • The Role Wpromote is seeking a Senior Technical SEO Manager dedicated exclusively to the Southwest Airlines account. This isn’t a typical SEO role — it’s an opportunity to shape how a leading travel company competes in a transforming search landscape. You’ll be a key player focused on organic discoverability, cross-channel collaboration, and measurable revenue impact. […]
  • Our digital marketing agency helps multi-location home service brands generate leads across dozens of local markets. Our flagship client is a PE-backed home services company operating 40+ locations across the U.S. and Canada under several brands, and we are expanding our SEO team to support rapid growth. We are looking for a hands-on SEO Manager […]
  • Job Description Salary: $45,000-$50,000 DOE Position Summary We are looking for a creative and motivated Content Marketing Specialist to join our team in the roofing industry. This role is ideal for someone with at least one year of marketing experience who thrives in a fast-paced environment and enjoys creating visually engaging, results-driven content. You will […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • New York, NY We are currently seeking a Paid Search Manager for a rapidly growing media agency in NYC. This is an amazing opportunity for someone to make a name for themselves in the industry with this progressive and growing agency. The position will work across various high-profile national and global accounts, supporting account teams […]
  • Lamark Media (“Lamark”) is an integrated digital marketing firm driven by a simple philosophy: create extraordinary marketing campaigns that yield positive, measurable results for their clients and strategic partners. Lamark’s methodology is to create a custom omni-channel strategy that leverages digital marketing assets like a portfolio which can be measured, optimized, and scaled for long-term […]
  • Paid Search Marketing Manager We’re currently hiring a Paid Search Marketing Manager to join our growing remote team. Lead high-scale SEM programs across Google Ads, Bing Ads, and Local Services Ads (LSA) for a rapidly growing multi-location business. You’ll own strategy + execution, turning analysis into performance gains through rigorous testing, optimization, and KPI-driven decisions. […]
  • Add3 seeks a Paid Search Marketing (SEM) Account Manager who will be responsible for account optimization and creation of new campaigns across multiple client accounts leveraging industry best practices. The individual will support overall efforts and deliver on client needs while recommending new opportunities for account growth. This position will report to the pod/account director. […]
  • The Role Wpromote is looking for a sharp Paid Social Manager to manage scalable full-funnel paid social advertising campaigns across a portfolio of clients. This is a hands-on role: build campaign strategy, execute high-quality activations, optimize to business KPIs, and partner with cross-functional counterparts such as Creative leads to test social-first creative as well as […]

Other roles you may be interested in

Advertising Media Manager, Vetoquinol USA (Remote)

  • Salary: $100,000 – $110,000
  • Develop and implement strategic advertising plans for Etail (Ecomm/Retail) accounts.
  • Analyze advertising performance data with related ROAS and TACoS evaluations.

Programmatic Advertising Manager, We Are Stellar (Remote)

  • Salary: $75,000
  • Manage the day-to-day programmatic campaign approach, execution, trafficking optimization, and reporting across the relevant DSPs for your clients.
  • Build and present directly to client stakeholders programmatic campaign performance, analysis, and insights.

Marketing Manager, Backstage (Remote)

  • Salary: $100,000 – $140,000
  • Manage and optimize campaigns daily across Meta Ads, Google Ads, and other key partners
  • Own forecasting, pacing, budget allocation, and optimization for high-scale monthly budgets.

Demand Generation Manager, Shoplift (Remote)

  • Salary: $100,000 – $110,000
  • Design and execute inbound-led outbound campaigns—reaching prospects who’ve shown intent (visited pricing page, downloaded resources, engaged with content) at precisely the right moment
  • Build and optimize Apollo sequences, LinkedIn outreach, and multi-touch campaigns that book qualified demos for AEs

Search Engine Optimization Manager, Confidential (Hybrid, Miami-Fort Lauderdale Area)

  • Salary: $75,000 – $105,000
  • Serve as a strategic SEO partner for client accounts, translating business goals into actionable search initiatives
  • Communicate SEO insights, priorities, and performance clearly to clients and internal stakeholders

Meta Ads Manager, Cardone Ventures (Scottsdale, AZ)

  • Salary: $85,000 – $100,000
  • Develop, execute, and optimize cutting-edge digital campaigns from conception to launch
  • Provide ongoing actionable insights into campaign performance to relevant stakeholders

Senior Manager of Marketing (Paid, SEO, Affiliate), What Goes Around Comes Around (Jersey City, NJ)

  • Salary: $125,000
  • Develop and execute paid media strategies across channels (Google Ads, social media, display, retargeting)
  • Lead organic search strategy to improve rankings, traffic, and conversions

Paid Search Marketing Manager, LawnStarter (Remote)

  • Salary: $90,000 – $125,000
  • Manage and optimize large-scale, complex SEM campaigns across Google Ads, Bing Ads, Meta Ads and other search platforms
  • Activate, optimize and make efficient Local Services Ads (LSA) at scale

Senior Manager, SEO, Turo (Hybrid, San Francisco, CA)

  • Salary: $168,000 – $210,000
  • Define and execute the SEO strategy across technical SEO, content SEO, on-page optimization, internal linking, and authority building.
  • Own business and operations KPIs for organic growth and translate them into clear quarterly plans.

Search Engine Optimization Manager, NoGood (Remote)

  • Salary: $80,000 – $100,000
  • Act as the primary strategic lead for a portfolio of enterprise and scale-up clients.
  • Build and execute GEO/AEO strategies that maximize brand visibility across LLMs and AI search surfaces.

Note: We update this post weekly. So make sure to bookmark this page and check back.

OpenAI’s big ChatGPT Instant Checkout plan just changed

6 March 2026 at 23:48
AI shopping

OpenAI is backing away from putting checkout directly inside ChatGPT. Instead, purchases will shift to retailer apps that connect to ChatGPT, The Information reported.

Why we care. ChatGPT aims to be more than a discovery engine. Right now, though, product discovery inside ChatGPT is gaining traction faster than purchases. That suggests AI-powered shopping is only influencing the consideration stage (at least for now), not driving conversions.

What happened. OpenAI had planned to let shoppers buy products directly from listings in ChatGPT search results. Instead, an OpenAI spokesperson said that Instant Checkout is moving to Apps, where purchases happen inside connected services rather than natively in ChatGPT.

  • The company will now prioritize product search and discovery inside ChatGPT.
  • It will also keep working with Stripe on the Agentic Commerce Protocol to support app-based transactions.

What changed. OpenAI found that users research products in ChatGPT but don’t complete purchases there. Only a small number of merchants were actively using native ChatGPT checkout, according to the report.

  • In September, OpenAI positioned Instant Checkout as a big commerce opportunity. At the time, it said U.S. users could buy from Etsy sellers inside ChatGPT, with plans to expand to Shopify merchants, add multi-item carts, and roll out beyond the U.S.

Meanwhile. Shopify president Harley Finkelstein said this week that only about a dozen Shopify merchants were using AI tools, despite Shopify supporting integrations with ChatGPT, Gemini, and Copilot. That’s tiny relative to Shopify’s overall merchant base.

What to watch. Can OpenAI make ChatGPT more valuable as a shopping discovery engine without owning the final transaction? Also, how does OpenAI’s commerce strategy intersect with its advertising ambitions? If transactions stay outside ChatGPT, monetizing product discovery through ads could become even more important.

Why this is happening. Two forces are slowing agentic commerce, according to Leigh McKenzie, director of online visibility at Semrush: infrastructure and trust. Real-time catalog normalization across tens of millions of SKUs is a decade-scale problem Google already solved with Merchant Center, and consumers still default to checkout flows they trust — Apple Pay, Google Wallet, and Amazon one-click.

The report. OpenAI Scales Back Shopping Plans for ChatGPT (subscription required)

Google’s Liz Reid: Search and Gemini may converge, or diverge further

6 March 2026 at 22:56
AI future paths

Google’s Liz Reid, VP and head of Search, drew a clearer line between Google Search and Gemini but said it’s still unclear whether the products will converge, diverge further, or be superseded.

The big picture. Reid said Search is an information product focused on helping people connect with the web, while Gemini is centered more on assisting with productivity and creation. She added that the boundaries are fluid, especially as AI products evolve quickly and agentic experiences reshape how people use the internet.

What she’s saying. In short, Reid said Search and Gemini share technology but have different product “north stars.” They could overlap more over time, but the eventual long-term direction is still open. Here’s what she said in an interview on Access Podcast:

  • “I don’t know the answer is the short answer.”
  • “I think what we see is some areas they’re converging more and some areas they’re diverging more, right? And like and so what are they going to net out? Like do the areas that diverged eventually all come or do the areas that diverge become even bigger over time? I think we’ll see.”
  • “So I don’t know in in all honesty, but I think we are right now at a point where depending on what angle you look at, you’d think they’re getting closer or they’re getting further apart.”
  • “Who knows, maybe agents will mean like the right product is neither of the two of them is a third product altogether that they merge into. I don’t know yet.”

Gemini vs. Search. Here’s the distinction Reid made:

  • On Gemini: “Gemini’s focus is on sort of being this assistant and so it tends to lean in more heavily on things like productivity or creation, right?”
  • On Search: “Search is more information based and it believes that often in those information use cases you also want to connect and hear from other people. And so how do you bring out the web?”

Agents and the web’s future. Reid also said Google expects a future with more agent-to-agent internet activity, not just humans browsing directly.

  • “I certainly think the there will be a world in which sort of agents are doing a lot of interaction on the internet, not just people.”
  • “I do think probably means there’s a world in which a lot of agents are talking with each other, and not just with humans going forward as we evolve.”

Google vs. ChatGPT. Reid pushed back on the idea that AI is a simple winner-take-all battle between Google and ChatGPT.

  • “I don’t know, by the way, that we’re going to end up in a world where there’s only one product, right?”
  • “I think what we’re seeing is like simultaneously people are adopting more tools and search is growing, right? because the the possibility of the tech is just allowing many more questions.”

Trusted sources. Reid also said Google wants to do more to surface sources users trust or pay for.

  • “I think one thing Google is trying to do a lot more of and we’ve taken small steps so far but want to do more. How do you help when there is that relationship?”

She pointed to Google’s Preferred Sources feature and broader subscription-aware experiences:

  • “If you love this source and you do have a relationship with it then that content should surface more easily for you on Google.”
  • “We should surface the the one that they’re paying for and not the six that they can’t get access to more.”

Why we care. Reid’s comments suggest Google hasn’t settled on Search’s long-term role in an AI-first ecosystem. So keep watching closely as AI assistants, agents, and search results evolve.

The interview. What happens to Google when AI answers everything? with Liz Reid


Google’s search chief says the line between web discovery and AI assistants is still unsettled as agents and new behaviors reshape the web.

Google contacts advertisers with a mandatory EU political ads deadline

6 March 2026 at 20:47

Google is reaching out directly to advertisers via email, requiring them to confirm whether their campaigns contain EU political ads — with a hard deadline of March 31st.

Why we care. This isn’t optional. EU regulation now requires Google to verify political ad status across all active campaigns, and advertisers who don’t act before the deadline could face compliance issues.

What’s happening. Google is asking every advertiser to declare whether their existing campaigns include EU political ads. The requirement applies to all current campaigns and must be completed by March 31, 2026.

How to comply. Google has outlined three ways to submit the confirmation:

  • Campaign level — Go to Campaign Settings and select “EU political ads” to confirm individual campaigns.
  • Multiple campaigns — Go to the Campaigns tab and use the “EU political ads” option to confirm several at once.
  • Account level — Confirm for all new and existing campaigns in one go. Selecting “No” at account level automatically applies that answer to every campaign, including future ones. You can still override this for individual campaigns at any time.

Between the lines. The account-level option is the most efficient route for most advertisers who are confident none of their campaigns fall under the EU political ads definition. Google has made it straightforward to reverse or adjust the selection at any point, so there’s no risk in acting early.

The bottom line. Check your inbox — Google is contacting advertisers directly. If you run campaigns targeting EU audiences, log in and complete the confirmation before March 31st to stay compliant.

First seen. This update was spotted by Paid Search expert Arpan Banerjee, who shared the details on LinkedIn.

How structured data supports local visibility across Google and AI

6 March 2026 at 20:00
Why schema matters more for local SEO in the AI search era

Until a few years ago, schema helped search engines extract basic facts and display visual enhancements like star ratings and sitelinks. 

However, in the AI-driven search world, schema plays a different and fundamental role for local SEO, helping Google and other AI systems understand who you are, what you do, where you operate, and how confidently your information can be reused.

Directly improving rankings is no longer the point. Instead, schema helps reduce confusion for Google and reinforces your business as a stable, trustworthy local entity across traditional search, local packs, AI Overviews, rich results, and external AI platforms.

Let’s dig into how schema helps local SEO in the AI search world.

How Google handles conflicting structured data

Google triangulates across multiple data points to understand a business and pull information into a search result:

  • On-page content.
  • Internal linking and site structure.
  • Google Business Profiles.
  • Citations and directories.
  • Reviews and reputation signals.
  • Schema markup.

When these signals align, Google’s confidence in your information increases. When they contradict each other, your correct information might not be pulled into search.

When structured data contradicts on-page content, Google Business Profile data, citations, or reviews, Google doesn’t attempt to reconcile the difference — it discounts the markup and often ignores the information altogether.

For example, consider a law firm that marks up:

  • Operating hours that differ from their GBP.
  • “Free consultation” in their schema, but not on the landing page.
  • Attorneys who are no longer listed on the “Our Team” page.

Each of these creates friction, leading to mixed signals for AI systems and search engines. One conflict may be ignored, but multiple conflicts can compound and result in lost search visibility for the whole site. 
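To make the hours conflict concrete, here is a minimal, hypothetical snippet (business name and times invented for illustration) where the marked-up hours disagree with what the firm’s Google Business Profile lists:

{
  "@context": "https://schema.org",
  "@type": "LegalService",
  "name": "Example Law Firm",
  "openingHoursSpecification": [{
    "@type": "OpeningHoursSpecification",
    "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
    "opens": "09:00",
    "closes": "18:00"
  }]
}

If the Google Business Profile says the office opens at 8:30, Google now has two competing claims about the same fact, and the markup is the claim most likely to be discounted.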

False positives: The silent performance killer

False positives occur when schema asserts something that isn’t fully supported by other signals. 

Common examples include:

  • Marking a business as a medical provider without appropriate credentials.
  • Applying Person schema to non-professionals.
  • Using Product schema for services.

False positives are particularly damaging in AI-driven systems. AI models are conservative when confidence is low — if information appears inconsistent or exaggerated, it’s less likely to be reused or cited. 

Review and rating schema

When review markup contradicts visible content, Google doesn’t “average” the signals; it ignores the schema altogether.

If you mark up “5 stars” but your Google Business Profile shows “4.2 stars,” or if you mark up reviews that aren’t visible on the page, the signal gets confused.

Note: Google strictly prohibits marking up third-party reviews, such as those from Yelp, Google Maps, or Avvo, as your own Review schema. You can only mark up first-party reviews, meaning reviews collected directly by your site and clearly visible to the user. For details, refer to Google’s specific guidelines on self-serving reviews.
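As a rough sketch of compliant first-party markup (all names, counts, and ratings hypothetical), the aggregate rating asserted in schema should match the rating visible on the page and stay consistent with your other profiles:

{
  "@context": "https://schema.org",
  "@type": "LegalService",
  "name": "Example Law Firm",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.2",
    "reviewCount": "87"
  },
  "review": [{
    "@type": "Review",
    "author": { "@type": "Person", "name": "J. Smith" },
    "reviewRating": { "@type": "Rating", "ratingValue": "5" },
    "reviewBody": "They kept me informed at every stage of my case."
  }]
}

The review here is one collected and displayed by the site itself, not copied in from Yelp or Google Maps, and both the rating value and the review text should appear on the page that carries the markup.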

How other AI platforms use schema

Google is the most prominent platform, but AI is also integrated into assistants like Siri and Alexa, retrieval-based platforms like ChatGPT search, and much more.

To pull information, they need to determine if:

  • Two references describe the same business.
  • Information is current.
  • A source is authoritative.

While external AI platforms do not necessarily parse schema the same way Google does, structured data contributes to clearer entity representation across the web. 

Importantly, these other systems tend to be less forgiving than Google when data is inconsistent: if confidence in the entity is low, the business may be excluded from results entirely.

Dig deeper: The local SEO gatekeeper: How Google defines your entity


What is the search environment for local businesses now?

To understand why schema matters more now than it did five years ago, it’s important to understand how fragmented search has become. 

Local businesses no longer only surface in a single list of 10 blue links (the SERP). They appear across multiple interfaces, often simultaneously:

  • Traditional organic search results.
  • Local packs and Maps results.
  • Knowledge panels.
  • Rich results and enhanced listings.
  • AI Overviews.
  • Conversational and agent-based AI platforms.

Schema doesn’t guarantee visibility on any platform — it helps AI systems decide if your business information is reliable enough to reuse. 

For example, when Google generates an AI Overview, it synthesizes information from multiple sources. Schema helps ensure Google understands exactly who you are and how your business information connects to your services, locations, and employees, so that your target audience can find you.

New SEO metrics for local businesses

Site performance is still often measured using metrics like keyword rankings, organic traffic, and conversions. These metrics aren’t wrong, but they are incomplete. 

Local businesses now need to think about:

  • Visibility in AI Overviews and AI-generated answers.
  • Stability in the local pack over time.
  • Accuracy and persistence in knowledge panels.
  • Correct attribution when AI systems summarize local providers.
  • Reduced volatility during core and local algorithm updates.

If a local service business appears more frequently in AI-generated answers for informational and service-related queries, its brand visibility will improve, but it may see organic clicks stagnate or decline.

But there’s no need to panic.

In reality, what’s happening is a shift in how demand is fulfilled. In these scenarios, schema doesn’t create visibility; it helps ensure the business is represented accurately when it’s surfaced.

Dig deeper: GEO x local SEO: What it means for the future of discovery

Types of schema for local SEO

For local service-based businesses, a limited set of schema types is all you need to give your business visibility. Implementing too many types can lead to a bloated, templated markup that introduces contradictions.

Let’s look at an example law firm and how they might implement different types of schema.

Subtype schema

Subtypes help Google and AI systems categorize businesses correctly and align them with the right expectations. A personal injury firm, a corporate law practice, and a family law mediator should not all be described the same way.

Effective LegalService schema should clearly answer four questions:

  • Who the firm is.
  • What type of law they practice.
  • Where they operate.
  • How they can be contacted.

This markup aligns directly with what users see on the page, what exists in Google Business Profiles, and what appears in legal directories like Avvo or Martindale-Hubbell.

Example: LegalService markup

{
  "@context": "https://schema.org",
  "@type": "LegalService",
  "@id": "https://www.example-law.com/locations/dallas/#location",
  "name": "Example Law Group Dallas",
  "url": "https://www.example-law.com/dallas/",
  "telephone": "+1-214-555-0100",
  "priceRange": "$$$",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "100 Main St, Suite 400",
    "addressLocality": "Dallas",
    "addressRegion": "TX",
    "postalCode": "75201",
    "addressCountry": "US"
  },
  "geo": {
    "@type": "GeoCoordinates",
    "latitude": 32.7767,
    "longitude": -96.7970
  },
  "openingHoursSpecification": [{
    "@type": "OpeningHoursSpecification",
    "dayOfWeek": ["Monday","Tuesday","Wednesday","Thursday","Friday"],
    "opens": "08:30",
    "closes": "17:30"
  }],
  "sameAs": [
    "https://www.facebook.com/examplelawdallas",
    "https://www.linkedin.com/company/example-law-group",
    "https://www.avvo.com/attorneys/example-profile"
  ]
}

You can view the full list of specific subtypes in the Schema.org LegalService definition.

Organization schema

Organization schema defines the parent entity behind locations, practitioners, and services. LocalBusiness (or LegalService) defines the physical location. This distinction becomes critical as companies scale, rebrand, or operate across multiple markets.

Without a clear Organization layer, Google may treat each location as a standalone entity. That can lead to fragmented knowledge panels, inconsistent brand attribution, and inaccurate AI citations.

Example: Graph-based hierarchy

{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://www.example-law.com/#org",
      "name": "Example Law Group",
      "url": "https://www.example-law.com/",
      "logo": "https://www.example-law.com/logo.png",
      "knowsAbout": ["Personal Injury Law", "Medical Malpractice"]
    },
    {
      "@type": "LegalService",
      "@id": "https://www.example-law.com/locations/dallas/#location",
      "name": "Example Law Group Dallas",
      "parentOrganization": { "@id": "https://www.example-law.com/#org" },
      "address": {
        "@type": "PostalAddress",
        "streetAddress": "100 Main St, Suite 400",
        "addressLocality": "Dallas",
        "addressRegion": "TX",
        "postalCode": "75201",
        "addressCountry": "US"
      }
    }
  ]
}

Dig deeper: Schema and AI Overviews: Does structured data improve visibility?

Person schema

For legal and professional service businesses, Person schema reinforces expertise and real-world credibility (E-E-A-T). Used incorrectly, it creates false authority signals that Google will ignore.

Person schema should only be applied when:

  • The professional has a visible bio on the site.
  • Bar admissions and credentials are clearly displayed.
  • Their relationship to the firm is real and current.

This helps Google and AI systems associate legal expertise with the firm rather than just its content. It also reduces the risk of misattribution when AI systems summarize legal advice.

Example: Attorney bio markup

{
  "@context": "https://schema.org",
  "@type": "Person",
  "@id": "https://www.example-law.com/attorneys/jane-doe/#person",
  "name": "Jane Doe, Esq.",
  "jobTitle": "Senior Partner",
  "worksFor": { "@id": "https://www.example-law.com/#org" },
  "affiliation": { "@id": "https://www.example-law.com/locations/dallas/#location" },
  "alumniOf": "Harvard Law School",
  "knowsAbout": ["Tort Law", "Civil Litigation"],
  "sameAs": [
    "https://www.linkedin.com/in/janedoe-law",
    "https://www.statebar.tx.us/member/janedoe"
  ]
}

Service and product schema

For law firms, consultants, and agencies, Service schema, particularly the OfferCatalog structure, is more appropriate and accurate than Product.

Using OfferCatalog allows you to create a “menu” of services that AI systems can parse to understand the breadth of your expertise. This helps AI systems understand what the business actually offers without overreaching.

Example: OfferCatalog for legal services

{
  "@context": "https://schema.org",
  "@type": "LegalService",
  "@id": "https://www.example-law.com/locations/dallas/#location",
  "hasOfferCatalog": {
    "@type": "OfferCatalog",
    "name": "Legal Services",
    "itemListElement": [
      {
        "@type": "Offer",
        "itemOffered": {
          "@type": "Service",
          "name": "Personal Injury Consultation",
          "description": "Free case evaluation for auto accidents and workplace injuries."
        }
      },
      {
        "@type": "Offer",
        "itemOffered": {
          "@type": "Service",
          "name": "Medical Malpractice Litigation",
          "description": "Representation for victims of surgical errors and misdiagnosis."
        }
      }
    ]
  }
}

FAQPage schema

Originally, FAQPage schema helped search engines understand common questions and answers on a page. In an AI-driven search environment, well-written FAQs help define what a business does, what it doesn’t do, and what a user should expect. It helps AI systems as they look for boundaries, clarification, and intent resolution.

Example: AI-aligned FAQ schema

{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Do I have to pay a retainer for a personal injury case?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "No. We operate on a contingency fee basis, meaning you only pay legal fees if we win a settlement or verdict for you."
      }
    }
  ]
}

In AI Overviews, these answers may be paraphrased or summarized, but schema helps ensure the underlying meaning remains intact.

Schema maintenance: Why ‘set it and forget it’ fails

Schema is often implemented during a site launch or redesign, only to be ignored afterward. 

But businesses change constantly. Hours shift, locations open or close, staff turnover occurs, and services evolve. When schema isn’t updated to reflect these changes, inconsistencies are introduced that can erode information signals over time.

A sustainable schema strategy involves two steps:

  • Quarterly audit: Set a recurring calendar reminder to audit your schema code against your live site. Check for syntax errors, broken @id references, and deprecated properties.
  • Trigger-based updates: Establish a rule that whenever a “fact” changes in your business (e.g., you update your holiday hours on your Google Business Profile, or a partner leaves the firm), the schema should be updated immediately.
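As one example of a trigger-based update, a holiday closure can be expressed with Schema.org’s specialOpeningHoursSpecification property instead of rewriting the regular hours. The dates below are hypothetical, and the @id reuses the Dallas location entity from the earlier examples; Google’s local business documentation uses matching 00:00 opens/closes values to indicate a day the business is closed:

{
  "@context": "https://schema.org",
  "@type": "LegalService",
  "@id": "https://www.example-law.com/locations/dallas/#location",
  "specialOpeningHoursSpecification": [{
    "@type": "OpeningHoursSpecification",
    "validFrom": "2026-12-24",
    "validThrough": "2026-12-25",
    "opens": "00:00",
    "closes": "00:00"
  }]
}

Once the holiday passes, remove this block, the same way you would revert holiday hours in your Google Business Profile.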

Dig deeper: Local SEO sprints: A 90-day plan for service businesses in 2026

Schema is necessary in the AI search world

Structured data now acts as a trust signal, helping search engines and AI systems determine whether business information is accurate, consistent, and reliable enough to reuse at scale.

Schema that reinforces your correct information supports visibility across traditional search, local results, and AI-driven experiences. Inaccurate or outdated schema can hurt your company’s visibility.

Break down data silos: How integrated analytics reveals marketing impact

6 March 2026 at 19:00
Break down data silos- How integrated analytics reveals marketing impact

Can you answer the question every marketing leader dreads hearing from leadership: “Why isn’t our marketing effort doing more?”

How do you even go about answering that?

Let’s look at what I mean using a fictional location analytics company we’ll call Acme Area Analytics.

The Acme team reviews its reports. Nothing appears broken. Campaigns are running, leads are still coming in, and performance metrics are mostly stable. Yet sales momentum isn’t clearly accelerating, and it’s hard to pinpoint why.

Insights are scattered across site analytics, brand monitoring and SEO tools, CRM systems, and paid media dashboards. Each platform reflects part of the story, but none shows the full picture.

That fragmentation is exactly how well-intentioned “data-driven decisions” can go wrong. Let’s look at how that happens and how Acme, and you, can fix it.

When the data points in the wrong direction

In global, multi-channel campaigns like Acme Area Analytics’, the hardest moments are when nothing is obviously underperforming. Digital channels are running. Leads are coming in, and metrics are mostly stable, yet sales momentum is stalled and it’s unclear which lever to pull next.

At the same time, subtle signals raise concerns. Non-brand CPCs are creeping upward, and a competitor — Spotter Intelligence — is suddenly appearing more frequently in branded search.

Let’s say you’re part of the Acme marketing team. You go back to your reports and ask the question most marketers ask in this situation: Which tactic is underperforming?

When diving into the platform data, you uncover what looks like a clear answer: remarketing performance for your API has softened, conversion rates have dipped slightly, and efficiency has begun to decline.

On the surface, you have your answer. Spend should be pulled back to match demand because audiences have likely seen the creative too many times.

That decision could certainly make sense, and it’s what many teams actually end up doing. But it’s also often wrong. Why? Because you haven’t yet asked the right question.

The more useful question is harder to answer: “Is demand actually declining, or are we failing to create new interest upstream?”

Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts

The insight appears when you look across systems

The real issue becomes clear when you look beyond a single channel. The location analytics market still had strong growth potential, but your product was running short of engaged audiences receptive to its message. That disconnect became clearer when you looked beyond paid media.

Site engagement trends in analytics and brand search behavior in Search Console suggested interest in your type of location AI wasn’t disappearing. It just wasn’t converting yet.

The focus had shifted from reach to engaged awareness, prioritizing attention and engagement over mere exposure. So your Acme team decided to introduce additional campaign layers, including new content designed to build relevance and trust.

Crucially, you didn’t see any improvement right away. Cost-per-lead efficiency continued to decline, and it looked worse after increased upper-funnel investment. From a platform-only view, this looked like the time to pull back.

But looking across systems changed how performance was interpreted. Engagement from awareness activity began feeding remarketing pools, but the impact wouldn’t surface immediately for a product with long sales cycles like your API.

During that gap, the Acme team maintained confidence in its strategy by sharing early signs of upstream momentum. Only later did results begin to show up: remarketing efficiency improved, and higher sales volumes of the API were confirmed in integrated CRM data.

The takeaway for the Acme Area Analytics marketing team wasn’t just that “remarketing worked again,” or that upper-funnel activity drives demand. It was that the hardest marketing decisions are the ones you have to make — and hold — before success shows up in the metrics leadership typically trusts.

Get the newsletter search marketers rely on.


Why the insight only appeared between dashboards

In our Acme example, each dashboard told a technically accurate story, but no single dashboard captured the whole picture.

  • Paid media dashboards reflected efficiency trends.
  • Analytics and Search Console showed shifts in engagement and demand.
  • CRM data lagged behind decisions by weeks or months.

Looking at any of those in a silo wouldn’t have allowed Acme’s marketing team to fully understand what was happening.

The insight didn’t live in any single view. When the question the team asked itself shifted to whether demand was moving effectively through the funnel, and the dashboards were evaluated together in context, the decision changed.

This is what unsiloed analytics looks like in practice. It’s not about teams fighting over which touch led to the result, but recognizing that each part of a marketing plan plays a distinct and important role in creating momentum that grows demand and lifts sales.

Leadership wants proof. Pipeline and revenue might feel like the safest validation. But in complex, multi-channel programs, those are often lagging indicators of solid performance.

By the time pipeline clearly reflects demand creation, teams have often already pulled back awareness investment, cut channels that looked inefficient in isolation, and shifted budget toward short-term demand capture.

In the example above, waiting for proof would have meant that Acme reduced awareness and remarketing spend and possibly exited a market that would later show great promise.

Integrated data didn’t eliminate the risk of shifting investment from lead generation to awareness-building in a market that had declining metrics. Instead, it added credibility to the case for doing so.

Dig deeper: The end of SEO-PPC silos: Building a unified search strategy for the AI era

The same pattern at a smaller scale

This dynamic isn’t limited to complex, multi-channel programs. You can see it even within a single platform when multiple tactics work together.

Let’s look at a scenario where Acme’s brand search impression volume increased by roughly 50% year over year while Share of Voice remained flat. That means more people have been searching for Acme as the company has invested across out-of-home and other digital campaigns. Acme’s Google campaign then harvested the demand created by other channels.

If Acme’s brand search had been evaluated only in terms of its media plan efficiency, this signal of growing demand would have been easy to miss. In context, it confirmed that Acme’s awareness efforts were working, even though attribution couldn’t perfectly assign credit to individual channels.

What changes when data is integrated

In these examples, integrated data — unsiloed data — shifted the conversation.

Instead of Acme’s marketing teams debating budget cuts, they could monitor signs of early momentum, including longer time on site and rising brand search volume. Over time, that interest could be seen in the CRM as higher-quality leads that converted more frequently into closed deals.

The good news is that this doesn’t require new tools or perfectly stitched-together data. It simply requires stepping back during planning and asking better questions about how potential customers signal interest as they consider your product.

Dig deeper: SEO vs. PPC vs. AI: The visibility dilemma

Seeing opportunity before it’s obvious

In my experience, the most valuable marketing insights come from understanding how different data points relate.

Unsiloing your data isn’t about proving causality or winning attribution debates. Instead, it’s about recognizing opportunity early enough to act on it and identifying which metrics suggest that demand is quietly being built in the background.

The teams that win aren’t only better at reporting results. They’re better at seeing momentum while it’s still forming and acting on it early.

‘Always be testing’ worked in 2016 — it’s risky in 2026

6 March 2026 at 18:00

If I hear “always be testing” one more time, I might scream. It was great advice in 2016. In 2026, it’s a great way to light your budget on fire.

That mantra made sense when budgets were loose and platforms forgave a lot of chaos. Launch five audience tests simultaneously? Sure, why not! Swap out three creative variables at once? Go for it!

But the rules have changed. Our new reality has tighter budgets, longer learning phases, and signal fragmentation everywhere. One poorly structured test can distort your performance for weeks, not days. That performance hit compounds fast.

Modern experimentation is expensive and risky. Why pay that price when we have the power of agentic AI to help? And by help, I don’t mean slapping AI onto our existing process and asking it to generate more ad variants. That would just be an expedient way to light our budgets on fire.

Instead, it’s time to use agentic AI to design smarter experimentation systems.

The real cost of unstructured testing

In the “always be testing” era, it was all too easy to hand out tests the way Oprah gives out cars or Taylor Swift fills auditoriums. That often led to unstructured testing: launch ideas on a Monday, check results on Friday, and hope for a lift. There was nary a risk model, overlap detection, or strategic sequencing in sight.

The costs of that approach are now exponentially higher. Take platform disruption. Algorithms crave stability. Industry benchmarks show ad sets stuck in learning phases often see CPAs 20-40% higher than stable sets.

Every time you significantly change creative, audience, or budget, you risk resetting that learning. If you’re running three overlapping tests that each trigger resets, you’re voluntarily paying a volatility tax on your entire media spend.

Then there’s waste. The majority of A/B tests deliver no statistically significant lift. If you aren’t ruthless about what deserves to run, you’re burning budget to prove most ideas don’t matter. “Always be testing” without guardrails turns into “always be destabilizing.”

From random tests to a real experimentation engine

The shift looks like this. Old approach: “AI, write me 10 new headlines.” New approach: “AI, design the smartest next experiment within our budget, risk tolerance, and current learning state.”

The reframe from creative generation to experimentation architecture is where real leverage lives.

Here’s a practical seven-step framework to turn testing from a tactical habit into strategic infrastructure.

Step 1: Set hard guardrails (humans draw the lines)

Before you let any AI near your experiments, lock in constraints. Without them, AI lacks proper context. With them, AI becomes a disciplined strategic partner.

Define and document five hard boundaries.

  • Budget allocation: Reserve a fixed percentage (e.g., 10%) explicitly for testing.
  • Maximum volatility: “No test can increase CPA by more than 15% for more than 5 days.”
  • Learning phase sensitivity: Document reset thresholds per platform.
  • Leading indicators: Use early signals (CTR, engagement drop-offs) to kill bad tests before they damage pipeline.
  • Brand risk: Define off-limits positioning (e.g., no discount-heavy testing in enterprise segments).

Document this in a single file (e.g., experimentation-guardrails.md) to teach AI the constraints that make ideas viable. Your AI agent must reference this before proposing any test.
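Those five boundaries can live as machine-readable data alongside the prose file, so an agent can check a proposed test before suggesting it. A minimal Python sketch, assuming illustrative names and thresholds throughout (none of this is a platform API):

```python
# Hypothetical guardrails file expressed as data. Every name and
# threshold below is an illustrative assumption, not a standard.
GUARDRAILS = {
    "test_budget_share_max": 0.10,   # reserve 10% of spend for testing
    "cpa_increase_max": 0.15,        # no test may raise CPA by >15%...
    "cpa_increase_max_days": 5,      # ...for more than 5 days
    "learning_reset_threshold": {"meta": 0.20, "google": 0.30},  # per-platform budget-change limits
    "brand_risk_blocklist": ["discount messaging in enterprise segments"],
}

def violates_guardrails(test: dict) -> list[str]:
    """Return the guardrails a proposed test would break (empty list = viable)."""
    violations = []
    if test["budget_share"] > GUARDRAILS["test_budget_share_max"]:
        violations.append("exceeds reserved testing budget")
    if (test["projected_cpa_increase"] > GUARDRAILS["cpa_increase_max"]
            and test["duration_days"] > GUARDRAILS["cpa_increase_max_days"]):
        violations.append("projected CPA volatility above tolerance")
    platform = test.get("platform")
    if platform in GUARDRAILS["learning_reset_threshold"]:
        if test.get("budget_change_pct", 0) > GUARDRAILS["learning_reset_threshold"][platform]:
            violations.append("budget change likely to reset learning phase")
    if any(r in GUARDRAILS["brand_risk_blocklist"] for r in test.get("positioning_risks", [])):
        violations.append("touches off-limits brand positioning")
    return violations

proposal = {
    "budget_share": 0.08,
    "projected_cpa_increase": 0.25,
    "duration_days": 7,
    "positioning_risks": [],
}
print(violates_guardrails(proposal))  # flags the CPA-volatility guardrail
```

The point isn’t the specific checks — it’s that constraints written as data can be enforced automatically, instead of living only in a document the agent may or may not read.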

Step 2: Let AI audit your experiment history

Most teams have the data sitting in spreadsheets, but never extract the lessons. Feed your last six months of test results into an AI agent and have it analyze variables changed, duration, performance delta, statistical confidence, and platform resets.

Ask it to find patterns, such as:

  • Over-tested variables: CTA buttons tested eight times with zero meaningful lift? That’s not a lever.
  • False failures: Many tests are declared losers simply because they never reached statistical significance. An AI agent can quickly assess statistical power and flag inconclusive results.
  • Volatility patterns: Often, your worst CPA weeks weren’t market shifts or a single bad creative, but rather the weeks where you launched three overlapping tests.

This is how AI becomes a true analytical partner.
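Flagging false failures comes down to a significance check the agent can run on each historical test. A minimal sketch using a standard two-proportion z-test — the conversion counts here are made up for illustration:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF built from math.erf, so no external dependency is needed.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

def classify(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Label a test winner/loser/inconclusive instead of a binary win/loss."""
    z, p = two_proportion_z(conv_a, n_a, conv_b, n_b)
    if p < alpha:
        return "winner" if z > 0 else "loser"
    return "inconclusive"  # a false failure if it was logged as a loss

# An underpowered test: variant B looks worse, but the sample can't prove it.
print(classify(conv_a=40, n_a=1000, conv_b=32, n_b=1000))  # inconclusive
```

Re-labeling past “losers” as inconclusive is exactly the kind of pattern an agent can surface across six months of spreadsheet history in seconds.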

Step 3: Write real hypotheses

Rather than jumping straight from idea to launch, use AI to help you enforce hypothesis discipline.

  • Weak: “Let’s test a new headline.”
  • Strong: “If we emphasize ‘faster time-to-value’ over ‘ease of use,’ we expect a 10-15% lift in demo requests from mid-market companies because win/loss analysis shows speed is their top decision criterion.”

Structured hypotheses create institutional memory. Six months later, when someone suggests testing “speed messaging” again, you’ll know exactly who it worked for and why. Yes, it feels like paperwork, but this discipline can protect your budget from algorithm chaos.

Step 4: Risk-score every proposed test

Budget isn’t infinite and neither is algorithm stability. Your AI agent should evaluate each proposed test across five dimensions and assign a risk score.

  • Budget impact (e.g., <5% vs >15%).
  • Algorithm disruption level (minor refresh vs new campaign).
  • Audience overlap.
  • Brand sensitivity.
  • Learning value.

High risk + low learning = Kill it. Low risk + high insight = Green light.

Example: Testing a radical new enterprise positioning statement is high risk in a paid conversion campaign. Instead, your AI agent might suggest validating it first via organic LinkedIn content or low-budget audience polling. Low risk. High signal.
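The kill/green-light rule above can be sketched as a simple triage function. The 1-5 scales, averaging, and thresholds below are assumptions chosen for illustration, not a published scoring formula:

```python
# Illustrative risk/learning triage. Dimensions mirror the five in the
# article; scales and cutoffs are assumptions, not a standard.
RISK_DIMENSIONS = ("budget_impact", "algorithm_disruption",
                   "audience_overlap", "brand_sensitivity")

def triage(test: dict) -> str:
    """Score each dimension 1 (low) to 5 (high), then apply the kill/green-light rule."""
    risk = sum(test[d] for d in RISK_DIMENSIONS) / len(RISK_DIMENSIONS)
    learning = test["learning_value"]
    if risk >= 4 and learning <= 2:
        return "kill"          # high risk + low learning
    if risk <= 2 and learning >= 4:
        return "green-light"   # low risk + high insight
    return "review"            # everything in between is a judgment call

radical_positioning_in_paid = {
    "budget_impact": 4, "algorithm_disruption": 5,
    "audience_overlap": 3, "brand_sensitivity": 5, "learning_value": 2,
}
organic_validation_first = {
    "budget_impact": 1, "algorithm_disruption": 1,
    "audience_overlap": 1, "brand_sensitivity": 2, "learning_value": 4,
}
print(triage(radical_positioning_in_paid))  # kill
print(triage(organic_validation_first))     # green-light
```

Even a crude score like this forces every proposed test through the same gate, which is the discipline the step is really about.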


Step 5: Pre-test with synthetic audiences

This is one of the most underused applications of AI in experimentation. Synthetic testing means simulating how different personas may react to messaging before spending media dollars, and the data backs it up.

A study involving researchers from Stanford and Google DeepMind found that digital agents trained on interview data matched human survey responses with 85% accuracy and mimicked social behavior with 98% correlation. 

This makes synthetic audiences surprisingly useful for early-stage signal gathering. While they don’t replace real-world data (at least not yet), they can act as creative QA.

Here’s how it works. Define psychographic archetypes.

  • The Skeptical CMO (burned by vendors, risk-sensitive).
  • The Growth VP (speed-obsessed).
  • The CFO (margin-focused).

Feed your proposed messaging into your AI system and ask, “How would the Skeptical CMO react to this?”

You might get feedback like: “The phrase ‘All-in-One’ triggers skepticism. It signals feature bloat. Consider reframing as ‘Integrated’ or ‘Modular.’”

That kind of signal costs pennies in API calls instead of thousands in paid testing.

Step 6: Sequence tests, don’t stack them

Changing audience, creative, and landing page in the same week teaches you almost nothing. Your AI agent should act like air traffic control: scan active campaigns, flag conflicts, and recommend sequencing.

A better flow:

  • Week 1-2: Audience test.
  • Week 3-4: Creative test on the winning audience.

If overlap is unavoidable, enforce clean holdout groups so you always have a source of truth.

Step 7: Build a living knowledge base

Treat tests like disposable experiments and you lose the compounding value. Have your AI auto-summarize every completed test: 

  • Why did it win? 
  • Who did it win with? 
  • How durable was the lift? 
  • What variables interacted?

Over time, this database becomes your moat. Everyone can buy the same targeting. Few teams have 100+ validated customer truths at their fingertips.

The bigger shift: From activity to architecture

“Always be testing” was a growth-era mindset. In 2026, the winning mindset is “always be compounding intelligence.”

Rather than more tests, build your competitive advantage through structured, risk-aware, insight-driven experimentation that protects algorithm stability and ties experimentation directly to revenue.

The next time your stakeholder asks why you aren’t testing more, show them your experimentation architecture and say, “We’re not just running experiments. We’re building an intelligence engine.”

Because intelligence compounds.

Why most video ads fail — and what video metrics actually matter

6 March 2026 at 17:00

Video advertising has never been easier to distribute. Platforms can deliver impressions and views at an enormous scale across YouTube, paid social, short-form video, and connected TV.

But distribution isn’t the same as effectiveness. Many campaigns generate impressive platform metrics while producing little measurable business impact.

The problem usually isn’t targeting, budget, or platform choice. It’s a deeper strategic issue: campaigns are optimized for outputs like views and impressions rather than outcomes like attention, persuasion, and action.

Most video ads fail because they misunderstand attention

Poor targeting, limited budgets, and platform choice are rarely the real problem. The bigger issue is that many video ads are still produced as if they’re television commercials.

In the early days of online video, distribution was the challenge. Getting a video seen at all felt like a win. Today, distribution is abundant. Attention isn’t.

Every major platform — YouTube, paid social, short-form video, connected TV — competes for fragments of cognitive bandwidth. Users arrive with intent, habits, and expectations that have nothing to do with your campaign. We plan for reach, while viewers respond to relevance.

I’ve sat in many meetings where success was defined by impressions delivered or views accrued. But when you look downstream — search lift, site engagement, conversion — the connection often disappears.

Platforms will reliably deliver impressions. Turning those impressions into memory, persuasion, or action requires a fundamentally different mindset.

Dig deeper: From Video Action to Demand Gen: What’s new in YouTube Ads and how to win

The first five seconds are the entire negotiation

Skippable formats changed video advertising permanently, but many advertisers still haven’t adjusted creatively.

Early in my career, I believed strongly in branding up front. Logos, product shots, music cues — everything that signaled professionalism. Those ads looked great in presentations. They underperformed in market.

A clear pattern emerged over time. Ads that opened with a recognizable problem, a provocative statement, or an unexpected visual held attention longer — even when branding appeared later. Ads that opened with branding signals were skipped almost reflexively.

View-through rate isn’t persuasion. A “view” simply means the platform’s minimum threshold was met. It doesn’t mean the message landed, the brand registered, or the viewer cared.

In multiple brand lift analyses, most measurable impact occurred before the skip button appeared. If the opening didn’t earn attention, the rest of the ad didn’t matter.

What works: treat the opening frame like a headline, not a preamble. Lead with tension, a question, or a familiar problem. Design for sound-off environments. If the first frame wouldn’t stop a scroll, nothing that follows will matter.

Higher production value often correlates with lower performance

One of the most counterintuitive lessons in modern video advertising: polished ads frequently underperform scrappier ones.

I’ve seen simple, phone-shot videos outperform meticulously produced studio spots across YouTube, paid social, and short-form platforms. Not because quality doesn’t matter — but because perceived authenticity matters more.

Audiences are exceptionally good at identifying advertising. When something looks like an ad, they disengage. When it looks like content, they give it a chance.

Algorithms reinforce this: they reward watch time, retention, rewatches, and shares. They do not reward lighting setups or production budgets.

I’ve seen brands “upgrade” social video to look more premium, only to watch performance decline. The creative looked better. The results were worse.

The goal isn’t to look amateurish. It’s to look like you belong.

Match the platform’s visual grammar. Prioritize clarity over polish. Use real people and authentic voices whenever possible.

Ads that feel native get watched. Ads that feel inserted get skipped.

Dig deeper: How to get better results from Meta ads with vertical video formats


Length is a creative decision, not a media constraint

“Shorter is better” is one of the most persistent — and misleading — rules in video advertising.

Six-second ads can work. So can 60-second ads. I’ve seen both exceed expectations, and I’ve seen both fail badly. The difference was never duration — it was justification.

Some messages can be delivered instantly. Others require context, proof, or emotional buildup. Forcing every idea into the same runtime produces predictable results: safe, bland, forgettable ads.

I’ve reviewed retention graphs where a 45-second ad held viewers longer than a 15-second version, because the story justified its length. I’ve also seen six-second ads lose half their audience in the first two seconds because they wasted the opening.

Test multiple edits, not just multiple lengths. Watch retention curves, not averages. Build modular narratives: hook, then value, then proof, then action.

The “right” length is however long it takes to make the viewer feel their time was respected.

Metrics are signals

Platforms provide more data than ever. The problem isn’t a lack of metrics. It’s confusing metrics with outcomes.

I’ve seen campaigns praised for high completion rates that produced no measurable business impact. Strong engagement coexisting with low conversion. Impressive view counts that delivered zero lift.

This happens because platforms optimize for their success metrics, not yours. If your goal is to maximize views, the platform can do that easily. If your goal is to influence consideration, preference, or action, things get more complicated.

One uncomfortable question I’ve learned to ask early: what would failure look like here? If the answer is vague, the campaign is already at risk.

Define success in business terms before launch. Tie video metrics to downstream behavior wherever possible. Use lift studies, holdouts, or assisted conversions when they’re available. If you’re running a brand-building campaign, measure brand lift. If you’re running a performance campaign, measure conversions.

Dig deeper: AI for video advertising: 5 best practices for PPC campaigns

The brief is usually where things go wrong

Creative is often blamed when video ads underperform. In reality, creative usually does exactly what it was asked to do. The problem is the brief.

Vague objectives produce generic ads. “Brand awareness” without context leads to unfocused messaging. “Make it engaging” isn’t a strategy.

Strong video ads almost always begin with clear answers to three questions: 

  • Who is this really for? 
  • What do they care about right now? 
  • What should they think, feel, or do differently after watching? 

When those answers are clear, creative decisions become easier. When they aren’t, the work is compromised before production begins.

The deeper diagnostic questions are worth keeping close: 

  • Are viewers actually paying attention, or just passively present? 
  • What are they feeling — and which specific creative choices are driving that response?
  • Will they remember the brand once the ad ends? 
  • What will they do next — share it, recommend it, search for the product, or buy?

I’ve seen entire campaigns improve simply because the brief forced alignment around audience insight rather than assumptions.

Distribution strategy is part of the creative

Another common mistake is treating creative and distribution as separate decisions. They aren’t.

The way an ad is consumed — fullscreen versus feed, sound-on versus sound-off, lean-back versus lean-forward — should shape how it’s made.

A video designed for connected TV shouldn’t simply be resized for mobile. A short-form ad shouldn’t be a truncated long-form story without rethinking the hook entirely.

I’ve seen strong ideas underperform because the creative didn’t match the placement. The concept wasn’t wrong. The context was.

Design with placement in mind from the start. Create platform-specific versions, not one-size-fits-all assets.

Accept that “reuse” often means “rethink,” not “repurpose.” Distribution constraints aren’t limitations — they’re creative inputs.

Dig deeper: How to dominate video-driven SERPs

Testing should answer questions, not just generate variants

Testing is indispensable. It’s also frequently misunderstood.

Running endless A/B tests without a hypothesis rarely produces insight. It produces noise.

The most effective testing focuses on variables that materially affect attention and comprehension: opening frames, narrative structure, on-screen text versus voiceover, proof points versus emotional appeals.

It’s also important to recognize what testing can’t do. Algorithms are excellent at optimizing toward measurable signals. They don’t understand brand equity, long-term memory, or cumulative effect. Testing should inform judgment — not replace it.

Ultimately, the only thing that matters for creative effectiveness tools is whether their predictions actually correlate with real media and sales outcomes — reliably enough to inform strategy and media decisions.

The question worth asking of any such tool is simple: How often does what it predicts will happen actually happen?

For example, I frequently cite data from DAIVID, an AI-driven creative effectiveness platform. Why? Because in independent testing, DAIVID’s predictions aligned with real-world outcomes more than 80% of the time — a meaningful foundation for making creative decisions with greater confidence before a campaign goes live.

Optimize for people

Platforms will change. Formats will evolve. Algorithms will shift in opaque and sometimes frustrating ways. But attention, curiosity, and trust remain stubbornly human.

The best video ads I’ve worked on weren’t optimized for view counts or completion rates. They were optimized for relevance. They respected the viewer’s time. They said something worth hearing.

Video ads don’t succeed because they follow platform rules. They succeed because they understand people. And that principle outlasts every algorithm update.

AI Max increases revenue 13% but drives higher CPA: Study

6 March 2026 at 00:09

Google AI Max drives revenue but at a higher cost, according to Smarter Ecommerce’s Mike Ryan, who analyzed 250+ campaigns. Outcomes vary, and much more testing is still needed.

Why we care. AI Max isn’t a minor update. It’s Google’s most significant reimagining of Search campaigns in years, shifting away from keyword syntax toward pure intent matching. For you, that’s both an opportunity (possible growth) and a risk (an efficiency tradeoff).

By the numbers. The result of the analysis:

  • Median revenue: +13%
  • Median CPA: +16%
  • ROAS range: +42% to -35%

Advertisers who activate AI Max typically see 14% more conversions or conversion value at a similar CPA or ROAS, rising to 27% for campaigns still relying on exact and phrase match keywords, Google says.

Turning on AI Max is essentially a coin toss: you may see a lift, but efficiency likely won’t follow, Ryan concluded.

What AI Max actually is. Rather than forcing Search campaigns into Performance Max, Google went the other direction — bringing PMax-style automation into classic Search. The result is three core features:

  • Search Term Matching (broad match expansion plus keywordless targeting),
  • Text Customization (dynamic ad copy), and
  • Final URL Expansion (automated landing page selection).

Four pitfalls Smarter Ecommerce identified:

  • Broad match cannibalization: Up to 63% of the time, recycling existing coverage rather than finding new queries.
  • Competitor hijacking: In one account, AI Max scaled so aggressively into competitor brand terms that it consumed 69% of total Search impressions.
  • Reporting overload: Search term and ad combination reports can run to tens of thousands of rows, making manual auditing nearly impossible without automation.
  • Search Partner Network blowouts: One campaign saw half a million monthly impressions land on SPN at a 0.07% conversion rate, versus 3.04% on standard Google Search.

Between the lines. Google’s 14% uplift stat conspicuously excludes retail — an omission Ryan flags as significant for ecommerce advertisers. There’s also a deeper irony: you’re most likely to adopt AI Max if you’re already running Broad Match, DSA, and PMax — yet Google says those accounts will see the lowest incremental benefit.

What’s next. In a conversation with Ryan, Google Ads Liaison Ginny Marvin confirmed that Google plans to deprecate Dynamic Search Ads and migrate the technology into AI Max for Search. No firm timeline was given, though past Google deprecations often run about a year from announcement.

Ryan recommends activating AI Max’s keywordless features in your existing Search campaigns now and beginning to wind down DSA — not migrating it to PMax.

Ryan’s verdict is cautious optimism. About 16% of advertisers are testing AI Max, and few have gone all in. Start small, audit aggressively, and don’t let FOMO around AI Overviews drive your decision.

The report. The Ultimate Guide to AI Max for Google Search
