Search Engine Land — 17 March 2026

YouTube tests sticky banner after ad skip

17 March 2026 at 21:18

YouTube is experimenting with a format that keeps ads visible even after users skip — potentially reshaping how advertisers think about skippable inventory.

What’s happening. YouTube is testing a sticky banner overlay that appears once a user skips an ad. Instead of the ad disappearing entirely, a branded card remains on-screen until the viewer actively dismisses it.

How it works. After hitting “skip,” users return to their video as normal, but a persistent banner tied to the original ad stays visible within the player, extending the advertiser’s presence beyond the initial skip.

Why we care. This test gives advertisers a way to maintain visibility even when users skip ads, potentially increasing brand recall without requiring full ad views.

It also changes how skippable performance may be evaluated, as impressions and engagement could extend beyond the initial ad, giving brands more value from the same inventory within Google’s ecosystem.

Why it’s notable. Skippable ads have traditionally meant lost visibility once skipped. This format changes that dynamic by offering a second chance for exposure, even when users opt out of the full ad experience.

Impact for advertisers. The update creates an opportunity for extended brand visibility and recall, but could also influence engagement metrics and how users perceive ad interruptions.

The bottom line. If rolled out widely, the sticky banner test could redefine what a “skipped” ad means — turning it into continued, lower-friction exposure rather than a full exit for advertisers on YouTube.

First seen. This update was first spotted by Adsquire founder and CEO Anthony Higman, who shared it on LinkedIn.

Google adds video visibility to Performance Max reporting

17 March 2026 at 21:08

Google is incrementally improving metric visibility in Performance Max, giving advertisers more insight into how creative choices — particularly video — impact performance.

What’s happening. Google Ads has introduced a new “Ads using video” segment within Performance Max channel performance reporting, allowing advertisers to break down results based on whether video assets were included.

Why we care. Marketers can now compare performance across placements that used video versus those that didn’t, offering a clearer view into the role video plays across Google’s automated inventory.

It helps answer a key question in an automated environment: whether investing in video assets is driving better results, allowing you to make more informed creative and budget decisions inside Google Ads.

Between the lines. As video becomes more central across surfaces like YouTube and beyond, this update gives advertisers a way to validate the impact of investing in video assets within automated campaigns.

The bottom line. The new segment adds a layer of clarity to Performance Max, helping advertisers better evaluate video’s contribution without changing how campaigns are run inside Google Ads.

First spotted. This update was first spotted by PPC News Feed founder Hana Kobzova.

Google expands Personal Intelligence to AI Mode, Gemini, Chrome

17 March 2026 at 20:00

Google is expanding Personal Intelligence across AI Mode, Gemini, and Chrome in the U.S., moving it beyond beta into broader consumer use.

Why we care. Personal Intelligence pushes Google further into fully personalized search, using first-party data like Gmail and Photos. That makes results harder to replicate, rank against, or track — especially in AI Mode, where outputs may vary based on user history, purchases, and behavior.

The details. Personal Intelligence now works across:

  • AI Mode in Google Search (available now in the U.S.)
  • Gemini app (rolling out to free users)
  • Gemini in Chrome (rolling out)

How it works. Users can connect apps like Gmail and Google Photos so Google can tailor responses using personal context. Examples Google shared include:

  • Shopping recommendations based on past purchases and brand preferences.
  • Tech troubleshooting using receipt data to identify exact devices.
  • Travel suggestions using flight details, timing, and past trips.
  • Personalized itineraries and local recommendations.
  • Hobby suggestions inferred from user interests.

Availability. These features are available only for personal accounts, not Workspace users, Google said.

Dig deeper. Google says AI Mode stays ad-free for Personal Intelligence users

Catch-up quick. Google introduced Personal Intelligence as a U.S.-only beta for Gemini subscribers in January. At the time:

  • It was limited to AI Pro and Ultra users.
  • It focused on Gemini, with Search integration “coming soon.”
  • The feature was opt-in and off by default.

This update delivers on that roadmap by:

  • Bringing it to Search AI Mode.
  • Expanding access to free users.
  • Extending it to Chrome.

Privacy and control. Google emphasized:

  • Users must opt in to connect apps.
  • Connections can be turned on or off at any time.
  • Models do not train directly on Gmail or Photos content.
  • Limited data, such as prompts and responses, may be used to improve systems.

Google’s blog post. Bringing the power of Personal Intelligence to more people

Google says AI Mode stays ad-free for Personal Intelligence users

17 March 2026 at 20:00

Although Google continues to test ads in AI Mode, users who connect apps to enable Personal Intelligence won’t see ads — and that isn’t changing right now, a Google spokesperson confirmed.

What’s happening. Google has been testing ads inside AI Mode in the U.S.

  • Early results: users find these business connections “helpful,” per Google.
  • But there’s a clear carveout: no ads for users who opt into app-connected, highly personalized experiences.

The details. Google today expanded Personal Intelligence in AI Mode as a beta to anyone in the U.S., allowing Gemini to generate more tailored responses by connecting data across its ecosystem, including Google Search, Gmail, Google Photos, and YouTube.

  • Opting into Personal Intelligence creates an ad-free experience inside AI Mode.

Why we care. Ads are coming to AI Mode, but Google is moving cautiously where personal data is deepest. Personal Intelligence experiences stay ad-free for now while Google works out the right balance.

What Google is saying. A Google spokesperson told Search Engine Land:

  • “There are currently no ads for people who choose to connect their apps with AI Mode. That isn’t changing right now.
  • “Over the past few months, we’ve been testing ads in AI Mode in the US. Our tests have shown that people find these connections to businesses helpful and open up new opportunities to discover products and services.
  • “In the future, we anticipate that ads will operate similarly for people who choose to connect their apps with AI Mode. Ads will continue to be relevant to things like your query, the context of the response and your interests.”

Bottom line. Personal Intelligence positions Google’s Gemini app as a more personalized assistant, setting the stage for future ad experiences built on richer, cross-platform user context.


Google AI Mode will remain ad-free if you link apps, even as ad testing expands in its U.S. rollout of more personalized features.

Yahoo CEO: Google AI Mode is the biggest threat to web traffic

17 March 2026 at 19:38

Yahoo CEO Jim Lanzone said AI-powered search — especially Google’s AI Mode — is putting the open web’s core traffic model at risk and argues AI search engines must send users back to publishers.

  • “I think that the LLMs are one big reason that they’re under threat, with AI Mode in Google being the biggest challenge.”
  • “Those publishers deserve [traffic], and we’re not going to have the content to consume to give great answers if publishers aren’t healthy.”

Why we care. Many websites are seeing less traffic from answer engines like Google and OpenAI — and I think it’ll only get worse. So it’s encouraging to see Yahoo trying to preserve the “search sends traffic” model. As he said: “We have very purposefully highlighted and linked very explicitly and bent over backwards to try to send more traffic downstream to the people who created the content.”

Yahoo’s AI stance. Yahoo is taking a different approach from chatbot-style interfaces, Lanzone said on the Decoder podcast. He added that Yahoo isn’t trying to compete as a full AI assistant:

  • “Ours looks a lot more like traditional search and it is more paragraph-driven. It’s not a chatbot that’s trying to act like it’s a person and be your friend.”
  • “We’re not a large language model. We’re not going to be the place you come to code. We’ve really launched Scout as an answer engine.”

What’s next: Personalization + agentic actions. Yahoo plans to expand Scout beyond basic answers and is embedding AI across its ecosystem:

  • “You are very shortly going to see us get into very personalized results. You’re going to see us get into very agentic actions that you can take.”
  • “There’s a button in Yahoo Finance that does analysis of a given stock on the fly… It is in Yahoo Mail to help summarize and process emails.”

Yahoo vs. Google isn’t a thing. Yahoo isn’t trying to win by converting Google users directly. Instead, Yahoo is prioritizing its existing audience and increasing usage frequency over immediate market share gains:

  • “Nobody chooses, you will not be surprised, Yahoo over Google or somewhere else to search. The way that we get our search volume is because we have 250 million US users and 700 million global users in the Yahoo network at any given time. There’s a search box there. And infrequently, they use it.”

A warning. Companies — including publishers — should be cautious about relying too heavily on AI platforms as intermediaries. Lanzone compared today’s AI partnerships to Yahoo’s past reliance on Google:

  • “You are tempting fate by opening up a way for consumers to access your product within a large language model.”
  • “The big bad wolf will come to your door and say everything’s cool.”

The interview. Yahoo CEO Jim Lanzone on reviving the web’s homepage

How nonprofits can build a digital presence that actually drives impact

17 March 2026 at 19:00

A nonprofit’s digital presence stopped being a “nice-to-have” long ago. It’s now the central hub for mission delivery, donor engagement, and advocacy.

Many organizations struggle with the technical and strategic foundations needed to turn a website and a few social accounts into a high-performing digital ecosystem.

The goal isn’t simply to “be online.” It’s to build reliable infrastructure, so your organization owns its narrative, protects its assets, and measures the impact of “free” digital efforts.

Here’s a practical look at the critical elements of managing a nonprofit’s digital presence — and the common pitfalls to avoid — based on my experience helping several organizations throughout my career.

If you help an organization with digital marketing and they aren’t following these practices, your first step should be getting their digital house in order.

1. Own your foundations: Domains and account control

Owning your name and your story is an essential part of a proactive online reputation management strategy and a critical aspect of managing an online entity.

In my experience, the most overlooked risk in nonprofit digital management is the lack of direct ownership of technical assets.

A well-meaning volunteer or third-party agency often registers a domain or creates a social account using personal credentials. If that individual leaves the organization, you risk losing access to your primary digital channel — the domain you should own and control.

I’ve worked with several organizations that had to start over completely because they lacked control.

  • Domain ownership: Ensure the domain is registered in the organization’s name using a generic “admin@” or “info@” email address that multiple stakeholders can access. Set the domain to auto-renew and use a registrar that offers robust security features.
  • Website hosting and management: The organization also needs to control its website hosting and administration. Use a similar approach to the one recommended for domain ownership.
  • Social media governance: Again, use a similar process to the one described above to establish ownership of key social media channels. Grant volunteers access via delegation on individual channels rather than sharing passwords. This allows you to revoke access immediately if a staff member or volunteer moves on, protecting your brand’s voice and security.

Dig deeper: Google Ad Grants now lets nonprofits optimize for shop visits

2. Move beyond ‘winging it’: The editorial calendar

A common mistake for nonprofits is posting only when there’s an immediate need, which is often only when making a fundraising appeal. This “broadcast-only” approach often leads to donor fatigue and low engagement.

To build a community, you need a content plan that balances stories of impact with actionable requests.

  • The 70/20/10 rule: Aim for 70% value-based content (success stories, educational info), 20% shared content from partners or community members, and only 10% direct “asks.”
  • The editorial calendar: Use a simple tool, even a shared spreadsheet, to map out your themes and individual pieces of content for the month. This ensures you aren’t scrambling for a post on Giving Tuesday, that everyone knows what’s expected of them, and that your messaging and pace of content creation remain consistent across email, social, and your blog.
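As a back-of-the-napkin illustration, the 70/20/10 mix can be turned into a simple monthly post allocator. The function name and rounding behavior below are my own choices for the sketch, not part of the rule itself:

```python
def allocate_monthly_posts(total_posts: int) -> dict:
    """Split a month's posts using the 70/20/10 rule:
    70% value-based content, 20% shared partner/community
    content, 10% direct fundraising asks."""
    value = round(total_posts * 0.70)
    shared = round(total_posts * 0.20)
    # Give the remainder to "asks" so the three buckets
    # always sum to the planned total.
    asks = total_posts - value - shared
    return {"value": value, "shared": shared, "asks": asks}

# For a 20-post month: 14 value posts, 4 shared, 2 asks.
print(allocate_monthly_posts(20))
```

Dropping the allocation into your editorial calendar up front keeps fundraising asks from quietly crowding out the value-based content.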

3. Tracking what matters (and ignoring what doesn’t)

Data is only useful if it informs future decisions. Many organizations get bogged down in “vanity metrics” like total likes or page views without understanding whether those numbers lead to real-world outcomes.

  • Set up conversion tracking: It isn’t enough to know that 1,000 people visited your site. You need to know how many of them clicked the “Donate” button or signed up for your newsletter.
  • Behavioral analytics: Use cost-free tools like Google Analytics 4 and Microsoft Clarity to see where people are dropping off in your donation funnel. If 50% of people leave the site on your “Ways to Help” page, you may have a UX issue or a confusing call to action.
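To make the conversion-tracking point concrete, here is a minimal sketch of recording a donation click server-side via GA4’s Measurement Protocol. The `donate_click` event name, its parameters, and the helper function are illustrative choices of mine, not a Google-prescribed setup; most sites would instead fire the equivalent event in the browser with gtag.js.

```python
import json

GA4_ENDPOINT = "https://www.google-analytics.com/mp/collect"

def build_donation_event(client_id: str, amount: float, currency: str = "USD") -> dict:
    """Build a GA4 Measurement Protocol payload for a custom
    'donate_click' event. The Measurement Protocol expects a
    client_id plus a list of events, each with a name and params."""
    return {
        "client_id": client_id,
        "events": [
            {
                "name": "donate_click",
                "params": {"value": amount, "currency": currency},
            }
        ],
    }

payload = build_donation_event("555.1234", amount=25.0)
print(json.dumps(payload))
# Actually sending it requires your property's measurement_id and
# api_secret as query parameters on a POST to GA4_ENDPOINT.
```

Pairing an event like this with a GA4 conversion (key event) is what turns “1,000 people visited” into “12 people donated.”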

4. Optimize for the ‘mobile-first’ donor

Most global web traffic is now mobile, and for nonprofits, this is critical. Donors often engage with your content on social media on their phones and expect a seamless transition to your donation page.

  • Speed and simplicity: Fancy header videos, sliders, and bloated images slow down your site, like the nonprofit example in this article about bad website design. Less is more when speed is of the essence. Reduce friction to make your website more usable. For example, if your donation page takes more than three seconds to load or requires more form fields than necessary, you’re leaving donations on the table.
  • Payment flexibility: Incorporate digital wallets like Apple Pay, Google Pay, or PayPal. Reducing friction at the point of donation is one of the most effective ways to increase your conversion rate. Many nonprofits use third-party tools to manage donations, so keep payment flexibility in mind when choosing a payment partner.

Dig deeper: Why now is the most important time for nonprofit advertising

Common pitfalls to avoid

Even well-intentioned nonprofits can undermine their digital presence with a few common mistakes.

Targeting ‘everyone’

One of the biggest mistakes is trying to reach everyone. A digital presence that tries to appeal to every demographic usually ends up appealing to no one. Define your “ideal supporter,” and tailor your language, imagery, and platform choice to them.

Neglecting accessibility

Accessibility is about inclusion. Ensure your images have alt text, your videos have captions, and your website colors have enough contrast for users with visual impairments. If a portion of your audience can’t interact with your site, you aren’t fulfilling your mission.
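A quick way to start an alt-text audit is a script that flags images without alt attributes. This stdlib-only sketch (the class name and sample markup are mine) produces a list for human review; note that an intentionally empty alt is valid for purely decorative images, so treat the output as candidates to check, not defects:

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Collect <img> tags whose alt attribute is missing or empty,
    so a person can decide whether each needs descriptive text."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            alt = (attr_map.get("alt") or "").strip()
            if not alt:
                self.missing.append(attr_map.get("src", "(no src)"))

page = """
<img src="hero.jpg" alt="Volunteers packing food boxes">
<img src="logo.png">
<img src="banner.jpg" alt="">
"""
auditor = AltTextAuditor()
auditor.feed(page)
print(auditor.missing)  # images to review for alt text
```

Running something like this as part of a content-publishing checklist catches most misses before they ship.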

The ‘set it and forget it’ mentality

I often tell businesses to treat websites like any other business asset, and the same applies to nonprofits. Digital ecosystems require maintenance.

Links break, plugins need updates, and search algorithms change. A quarterly “digital audit” to check your site speed, broken elements, and SEO health is essential for long-term visibility.

Dig deeper: How to use Google Ads to get more donations for your nonprofit

Turning your digital ecosystem into a mission multiplier

A successful digital presence is built on the same principles as a successful mission: consistency, transparency, and clear communication. By owning your assets, planning your content, and grounding your decisions in data, you ensure that your digital ecosystem serves as a force multiplier for the people you’re trying to help.

5 competitive gates hidden inside ‘rank and display’

17 March 2026 at 18:00

If you’re a content strategist, you might feel this isn’t your territory. Keep reading, because it is. Everything you build feeds these five gates, and the decisions the algorithms make here determine whether the system recruits your content, trusts it enough to display it, and recommends it to the person who just asked for exactly what you sell.

The DSCRI infrastructure phase covers the first five gates: discovery through indexing. DSCRI is a sequence of absolute tests where the system either has your content or it doesn’t, and every failure degrades the content the competitive phase inherits.

The competitive phase, ARGDW (annotation through won), is a sequence of relative tests. Your content doesn’t just need to pass. It needs to beat the alternatives. A page that is perfectly indexed but poorly annotated can lose to a competitor whose content the system understands more confidently. 

A brand that is annotated but never recruited into the system’s knowledge structures can lose to one that appears in all three graphs. The infrastructure phase is absolute: pass, stall, or degrade. The competitive phase is Darwinian “survival of the fittest.”

The DSCRI infrastructure phase determines whether your content even gets this far. The ARGDW competitive phase determines whether assistive engines use it.

Until now, the industry has generally compressed these five distinct processes into two words: “rank and display.” That compression blurred several separate competitive mechanisms into a single, muddy notion of visibility. Understanding and optimizing for all five will make all the difference.

The competitive turn: Where absolute tests become relative ones

The transition from DSCRI to ARGDW is the most significant moment in the pipeline. I call it the competitive turn.

In the infrastructure phase, every gate is binary: does the system have this content or not? Your competitors face the same test, and each of you passes or fails independently. But the quality of what survives rendering and conversion fidelity creates differences that carry forward.

The differentiation through the DSCRI infrastructure gates is raw material quality, pure and simple, and you have an advantage in the ARGDW phase when better raw material enters that competition.

At the competitive turn, the questions change. The system stops asking “Do I have this?” and starts asking “Is this better than the alternatives?” 

Every gate from annotation forward is a comparison. Your confidence score matters only relative to the confidence scores of every other piece of content the system has collected on the same topic, for the same query, serving the same intent.

You’ve done everything within your power to get your content fully intact. From here, the engine puts you toe to toe with your competitors.

The DSCRI ARGDW pipeline: Where absolute tests become relative

Multi-graph presence as structural advantage in ARGD(W)

The algorithmic trinity — search engines, knowledge graphs, and LLMs — operates across four of the five competitive gates: annotation, recruitment, grounding, and display. Won is the outcome produced by those four gates. Presence in all three graphs creates a compounding advantage across ARGD, and that vastly increases your chances of being the brand that wins.

The systems cross-reference across graphs constantly. An entity that exists in the entity graph with confirmed attributes, has supporting content in the document graph, and appears in the concept graph’s association patterns receives higher confidence at every downstream gate than an entity present in only one.

This is competitive math. If your competitor has document graph presence (they rank in search), but no entity graph presence (no knowledge panel, no structured entity data), and you have both, the system treats your content with higher confidence at grounding because it can verify your claims against structured facts. The competitor’s content can only be verified against other documents, which is a higher-fuzz verification path — more interpretation, more ambiguity, lower confidence.

Recruitment (Gate 6): One piece of content, three separate knowledge structures

For me, this is where the three-dimensional approach comes into its own, and single-graph thinking becomes a structural liability. “SEO” optimizes for the document graph. Entity optimization (structured data, knowledge panel, and entity home) optimizes for the entity graph.

Consistent, well-structured copywriting across authoritative platforms optimizes for the concept graph. Most brands invest heavily in one (perhaps two) and ignore the others. The brands that win at the competitive gates are stronger than their competitors in all three at every gate in ARGD(W).
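Entity-graph optimization usually starts with structured data. As a minimal sketch, the snippet below generates schema.org Organization JSON-LD; the organization name, URL, and sameAs profiles are placeholders of mine, not a prescription from this framework:

```python
import json

# Schema.org Organization markup is one common way to feed the
# entity graph. All details below are placeholder values.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com/",
    "sameAs": [
        # "Entity home" style corroboration: profiles that
        # confirm the same identity across platforms.
        "https://www.linkedin.com/company/example-brand",
        "https://en.wikipedia.org/wiki/Example_Brand",
    ],
}

# Embed the output in a page inside
# <script type="application/ld+json"> ... </script>
print(json.dumps(organization, indent=2))
```

The sameAs links are doing the cross-graph work here: they give the system corroborated, structured facts to verify your entity against.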

Annotation: The gate that decides what your content means across 24+ dimensions

Annotation is something I haven’t heard anyone else (other than Microsoft’s Fabrice Canel) talking about. And yet it’s very clearly the hinge of the entire pipeline. It sits at the boundary between the two phases: the last gate that applies absolute classification, and the first gate that feeds competitive selection. Everything upstream (in DSCRI) prepared the raw material. Everything downstream in ARGDW depends on how accurately the system can classify it.

At the indexing gate, the system stores your content in its proprietary format. Annotation is where the system reads what it stored and decides what it means. The classification operates across at least five categories comprising at least 24 dimensions.

Canel confirmed the principle and confirmed there are (a lot) more dimensions than the ones I’ve mapped. What follows is my reconstruction of the categories I can identify from observed behavior and educated guesses.

Canel confirmed the Annotation gate back in 2020 on my podcast as part of the Bing Series, in the episode “Bingbot: Discovering, Crawling, Extracting and Indexing.”

  • “We understand the internet, we provide the richness on top of HTML to a lot, lot, lot of features that are extracted, and we provide annotation in order that other teams are able to retrieve and display and make use of this data.”
  • “My job stops at writing to this database: writing useful, richly annotated information, and handing it off for the ranking team to do their job.”

So we know that annotation is a “thing,” and that all the other algorithms retrieve the chunks using those annotations.

Annotation classification runs across five types of specialist models operating simultaneously per niche: 

  • One for entity and identity resolution (core identity).
  • One for relationship extraction and intent routing (selection filters).
  • One for claim verification (confidence multipliers).
  • One for structural and dependency scoring (extraction quality).
  • One for temporal, geographic, and language filtering (gatekeepers). 

This five-model architecture is my reconstruction based on observed annotation patterns and confirmed principles. The annotation system is a panel of specialists, and the combined output becomes the scorecard every downstream gate uses to compare your content against your competitors.

Annotation (Gate 5): How the system classifies your content

Gatekeepers 

They determine whether the content enters specific competitive pools at all:

  • Temporal scope (is this current?).
  • Geographic scope (where does this apply?).
  • Language.
  • Entity resolution (which entity does this content belong to?). 

Fail a gatekeeper, and the content is excluded from entire query classes regardless of quality.

Core identity

This classifies the content’s substance: entities present, attributes, relationships between entities, and sentiment. 

For example, a page about “Jason Barnard” that the system classifies as being about a different Jason Barnard has perfect infrastructure and broken annotation. The content was there, and the system read it, but filed it in the wrong drawer.

Selection filters 

They add query routing: intent category, expertise level, claim structure, and actionability. 

For example, content classified as informational never surfaces for transactional queries, regardless of how well it performs on every other dimension.

Extraction quality

Think:

  • Sufficiency (does this chunk contain enough to be useful?)
  • Dependency (does it rely on other chunks to make sense?)
  • Standalone score (can it be extracted and still work?)
  • Entity salience (how central is the focus entity?)
  • Entity role (is the entity the subject, the object, or a peripheral mention?)

Weak chunks get discarded before competition begins.

Confidence multipliers 

These determine how much the system trusts its own classification: verifiability, provenance, corroboration count, specificity, evidence type, controversy level, consensus alignment, and more.

Two pieces of content can be classified identically on every other dimension and still receive wildly different confidence scores based on how verifiable and corroborated their claims are.

An important aside on confidence

Confidence is a multiplier that determines whether systems have the “courage” to use a piece of content for anything.

Once upon a time, content was king. Then, a few years ago, context took over in many people’s minds.

Confidence is the single most important factor in SEO and AAO, and always has been — we just didn’t see it.

To retain their users, search and assistive engines must provide the most helpful results possible. Give them a piece of content that, from a content and context perspective, appears to be super relevant and helpful, but they have absolutely no confidence in it for one reason or another, and they likely will not use it for fear of providing a terrible user experience.

What happens when annotation fails you (silently)

Annotation failures are the most dangerous failures in the pipeline because they are invisible. The content is indexed. But if the system misclassifies it, every competitive decision downstream inherits that misclassification.

I’ve watched this pattern repeatedly in our database: a page is indexed, it appears in search results, and yet the entity still gets misrepresented in AI responses.

Imagine this: A passage/chunk from your website is in the index, but confidence has degraded through the DSCRI part of the pipeline, and the annotation stage has received a degraded version. 

The structural issues at the rendering and indexing gates didn’t prevent indexing, but they left the index with a degraded version of the original content. That degradation makes the annotation less accurate, less complete, and less confident. That annotative weakness will propagate through every competitive gate that follows in ARGDW.

When suboptimally annotated content is included in grounding or display, it underperforms. The good news: you can always improve annotation.

Measuring annotation quality in ARGDW

Annotation is the most important gate in the AI engine pipeline, but unfortunately, you can’t measure annotation quality directly. Every metric available to you is an indirect downstream effect.

The KPIs I suggest below are signals that clearly show where your content cleared indexing and failed annotation: the engine found the page, rendered it, indexed it, and then drew the wrong conclusions from it.

That distinction matters: beware of “we need more content” when the real problem is “the engine misread the content we have.”

Your brand SERP tells you exactly what the algorithm understood

These signals reveal how accurately the AI has understood who you are, what you do, and who you serve. The brand SERP (and AI résumé) is a readout of the algorithm’s model of your brand, and because it is updated continuously, it makes a great KPI.

  • Brand SERP shows incorrect entity associations: wrong competitors, wrong category, wrong geography.
  • AI résumé is noncommittal, hedged, or incomplete.
  • AI outputs underestimate your NEEATT credentials.
  • Knowledge panel displays incorrect information.
  • AI describes your brand using a competitor’s framing or category language.
  • Entity type is misclassified (person treated as organization, product treated as service).
  • AI can’t answer basic factual questions about your brand and offers without hedging.

If the algorithm can’t place you in a competitive set, it won’t recommend you

These signals reveal which entities the system considers comparable — a direct readout of how annotation classified them. Annotation places entities into competitive pools, and if your brand doesn’t appear in comparison sets where it belongs, the engine classified it outside that pool. Better content won’t fix that. Improving the algorithm’s ability to accurately, verbosely, and confidently annotate your content will.

  • Absent from “best [product] for [use case]” results where you qualify.
  • Absent from “alternatives to [competitor]” results.
  • Absent from “[brand A] vs. [brand B]” comparisons for your category.
  • Named in comparisons but with incorrect differentiators or misattributed features.
  • Consistently ranked below competitors with weaker real-world authority signals.

For me, that last one is the most telling. Weaker brand, higher placement.

Once again, what you’re saying isn’t the problem; how you’re saying it and how you “package” it for the bots and algorithms is.

If the algorithm can’t surface you unprompted, you’re invisible at the moment of intent

These signals reveal whether the AI can place your brand at the point of discovery, before the user knows you exist. Clearing indexing means the engine has the content. Failing here means annotation didn’t connect that content to the broad topic signals that drive assistive recommendations. 

The difference between a brand that appears in “how do I solve [problem]” answers and one that doesn’t is whether annotation connected the content to the intent.

  • Absent from “how do I solve [problem your product solves]” answers, even as a passing mention.
  • Not surfaced when the AI explains a concept you coined or own.
  • Absent from AI-generated roundups, guides, and “where to start” responses for your core topic.
  • Named as a generic example rather than a recommended solution.
  • The AI discusses your subject area at length and doesn’t name you as a practitioner or source.
  • Entity present in the knowledge graph but invisible in discovery queries on AI platforms.

The three taxes you’re paying with sub-optimal annotation

Three revenue consequences follow from annotation failure, one at each layer of the funnel. 

  • The doubt tax is what you pay at BoFu when a buyer reaches your brand in the engine and the AI presents a confused, incomplete, or misframed version of what you offer. 
  • The ghost tax is what you pay at MoFu when you belong in the consideration set and the algorithm doesn’t prominently include you. 
  • The invisibility tax is what you pay at ToFu when the audience doesn’t know to look for you and the algorithm doesn’t introduce you. 

Each tax is a direct read of how well annotation worked — or didn’t.

As an SEO/AAO expert, you can diagnose where to focus your efforts to reduce these three taxes for your client or company:

  • BoFu failures point to entity-level misunderstanding. 
  • MoFu failures point to competitive cohort misclassification.
  • ToFu failures point to topic-authority disconnection.

Annotation should be your focus. My bet is that for the vast majority of brands, the gate in the pipeline with the biggest payback will be annotation. 99% of the time, my advice to you is going to be “get started on fixing that before you touch anything else.”

For the full classification model in academic depth, see: 

Recruitment: The universal checkpoint where competition becomes explicit

Recruitment is where the system uses your content for the first time. Every piece of content the system has annotated now competes for inclusion in the system’s active knowledge structures, and this is where head-to-head competition begins.

Every entry mode in the pipeline — whether content arrived by crawl, by push, by structured feed, by MCP, or by ambient accumulation — must pass through recruitment. No content reaches a person without being recruited first. We could call recruitment “the universal checkpoint.”

The critical structural fact: it recruits into three distinct graphs, each with different selection criteria, different confidence thresholds, and different refresh cycles. The three-graph model is my reconstruction. 

The underlying principle (multiple knowledge structures with different characteristics) is confirmed by observing behavior across the algorithmic trinity through the data we collect (25 billion datapoints covering Google’s Knowledge Graph, brand search results, and LLM outputs).

The entity graph stores structured facts with low fuzz — who is this entity, what are its attributes, how does it relate to other entities, binary edges — and knowledge graph presence is entity graph recruitment, with entity salience, structural clarity, source authority, and factual consistency as the selection criteria.

The document graph handles content with medium fuzz — passages and pages and chunks the system has annotated and assessed as worth retaining — where search engine ranking is the visible output, and relevance to anticipated queries, content quality signals, freshness, and diversity requirements drive selection.

The concept graph operates at a different level entirely, storing inferred relationships with high fuzz — topical associations, expertise patterns, semantic connections that emerge from cross-referencing multiple sources — with LLM training data selection as the mechanism and corroboration patterns as the primary selection criterion.

Recruitment (Gate 6)

The same content may be recruited by one, two, or all three graphs. Each graph has its own speed of ingestion and its own speed of output. I call these the three speeds, a pattern I formulated explicitly this year but have been observing empirically across 10 years of brand SERP experiments: 

  • Search results are daily to weekly.
  • Knowledge graph updates are monthly. 
  • LLM updates are currently several months (when they choose to manually refresh the training data).

Grounding: Where the system checks its own work in real time

Recruitment stored your content in the system’s three knowledge structures. Grounding is where the system checks whether it should trust your content, right now, for this specific query.

Search engines retrieve from their own index. Knowledge graphs serve stored structured facts. Neither needs grounding. Only LLMs have the (huge) gap between stale training data and fresh reality that makes grounding necessary. 

The need for grounding will gradually disappear as the three technologies of the algorithmic trinity converge and work together natively in real time.

In an assistive engine, the LLM is the lead actor. When the user asks a question or seeks a solution to a problem, the LLM assesses its confidence in its own answer.

If confidence is sufficient, it responds from embedded knowledge. If confidence is low, it sends cascading queries to the search index, retrieves results, dispatches bots to scrape selected pages, and synthesizes an answer from the fresh evidence (Perplexity is the easiest example to see this in action — an LLM that summarizes search results).
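That basic cascade can be sketched as follows. This is a purely illustrative stand-in, not any vendor's actual pipeline: every function name and the confidence threshold are invented for illustration.

```python
# Illustrative sketch of confidence-gated grounding: answer from embedded
# knowledge when confident, otherwise retrieve and synthesize from fresh
# evidence. All names and values here are hypothetical.

CONFIDENCE_THRESHOLD = 0.8

def embedded_answer(query):
    # Stand-in for the model's parametric answer plus a self-assessed score.
    knowledge = {"capital of france": ("Paris", 0.95)}
    return knowledge.get(query.lower(), ("unknown", 0.1))

def search_index(query):
    # Stand-in for cascading queries against a web index.
    return [f"doc about {query}"]

def synthesize(query, documents):
    # Stand-in for summarizing retrieved evidence into an answer.
    return f"grounded answer to '{query}' from {len(documents)} document(s)"

def answer(query):
    text, confidence = embedded_answer(query)
    if confidence >= CONFIDENCE_THRESHOLD:
        return text                  # respond from embedded knowledge
    docs = search_index(query)       # low confidence: retrieve
    return synthesize(query, docs)   # ground on fresh evidence
```

Here, `answer("capital of France")` returns directly from embedded knowledge, while an unfamiliar query falls through to retrieval and synthesis, which is the behavior Perplexity makes easy to observe.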

But that’s too simplistic. The three grounding sources model that follows is my reconstruction of how this lifecycle operates across the algorithmic trinity.

The search engine grounding the industry currently focuses on is this: the LLM queries the web index, retrieves documents, and extracts the answer. That’s high fuzz.

Now add this: the knowledge graph allows a simple, quick, and cheap lookup (low fuzz, binary edges, no interpretation required), and our data shows that Google does this already for entity-level queries.

My bet is that specialist SLM grounding is emerging as a third source. We know that once enough consistent data about a topic crosses a cost threshold, the system builds a small language model specialized for that niche, and that model becomes a domain-expert verifier. It would be foolish not to use that as a third grounding base.

The competitive implication is huge. A brand with entity graph presence gives the system a low-fuzz grounding path. A brand without it forces the system onto the high-fuzz path (document retrieval), which means more interpretation, more ambiguity, and lower confidence in the result. The competitor with structured entity data gets verified faster and more accurately.

In short, focus on entity optimization because knowledge graphs are the cheapest, fastest, and most reliable grounding for all the engines.

Display: Where machine confidence meets the person

Your content has been annotated, recruited into its knowledge structures, and verified through grounding. Display is where the AI assistive engine decides what to show the person (and, in a future that is already arriving, where the AI assistive agent decides what to act upon).

Display is three simultaneous decisions: format (how to present), placement (where in the response), and prominence (how much emphasis). A brand can be annotated, recruited, and grounded with high confidence and still lose at display because the system chose a different format, placed the competitor more prominently, or decided the query deserved a different type of answer entirely.

This is essentially the same thing as Bing’s Whole Page Algorithm. Gary Illyes jokingly called Google’s whole page algorithm “the magic mixer.” Nathan Chalmers, PM for the whole page algorithm at Bing, explained how that works on my podcast in 2020. Don’t make the mistake of thinking this is out of date — it isn’t. The principles are even more relevant than ever.

UCD activates at display

You may have heard or read me talking obsessively about understandability, credibility, and deliverability. UCD is absolutely fundamental because it is the internal structure of display: the vertical dimension that makes this gate three-dimensional.

The same content, grounded with the same confidence, presents differently depending on who is asking and why.

A person arriving with high trust — they searched your brand name, they already know you — experiences display at the understandability layer, where the engine acts as a trusted partner confirming what they already believe, which is BoFu.

A person evaluating options — they asked “best [category] for [use case]” — experiences display at the credibility layer, where the engine presents evidence for and against as a recommender, which is MoFu.

A person encountering your brand for the first time — a broad topical question in which your name appears — experiences it at the deliverability layer, where the system introduces you, which is ToFu.

The user interaction reveals the funnel position. The funnel position determines which UCD layer fires.

This is why optimizing only for “ranking” misses reality: Display is a context-sensitive presentation, not a list, and the same piece of content can introduce, validate, or confirm depending on who asked.

The framing gap at display

The system presents what it understood, verified, and deemed relevant. The gap between that and your intended positioning is the framing gap, and it operates differently at each funnel stage.

  • At ToFu, the gap is cognitive: the system may know you exist, but doesn’t associate you with the right topics.
  • At MoFu, the gap is imaginative: the system needs a frame to differentiate your proof from the competitor’s, and most brands supply claims without frames.
  • At BoFu, the gap is about relevance: the system cross-references your claims against structured evidence, and either confirms or hedges.

After annotation, framing is the single most important part of the SEO/AAO puzzle, so I’ll talk a lot about both in the coming articles.

Won: The zero-sum moment where one brand wins and every competitor loses

Everything I’ve explained so far in this series collapses into a zero-sum point at the “won” gate. Here, the outcome is binary. The person (or agent) acts, or they don’t. One brand converts, and every competitor loses. 

The system may have mentioned others at display, but at the moment of commitment, there can only be one winner for the transaction.

Three won resolutions in the competitive context

Won always resolves through three distinct mechanisms, each with different competitive dynamics.

Resolution 1: Imperfect click

  • The AI influences the person’s thinking at grounding and display, but the person decides independently: they choose one of several options offered by the engine, they walk into the store, or they book by phone. 
  • This is what Google called the “zero moment of truth”: the competitive battle happens at display, where the engine has influenced the human, but the active choice is still very much the person’s own.

Resolution 2: Perfect click

  • The AI recommends one brand and the person takes it. This is the natural next step, what I call the zero-sum moment. 
  • This fires inside the AI interface, where the engine filtered for intent, context, and readiness, presented one answer, and the person converted.

Resolution 3: Agential click

  • The AI agent acts autonomously on the person’s behalf. No person sits at the decision point: the transaction is an API settlement between the buyer’s agent and the brand’s action endpoint.
  • The competitive battle happened entirely within the engine: whichever brand had the highest accumulated confidence, the strongest grounding evidence, and a functional transaction endpoint is the winner. The person doesn’t choose. The system chooses for them.

The trajectory runs from oldest to newest: Resolution 1 was dominant up to late 2025, Resolution 2 is taking over, and Resolution 3 gained significant traction in early 2026. Stripe and Cloudflare are laying the transaction and identity rails. Visa and Mastercard are building the financial authorization infrastructure.

Anthropic’s MCP is providing the coordination layer. Google’s UCP and A2A are defining how agents communicate across the full consumer commerce journey. Apple has the closed-loop infrastructure to make it seamless on a billion devices the moment they choose to. 

Microsoft is locking in the enterprise and government layer through Copilot in a way that will be extremely difficult to displace. No single company turns Resolution 3 on — but all of them together make it inevitable.

Competitive escalation across the five ARGDW gates

The competitive intensity increases at every gate — a progressive narrowing, a Darwinian funnel where the field shrinks at each stage. The narrowing pattern is my model based on observed outcomes across our database. The underlying principle (competitive selection intensifies downstream) is structural to any sequential gating system.

Competitive narrowing
  • The field is large at annotation, where the algorithms create scorecards and your classification versus competitors’ determines downstream positioning.
  • Recruitment sets the qualifying round: multiple brands enter the system’s knowledge structures, but not all, and the selection criteria already favor multi-graph presence.
  • Grounding narrows the shortlist as confidence requirements tighten — the system verifies the candidates worth checking, not everyone.
  • Display reduces to finalists, often one primary recommendation with supporting alternatives.
  • Won is the binary outcome. The zero-sum moment you’re either welcoming with open arms or fearful of.

ARGDW: Relative tests. The scoreboard is on.

Five gates. Five relative tests. Competitive failures in ARGDW are significantly harder to diagnose than infrastructure failures in DSCRI because the fix is competitive positioning rather than technical.

  • Annotation failures mean the system misclassified what your content is or who it belongs to — write for entity clarity, structure claims with explicit evidence, and use schema markup to declare rather than expect the system to guess.
  • Recruitment failures increasingly mean you’re present in one graph while competitors have two or three — build entity graph presence (structured data, knowledge panel, entity home), document graph presence (content quality, topical coverage), and concept graph presence (consistent publishing across authoritative platforms) as a coordinated program.
  • Grounding failures mean the system is verifying you on the high-fuzz path — provide structured entity data for low-fuzz verification, and MCP endpoints if you need real-time grounding without the search step.
  • Display failures mean the framing gap is costing you at the three layers of the visible gate — assuming you fixed all the upstream issues, then closing that framing gap at every UCD layer is your pathway to gain visibility in AI engines.
  • Won failures mean the resolution mechanism doesn’t exist — Resolution 1 requires that you rank (good enough up to 2024), Resolution 2 requires that you dominate your market (good enough in 2026), and Resolution 3 requires a mandate framework and action endpoint (needed for 2027 onward).
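The schema-markup fix at the annotation gate can start with an explicit entity declaration on your entity home. A minimal JSON-LD sketch, where every name and URL is a placeholder to adapt:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Brand",
  "url": "https://www.example.com/",
  "sameAs": [
    "https://www.linkedin.com/company/example-brand",
    "https://en.wikipedia.org/wiki/Example_Brand"
  ],
  "description": "A one-sentence statement of what the entity is, declared rather than left for the system to guess."
}
```

The `sameAs` links are what tie the declaration to corroborating sources, supporting both entity graph recruitment and the low-fuzz grounding path described earlier.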

After establishing the 10-gate AI engine pipeline, what’s next?

The aim of this series of articles is to give you the playbook for the DSCRI infrastructure phase and the strategy for the ARGDW competitive phase. This 10-gate AI engine pipeline breaks optimizing for assistive engines and agents into manageable chunks.

Each gate is manageable on its own. And the relative importance of each gate is now clear for you (I hope). In the remainder of this series of articles, I’ll provide solutions to the major issues at each gate that will help you manage each individually (and as part of the collective whole).

Aside: The feedback I have had from Microsoft on this series so far (thank you, Navah Hopkins) reminded me of something Chalmers said to me about Darwinism in Search back in 2020.

My explanations are often more absolute and mechanical than the reality. That’s a very fair point. But then reality is unmanageably nuanced, and nuance leads to a lack of clarity and often paralyzes people to the extent that they struggle to identify actionable next steps. I want to be useful.

I suggest we take this evolution from SEO to AAO step by step. Over the last 10+ years, I’ve always done my very best to avoid saying “it depends.”

People often say it takes 10,000 hours to become an expert. The framework presented here comes from tens of thousands of hours analyzing data, experimenting, working with the engineers who build these systems, and developing algorithms, infrastructure, and KPIs.

The aim is simple: reduce the number of frustrating “it depends” answers and provide a clear outline for identifying actionable next steps.

This is the fifth piece in my AI authority series. 

Why social search visibility is the next evolution of discoverability

17 March 2026 at 17:00
While everyone focuses on AI search, the real opportunity may be social search

Search strategy once meant ranking on Google. We optimized websites and invested heavily in organic visibility. Entire marketing strategies were built around capturing demand from Google search results.

But search behavior doesn’t live on a single platform. Today, people search on TikTok for recommendations, YouTube for tutorials, Reddit for honest opinions, and Amazon for product validation.

Search behavior now spans a much wider set of platforms, creating one of the most overlooked opportunities in digital marketing.

Search behavior is diversifying

Recent research from SparkToro and Datos analyzed search behavior across 41 major platforms, including traditional search engines, ecommerce platforms, social networks, AI tools, and reference sites.

The findings reinforce something many marketers are beginning to notice. Search is no longer confined to traditional search engines.

While Google still dominates search activity, a growing share of discovery now happens across a wider collection of platforms — a search universe, if you will.

The research suggests search activity is roughly distributed as follows:

  • Traditional search engines: ~80% of searches, with Google alone at ~73.7%
  • Commerce platforms (Amazon, Walmart, eBay): ~10%
  • Social networks: ~5.5%
  • AI tools (ChatGPT, Claude, etc.): ~3.2%

Consumers search directly on platforms where they expect to find the most useful answers, in the formats they prefer, rather than relying on Google to send them elsewhere.

Dig deeper: Discoverability in 2026: How digital PR and social search work together

The industry is focused on AI and missing the bigger mainstream shift

Much of the search industry conversation today is focused on AI. Questions like:

  • How do I rank in ChatGPT?
  • How do I optimize for AI search?
  • Will AI replace Google?

They’re constantly being posed, debated, and answered by SEO professionals on platforms like Search Engine Land.

I want to be clear: these are important questions. But the data within this study tells a more grounded story, especially when thinking about strategy over the next 12 months.

AI search tools currently account for roughly 3.2% of search activity, per SparkToro research. That’s meaningful. It will almost certainly reshape how people search and discover information in the future.

But today, AI search is still smaller than many established discovery platforms with far broader adoption. For example:

  • Amazon receives more searches than ChatGPT.
  • YouTube receives more searches than ChatGPT.
  • Even Bing receives more search activity.

Yet many brands are pouring disproportionate attention into AI visibility while overlooking platforms where millions of searches are already happening every day.

Social platforms are now search engines

For many users, social platforms are now core search destinations. People look to:

  • TikTok for recommendations, restaurants, travel ideas, and products.
  • YouTube for tutorials, reviews, and problem-solving.
  • Reddit for honest discussions and community opinions.
  • Pinterest for inspiration and visual discovery.

Each platform plays a different role in the discovery journey.

  • TikTok/Instagram: discovery and recommendations.
  • YouTube: learning, tutorials, and reviews.
  • Reddit: real opinions and community discussions.
  • Pinterest: inspiration and planning.

These platforms are more than entertainment destinations. Users head to them with real intent to find a solution to a problem, need, or desire.

Social content is now appearing directly in Google results

As users adopt social platforms for search, Google has begun aggregating and organizing information right within its SERPs. So yes, social and creator content appears directly inside Google search results.

Over the past year, Google has significantly expanded how it surfaces social content within SERPs. Search results now frequently include TikTok videos, YouTube Shorts, Reddit threads, Instagram posts, and forum discussions.

Google even partnered with platforms like Reddit to ensure that community discussions appear more prominently in search results. This means social content can now influence discovery in multiple ways:

  • Direct searches on social platforms.
  • Visibility within Google search results.
  • Influence within AI-generated answers.

Dig deeper: Social and UGC: The trust engines powering search everywhere

Social content is also powering AI search

Social platforms are also important sources for AI-generated answers. AI systems rely on content that reflects real-world experiences, discussions, and opinions.

That’s why platforms such as Reddit, YouTube, Quora, forums, and creator-led content (i.e., Instagram, TikTok, and YouTube Shorts) are frequently cited in AI-generated responses.

Google’s AI Overviews often reference Reddit threads and YouTube videos.

Other AI tools regularly draw insights from community discussions, reviews, and creator content when generating answers.

This means content created for social discovery can influence visibility across multiple layers of search, including social platforms, Google search results, and AI-generated responses.

A single piece of content can now travel much further across this search universe, repeatedly putting signals in front of audiences and building preference for one brand over another.

The compounding discoverability effect

When brands invest in social search visibility, they unlock a powerful compounding effect. For example, a useful YouTube tutorial could:

  • Rank in YouTube search.
  • Appear in Google search results.
  • Be referenced in AI-generated answers.
  • Be shared across social platforms.
  • Spread through private messaging and dark social channels.

Unlike traditional website content, social content can move across platforms, dramatically expanding its reach. This creates an entirely new layer of discoverability.

And at a time when marketing budgets are under increasing scrutiny, the ability for content to generate visibility across multiple platforms makes the ROI of content strategies far more compelling.

Dig deeper: The social-to-search halo effect: Why social content drives branded search

Most brands still follow the old search playbook

Despite these shifts, most search strategies still revolve around Google SEO, paid search, website content, and AI/LLM interfaces.

Few brands have structured strategies for TikTok search optimization, YouTube search visibility, Reddit community engagement, or creator-led discovery.

While Google SEO is incredibly competitive, social search remains relatively under-optimized. Brands that move early can capture visibility (presence) in spaces where demand already exists, thereby developing preference for their brand.

When brands invest in social search visibility, they aren’t just unlocking the 5.5% of searches happening directly on social platforms. They’re also influencing traditional search results, AI-generated answers, and wider discovery across the web.

Search everywhere: A new model for discoverability

Search is more than a channel. It’s a behavior that happens across a developing and evolving search universe.

Your audience searches wherever they believe they’ll find the best answer in the most useful format — whether that’s Google, TikTok, YouTube, Reddit, Amazon, Pinterest, or increasingly, AI interfaces.

Winning search today means being discoverable wherever those searches happen. The brands that win won’t be the ones that rank in just one place, even as traditional SEO remains an important part of the mix. They’ll be the ones that are discoverable wherever their audience searches.

That is the future of search. That is “search everywhere.”

Dig deeper: ‘Search everywhere’ doesn’t mean ‘be everywhere’

Google Ads Editor 2.12 adds creative control and campaign flexibility

17 March 2026 at 16:11
Google Ads auction insights

Google is expanding capabilities in Google Ads Editor to give advertisers more creative flexibility, automation control, and budget precision — especially as AI-driven campaign types continue to evolve.

What’s new. The 2.12 release introduces a wide set of updates across Performance Max, Demand Gen, and video campaigns, with a clear focus on scaling creative assets and improving workflow efficiency.

Creative expansion. Performance Max campaigns now support up to 15 videos per asset group, allowing advertisers to feed more variations into Google’s AI for testing. The addition of 9:16 vertical images also reflects growing demand for mobile-first formats, particularly across surfaces like short-form video.

Campaign upgrades. Demand Gen campaigns get several enhancements, including new customer acquisition goals, brand guideline controls, and hotel feed integrations. A new minimum daily budget and a streamlined campaign build flow aim to improve stability and setup.

Video & AI control. Updates to non-skippable video formats and real-time bid guidance give advertisers more control over performance, while new text and brand guidelines help ensure AI-generated assets stay on-brand and compliant.

Budgeting shift. A new total campaign budget feature allows advertisers to set a fixed spend across a defined period — ideal for promotions or seasonal bursts — with Google automatically pacing delivery.

Workflow improvements. Account-level tracking templates, better visibility into Final URL expansion performance, clearer campaign status filters, and bulk link replacement tools are designed to reduce manual work and improve account management at scale.

Why we care. This update to Google Ads Editor gives you more creative flexibility and control over AI-driven campaigns, especially in Performance Max and Demand Gen. Features like increased video limits, vertical assets, and total campaign budgets help you test more, scale faster, and manage spend more efficiently.

It also improves workflows and brand safeguards, making it easier to guide automation while maintaining consistency and performance across Google Ads.

Between the lines. The update continues a broader trend: as automation increases, Google is giving advertisers more ways to guide AI rather than manually control every input.

The bottom line. Google Ads Editor 2.12 is less about one standout feature and more about incremental gains across creative, automation, and control — helping advertisers better manage increasingly AI-driven campaigns within Google Ads.

How Google’s Universal Commerce Protocol could reshape search conversions

17 March 2026 at 16:00
How Google’s Universal Commerce Protocol could reshape search conversions

As Google rolls out AI Overviews, AI Mode in Search, and the Gemini ecosystem, we face a growing challenge: what happens when users get answers — and soon complete purchases — without leaving Google’s interfaces?

Enter Google’s Universal Commerce Protocol (UCP), now in beta.

UCP is designed to help brands sell to consumers without leaving the Gemini or LLM experience. Consumers can check out within the LLM, add rewards points, and fully execute the transaction. Here’s an example flow:

Google UCP workflow example

How Google’s Universal Commerce Protocol works

At its core, UCP standardizes how consumer AI interfaces communicate with merchant checkout systems. When a user tells Gemini, “Find me a highly rated, waterproof hiking boot in size 10 under $200 and buy it,” UCP is the invisible bridge that allows the AI to securely fetch inventory, process the payment, and confirm the order.

While Google’s developer documentation leans into technical jargon like “Model Context Protocol (MCP)” and “Agent2Agent (A2A) interoperability,” the implications are remarkably straightforward:

  • It uses your existing feeds: UCP plugs directly into your existing Google Merchant Center (GMC) shopping feeds. The inventory data you’re already managing for your campaigns is the same data that will power these AI transactions.
  • You keep the data: Unlike selling on some third-party marketplaces, where you lose the customer relationship, UCP ensures you remain the merchant of record. You process the transaction, you own the first-party customer data, and you control the post-purchase experience.
  • Frictionless checkout: By enabling checkouts directly within Google’s AI ecosystem, UCP can reduce cart abandonment and increase conversion rates among high-intent shoppers.
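The shape of such an agentic purchase can be sketched in a few lines. This is purely illustrative and not the real UCP API: the function names, thresholds, and catalog are invented, and the actual protocol is defined in Google's open-source spec.

```python
# Hypothetical sketch of an agentic checkout flow in the spirit of UCP:
# fetch inventory, create a checkout session, confirm the order.
# None of these names come from the real UCP specification.

def fetch_inventory(query, max_price):
    catalog = [
        {"sku": "BOOT-10-WP", "name": "Waterproof hiking boot, size 10",
         "price": 149.99, "rating": 4.7},
        {"sku": "BOOT-10-STD", "name": "Hiking boot, size 10",
         "price": 89.99, "rating": 3.9},
    ]
    return [p for p in catalog if p["price"] <= max_price]

def create_checkout_session(product):
    return {"session_id": "sess-001", "sku": product["sku"],
            "total": product["price"]}

def confirm_order(session):
    return {"order_id": "ord-001", "session_id": session["session_id"],
            "status": "confirmed"}

def agent_buy(query, max_price, min_rating=4.5):
    # Filter to products meeting the user's brief ("highly rated, under $200").
    candidates = [p for p in fetch_inventory(query, max_price)
                  if p["rating"] >= min_rating]
    if not candidates:
        return None                  # nothing meets the brief: no transaction
    best = max(candidates, key=lambda p: p["rating"])
    session = create_checkout_session(best)
    return confirm_order(session)
```

The point of the sketch is the shape of the flow: the merchant's feed supplies the inventory step, and checkout and order confirmation settle over APIs rather than a website visit.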

Dig deeper: How Google’s Universal Commerce Protocol changes ecommerce SEO

Best practices for Google’s UCP

Like many LLM optimization recommendations, these steps come down to the fundamentals of managing your shopping feed and Merchant Center account.

Google outlined a few best practices. If you follow these four steps, you’ll be well-positioned for success.

1. Master your feed data hygiene

In an agentic commerce environment, your product feed is your primary sales tool. To ensure the AI accurately matches your products to highly specific user queries, you need to enrich your feed with granular details.

  • Write product titles that are 30 or more characters long.
  • Expand product descriptions to 500 or more characters.
  • Include Global Trade Item Numbers (GTINs), where relevant, to ensure accurate product matching.
  • Include three or more additional images alongside your primary product photo to engage visual shoppers.
  • Use lifestyle images, not just standard product shots on white backgrounds.
  • Ensure your image quality meets the standard of 1,500×1,500 pixels.
  • Categorize your inventory by product type and share key product highlights.
  • Prepare specific feed attributes required for UCP, such as returns, support information, and policy information.
  • Support Google’s Native Checkout when possible (checkout logic integrated directly into the AI interface). Google also offers another option called Embedded Checkout (an iframe-based solution for highly bespoke branding). This will work, but is suboptimal at this time.
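The thresholds above can be checked programmatically before a feed is submitted. A minimal sketch, assuming each product is a plain dict; the field names mirror common Merchant Center attribute naming but should be verified against your own feed specification:

```python
# Hypothetical feed-hygiene check against the thresholds listed above.
# Field names are illustrative, not an official feed schema.

def feed_issues(product):
    issues = []
    if len(product.get("title", "")) < 30:
        issues.append("title shorter than 30 characters")
    if len(product.get("description", "")) < 500:
        issues.append("description shorter than 500 characters")
    if not product.get("gtin"):
        issues.append("missing GTIN")
    if len(product.get("additional_image_links", [])) < 3:
        issues.append("fewer than 3 additional images")
    return issues

product = {
    "title": "Waterproof Hiking Boot, Leather, Men's Size 10",
    "description": "Short description.",
    "gtin": "00012345678905",
    "additional_image_links": ["img1.jpg", "img2.jpg", "img3.jpg"],
}
```

Running `feed_issues(product)` on the sample above flags only the thin description, which is exactly the kind of gap that weakens matching against granular user queries.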

Dig deeper: Google publishes Universal Commerce Protocol help page

2. Highlight convenience and trust signals

To set your brand apart when AI is helping consumers make immediate, confident purchasing decisions, you must pass trust and convenience signals directly through your feed. The data shows that these elements directly impact the bottom line:

  • Indicate clearly if your brand offers free shipping.
  • Share your shipping speed (next day, two-day, etc.).
  • Display your return policy.
  • Submit sale prices when available. Regardless, ensure the feed represents the most accurate pricing details.
  • Include product ratings.

3. Upgrade your technical infrastructure and SEO

The shift to UCP requires foundational updates to how your backend systems interact with Google. You must work hand in hand with your development and SEO teams to prepare for these AI search experiences.

  • Migrate from the Content API to the Merchant API to enable real-time inventory updates and programmatic access to data and insights.
  • Upgrade your tag in Data Manager and implement Conversion with Cart Data to effectively use first-party data in your campaigns.
  • Prioritize content-rich pages for indexing and crawling, and ensure structured data is always supported by visible content.
  • Create your Business Profile and claim your Brand Profile to highlight your business information and brand voice on Google platforms.
  • Have your development team explore and prototype with UCP open source on GitHub to map APIs for checkout, session creation, and order management.

4. Additional features and tools beyond UCP to consider

Google is actively rolling out pilot programs designed specifically for the agentic era. Be proactive in adopting these new solutions rather than waiting for wide release:

  • Prepare for the “Business Agent,” a virtual sales associate that acts like a brand representative to answer product questions right on Google.
  • Consider the “Direct Offers Pilot,” a new way for advertisers to present exclusive discounts directly in AI Mode.
  • Inquire about the “Conversational Attributes Pilot,” which introduces dozens of new Merchant Center attributes designed to enhance discovery in the conversational commerce era.

Dig deeper: Are we ready for the agentic web?

The future of search will happen within LLMs

The launch of Google’s Universal Commerce Protocol signals a significant shift. The SERP is becoming a transactional engine that increasingly operates within large language models.

UCP presents a meaningful opportunity. By removing friction between discovery and purchase, conversion rates could increase.

However, taking advantage of this requires stepping outside the Google Ads interface and working directly in your feed data and technical integrations, much like with Google Shopping. While this isn’t new, it’s becoming more important.

Ultimately, this comes down to the quality of your product data.

Before yesterday: Search Engine Land

OpenAI tests Ads Manager as ChatGPT ad business takes shape

16 March 2026 at 20:56

OpenAI is beginning to build the infrastructure for a formal advertising business around ChatGPT — but early performance signals suggest the company still has work to do to match established search platforms.

What’s happening. OpenAI started testing an Ads Manager dashboard with a small group of partners, according to confirmation shared with ADWEEK. The tool allows marketers to launch, monitor, and optimise campaigns in real time, similar to the campaign management platforms used across digital advertising.

Why we care. OpenAI is beginning to build a self-serve ads ecosystem around ChatGPT with a dedicated Ads Manager, as they prepare for AI assistants becoming a scalable channel. As conversational search grows, paid media marketers may need to think about visibility inside AI responses, not just traditional platforms like Google Search.

Early testing also means advertisers who participate now could gain first-mover insights into performance, formats, and optimisation strategies in a new advertising environment.

How it works today. Early testers currently receive weekly CSV performance reports that include metrics such as impressions and clicks. The reporting indicates the ads product is still evolving, with more advanced analytics and tooling likely to follow as the program develops.

The challenge. Early tests suggest click-through rates on ChatGPT ads trail those seen on Google Search, highlighting a key hurdle for OpenAI as it tries to prove the value of advertising inside conversational AI.

The cost of entry. Some early advertisers have reportedly been asked to commit at least $200,000 in spend, raising the stakes for OpenAI to demonstrate measurable performance and ROI.

Between the lines. Building an ad ecosystem requires more than ad inventory. Marketers expect robust reporting, optimisation tools, and predictable performance — areas where mature platforms like Google have years of advantage.

The bottom line. OpenAI is laying the foundation for a new ad platform inside ChatGPT, but convincing brands to shift budgets will depend on whether conversational ads can deliver results that compete with traditional search.

Google tests “Sponsored Shops” blocks in Shopping results

16 March 2026 at 20:29

Google appears to be testing a new “Sponsored Shops” format in Google Shopping results that highlights entire stores instead of individual products — a potential shift in how brands compete in Shopping ads.

What’s happening. Instead of displaying only single product listings, the new block groups multiple products from the same retailer into one sponsored unit. The format features the store name, several products from that shop, and signals such as ratings and brand presence, effectively creating a mini storefront directly inside the Shopping results.

Why we care. The new “Sponsored Shops” format in Google Shopping could shift competition from individual products to entire stores. Instead of winning visibility with a single SKU, brands may need stronger product feeds, better ratings, and broader assortments to appear in these store-level placements.

It also introduces multiple click paths within one ad unit, which could change how traffic flows between product pages and store pages. If the format scales, it may reshape how advertisers optimise campaigns across Google Shopping — prioritising brand presence and feed quality, not just product-level bids.

The big picture. The test suggests a move slightly up the funnel for Shopping ads. Rather than focusing solely on a single SKU, brands can showcase a broader product assortment and reinforce their store identity within one placement.

Why it’s notable. Store-level visibility means advertisers can highlight multiple products at once, increasing exposure per impression. It also strengthens brand presence by combining store name, ratings, and product range in one block.

For users, it makes discovery easier by allowing them to browse several items from the same retailer without navigating away from results.

Between the lines. If the format rolls out widely, it could reward brands with strong product feeds, high seller ratings, and clear brand trust signals. Merchants with well-structured feeds and competitive assortments may gain more visibility compared with those relying on a few individual product listings.

What to watch. One open question is how users will interact with the different clickable elements inside the ad unit. Marketing Operating Lead Stephanie Pratt commented on this and on the measurement split we might expect:

  • “It’ll be interesting to see the split of clicks on each part of the ad unit, and how much is on the brand name vs product and if that will confuse some consumers.”

The bottom line. If “Sponsored Shops” expands beyond testing, it could push Google Shopping toward more store-level competition — shifting strategy from purely product-level optimisation to building stronger brand presence within the Shopping ecosystem.

First seen. This update was spotted by PPC Specialist Arpan Banerjee, who shared a screenshot of the update on LinkedIn.

What incrementality really means in affiliate marketing

16 March 2026 at 19:00

The words “incremental” and “incrementality” get thrown around in affiliate marketing, but they might not mean what they sound like. There may be no increase in actual sales, new customers, or revenue. Affiliate marketers who refer to incrementality often look at it only within the affiliate channel, not across the company as a whole.

To determine whether affiliates are truly incremental, ask a simple question: Would the sale have happened without the affiliate program?

The answer determines whether the partner is bringing you new customers and revenue or simply intercepting customers already in your checkout flow.

Why high-intent traffic doesn’t always mean incremental value

The way “incrementality” is used in affiliate programs is similar to how an affiliate, an agency, or a network uses “high intent” to describe traffic. High intent means the person has a strong intent to purchase, which is a good thing. What is left out is whether that touchpoint would happen if there were no affiliate program at all.

A coupon site may claim high intent when the touchpoint is a consumer already at checkout who goes to Google and types in “your brand + coupons.” If you closed your affiliate program today, these same touchpoints would likely still happen, and your company would save money because you would no longer pay commissions, network fees, manager salaries, or agency fees.

Yes, the traffic is high intent. It’s your customers already checking out of your shopping cart. It doesn’t get more “high intent” than that. The touchpoint may be low- or no-value because it happens whether you have a program or not, and you may be losing money on the sale because of it.

Note: Not all coupon sites or deal touchpoints are bad. Some shopping cart interceptions may add value (including brand + coupon), so don’t take action without testing. Use your data and test to see if the same or a similar amount of sales happens without an affiliate program before making decisions.

The more customers check out of your own shopping cart, the more sales the affiliates in the top positions on Google make. The fewer you have, the less they make. They rely on you having your own traffic to intercept so they can make money, which is why they are sometimes called parasitic affiliates. And that’s where incrementality comes in.

What incremental sales and value actually mean

If this touchpoint isn’t bringing in new customers, and it happens even when you don’t have a program, are the sales incremental? This starts with defining what incremental sales and value are.

  • Incremental sales are sales that are introduced by the partner and that your company doesn’t have access to without the partner.
  • Incremental value is when the affiliate increases the value of the customer by doing things you can’t do without them, including increasing items in the cart, increasing order value, building consumer trust that results in more conversions, and helping move products you need to clear off a shelf through their own marketing efforts.

You, as the brand, can feature a coupon code, a deal, or a bundle without an affiliate program. If you have no program, you can submit those same deals to the sites showing up for your brand + coupons and get the same or a similar amount of sales with the increased AOV or items in cart. But you don’t have to spend money on network fees, commissions and affiliate manager salaries.

If a deal or bundle exists only on the partner’s platform (website, videos, password-protected communities, newsletter blasts, etc.) and it doesn’t appear for your brand on Google, YouTube, etc., their active community is what drives sales. That’s something you can’t do without them. The affiliate is adding incremental value.

Dig deeper: Where affiliates can get traffic beyond Google search

Here are a few types of affiliate content and programs that can add real incremental value.

Product and brand comparisons

There are two types of comparisons: brands and products. When an affiliate compares two products from any brand (e.g., bandages sold at most retailers, like CVS, Walgreens, Amazon, and Walmart), the affiliate controls where traffic goes and which brand gets the sale. This may not be customer acquisition for big brands, since they already have millions of customers, but it’s high-value because without that affiliate deciding to send the customer to you, you don’t get the sale.

The person could be comparing two types of electronics or adaptors for a specific purpose. Then they decide which retailer to send the consumer to and explain why they recommend that one. They could mention the service guarantees, extra guides, prices, or social causes the brands support. Each of these helps convince the consumer to shop with their recommendation, increasing the incrementality and value.

If no brand is mentioned at all in the content, they can change out the affiliate links and destination at any time, so your brand can be cut out, and you lose. This is where the affiliate holds the power, as they control their traffic and add incremental value.

Brand comparisons get tricky. Comparing you and a competitor adds credibility because it’s a “trusted third party” who is putting their name on the line. They likely do help the customer make a decision, but it isn’t new customer acquisition, as the customer is already in your funnel. But it’s a value-adding touchpoint in the customer acquisition funnel.

Tip: If you have a non-affiliate doing the brand comparison, you’re more profitable because you don’t pay commissions on it in perpetuity.

For example, you pay a one-time fee of $500 for an unbiased and honest comparison vs. paying $2,000 in commissions over the course of the year. Your company is more profitable by $1,500 the first year and $2,000 each additional year until the comparison is no longer accurate or shows up for your brand vs. the competitor.

Then there’s the big incremental value add for small brands. By being added to a comparison with the two big brands, you gain access to their comparison traffic and their customer funnel. The credibility from their brands and the reviewer may build trust for your brand, and this comparison is likely to be customer acquisition and incremental in revenue, not to mention getting your competitors’ customers.

These types of partners include:

  • Review and comparison websites.
  • Listicle sites (SEO and PPC).
  • YouTubers.
  • Communities and forums with UGC and shopping guides.


Creators who do and don’t do reviews

Creators is a blanket term for anyone who creates content, including:

  • Social media influencers. 
  • YouTubers.
  • Bloggers.
  • Streamers.
  • Podcasters.
  • Others who build a following. 

They create everything from top-funnel, high-value traffic to mid- and low-value traffic.

I’ll break this section into two parts, starting with the mid- and low-value traffic.

Reviews only

When creators do a review only, the initial review gets distributed to anyone who subscribes, and this is top-funnel and builds trust. Then it gets tricky on incrementality.

Once the initial review is live and the subscribers have already viewed it, the top-funnel incrementality is over. Now, algorithms start to pick it up and show it for your own customers already in your funnel. Unlike the coupon example, where the sale is likely to happen just before the person clicks the pay now button, the customer review touchpoint isn’t as “high intent” yet.

This consumer is looking for validity and credibility before making a purchase. The reviewer provides credibility as a trusted third party and helps the consumer make a decision. When the algorithms show this review, it isn’t bringing you new customers, so there’s no full customer acquisition. But if you currently have only bad reviews showing up, and the affiliates have good reviews showing the benefits and presenting you in a good light, this can increase customer confidence, making the conversions happen. Not to mention it helps repair your brand reputation.

Affiliates will be faster to create review content than customers because they are incentivized with commissions. The same goes for non-affiliate ambassadors and influencers. Incrementality here is similar to comparisons.

If you pay an influencer or ambassador a one-time fee of $200 for the review, that’s the only cost. When you have affiliates doing the review and earning commissions, each affiliate earns on each one, which could be $500 in commissions each year, while network fees, affiliate manager salaries, bonuses, etc., cost your company more than the influencer or ambassador.

With that said, it’s easier to get affiliates to update their reviews and create new ones as your company updates, since they’re making money by keeping them up to date. You’d need to pay the influencer or ambassador again each time, unless they are in a good mood and decide to do it for free.

The ones that genuinely value their readers will do it free and quickly because they want to make sure their audience gets accurate information. That said, it’s almost impossible to do for every brand they feature, especially if they’ve been around for 10 years. This is why a fee is normally required. It’s too much work for any one-person, or even four- or five-person, team.

Stephanie Robbins from Right Side Up also shared a situation where a review can be highly incremental. New brands without a ton of branded search and without demand yet could benefit from review affiliates. By getting reviews going early in the company’s life, they have an established foundation for growth. These established reviews help block competitors from taking their branded search. Once the brand starts to pick up, it will need to replace affiliate reviews with non-affiliate reviews via SEO to save money.

Dig deeper: The best affiliate networks by need and use case

Non-reviews

Non-review creators are huge for incrementality, and there’s no shortage of them.

  • Listicle affiliates.
  • Tutorial creators.
  • Communities for like-minded people.
  • Apps that provide solutions.
  • Media buyers.

Listicle affiliates

These affiliates create “top ten” and “best” lists, including media companies, PPC affiliates, and bloggers with roundups and shopping guides. The ones that don’t optimize for your brand + reviews or bid on your branded terms in search engines are bringing you customers with a higher intent to purchase.

The consumer here knows they need something and is shopping, but they don’t know which brand to choose. Being on these lists builds trust and may reach a consumer who hasn’t heard of you (especially if you’re not one of the big names in the space).

Tutorial creators

You can see them on YouTube, Skool, and other platforms, teaching workshops and creating written guides on how to fix a roof, bake a cake, set up a server, or take care of a goldfish, which likely provides a lot of incrementality for your brand.

The ones that don’t add “with [Brand]” to the title (How to take a photo with a Canon camera vs. how to take a photo) and throughout the content have a captive audience that you can’t reach without them.

Because their traffic does not need your brand, they control who gets the referrals. Being in these guides brings you high-value and incremental customers. The conversion rates may be higher because the tutorial presold the product, and the creator put their name on the line by recommending you.

Community

This same form of trust comes from community moderators and highly respected members. When people are there because they love sharing parenting advice, a common passion for bird watching or gluten-free cooking, video games or tabletop games, or anything else, they trust the community.

When the owner of the community says this is the brand to trust, that trust passes through, and the community shops. While they may not be new customers each time, they are incremental, and you get brand credibility, which is one of the hardest things to earn.

Apps

There’s no shortage of apps, and now that AI is powering features, affiliate sales are being made. Some apps may let you find celebrity styles you like and then use affiliate data feeds to find similar clothes and recommend them to the user. 

Others might have you snap a photo of your room, then use affiliate data feeds to show what furniture could look like in it and let you mix and match to create your perfect space. These are high-value touchpoints with incrementality because the app controls where the person shops and pre-sells the items by giving them an experience with the products.

Media buyers

Media buyers purchase ads across the web, in communities, and other spaces. As long as they’re not buying ad space via the pages in your checkout, targeting your own website if you run ads on it, or using your brand as a target, they’re adding incrementality by reaching the audience your ads can’t reach.

Some have a lot of experience on third-party platforms, and others may have a budget when you’re already tapping yours out, so they work as an extension of your own efforts.

Dig deeper: How amplifying creator content strengthens trust and lowers media costs

Don’t confuse affiliate attribution with incrementality

Incrementality in affiliate marketing means the affiliate brings you a new customer and drives a sale that likely wouldn’t have happened without them or without the program at all. When an affiliate relies on your existing traffic, incrementality drops substantially. You’ll often hear terms like “high-intent traffic” used to make this sound more valuable than it is.

Use your data and your knowledge to determine what is right for your business and what incrementality means for you. Don’t rely on one channel alone.

Key takeaways:

  • When an affiliate drives revenue, increases cart value, and moves products without relying on the brand’s own traffic, they’re adding incremental revenue and customers to your business.
  • If the sales happen whether you have a program or not, there’s little to no incremental value (i.e., affiliates that only intercept your own customers already in your checkout process).

LinkedIn updates feed algorithm with LLM-powered ranking and retrieval

16 March 2026 at 18:38

LinkedIn is launching a new AI-powered feed ranking system that uses large language models and GPUs to analyze post content and surface more relevant updates to its 1.3 billion members.

Why we care. Understanding how LinkedIn surfaces content is critical if you want your posts — or your brand’s — to be discovered. The new system prioritizes topical relevance and engagement patterns, LinkedIn said. Posts that demonstrate expertise and align with emerging professional conversations may travel farther across the network — even without existing connections.

The details. LinkedIn rebuilt much of its feed recommendation system using large language models, transformer models, and GPU infrastructure. The overhaul centers on two systems: retrieving relevant posts and ranking them in the feed.

Unified retrieval system. LinkedIn replaced several separate discovery systems with a single LLM-powered retrieval model.

  • Previously, feed candidates came from multiple sources, including network activity, trending posts, collaborative filtering, and topic-based systems.
  • The new approach uses LLM-generated embeddings to understand what posts are about and how they connect to your professional interests.
  • Now, LinkedIn can link related topics even when they use different terminology. For example, engagement with posts about small modular reactors could surface content about electrical grid infrastructure or renewable energy.
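
LinkedIn's actual models are proprietary, but the underlying idea can be sketched with a toy example: hand-made stand-in vectors and cosine similarity, where topically related posts rank high for an interested member even when they share no keywords.

```python
import math

# Toy embedding-based retrieval. The vectors are hand-made stand-ins for
# real LLM embeddings; related topics sit near each other in vector space.
POSTS = {
    "small modular reactors":  [0.9, 0.8, 0.1],
    "grid infrastructure":     [0.8, 0.7, 0.2],
    "renewable energy policy": [0.7, 0.9, 0.1],
    "sourdough baking tips":   [0.1, 0.0, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(interest_vec, k=2):
    """Rank candidate posts by similarity to a member's interest vector."""
    ranked = sorted(POSTS, key=lambda p: cosine(interest_vec, POSTS[p]), reverse=True)
    return ranked[:k]

# A member who engaged with nuclear-energy content: the energy/grid posts
# rank highest; the unrelated baking post does not.
interest = [0.85, 0.75, 0.15]
print(retrieve(interest))
```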

Ranking that follows your interests. After retrieval, LinkedIn ranks posts using a transformer-based sequential model. Instead of evaluating posts independently, the model analyzes patterns across your past interactions — including likes, comments, dwell time, and other signals.

  • This helps LinkedIn detect how your professional interests evolve and recommend content that reflects those shifts.

System performance and infrastructure. The system runs on GPU infrastructure designed to process millions of posts while keeping feeds fresh.

  • The architecture can update content embeddings within minutes and retrieve candidates in under 50 milliseconds, LinkedIn said.

Improving feed quality and authenticity. LinkedIn also announced updates to improve content quality:

  • Cracking down on automated engagement. LinkedIn is taking action against comment automation tools, browser extensions, and engagement pods that create inauthentic conversations. These tools violate platform rules and undermine real professional discussions, LinkedIn said.
  • Reducing engagement bait and generic posts. LinkedIn plans to show less content designed purely to drive comments or clicks — including posts asking people to comment “Yes” to boost reach, posts pairing unrelated videos with text to game distribution, and recycled thought-leadership with little substance.
  • Helping new members personalize their feeds faster. LinkedIn is testing an “Interest Picker” during signup that lets new users choose topics such as leadership, job search skills, or career growth, helping deliver relevant content from day one.

Why entity authority is the foundation of AI search visibility

16 March 2026 at 18:00

The webpage is no longer the unit of digital visibility.

For years, we’ve built our digital presence on a foundation of URLs and keywords, but that infrastructure was designed for a highway that AI has now bypassed.

In the search everywhere revolution, the most powerful atomic unit is the entity — a well-defined, machine-readable representation of a concept, product, organization, or person.

The brands establishing AI-era dominance are engineering entity authority. To survive the shift from traditional search to generative discovery, we must move beyond the page and focus on entity linkage to build a foundation of AI visibility.

From pages to entities

The evolution: From strings to things to systems

To navigate this landscape, we must recognize that we have moved past simple information retrieval. We’re witnessing a three-stage evolution in how the web is indexed and understood.

  • Phase 1 (Strings): Traditional SEO optimized for keyword strings. Success was matching queries to text on a page.
  • Phase 2 (Things): Modern search understands entities. Knowledge graphs allow engines to recognize that a brand, a founder, and a product are distinct, related “things.”
  • Phase 3 (Systems): AI-driven systems now operate on structured ecosystems of entities. The goal is no longer to rank for a term; it’s to become the verified authority within an interconnected system of entities and executable capabilities.

In this third phase, the search engine has become a reasoning engine. It looks at your content and the logical role your brand plays within a broader ecosystem.

Dig deeper: The enterprise blueprint for winning visibility in AI search

The machine imperative: The comprehension budget

This evolution is driven by a cold economic reality: the comprehension budget. AI systems spend real compute to read and interpret your content.

Every time an engine attempts to resolve an ambiguous brand or an implied relationship, it burns expensive GPU cycles. Understanding your content is a resource-heavy calculation.

If your data is unstructured or inconsistent, you force the AI to overspend this comprehension budget. When the computational cost of grounding your facts exceeds the limit, the model defaults. It hallucinates based on probability, substitutes a cheaper competitor, or ignores your entity entirely.

To win, you must provide a comprehension subsidy. Deep, nested Schema.org markup pre-processes your data, shifting the burden from expensive deep inference to fast, economical knowledge graph lookups. In a world of finite compute, the most efficient entity is the one most likely to be cited.

Dig deeper: From search to answer engines: How to optimize for the next era of discovery

From SEO to GEO: Relevance engineering

Traditional SEO has shifted into a new discipline, generative engine optimization (GEO), which moves from keyword targeting to relevance engineering: building interconnected semantic structures that enable machines to interpret, verify, and reuse trusted information.

GEO focuses on maximizing your inclusion in AI-generated answers across platforms like ChatGPT, Perplexity, and Google’s AI Overviews. This requires:

  • Structuring content for machine readability.
  • Answering conversational queries with high intent.
  • Establishing authority across trusted third-party ecosystems.
  • Ensuring entity consistency (avoiding “entity drift”).

Dig deeper: Chunk, cite, clarify, build: A content framework for AI search

Architecture: Knowledge graphs and deep schema

Most enterprise sites have some structured data deployed, but basic, fragmented schema — the kind used only for rich snippets — is functionally inadequate for AI.

When markup is applied page by page without nested relationships, the AI encounters isolated data islands. It sees a product here and an organization there, but no declared connection. This forces the AI back into an expensive inference loop.

The content knowledge graph

The architectural solution is a content knowledge graph: an interconnected network of entities built in Schema.org vocabularies and expressed in JSON-LD.

A correctly implemented content knowledge graph maps your entities hierarchically: Organization → Brand → Product → Offer → Review.
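
A minimal JSON-LD sketch of that hierarchy, built here as a Python dict. All names, URLs, and @id values are placeholders; the types and properties (Organization.brand, Product.offers, Product.review) are standard Schema.org vocabulary.

```python
import json

# Organization -> Brand -> Product -> Offer -> Review, connected by @id
# references so the nodes form one graph rather than isolated islands.
graph = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Organization",
            "@id": "https://example.com/#org",
            "name": "Example Corp",
            "brand": {"@id": "https://example.com/#brand"},
        },
        {
            "@type": "Brand",
            "@id": "https://example.com/#brand",
            "name": "ExampleBrand",
        },
        {
            "@type": "Product",
            "@id": "https://example.com/products/widget#product",
            "name": "Widget Pro",
            "brand": {"@id": "https://example.com/#brand"},
            "offers": {                       # nested, never free-floating
                "@type": "Offer",
                "price": "99.99",
                "priceCurrency": "USD",
                "availability": "https://schema.org/InStock",
            },
            "review": {
                "@type": "Review",
                "reviewRating": {"@type": "Rating", "ratingValue": "5"},
                "author": {"@type": "Person", "name": "A. Reviewer"},
            },
        },
    ],
}

print(json.dumps(graph, indent=2))
```

Because the Product references the Brand by @id rather than repeating its fields, a crawler resolves both to the same node, which is the whole point of a content knowledge graph.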

Nested schema

The ROI of schema:

  • 300%: The potential improvement in LLM response accuracy when enterprise content knowledge graphs (CKGs) provide factual grounding.
  • 20-40%: The traffic lift seen by sites deploying deeply nested, error-free advanced schema.

Dig deeper: Why entity search is your competitive advantage

Critical properties for trust

To achieve global authority, two properties are non-negotiable:

  • @id: Creates a consistent identifier that connects related entities across your website, ensuring AI understands they belong to the same source.
  • sameAs: Links your entity to authoritative external references (Wikipedia, Wikidata, etc.). This process, known as entity disambiguation, signals to AI exactly who you are in the global knowledge ecosystem.
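
A short sketch of both properties on an Organization node (the external URLs are placeholders):

```python
import json

# A stable @id reused site-wide, plus sameAs links to external knowledge
# bases for entity disambiguation. All URLs are placeholders.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "@id": "https://example.com/#org",   # one identifier, reused everywhere
    "name": "Example Corp",
    "sameAs": [
        "https://en.wikipedia.org/wiki/Example_Corp",
        "https://www.wikidata.org/wiki/Q000000",
        "https://www.linkedin.com/company/example-corp",
    ],
}

print(json.dumps(org, indent=2))
```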

To implement a content knowledge graph that survives the scrutiny of AI models, you must move from tactical tagging to entity governance. This playbook establishes a single source of truth that AI systems can verify at scale.


The 5-step implementation playbook

Here’s the strategic deep dive into the five-step implementation.

1. The semantic audit: Cleansing the foundation

Before deploying a single line of code, you must conduct a semantic audit to define your core entities (e.g., organization, products, people, locations) that will build your entity knowledge graph.

  • The goal: Eliminate duplicate or conflicting attributes.
  • The depth: All business information must be cleansed and manually validated against authoritative sources before publication. AI trust is built on consistency. If your website contradicts your Google Business Profile, you create “entity drift,” which lowers your confidence score.

2. Strategic type mapping: Precision over generalization

Success requires leveraging the full breadth of the Schema.org vocabulary — which now supports over 800 specific types.

  • The depth: Stop using generic types like Article. Use TechArticle, MedicalWebPage, or FinancialService.
  • Property saturation: Beyond types, use specific properties like mentions, hasPart, and about to clarify what the content is truly for. Incomplete markup forces AI systems back into the expensive “inference loop,” increasing the risk of exclusion.

3. Deep nested relationships: Building the MVG

Fragmented schema creates data islands. You must implement deep nesting to fully trace your business’s lineage.

  • Minimum viable entity graph: For legacy sites, start with the triangle of trust:
    • Home page: Full Organization schema.
    • About page: AboutPage schema linking back to the Organization @id.
    • Contact page: ContactPage with ContactPoint specifics.
  • The architecture: Group relevant secondary entities under a main entity. For example, an AggregateRating or an Offer should never exist in isolation. They must be nested hierarchically within a Product entity block.

4. The trust layer: Disambiguation and external linking

To achieve global authority, you must signal to AI platforms that your entity is recognized by the world’s most trusted knowledge bases.

  • The circle of truth: Use the sameAs property to link your entities to Wikipedia, Wikidata, LinkedIn, or the Google Knowledge Graph. These links corroborate your identity and drive entity amplification.
  • Entity amplification: This external linking acts as an authority transfer mechanism. It “collapses” identity ambiguity before the AI even begins its inference. When high-trust sources confirm your facts, your citation likelihood increases because the AI no longer has to expend its comprehension budget on verification.
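
A minimal sketch of the sameAs pattern (the profile URLs and Wikidata ID below are placeholders, not real records):

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://example.com/#org",
  "name": "Example Co",
  "sameAs": [
    "https://en.wikipedia.org/wiki/Example_Co",
    "https://www.wikidata.org/wiki/Q00000000",
    "https://www.linkedin.com/company/example-co"
  ]
}
```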

5. Operationalize validation: Defeating schema drift

At enterprise scale, manual updates are a liability. You must treat schema as an ongoing operational discipline.

  • The governance pillar: Implement automated validation within your publishing workflow.
  • Real-time signals: Use IndexNow or real-time indexing integrations to push updated schema to search engines the moment content changes.
  • The agentic layer: Proactively include schema actions (like BuyAction, ReserveAction, ScheduleAction, or OrderAction). This makes your brand “machine-callable,” ensuring that when an AI agent wants to act, your services are structured and ready to be triggered.
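
To make the real-time signals bullet concrete, here is a minimal Python sketch that builds an IndexNow submission body. The host, key, and URL are hypothetical, and the actual POST is left commented out so the snippet runs without network access:

```python
import json
import urllib.request  # used only in the commented-out submission step

def indexnow_payload(host: str, key: str, urls: list[str]) -> dict:
    """Build the JSON body for an IndexNow bulk URL submission."""
    return {"host": host, "key": key, "urlList": urls}

payload = indexnow_payload(
    "www.example.com",
    "hypothetical-indexnow-key",
    ["https://www.example.com/products/widget"],
)

# To actually notify search engines (requires a valid key file on the host):
# req = urllib.request.Request(
#     "https://api.indexnow.org/indexnow",
#     data=json.dumps(payload).encode("utf-8"),
#     headers={"Content-Type": "application/json; charset=utf-8"},
# )
# urllib.request.urlopen(req)

print(json.dumps(payload, indent=2))
```

Wiring this into the publishing workflow means schema changes reach engines the moment content changes, rather than waiting for a recrawl.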

Dig deeper: From search to AI agents: The future of digital experiences

Governance and the agentic web: From discovery to delegation

The current AI search experience — summarized text answers — is merely a transitional phase. We’re rapidly moving toward an agentic ecosystem, where AI agents inform users and act on their behalf. The AI agent queries your structured entity graph to find executable functions.

The callability layer: Schema actions

To survive this shift, your entities must be more than just “readable.” They must be callable. Implementing schema actions — such as BuyAction, ReserveAction, ScheduleAction, or OrderAction — is how you declare your brand’s operational capabilities to the machine.

If these actions aren’t explicitly defined in your code, your brand becomes a dead end. An AI agent might mention your product, but if it can’t verify price, availability, or a booking path through structured data, it will bypass you in favor of a competitor that is agent-ready.
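
As a sketch of what a callable product entity might look like, the JSON-LD below attaches a BuyAction with an EntryPoint target. All names, prices, and URLs are invented, and exactly how agent platforms will consume these actions is still evolving:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "sku": "EW-100",
  "offers": {
    "@type": "Offer",
    "price": "49.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  },
  "potentialAction": {
    "@type": "BuyAction",
    "target": {
      "@type": "EntryPoint",
      "urlTemplate": "https://example.com/checkout?sku=EW-100"
    }
  }
}
```

The potentialAction block is what turns a merely readable entity into a declared, executable capability: price, availability, and a purchase path in one machine-verifiable unit.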

Defeating schema drift: The governance mandate

At enterprise scale, the greatest threat to visibility is schema drift. This occurs when your human-visible content (e.g., prices, stock, hours) evolves, but your machine-readable schema remains static. When AI systems detect this inconsistency, they lower your confidence score. Reduced confidence leads to zero citations.

To maintain agentic readiness, you must establish four governance pillars:

  • Entity ownership: Assign clear accountability for maintaining canonical definitions.
  • Template-level integration: Ensure schema updates automatically as CMS content changes.
  • Automated validation: Monitor and flag data inconsistencies in real time.
  • Real-time indexing: Use protocols like IndexNow to push updated entity signals to engines immediately.
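
The automated-validation pillar can start as simply as diffing the facts rendered on a page against the published schema on every deploy. A minimal Python sketch, with hypothetical field names and values:

```python
def find_drift(schema_block: dict, page_facts: dict) -> list[str]:
    """Return fields where the JSON-LD block disagrees with the values
    currently rendered on the page: candidates for schema drift."""
    drift = []
    for field, page_value in page_facts.items():
        schema_value = schema_block.get(field)
        if schema_value is not None and str(schema_value) != str(page_value):
            drift.append(f"{field}: schema={schema_value!r} page={page_value!r}")
    return drift

# Stale example: the page now shows a sale price the schema never picked up.
schema_block = {"@type": "Product", "price": "49.00", "availability": "InStock"}
page_facts = {"price": "44.00", "availability": "InStock"}

print(find_drift(schema_block, page_facts))  # ["price: schema='49.00' page='44.00'"]
```

A check like this, run in CI or on a schedule, flags inconsistencies before AI systems can penalize them.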

Bottom line: In the agentic web, inconsistency is invisible. If your structured data is outdated, you’re functionally removed from the transaction layer.

New KPIs for generative AI: Measuring success in AI-driven search

As the customer journey becomes an algorithm-driven narrative, we must shift from measuring traffic to a page to measuring share of model. To dominate the agentic web, your dashboard must evolve to track how AI perceives, trusts, and socializes your brand entities.

  • Share of model (SOM): This is the new share of voice. It measures the percentage of time your brand or entity is included in generative responses for specific category queries.
  • The AI visibility score and citation likelihood: In an AI-first ecosystem, backlinks (endorsements) are giving way to citations (confirmations). Your citation likelihood rises when trusted third-party entity graphs consistently validate your facts and your schema mirrors them precisely.
  • Brand accuracy and grounding quality: Measure the delta between your declared schema (prices, specs, service areas) and AI-generated descriptions — the goal is a 1:1 match to prevent entity drift and ensure AI represents your brand accurately when it acts or recommends.
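
Share of model can be approximated by sampling generative answers for a fixed prompt set and counting brand mentions. A rough Python sketch; the brand and the sampled answers are fabricated purely for illustration:

```python
def share_of_model(responses: list[str], brand: str) -> float:
    """Fraction of sampled generative responses that mention the brand."""
    if not responses:
        return 0.0
    hits = sum(1 for text in responses if brand.lower() in text.lower())
    return hits / len(responses)

# Hypothetical log of 20 AI answers to "best noise-canceling headset" prompts.
sampled = (
    ["Top picks include Example Co's X1 headset."] * 7
    + ["Consider brands A, B, and C."] * 13
)

print(f"{share_of_model(sampled, 'Example Co'):.0%}")  # 35%
```

Real pipelines would use entity matching rather than substring search, but the KPI shape is the same: mentions over total sampled responses, tracked per category over time.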

The entity-first mandate for AI visibility

The transition from page-based to entity-based strategy is a present operational priority. Brands building content knowledge graphs today are building structural trust advantages that compound as AI systems learn to rely on established authorities.

The page was never the point. The entity — and the trust AI places in it — is what determines who gets found next.

Key takeaways

  • From strings to things to systems: Traditional SEO focused on keyword strings. AI focuses on entities. Your goal is no longer to rank for a term, but to be the verified authority for a concept.
  • Efficiency is currency: AI systems operate on a comprehension budget. The easier you make it for a machine to parse your data (via structured schema), the more likely you are to be cited.
  • Citations are the new clicks: Visibility is now measured by share of model. If an AI assistant recommends you without a click, you’ve still won top-of-funnel influence.
  • Governance is revenue protection: Schema drift (outdated data) is a silent revenue leak. Inconsistency leads to a “confidence penalty,” causing AI models to hallucinate or bypass your brand entirely.
  • Callability = survival: As we move toward the agentic web, your brand must be callable. If your services aren’t defined by schema actions, AI agents can’t execute transactions on your behalf.

7 organic content investments that drive ecommerce ROI

16 March 2026 at 17:00

The rules of organic content are shifting from a “publish more” to a “prove more” mindset. Search results increasingly answer questions directly through AI summaries, shopping features, and other SERP integrations. Visibility alone doesn’t resolve buyer uncertainty.

For ecommerce brands, organic visibility now requires recognition and trust amid the noise on the SERPs. The 2026 game is both simpler and more demanding. Invest in organic assets that:

  • Reduce buyer uncertainty.
  • Are machine-readable.
  • Compound across multiple discovery surfaces.

[Image: Google SERPs showing results for "gaming headset noise canceling"]

The forces shaping organic content’s ROI in 2026

Today’s search is defined by three forces changing how content performs.

AI discovery is normal now

Generative AI has become a standard part of organic search results through features like Google’s AI Overviews and AI Mode. These features answer broader questions directly, often citing web content.

AI Overviews were designed to help people get the gist of a topic quickly, providing a jumping-off point to explore links. However, time has shown they also contribute to fewer direct clicks on traditional search results, as users might get their answer entirely from the AI summary. 

So, if you want your ecommerce brand to earn organic visibility, you need content that AI will cite and that users will trust.

Shopping-first SERPs reward structured product data

Nowadays, Google’s search results are saturated with shopping features (e.g., product carousels, price comparison snippets, “Popular Products” lists, and more). Sometimes, they look more like the search results on an ecommerce site than a traditional organic SERP.

[Image: Google Shopping result for "cat eye sunglasses for women"]

These discovery surfaces are powered by structured product data and merchant feeds. Product pages must communicate clean data to Google. 

Product results depend on the quality of the attributes you provide. Google recommends that ecommerce sites include structured data on product pages and share complete product feeds for richer search appearances. 

The bottom line is that you need to invest in your product data infrastructure. When Google can reliably understand what you sell, it will showcase your products more prominently, helping you attract more qualified shoppers. 

Discovery is multi-platform

The traditional funnel, where a customer Googles something and clicks your link, is evolving, especially for Gen Z. Search now happens on social media at enormous scale.

Approximately 86% of Gen Z internet users report searching on TikTok weekly, almost as many as use Google. This means your potential customers might discover products through a TikTok video or an Instagram Reel long before they ever see your website. 

Here’s the pattern I see with ecommerce:

  • Someone is scrolling on a social media app.
  • They see your Reel, post, or ad.
  • They don’t buy at that moment.
  • Later, they Google you, or they Google the exact thing they saw.
  • They land on your site.

This is demand creation. Keep in mind that these types of results are showing up on Google, too. 

Meanwhile, AI platforms are already part of the discovery process. Social search behavior is here, so think of platforms like YouTube, TikTok, and Instagram as extensions of Google.

Dig deeper: The social-to-search halo effect: Why social content drives branded search


7 organic content investments that will pay off in 2026

So, where exactly should ecommerce teams focus their content resources? 

1. Upgrade the money pages first 

Start with the pages that directly drive revenue (e.g., your product detail pages (PDPs), collection pages, and other high-intent landing pages). 

Make these pages conversion-ready. Go beyond the basic title, image, and price by adding content blocks that answer buyer anxieties. 

For example, your PDPs should include clear information on sizing/fit, compatibility, materials, care instructions, warranty, shipping and return policies, and genuine FAQs from real customers. 

To do this, find conversational queries through Google Search Console and look at one-star and two-star reviews, either on competitor products or your own, to see the exact questions, complaints, and doubts buyers have.

Alternatively, get full clarity on the three types of obstacles every buyer faces and focus on the emotional one.

For each pain point, ask:

  • What’s the obvious pain point? (surface-level problem)
  • What’s the hidden pain point? (what they’re really worried about)
  • What’s the emotional pain point? (the core feeling driving the decision)

Here’s an example scenario: Imagine a mother who works remotely and has a baby who refuses to sleep:

  • Obvious: “I can’t find time to get the baby to nap.”
  • Hidden: “I don’t want to pay for something that might not work.”
  • Emotional: “I feel like a bad mom if I can’t manage this.”

That last one — the emotional obstacle — is the strongest. People buy relief. They buy confidence. They buy the feeling that things will be okay.

On category pages, add filters that guide users (e.g., “Shop by size, color, or use case”), highlight top sellers or award-winning products, and include comparison links (e.g., “Best for X vs. Y”). 

Try to enrich these pages so that a customer who lands on them has all the info they need to feel confident making a purchase.

The goal is a page that precisely matches the user’s intent and resolves uncertainties.

2. Focus on visual search optimization

We live in a visual search world. Consumers are searching with images and even combinations of images and text. 

As Google itself noted, “… consumers are using their voices to find answers on the go, and their cameras to explore the world around them.” Search has expanded beyond the traditional text box. This shows ecommerce’s huge opportunity to invest in visual content optimization.

Throughout 2025, there were over 100 billion visual searches via Google Lens and related visual tools, with one in five of those searches driven by someone looking to buy a product they saw. Up to 39% of consumers have used Pinterest as their search engine, per an Adobe study, and Instagram is clearly moving in the same direction. 

Shoppers are using images to find ideas, compare products, and determine what to buy. This means you need to optimize your ecommerce images and videos for organic search just as rigorously as your text content. 

  • Short-form videos and image carousels are what people watch most on Instagram and TikTok, and now that content is becoming easier to find through search.
  • Instagram now allows keyword searches for posts, meaning alt text and caption keywords can help your posts appear in searches like “best winter boots.”

Treat every image and video as a piece of searchable content. 

Dig deeper: 10 advanced ecommerce SEO tips that boost rankings and revenue

3. Feed Google the right product info: Schema and Merchant Center

Structured data and product feeds aren’t optional. If you want Google to feature your products in shopping results (and pull correct info into AI answers), you need clean product data.

Start with the product pages. Add Product schema on every PDP and include all the basics: name, description, image, brand, SKU, price, currency, availability, and offers. If you show reviews on the page, mark up reviews and ratings, too. 

If shipping cost, delivery time, or variants matter for the purchase, include that information as well. Only use FAQ/HowTo/Review schema when the content is actually on the page.
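
A hedged example of those PDP basics in JSON-LD; the product, values, and URLs are invented for illustration:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Trailrunner 2 Jacket",
  "description": "Waterproof running jacket with taped seams.",
  "image": "https://example.com/img/trailrunner-2.jpg",
  "brand": { "@type": "Brand", "name": "Example Co" },
  "sku": "TR2-BLK-M",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "87"
  },
  "offers": {
    "@type": "Offer",
    "price": "129.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock",
    "url": "https://example.com/p/trailrunner-2"
  }
}
```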

Next, treat the Google Merchant Center feed like an SEO asset because Google does. Keep it accurate: use titles that match the product, correct categories, accurate price and stock information, and no mismatches with your PDPs. 

After you fix errors in Merchant Center, improve the feed by adding attributes like size, color, and material. Turn on automatic updates so Google can handle small changes. When Google can clearly read what you sell, it shows your products more often, and the clicks you receive are higher intent.
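
One way to catch feed-to-PDP mismatches is a scheduled job that compares each feed row against the schema on its product page. A simplified Python sketch with hypothetical field names and values:

```python
def feed_mismatches(feed_item: dict, pdp_schema: dict) -> list[str]:
    """List feed attributes that disagree with the PDP's Product schema."""
    # Map Merchant Center-style feed keys to their schema.org counterparts.
    checks = {"title": "name", "price": "price", "availability": "availability"}
    return [
        feed_key
        for feed_key, schema_key in checks.items()
        if str(feed_item.get(feed_key)) != str(pdp_schema.get(schema_key))
    ]

feed_item = {"title": "Trail Jacket", "price": "129.00", "availability": "InStock"}
pdp_schema = {"name": "Trail Jacket", "price": "119.00", "availability": "InStock"}

print(feed_mismatches(feed_item, pdp_schema))  # ['price']
```

Any field this flags is a mismatch Google can also see, so fixing it protects both shopping eligibility and trust.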

4. Build first-party ‘proof’ content (reviews, UGC, expert testing, etc.)

Create content that credibly demonstrates the quality and performance of your products. This includes:

  • Customer reviews and ratings on the site.
  • Content your team creates that demonstrates first-hand experience with the products. 

For reviews, consider improving your review prompts to get more detailed feedback. For example, you can ask customers specific questions about fit, durability, or how they’re using the product.

Find ways to highlight these insights on the PDP (e.g., a summary of common pros and cons). This kind of content signals to Google and users alike that the site offers genuine insights. A shopper is more likely to convert when they see real evidence, and this directly leads to higher conversion rates. 

If you publish in-depth product review articles or videos on your site, you can capture search queries for “[Product] review” or “is [Product] worth it,” because Google will “see” the first-hand expertise.

Additionally, ecommerce brands can create their own original testing and use-case content. This might be blog articles or video snippets where the brand tests the product’s claims or compares it to alternatives.

Essentially, brands should think like an in-house influencer evaluating their product. 

Dig deeper: How to make ecommerce product pages work in an AI-first world

5. Create decision-support content that feeds the money pages

Not all customers search for a specific product. Many start with broader questions. Capture these early-stage shoppers by creating both comparison and buyer’s guide content that funnels to your product pages.

If shoppers aren’t sure what to choose, use formats that reduce confusion and give them a clear path forward, like quizzes or selectors (e.g., “Find your ideal [product] in 60 seconds”) and criteria-led guides (e.g., “How to choose a [category]: 7 factors that matter”). 

If they’re comparing options, help them narrow the shortlist with head-to-head comparisons (e.g., “[Product A] vs [Product B]”) and “best for” hubs (e.g., “Best [category] for small spaces” or “Best [category] under $X”). 

And if they’re scared of making the wrong choice, publish risk-reducing content like “mistakes to avoid” articles and “who it’s not for” pages (e.g., “Don’t buy [type] if you have [constraint]”). 

Each of these content pieces should be seen as an extension of your sales funnel: Design them to link directly to your relevant categories or products.

This type of content is the bridge between informational queries and purchase-ready sessions. 

6. Strengthen retention with community content

One of the smartest content investments an ecommerce brand can make is in content created by real people, whether that’s your customers, your employees, or trusted influencers. 

The reason UGC works so well is that it doesn’t feel like marketing. This isn’t surprising when you consider user behavior: People trust people.

Brands should encourage and showcase UGC at every turn. This can mean reposting customer photos showing them using your product on social media, integrating reviews and customer images into your product pages, or running challenges to generate buzz. 

The key is to treat your customers as a content engine. 

Another trend is employee-generated content, or in simpler words: leveraging your team to humanize the brand.

Forward-thinking ecommerce brands have employees take the stage in content, whether it’s a product development engineer doing a “behind the scenes” video, retail staff modeling new apparel on TikTok, or your founder writing thought-leadership articles. This insider perspective is paying off because it blends expertise with authenticity. 

Beyond individual pieces of content, ecommerce brands should invest in building communities around their products and niche. A great example is Instant Pot’s official Facebook group, which has over 3 million members. This community of passionate users shares recipes, tips, and excitement about using the product, which means they generate endless organic content for the brand.

The best part? The group keeps existing customers engaged and serves as social proof to potential buyers. More brands are realizing that a community = continuous organic marketing. 

Here’s one more reason to invest in social proof and community: It can influence your search rankings. 

[Image: Google search result for "facial steamer"]

Google’s recent updates indicate that brand mentions across the web, engagement on social media, and UGC signals can all contribute to SEO. 

Dig deeper: Why ecommerce SEO audits fail – and what actually works in 30 days

7. Own your audience: Blogs, email newsletters, and content hubs

While we’ve talked about discovery on external platforms, another area for organic content investment is your own channels.

First, content-rich blogs or resources on your site are still a powerful organic asset. Yes, the content mix has shifted toward video and social, but consumers and search engines still value in-depth written content for certain needs. 

According to a recent HubSpot marketing report, blog posts are the third-most-popular content format among marketers. That shows blogs are still very much in play, even if they’re not the hottest format. The key is to evolve the blog strategy: 

  • Focus on quality over quantity.
  • Target long-tail keywords and questions that your customers ask.
  • Incorporate rich media into posts to keep them engaging.

Next, email newsletters. The value of email lies in its ability to directly reach a highly engaged audience. Unlike social media, where your reach can be limited by algorithms, emails land straight in your subscribers’ inboxes, giving you full control over messaging and design. 

Keep in mind that your subscribers have opted in voluntarily, showing a clear interest in your content or offers. Investing in email marketing tools, hiring good copywriters, and designing emails with careful attention is worth it. 

Finally, content diversification within your owned media can pay dividends. This includes: 

  • Interactive content (quizzes, calculators, etc.).
  • Podcasts or audio content.
  • Even tools or apps that provide utility (which in turn produce content or data users engage with). 

The key here is aligning the content with what your customers care about. A smart organic content plan could look like this:

  • Put real effort into short-form videos.
  • Keep investing in blog and SEO content.
  • Build community and collect user-generated content (reviews, photos, Q&A).
  • Stay consistent with email and your newsletter.

These channels work better when they work together.

A blog post can become social posts and newsletter content. Customer reviews and photos can be used in emails and on product pages. Videos can be added to blog posts and category pages.

When you connect everything, your content becomes one system that keeps bringing people in and turning them into customers.


What to deprioritize (and why it’s riskier now)

Just as important as where to invest is knowing what content tactics to avoid. 

SEO blog content at scale

If your strategy is to publish lots of generic blog posts just to target keywords, stop. Especially if that content is automated, templated, or written with minimal effort. You’ll spend time and money, and you will get zero results.

Google has strengthened its spam policies against scaled content abuse, which includes content farms and auto-generated pages made only to win rankings.

Anything that looks like manipulative ‘SEO trickery’ or reputation abuse

Google is cracking down on tactics where sites leverage shady methods to rank. For example:

  • Buying expired domains and filling them with content to gain website authority.
  • Mass-publishing AI-written pages with no quality control.
  • Fake reviews, review stuffing, or any attempt to game ratings.

If it looks like a shortcut, it’s probably risky. In short, deprioritize quantity-over-quality approaches and any borderline spammy shortcuts. The direction is clear: Google wants originality, real value, and content made for people.

Be present, valuable, and everywhere

Ecommerce brands should invest in a multi-channel content strategy that prioritizes quality and is truly user-centric. 

You need to show up wherever customers search and measure success through visibility, engagement, trust, and sales. The best investment with the greatest ROI is content that’s both genuinely helpful and strong enough to reuse across different channels.

How to avoid 11 common SEO interview mistakes and land your next job 

16 March 2026 at 16:00

Over the past decade, I’ve reviewed hundreds of resumes, conducted countless interviews, and led numerous technical tests for SEO candidates. 

Along the way, I’ve met many exceptional professionals — but I’ve also noticed a recurring pattern of common interview mistakes that can hold even the most talented candidates back.

Below are 11 common mistakes I’ve observed in SEO interviews — and how you can easily avoid them.

1. Projecting arrogance instead of confidence 

Confidence is great! Imposter syndrome is common in SEO, so it’s important to maintain realistic confidence in your skills and experience. Still, there is a fine line between projecting confidence and appearing arrogant.

For example, talk about your successes, such as:

  • Complicated projects you navigated.
  • Great results you achieved.
  • Buy-in you gained. 

Be clear about what you achieved and how. Show off your theoretical knowledge. Discuss ideas and theories with your interviewer. 

Don’t assume they will agree with you, though. This can be arrogance.

SEO isn’t a “one-size-fits-all” practice. You may have different experiences from your interviewer, leading to different conclusions. This is fine. It happens in SEO all the time.

Some people make the mistake of thinking it’s OK to argue and dismiss others’ opinions. This rarely works well in any workplace and can be especially harmful during an interview.

When I interview, I look for team players — confident in their knowledge yet humble and open to learning. They embrace new evidence and contribute to discussions that elevate the entire team’s understanding, including their own.

If you stray too far into arrogance during an interview, you may come across as difficult to teach or lead and not open to feedback.

2. Giving hazy details about projects and successes

Interviews are your time to shine. They let you showcase some of your best work. Another mistake I’ve seen in interviews is assuming interviewers can fill in the gaps.

Candidates talk about a project or website they have worked on, but fail to convey its significance. They mention website migrations, expecting non-SEO interviewers to understand the complexities involved. They discuss turning around a traffic slump without giving any data. Avoid this. 

Make sure to give the specifics. There’s a good acronym for constructing interview answers called STAR. It stands for:

  • Situation: What was the issue or opportunity you were facing?
  • Task: What was your role or responsibility in this and the goal you were working toward?
  • Action: What did you do to address the situation?
  • Result: What happened because of your actions? What successes, learnings, or results can you share?

Using this method, you may find it easier to hit all the salient points that give the interviewers clarity and perspective. Try to choose examples that have an outcome that you’re proud of or can at least explain what made it fall short.

Dig deeper: How to become exceptional at SEO

3. Ignoring the question

Candidates sometimes don’t have time to think of an answer to the question or feel they don’t have one. They try to talk around the question and bring it back to something they feel more comfortable discussing.

If an interviewer asks, “Talk about a time when you faced a complex website migration and what you did?” or “How would you handle a stakeholder not signing off on your recommendations?” that’s exactly what they want to know. 

Avoid going off on a tangent and ensure you address the question directly. Often, interviewers have a list of questions they ask each candidate.

They may even use these to compare candidates. If you’re not directly answering them, you put yourself at a disadvantage.

Instead, take some time to think about the answer. Explain that you want to answer well and need a minute to organize your thoughts. If you don’t have an experience relevant to a question or have not encountered something before, explain that to the interviewer. 

Tell them you haven’t “migrated a website before,” but mention what you would do in that situation. If you make something up, passing it off as a situation you faced, you risk being exposed. 

You may be asked for details you can’t provide, or you may realize that a savvy interviewer has been researching the company or website as you talk about it. 

4. Not addressing your audience well

Building rapport with interviewers is key to a successful interview. Answer their questions clearly so they can recognize your knowledge and experience.

To do that well, you need to understand your audience. You should address their questions using the language and tone they are using and gauge their level of SEO knowledge. 

It may be tempting to impress non-SEO stakeholders with industry jargon, but if they don’t know what it means, they won’t understand the impact of what you’ve done.

Similarly, if you’re being interviewed by the head of SEO, relying on jargon or complex-sounding projects without substance can risk being seen as insincere or unqualified.

5. Being disrespectful of the progress of the site(s)

If you are talking to another SEO at the company or agency, don’t assume they are negligent in not addressing that JavaScript issue you’ve noticed on their site. 

Don’t assume their SEO approach is basic just because you’ve spotted an obvious area for expansion. Be respectful. It’s OK to acknowledge that you noticed these issues with their sites, but assume you aren’t telling them anything they don’t already know.

Chances are, some procedural or technical blocks are stopping them from fixing it. Enquire about that instead. It will give you some insight into what challenges you may face if you do go on to work there. 

Dig deeper: What 15 years in enterprise SEO taught me about people, power, and progress

6. Being unprepared for the types of questions asked

Interviews are nerve-wracking. It’s understandable if your mind goes blank when asked to share specific examples of your work or knowledge.

One of the most frustrating mistakes I see in interviews (and have made myself!) is forgetting the details of the perfect example of a project that would have answered an interviewer’s question.

A good way to avoid this is to come prepared with projects or challenges that exemplify some core areas of SEO that you are likely to face in the role. Look at the job listing again and see what experience they hope candidates will have. 

Given the scope, seniority, and complexity of the sites, consider the situations and tasks you may face in that role. For example, if you are interviewing for a senior technical SEO role, you may want to prepare examples of projects you’ve worked on that included:

  • A challenging crawling, indexing, parsing, or rendering issue.
  • A large, complicated technical SEO project that you needed to gain buy-in from stakeholders for.
  • A sudden drop in traffic or rankings that needed investigation.
  • A website migration that you had a leading role in.

If you’re interviewing for an SEO account manager at an agency, you may want to prepare for times when:

  • You had to explain to stakeholders the drop in performance and the planned remedial action.
  • You presented an SEO proposal to a group of people with varying SEO literacy and explained how you got them on board with the plan.
  • You presented at a client pitch, the work you put into the pitch, and how you onboarded that client.

Come prepared with example projects you can adapt. 

  • Think of a successful project and how you made it work. 
  • Give an example of an unsuccessful project and what you would do differently. 

This may mean writing notes about these projects and key points, such as tasks and results, to jog your memory. Essentially, you want to have a few well-detailed and thought-out examples that you can adapt using the STAR method on the fly at the interview.

7. All talk, no substance

Waffle. Meandering. Stalling for time out loud. Whatever you want to call it, this is possibly one of the most common mistakes I’ve seen in interviews: starting to answer the question before knowing what you are going to say.

Again, it’s understandable. We feel like we need to answer the question as soon as it is asked. In reality, though, it’s OK to take some time to think it through first. 

Listen to the question and address that directly. Consider it a school assignment where you get a mark for every point you hit. Structure your answers clearly to help interviewers find the information they’re looking for.

Sometimes the waffling comes from a poorly asked question. Perhaps it isn’t entirely clear what the interviewer is asking. Don’t fall into the trap of trying to answer a question you don’t fully understand.

It’s OK to ask clarifying questions. If you still don’t have an answer, you can explain that it isn’t something you have encountered or even heard of. On the plus side, this gives you something to go away and look into.

You could even ask the interviewers what they think about the topic or what they would do in the situation you mentioned. Most interviewers seek team members who are willing to learn and expand their knowledge.

In the best case, they will see your willingness to learn and grow from others around you. Worst case, you have another side of SEO or interviewing techniques to study for the next role you apply for.

8. Trying to bribe or threaten interviewers

This should go without saying, but I’ve encountered it in interviews before. 

  • Don’t threaten or try to bribe your interviewers. It’s highly unlikely that if an interview is going badly, the promise of a link from your friend’s blog to their company’s website will turn it around. 
  • Don’t promise them that if they hire you, they will get access to the secrets to your “guaranteed SEO approach” if you have not been able to demonstrate your competency through the questions they’ve asked. 
  • Don’t threaten a negative SEO attack on them or their competitors. 
  • Avoid suggesting they only wanted to interview you to steal your ideas. 
  • Don’t be rude or dishonest. You won’t get the job, and you won’t be kept in the database of possible future candidates.

9. Contacting everyone in the company to get an ‘in’

Another mistake I’ve seen is a candidate getting too enthusiastic about standing out from the crowd.  In doing so, they contact anyone in the company they can to make themselves known.

It’s great to show that you are interested in the company and the role. If the interviewers have said it’s OK for you to contact them after the interview, it is absolutely fine. 

However, be considerate when contacting interviewers outside the interview process. A little outreach shows enthusiasm, but too much makes it difficult for people to respond, especially if they aren’t directly involved in the interviewing process.

Follow up sparingly, with the right people, and be mindful of how busy interviewers are when running hiring processes. 

10. Being dishonest about your level of involvement in the project

Be truthful about your level of involvement in a project. Don’t claim you worked on a project just because it happened at your agency at the same time you were working there. 

As soon as interviewers start asking in-depth questions about the project, your lack of knowledge will be apparent. Instead of it sounding impressive, you’ll come across as lacking knowledge and depth in your answer.

Focus your answers on the impact that you had on a project. Talk to what others did and how it fit into the whole approach, but don’t take credit for their work. This is important because interviewers want to know where your competencies lie. 

It’s OK to talk about what you learned from others during the project and how you might use that insight in future work. It isn’t OK to claim that it was your idea when it wasn’t. 

Dig deeper: 8 tips for SEO newbies

11. Giving ‘Google lies’ as an answer to an interview question

This is an SEO-specific interview mistake. Unfortunately, it’s quite common. I see it often during the technical portions of interviews, when candidates are asked to think through how they would approach a situation or explain why an approach may not work.

They don’t necessarily know why Google ignored a canonical tag, or why a page that is blocked in the robots.txt is still indexed. So they panic and start blaming Google for lying about its practices and bot behavior.

I’ve heard a lot of sweeping statements during interviews about how you can’t believe Google spokespeople. How they outright lie to us to disguise how the bot and algorithm mechanisms work. Whether you agree with those statements or not, they are a poor way to get around not knowing the answer to a technical question.

If you don’t know why a page has been indexed even though it is blocked in the robots.txt, the answer isn’t to claim “Google ignores the robots.txt and just says they don’t.”

Yes, the SEO world is full of conspiracy theories and genuine questions about the integrity of the industry’s larger players. It’s good to question the status quo through experiments and thought exercises. 

However, the better way to approach an interview question like that would be to think around the issue. Let’s assume Google isn’t lying — what could be the reasons the page has been indexed despite being blocked in the robots.txt? 
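To make that reasoning concrete, here is a minimal hypothetical scenario (the domain and paths are invented for illustration) showing why a robots.txt block doesn’t, by itself, prevent indexing:

```txt
# robots.txt at https://example.com/robots.txt
User-agent: *
Disallow: /private/

# Disallow stops compliant crawlers from FETCHING pages under /private/.
# It does not stop Google from INDEXING a URL it discovers through
# external links: the URL can still appear in results, often with no
# snippet and a "blocked by robots.txt" note.
#
# Adding <meta name="robots" content="noindex"> to the page can't help
# here, because the crawler is blocked from fetching the page and never
# sees the tag. To deindex it, allow crawling so the noindex can be read,
# or use a removal tool.
```

Walking through a chain like this in an interview demonstrates exactly the “assume there is a logical answer” mindset described above.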

If you start your interview answers from a place of assuming there is a logical answer to them, you are more likely to get to the right conclusions. This is a much better way of approaching SEO in general, rather than assuming you’re being lied to!

Ace your SEO interview and leave a lasting impression

By avoiding these common mistakes, you can present yourself as a confident, prepared, and team-oriented candidate. With the right approach, you’ll be better positioned to impress interviewers and land your next SEO role.

Breaking Through Creative Ops Bottlenecks: Your 2026 Technology Roadmap by Canto

16 March 2026 at 15:00
Two colleagues reviewing content on a tablet with graphics showing a digital asset library, approval status, and marketing analytics icons.

Are you watching your team’s creative operations buckle under mounting pressure? You’re not alone. As project complexity skyrockets and client demands intensify, creative leaders face an unprecedented challenge: scaling operations without sacrificing quality or burning out teams. 

The solution isn’t working harder; it’s working smarter with technology that transforms your entire content lifecycle. Here’s how forward-thinking creative operations leaders are building resilient, scalable workflows that thrive in 2026’s demanding landscape.


The perfect storm facing creative operations

Creative teams are caught in a maelstrom of expectations and pressures. Research shows that 77% of marketing teams report increased project volume year-over-year, while 45% struggle to keep up with increasing content demands for various channels. Meanwhile, client expectations for faster turnarounds and higher-quality output continue unabated. 

Consider this scenario: Your team juggles 15 active campaigns across multiple channels, each requiring dozens of asset variations. Reviews pile up in email threads, designers waste hours hunting for approved brand elements and project managers lose visibility into actual campaign progress. 

This chaos isn’t just frustrating; it’s expensive. Teams spending excessive time on administrative tasks rather than creative work see productivity drop by up to 40%.

Why traditional approaches fall short

Many creative leaders attempt to solve these challenges by adding headcount or by implementing rigid processes that chafe against the creative instincts of artists and designers. But throwing additional resources at systemic problems isn’t a guaranteed fix. 

For many teams, the real issue lies in disconnected workflows and siloed tools. When your creative software doesn’t communicate with your project management system, and your digital asset management exists in isolation from approval processes, you’re fighting an uphill battle against inefficiency. 

What you need is an integrated marketing and creative ecosystem that connects every stage of your content lifecycle.

The technology stack that transforms operations


Digital asset management: Your content foundation
Modern digital asset management (DAM) systems serve as the central nervous system, the single source of truth for creative operations. But not all DAM platforms are created equal. Look for platforms that offer:

  • Intelligent organization and search: AI-powered search, tagging and categorization features that make finding assets easy for all users, not just admins.
  • Version control: Automatic tracking of asset iterations with clear approval status, as well as automated sunsetting features. 
  • Brand compliance: The importance of brand compliance can’t be overstated. Consistent branding across all platforms can increase revenue by 23%. Built-in style guides and templating tools can prevent off-brand content.
  • Global accessibility: Cloud-based access and multi-language capabilities that support distributed teams and external partners.

Seamless creative tool integration
Your designers live in Adobe Creative Cloud, Figma and Canva, but the briefing and project data for your campaigns live elsewhere. This disconnect creates unnecessary friction and increases time to market. Advanced integrations between platforms should bridge this gap by:

  • Embedding project context: Bringing project briefs, deadlines, task assignments and feedback directly into creative applications.
  • Automating file management: Syncing creative files with project management systems without manual intervention.

Intelligent approval workflows
Traditional approval processes rely on email chains and manual tracking. Modern workflow automation transforms this chaotic process by:

  • Dynamic routing: Automatically sending assets to the right reviewers based on project type and complexity.
  • Parallel reviews: Enabling simultaneous review by multiple stakeholders to compress timelines.
  • Contextual feedback: Providing annotation tools that eliminate ambiguous comments.
  • Escalation management: Automatically flagging delayed approvals to prevent bottlenecks.

Project management that actually manages
Generic project management tools often fail creative teams because they don’t resonate with creative workflows. Purpose-built solutions offer:

  • Creative-specific templates: Pre-configured workflows for common project types.
  • Resource planning: Visual capacity management that prevents team overload.
  • Real-time collaboration: Integrated communication that keeps discussions contextual.
  • Performance analytics: Insights into team efficiency and project profitability.

Building scalable workflows: A strategic approach


Start with process mapping
Before implementing technology, map your current content lifecycle. Identify every touchpoint from initial brief to final delivery. Where do assets get stuck? Which handoffs create delays? This analysis reveals your biggest pain points and prioritizes technology investments.

Implement incrementally
Don’t attempt a complete overhaul overnight. Start with your biggest bottleneck — often asset management or approval workflows. Success with one component builds momentum and buy-in for broader transformation.

Design for scale from day one
As you implement new systems, design workflows that can handle 3x your current volume. This forward-thinking approach prevents future growing pains and ensures your technology investment pays long-term dividends.

Measure everything
Establish baseline metrics for key performance indicators:

  • Asset request fulfillment time.
  • Project completion rates. 
  • Review cycle duration. 
  • Team utilization rates.

Track these metrics throughout your technology implementation to demonstrate ROI and identify areas for continued optimization.

The human element: Change management for creative teams

Technology alone doesn’t transform operations — people do. Successful implementations require careful change management:

  • Involve your team: Include designers and project managers in technology selection and workflow design. 
  • Provide comprehensive training: Invest in proper onboarding that goes beyond basic functionality. 
  • Create champions: Identify early adopters who can mentor others and troubleshoot issues. 
  • Iterate based on feedback: Regularly gather input and adjust workflows based on real-world usage.

Looking ahead: The future of creative operations


The most successful creative operations leaders aren’t just solving today’s problems — they’re preparing for tomorrow’s opportunities. Emerging technologies like AI-powered content generation and predictive project planning will further transform creative workflows. 

Organizations that build flexible, integrated technology stacks now position themselves to rapidly adopt these innovations. Those stuck with legacy systems and manual processes will find themselves increasingly left behind.

Your next steps

The question isn’t whether to modernize your creative operations technology — it’s how quickly you can begin. Start by auditing your current tools and identifying the biggest gaps in your workflow integration. 

Consider piloting a comprehensive digital asset management solution that integrates with your existing creative tools. Look for platforms offering robust approval workflows and project management capabilities that can scale with your growth. 

Remember: every day you wait, your competition gains ground. The creative operations leaders who act decisively today will define the industry standards of tomorrow. Are you ready to transform your creative operations from a bottleneck into a competitive advantage? The technology exists — now it’s time to implement it strategically and watch your team’s potential unfold.

Chloe Varnfield talks sneaky Google Ads settings and tanking performance

13 March 2026 at 23:49

Chloe Varnfield, a digital marketing specialist at Atelier Studios with nearly eight years in PPC, joined me to share the mistakes that shaped her career — and the lessons every advertiser should take from them.

When Google sneaks settings past you

Chloe’s first story centers on Google’s account-level automated assets setting — a feature so well hidden that many advertisers don’t know it exists until a client sends a screenshot asking why their headline looks completely wrong. The setting, buried behind a three-dot menu, defaults to “on”, meaning Google can automatically generate and serve headlines advertisers never wrote or approved. The takeaway: always audit your account-level settings, and treat every Google update as a potential default you’ll need to turn off.

Why you should never make changes on a Friday

A client asked Chloe to narrow their campaign’s location targeting mid-call. She made the change quickly — and accidentally excluded the UK entirely while targeting only the desired regions. Campaigns stopped delivering. It took three days of head-scratching before she audited the full campaign and found the culprit. The lesson she now swears by: never make significant changes on a Friday, and when something stops working, go straight to a full audit rather than waiting for the algorithm to “fix itself.”

The time she listened to a Google rep — and tanked performance for two months

Chloe’s most costly story involves a campaign that was performing at its best in years. A Google rep recommended switching bid strategy from Maximise Conversions to Maximise Conversion Value. She made the switch — and performance collapsed. For small to medium-sized businesses that already struggle to hit the conversion volume thresholds needed for smart bidding to work effectively, changing bid strategy is a high-stakes decision that shouldn’t be made on the spot. It took two months to recover, with the pressure of a major seasonal sale looming. She fixed it — but the lesson stuck: don’t let enthusiasm or a rep’s insistence override your judgment. Sit on big decisions. Trust your gut.

The account mistakes that still happen in 2026

When auditing inherited accounts, Chloe consistently sees the same three problems: broken or absent conversion tracking (sometimes still pulling from Universal Analytics), broad match applied to brand campaigns — which makes it impossible to know whether results are genuinely driven by non-brand keywords — and accounts with zero negative keywords. These aren’t minor structural issues. They directly distort performance data and waste budget.

On honesty, client relationships, and not spiralling

Across all three of her own stories, Chloe’s client relationships survived because she communicated transparently — explaining what had gone wrong, what she was doing to fix it, and what the next step would be if that didn’t work. Her advice to anyone mid-crisis: breathe, be kind to yourself, stay calm, and remember that no one has died. The ability to fix problems under pressure is what builds expertise — and fixing something difficult often becomes your proudest professional moment.

The AI mistake too many marketers are making

On AI, Chloe is clear: using it to generate ad copy or proposals without reviewing or editing the output is lazy and obvious. AI should make you faster, not replace your judgment. Always put your own voice and review back into whatever it produces.


SerpApi asks court to throw out Reddit scraping complaint

13 March 2026 at 21:14
Reddit vs SerpApi

SerpApi is asking a federal court to dismiss Reddit’s lawsuit over alleged scraping of Reddit content from Google Search, saying Reddit is trying to use copyright law to control user posts and public search results.

  • The motion follows Reddit’s amended complaint filed in February.
  • SerpApi says the filing still fails to show copyright ownership, circumvention of technical protections, or concrete harm.

SerpApi’s argument. SerpApi CEO Julien Khaleghy, in a blog post today, argued the lawsuit fails for several reasons:

  • Reddit doesn’t own most of the content at issue. Its user agreement states that users retain ownership.
  • Reddit holds only a non-exclusive license to user posts.
  • The snippets cited in the complaint (e.g., dates, addresses, short fragments) aren’t copyrightable.
  • SerpApi accessed Google Search pages, not Reddit itself.

DMCA. Khaleghy said Reddit claims SerpApi violated the Digital Millennium Copyright Act (DMCA) by circumventing technical protections. SerpApi disputes that claim, saying it retrieves the same search results visible to anyone who enters a query in Google. Khaleghy argued that:

  • SerpApi doesn’t break encryption or bypass authentication.
  • Accessing public webpages isn’t “circumvention” under the DMCA.
  • Reddit is trying to enforce copyright protections it doesn’t own.
  • Reddit’s privacy policy states that public posts may appear in search results.

Catch up quick. Legal fights over search scraping and AI data have intensified in recent months.

Why we care. The case tests whether companies can extract information from Google’s search results without violating copyright or the DMCA. The outcome could affect SEO tools and AI training data.

What’s next. The court must decide whether Reddit’s amended complaint can proceed. If the judge dismisses the case with prejudice, Reddit’s claims against SerpApi in this lawsuit would end.

SerpApi’s blog post. Reddit’s Lawsuit is a Dangerous Attempt to Expand Platform Power

Beyond keywords: Mastering AI-driven campaigns

13 March 2026 at 19:00
Google Ads dashboard concept

The days of building campaigns around long lists of keywords are fading. Today, AI-powered Google campaigns and features like Performance Max (PMax) and AI Max are changing the rules.

These keywordless campaigns lean on automation, audience signals, and machine learning to find new opportunities, often faster and at greater scale than humans can.

At SMX Next, three PPC pros — Nikki Kuhlman, VP of search at Jumpfly; Brad Geddes, founder of Adalysis; and Christine Zirnheld, director of lead gen at Cypress North — explained where PMax and AI Max fit into your broader campaign strategy, where humans still make the difference, and how to strike the right balance between automation and control.

AI Max for Search: Best practices and what not to do

AI Max for Search is not a new campaign type. It’s a one-click opt-in setting within existing Search campaigns.

Without requiring you to switch to broad match, it expands your keywords — similar to broad match or Dynamic Search Ads — using your landing pages and other site assets. It then personalizes the ad copy and landing page the searcher sees.

The evolution from traditional setup

In the old setup, you might have used a keyword like “skincare for dry sensitive skin” that sent users to a moisturizer page with generic ad copy because you couldn’t capture every variation. With Google’s current matching, a specific ad group no longer guarantees that keyword will trigger that ad group.

AI Max for Search addresses this by generating ad copy based on the search query, making it more relevant and directing users to a landing page that better matches their needs.

Success with blog content

One area where AI Max for Search is seeing success beyond the norm is blog content. While DSA campaigns traditionally excluded blogs, AI Max for Search can now serve blogs as landing pages—and they’re converting. The key is that these blogs guide readers to specific products, not just general content.

The generated headlines are compelling and longer than what traditional RSAs allow, creating a more engaging user experience.

Best practices for AI Max for Search

Do:

  • Use it on existing campaigns with history and data, not brand new campaigns
  • Test it as a 50/50 experiment instead of an outright change
  • Use it on brand campaigns with brand inclusion capabilities
  • Apply it to campaigns not hitting budget that could use more volume
  • Review landing pages and utilize URL exclusions (individual or rule-based)
  • Use landing page inclusions at the ad group level
  • Review search queries regularly and add negative search terms
  • Enable both text customization and final URL expansion for maximum value
  • Turn off AI Max at the ad group level when specific ad groups drive poor traffic

Don’t:

  • Use it on brand new campaigns without data
  • Change all campaigns at once without testing
  • Use it on brand campaigns without name recognition or brand inclusion ability
  • Apply it to budget-constrained campaigns
  • Turn off both URL expansion and text customization — if you’re not using both features, stick with broad match and smart bidding
  • Assume it works universally — test on individual campaigns

Your action plan

Week 1: Pick a search campaign to test (brand with brand inclusion, with budget capability, needing more volume). Review landing page URLs and add inclusions or exclusions.

Week 2: Review search queries and add negatives.

Week 3: Continue optimization and turn off AI Max at the ad group level as needed.

Experiment checklist:

  • Ensure enough volume for a 50/50 experiment
  • Give the experiment 6 weeks to 2 months
  • Set up a custom experiment if you need to enable brand inclusion or update settings
  • For one-click experiments, change confidence level to medium and turn off auto-apply

Match type performance: What the data shows

A comprehensive study analyzing over 16,000 campaigns revealed surprising insights about match type performance across different bidding strategies.

Match type basics

  • Exact match: Should match only when the search term has the same intent as your keyword. Misspellings and word order haven’t mattered for years — focus on user intent.
  • Phrase match: The search intent should match your keyword, but could have additional information around it, whether modifiers, phone numbers, or websites.
  • Broad match: Shows for anything related to the search intent. The key difference is that broad match uses additional signals that exact and phrase don’t, such as content on the landing page, other keywords in the ad group, and, most powerfully, previous search history for that user.

Performance by bidding strategy

Max Conversion strategies (Max Conversions, Max Conversion Value):

Most campaigns using max bid strategies have under 30 conversions per month, giving machines limited data to work with. The findings:

  • Exact match has the best click-through rates and conversion rates
  • Broad match had the worst conversion rates but the best return on ad spend
  • Broad match also had lower CPA than phrase match
  • Phrase match performed worst overall

Recommendation: Start with exact match, then skip phrase match entirely and layer in broad match if you have more budget to spend.

Target Bid Strategies (Target CPA, Target ROAS):

Most campaigns using these strategies have over 30 conversions per month, with many at 50 or 100+, giving machines substantially more data. The findings:

  • Exact match is again the best match type
  • Phrase match comes second
  • Broad match is third
  • Phrase match performs better with more data

Recommendation: Start with exact match, layer in phrase match with more budget, then add broad match if additional budget is available.

The phrase match puzzle

Why does phrase match perform poorly with limited data but better with more data?

Broad match uses additional signals, particularly previous search queries, to determine bids. When conversion data is limited (under 30 conversions monthly), broad match’s ability to leverage previous search history makes it much stronger than phrase match.

However, with sufficient data (50–100+ conversions), Google can properly match phrase match keywords using machine-learning pattern matching.

Brand vs. non-brand considerations

When you combine brand and non-brand data, exact match becomes even more powerful, delivering significantly higher click-through rates, higher conversion rates, lower CPAs, and much higher return on ad spend. That’s why segmenting keywords by brand and non-brand is crucial when determining your match type strategy.

Ecommerce exception

For ecommerce companies, broad match (and sometimes phrase match) can produce higher average order values than exact match. When someone searches for a specific product, and you carry that exact item, conversion rates are high, but they’re usually buying a single product with a lower checkout value.

When shoppers haven’t decided on a product, they tend to match broader keywords and build larger carts — resulting in lower conversion rates but higher order values.
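A quick back-of-the-envelope sketch (with invented numbers, not figures from the study) shows how a lower conversion rate can still produce the higher return on ad spend this exception describes:

```python
# Hypothetical illustration: same clicks and CPC, different conversion
# rates and average order values (all numbers invented for the example).

def roas(clicks: int, cpc: float, conv_rate: float, aov: float) -> float:
    """Return on ad spend = revenue / spend."""
    spend = clicks * cpc
    revenue = clicks * conv_rate * aov
    return revenue / spend

# Exact match: high intent, single-product purchases, smaller carts.
exact = roas(clicks=1000, cpc=1.00, conv_rate=0.05, aov=40)   # ~2.0 ROAS

# Broad match: fewer conversions, but undecided shoppers build bigger carts.
broad = roas(clicks=1000, cpc=1.00, conv_rate=0.02, aov=120)  # ~2.4 ROAS

print(f"Exact ROAS: {exact:.2f}, Broad ROAS: {broad:.2f}")
```

Here broad match converts at less than half the rate of exact match yet still wins on ROAS, which is exactly why judging match types on conversion rate alone can mislead ecommerce accounts.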

Performance Max for lead generation

There’s a common misconception that Performance Max only works for ecommerce and is too difficult for lead generation. That couldn’t be further from the truth.

The critical success factor

The biggest mistake you can make—one you should avoid entirely—is optimizing campaigns for form submissions alone. If you treat every form submission as your campaign goal, you’ll end up with spammy submissions and frustrated sales teams.

The solution: integrate your Google Ads account with your CRM and import bottom-of-funnel leads—sales-qualified leads (SQLs), marketing-qualified leads (MQLs), opportunities, or even customers if the sales cycle is short.

When you tell Google Ads what you actually want and set it as your campaign goal, Performance Max can cast a wide net while still bringing in qualified prospects.

Available controls for regulated industries

Performance Max has significantly more controls now than at launch, making it viable for highly regulated industries:

  • Brand exclusions: Exclude all brand traffic from Performance Max campaigns
  • Campaign-level negative keywords: Exclude unwanted search terms directly
  • Search term reports: See what’s triggering your ads and exclude accordingly
  • Channel reporting: View spending and performance across different networks
  • Page feeds: Control where you send traffic on your site
  • Final URL expansion toggle: Turn it off completely if needed
  • Text enhancement controls: Optional feature that can be disabled entirely
  • Text guidelines: Specify words to avoid (e.g., “discount” or “directory”)

Device control: The secret weapon for B2B

One of the most underutilized levers for B2B and regulated industries is device control, introduced at the beginning of 2025. You can turn off any device from your Performance Max campaign.

A B2B SaaS example demonstrates the impact: Before device segmentation in January, the account had 224 SQLs from desktop at an acceptable CPA, but 33 from mobile at $319 CPA (above goal). After creating separate mobile campaigns with more aggressive target CPAs, they achieved 190 desktop SQLs and 37 mobile SQLs in a shorter month, with mobile CPA dropping to $204 and overall Performance Max CPA declining from $238 to $204.
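Since the example gives the mobile CPA and the blended CPA but only calls the desktop CPA “acceptable,” the implied desktop figure can be backed out with simple spend-weighting arithmetic (a sketch using the pre-segmentation numbers above):

```python
# Blended CPA is spend-weighted, so a small, expensive segment drags the
# campaign average up. Pre-segmentation figures from the example:
desktop_sqls, mobile_sqls = 224, 33
mobile_cpa, blended_cpa = 319, 238

total_spend = blended_cpa * (desktop_sqls + mobile_sqls)   # $61,166
mobile_spend = mobile_cpa * mobile_sqls                    # $10,527
desktop_cpa = (total_spend - mobile_spend) / desktop_sqls

print(f"Implied desktop CPA: ${desktop_cpa:.0f}")  # roughly $226
```

Mobile at $319 was pulling the blended figure up by roughly $12 per SQL, which is why carving it into its own campaign with a tighter target CPA moved the overall number so quickly.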

Real Performance Max results for B2B SaaS

Despite lower conversion rates from Performance Max compared to search campaigns (due to broader reach), the results speak for themselves. In September 2025, one B2B SaaS account achieved:

  • Search Campaigns: 150 SQLs at $237 CPA
  • Performance Max: 204 SQLs at $220 CPA

Performance Max cast a wider net with cheaper CPCs, bringing in not just more leads but more sales-qualified leads at a lower cost.

How they did it:

  • Optimized for SQLs, not form submissions
  • Set lower target CPAs in Performance Max than search (to control spend while casting wider net)
  • Created separate campaigns for off-hours to control weekend spending
  • Turned off final URL expansion and text enhancements (client preference)
  • Implemented separate mobile and tablet campaigns with aggressive target CPAs

AI Max for Search in lead generation

AI Max for Search brings the power of Performance Max to the search network, where bottom-of-funnel intent is strongest. This is especially valuable for lead generation accounts that spend on other networks in Performance Max but don’t generate leads from them.

Early results: Higher ed financial services

A higher education financial client (loan products) showed promising early results:

Approved applications (primary KPI):

  • Standard Search: 86 approved applications at $660 CPA
  • AI Max: 70 approved applications at $579 CPA

AI Max brought in qualified leads cheaper despite the highly competitive keyword environment.

Down-funnel performance

Beyond the initial conversion action (soft credit check), AI Max showed superior performance throughout the funnel:

  • 42% of AI Max form submissions resulted in soft pulls vs. 36% for standard search
  • 9.9% of AI Max form submissions resulted in bookings vs. 5.58% for standard search

AI Max isn’t just bringing more qualified prospects at the top—lead quality remains higher throughout the entire funnel.

How they did it:

  • Optimized for approved applications, not form submissions
  • Set lower target CPAs in AI Max than standard search
  • Used high-performing bottom-of-funnel keywords with broad match types
  • Kept final URL expansion and text enhancements disabled (still worked well without them)

Win with AI without losing control

PPC success requires embracing AI-driven campaigns while maintaining strategic human oversight. Whether you use AI Max for Search, Performance Max for lead generation, or adjust match types based on bidding methods and data volume, the key is understanding how these tools work and applying best practices aligned with your business goals.

The data is clear: exact match remains powerful across scenarios, but phrase and broad match perform differently depending on bidding strategy and data volume. For lead generation, the game changer is optimizing for true bottom-of-funnel conversions rather than form submissions, combined with strategic device controls and proper campaign segmentation.

The future of PPC depends on knowing when — and how — to apply automation and control for maximum impact.


Why surface-level SEO tactics won’t build lasting AI search visibility

13 March 2026 at 18:00
Google search monolith crumbling

A recent Harvard Business Review piece echoes the shift we’re seeing in the SEO industry: at a macro level, LLMs and Google’s AI-powered SERP features, such as AI Overviews, aren’t just creating a zero-click environment; they’re also changing user journeys and behavior.

They’re collapsing what used to be multi-touch customer journeys into a single synthesized answer.

To use a more visual and emphatic metaphor: the monolith of “Search” is crumbling.

When that happens, brands lose many of the touchpoints they once owned, and your marketing strategy must change accordingly. HBR captures this moment well, arguing that marketing now has a new audience and that algorithms increasingly shape first impressions.

That said, while the article points in the right direction on the broader trend, its tactical advice is generic and falls back on shallow tactics.

Much of the guidance returns to familiar marketing playbook ideas that sound strategic and innovative but lack real operational depth. That gap matters for the longevity and sustainability of visibility.

The narrative may be easy for you to understand and repeat at the executive level, but it glosses over the deeper structural changes you must actually make to adapt to the new search ecosystem.

The problem with flock tactics

The HBR article centers on schema, authorship signals, and branded concepts. These recommendations risk becoming what I call “flock tactics.”

These ideas spread quickly because they’re easy to explain, but they offer little lasting competitive advantage once everyone adopts them.

Schema 

Schema has been one of the most debated topics in LLM and AI optimization. Microsoft Bing confirmed it uses schema for its LLMs, but the relationship between Google’s models and third-party LLMs isn’t as straightforward.

While it isn’t necessarily wrong to recommend schema as part of your overall search optimization activities (SEO and AI), positioning it as a table-stakes tactic ignores diminishing returns once competitors implement similar markup and it becomes standard.

Another gap is the role of external knowledge systems, such as Wikidata or authoritative publishers. Much of the information LLMs rely on comes from those sources rather than a single company’s website.

This is harder to understand, explain, and demonstrate as a single line item on an activity tracker, but these are nuances you now have to deal with, whether you like it or not.

What’s also missing is any exploration — or even a nod — to how models ingest and prioritize structured data compared with the many unstructured signals they rely on.


E-E-A-T — shallow authorship signals

Attaching the names, credentials, and biographies of real experts follows familiar E-E-A-T logic and represents reasonable hygiene.

The problem is that the treatment remains superficial. It risks pushing you to focus on cosmetic signals such as bios, headshots, and credential lists without strengthening the underlying expertise pipeline.

There is a meaningful difference between placing an author bio on a page and cultivating a genuine expert entity whose work appears in conferences, third-party publications, standards committees, or academic collaborations.

Only the latter produces signals that models are more likely to recognize and trust.

Vanity concepts

The article also suggests creating branded frameworks or concepts — for example, something like “The Acme Index” — to help models associate ideas with your company. In theory this sounds appealing, but in practice it’s extremely difficult to execute.

Unless those ideas spread into the trusted datasets LLMs tend to prioritize, they rarely gain traction.

You need those concepts and frameworks adopted and discussed by entities other than yourself, including academic journals, technical standards, widely used software ecosystems, and other prominent entities in your category.

What often results instead is a proliferation of branded labels that remain largely invisible to the models they were meant to influence.

The structural blind spots

Beyond these tactical issues, the analysis overlooks deeper structural challenges. It treats AI primarily as an external platform shift.

The implication is that you must simply adapt to it rather than actively shaping your own environment.

Internalizing AI infrastructure

HBR never seriously considers the possibility of building AI into your own infrastructure. You can deploy assistants, RAG systems, and domain-specific agents within your own products and customer experiences.

These systems operate in logged-in, transactional contexts where first-party data and controlled interfaces still matter enormously.

In those environments, traditional concerns such as site architecture, structured data, and product design remain deeply relevant, though they operate differently from public search optimization.

It’s not just SEO

The discussion also frames SEO primarily as a page-ranking problem tied to discovery.

That perspective misses the broader shift toward entity-level knowledge management (things, not strings).

Visibility within LLMs increasingly depends on how well you structure entities, taxonomies, and knowledge graphs, and on how those systems connect with external data sources.

Most LLMs don’t process data at the petabyte scale Google uses to understand entity relationships. When something ranks well on Google, third-party LLMs tend to follow suit, “trusting” Google’s guidance on which brands to show, for what, and when.

HBR’s phrase “engineering recall” points directly to this deeper data engineering work, yet the implications aren’t expanded.

LLM model heterogeneity

Another major omission is the diversity of AI systems themselves.

Different AI assistants and models rely on different training datasets, refresh cycles, retrieval mechanisms, and safety layers.

That heterogeneity means you can’t assume a single optimization strategy will work across all AI surfaces.

The article also doesn’t explore the risk of broad-stroke approaches. If you try to increase visibility within AI models without accounting for safety filters, attribution errors, or hallucinations, you may gain visibility in ways that are inaccurate or reputationally damaging.


Surface-level tactics won’t build AI visibility

HBR’s article works well as a high-level explanation of how AI is changing marketing. It helps you understand that traditional SEO alone is no longer enough and that you must consider how AI systems see and describe your brand.

As a practical guide, however, the advice is thin. Most recommendations focus on surface-level tactics that many companies will quickly copy, reinforcing the echo chamber of flock tactics that are easy to sell and quantify, but risk narrowing your focus to short-term wins at the expense of longer-term strategy.

The real challenge is deeper. You need clear entity definitions, structured knowledge systems, reliable data in trusted sources AI models use, testing across how different models represent you, and AI-powered experiences within your own products.

“Winning” in the AI era will depend less on cosmetic SEO improvements and more on the harder structural work behind the scenes.

Only 15% of pages retrieved by ChatGPT appear in final answers: Report

13 March 2026 at 17:50
AI search fan out

ChatGPT retrieves far more webpages than it cites. A new AirOps analysis found that 85% of discovered sources never appear in the final answer.

Why we care. If you want your content cited in AI-generated answers, discovery isn’t enough. Most retrieved pages never become visible to users.

Key finding. In AI answers, retrieval doesn’t equal citation. Your page can rank and be retrieved yet still lose the citation to a source that better matches the prompt or supporting context.

  • This shifts optimization toward earning selection inside the AI synthesis process—not just appearing in search results, per the report.

By the numbers:

  • 82,108 citations appeared in final responses.
  • Only 15% of retrieved pages were cited.
  • 85% of pages surfaced during research never appeared in answers.

Citation rates also varied by query type:

  • 18.3% for product discovery queries
  • 16.9% for how-to queries
  • 11.3% for validation searches

Fan-out queries. ChatGPT often expands prompts with additional internal searches while generating an answer, creating what the report calls a “second citation surface.” Across the dataset:

  • 89.6% of prompts triggered two or more follow-up searches.
  • Fan-out searches expanded 15,000 prompts into 43,233 queries.
  • 32.9% of cited pages appeared only in fan-out results—not the original prompt.
  • 95% of fan-out queries had zero traditional search volume.

Google ranking correlation. High Google rankings strongly correlated with citations:

  • 55.8% of cited pages ranked in Google’s top 20.
  • Pages ranking in Position 1 were cited 3.5 times more often than pages outside the top 20.

About the data. AirOps analyzed 548,534 pages retrieved across 15,000 prompts to examine how ChatGPT expands queries and selects citations.

The study. The Influence of Retrieval, Fan-out, and Google SERPs on ChatGPT Citations

Stop paying for traffic: The enterprise CMO’s guide to ROI-driven SEO

13 March 2026 at 17:00
Vanity metrics vs revenue

The standard agency reporting call is broken. Budgets are under extreme scrutiny, yet you still invest in vendors that celebrate arbitrary traffic gains while your sales pipeline stays flat.

Optimizing for raw traffic volume is a legacy mindset that hides real commercial performance. The new mandate is to build an acquisition engine that influences buyers and protects your profit and loss (P&L) long before the transaction.

To survive as a marketing leader today, you must ruthlessly challenge your internal teams and external agencies. Stop accepting reports on operational output and demand hard financial accountability: pipeline contribution, customer lifetime value (LTV) to customer acquisition cost (CAC) ratios, and reduced paid media dependency.

The new path to purchase: Why traffic is bleeding your budget

Chasing top-of-funnel informational traffic is a trap. If the users clicking your links aren’t actively buying, you’re paying for vanity metrics, not business outcomes.

This happens because many buyers now use large language models (LLMs) to conduct deep research before they reach a search engine’s transactional layer. If you aren’t the cited authority during that AI-driven research phase, you’re invisible by the time buyers finalize their purchase decisions.

The 7.48% reality: The power of the educated buyer

The contrast in traffic quality is staggering when you look at the data. Across our enterprise client base, traditional organic search converts at 2.75%, while AI search converts at 7.48%.

LLMs function as the ultimate trust proxy for today’s consumers. When tools like Gemini, ChatGPT, or Perplexity synthesize dozens of reviews, whitepapers, and Reddit threads to recommend your enterprise software, users trust the LLM’s consensus more than a branded blog post.

AI engines arm consumers with comprehensive data, comparisons, and consensus. By the time a user clicks your AI citation, they’ve already made their decision based on your authority and are prepared to transact.


From found to cited: Architecting the default recommendation

Want to capture this 7.48% conversion rate? Your entire approach to digital asset creation must evolve. The strategy no longer centers on ranking among a list of links, but on being cited as the definitive option.

To win the AI consensus, you must translate your marketing strategy into structured capital management.

  • The old way: Publishing a 2,000-word blog post on top supply chain trends that generates 5,000 monthly visitors who bounce after reading and add zero value to your pipeline.
  • The new way: Build a generative engine optimization (GEO) hub—a dedicated supply chain cost calculator page with proprietary data tables, expert author schema tagging your lead engineers, and strict answer-first formatting.

LLMs require consensus and verifiable facts to generate confident answers. By structuring your digital assets with proprietary data and verifiable entities, you become the default recommendation.

This approach may yield only 500 highly qualified visitors, but it gives LLMs what they need to cite you in vendor comparison prompts and captures buyers at the exact moment of commercial evaluation.

Strategic ROI: Using citation authority to reduce ad spend

It’s time to stop viewing SEO as a siloed traffic generator. You must treat organic citation authority as a strategic financial lever to reduce overall CAC.

Align your organic assets with your highest-CAC paid campaigns. When organic search owns the AI Overview, your paid team can confidently pull back defensive ad spend.

Here’s how to leverage paid and AI search:

  • IF your brand becomes the default AI recommendation for a high-cost commercial category, THEN your paid team must aggressively reduce defensive brand bidding to slash overall cost per acquisition (CPA).
  • IF paid search identifies a highly profitable long-tail query, THEN SEO must prioritize building a structured asset to organically capture that exact demand in the future.
  • IF an LLM cites your competitor as the superior enterprise solution, THEN your paid team must immediately deploy targeted, bottom-of-funnel conquesting ads to intercept that user before the transaction, while the organic team rapidly engineers a proprietary data asset to win back the consensus.

The monthly cannibalization review: Your immediate action item

If your Head of Search and Head of Paid Media aren’t in the same room once a month mapping organic citations against paid brand bidding, you’re burning capital.

Align your teams and channels. Routinely audit where you’re paying for clicks on terms where you already own the AI citation and the top organic spot.

Treat this cannibalization review as a strict financial audit. Identify wasted defensive ad spend and immediately reallocate those dollars toward net-new market expansion.
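As a rough sketch, the cannibalization review can start as a simple cross-reference between two exports: paid brand terms with their monthly spend, and terms where you already hold the top organic spot and the AI citation. All names and figures below are hypothetical:

```python
# Hypothetical monthly cannibalization review (all data is illustrative).
paid_brand_terms = {
    "acme crm": 12_000.0,          # term -> monthly defensive spend
    "acme crm pricing": 4_500.0,
    "best crm software": 9_000.0,
}
# Terms where you already own the top organic result and the AI citation.
organically_owned = {"acme crm", "acme crm pricing"}

# Overlap = terms where you are paying for clicks you likely capture anyway.
overlap = {t: s for t, s in paid_brand_terms.items() if t in organically_owned}
reclaimable = sum(overlap.values())

for term, spend in overlap.items():
    print(f"candidate for reduced bidding: {term} (${spend:,.0f}/mo)")
print(f"potentially reallocatable defensive spend: ${reclaimable:,.0f}/mo")
```

In practice the two inputs would come from your ads platform and your rank/citation tracking tool; the point is that the audit is a join, not a judgment call.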

The enterprise scorecard: 3 questions to ask your agency tomorrow

To regain control of your P&L, you must challenge your vendors to step up. Ask your agency these three questions tomorrow morning to see if they’re true business partners or order-takers.

1. What’s our citation share of voice for our highest-margin categories?

Challenge your team to map their organic efforts directly to the AI research phase of your most profitable products.

The answer you should hear: “We’ve mapped your 50 highest-margin queries. By securing the primary AI citation for these, we’ve generated $1.2 million in pipeline this quarter at a 3:1 LTV:CAC ratio.”

2. How is our citation strategy directly reducing our paid media CAC?

Require teams to prove how their organic authority captures demand that would otherwise require paid ad spend.

The answer you should hear: “By capturing the definitive AI citation for [category], we paused paid bidding on those terms. This reduced our blended CAC by 18% and saved $45,000 in defensive ad spend — which we’ve immediately reallocated to net-new market expansion.”

3. Are our digital assets structured for LLM extraction?

Push your teams to explain their strategy for AI-driven search models. It’s no longer enough to publish standard web pages.

The answer you should hear: “We’ve restructured your core commercial pages away from standard marketing copy, deploying ‘answer-first’ frameworks, proprietary data tables, and expert author entities to ensure LLMs confidently extract and recommend your brand. This structural shift has increased our inclusion in commercial AI Overviews by 40% this quarter, directly feeding our bottom-of-funnel pipeline.”


Demand commercial outcomes, not operational output

In a tough economy, SEO is a measurable business unit that must defend its budget with revenue data. Don’t accept operational output as proof of commercial success.

Audit your reporting frameworks immediately. Stop accepting vanity metrics as evidence of success. Demand pipeline impact, LTV:CAC ratios, and a resilient acquisition engine.

Any agency or internal team unwilling to tie its work directly to your P&L will become obsolete. Your job as an enterprise leader is to ensure your brand is cited as the authority long before the transaction begins.

Google Search Ads in 2026 require a different kind of audit

13 March 2026 at 16:00
Google Search Ads value redistribution

Brandon Ervin, Director of Product Management for Google Search Ads, recently discussed campaign consolidation, AI Max, and what advertiser control looks like in 2026 on Google’s Ads Decoded podcast. The conversation was serious and informed, and reflected a product team that understands advertiser concerns and is actively working to address them.

But the podcast is also incomplete. The gap between what Google said and what advertisers actually experience from their sales organization is large enough to warrant a direct response.

Ervin’s team is doing genuinely good work, but the platform’s structural incentives haven’t changed. Google’s evolving product is creating problems faster than it can solve them. Performance is now measured against economic standards, and that shapes how a search ads audit should be performed.

Recent improvements to Google Search Ads

The recent improvements are genuine:

  • Brand exclusions in Performance Max and Demand Gen.
  • Site visitor and customer exclusions from PMax campaigns.
  • Network-level reporting within bundled campaigns.
  • Improved search term visibility.
  • Brand and geo controls inside AI Max at the ad group level.
  • Semantic modeling that doesn’t anchor on campaign or ad group IDs, reducing learning period risk during consolidation.

These are meaningful. They are also solutions to issues introduced by bundling, opacity, and aggressive automation rollout.

These products have been mercilessly shopped to advertisers since 2021, and the controls that make them usable arrived years after the sales push began.

The ability to separate brand from non-brand traffic inside PMax/AI Max should not be framed as innovation. It restores a fundamental distinction that previously existed by default. The ability to see network performance inside a bundled campaign is not an expansion of control. It restores visibility that was removed.

An audit must ask whether new tools are genuinely expanding control or merely reintroducing baseline transparency.

Table stakes: What everyone agrees on

Before the real audit begins, the fundamentals. These are uncontroversial and should already be in place:

  • Run full ad extensions (sitelinks, callouts, structured snippets, image, call).
  • Use automated bidding with intentional target-setting and conversion action selection (I recognize there are still holdouts here, but that seems crazy to me).
  • Maintain negative keyword lists.
  • Write ads relevant to the queries they serve.
  • Audit automatically created assets for accuracy and brand safety.
  • Cut Search Partners and Display expansion from Search campaigns.
  • Separate brand and generic campaigns using brand controls.
  • Exclude site visitors and past customers from prospecting campaigns where appropriate.
  • Import offline conversion data (MQLs, SQLs, revenue, CLV, repeat rate) to feed the algorithm downstream signals.
  • Weight conversion values by actual downstream conversion rates.
  • Account for mobile vs. desktop performance gaps.

Those are table stakes. The real audit begins after that.

What a 2026 search audit must focus on

With the prevalence of AI, advertisers need to focus on reconstructing economic visibility in systems designed around aggregation and automation. 

Signal architecture

In the podcast, Ervin says “control still exists, it just looks different.” Ad controls — where, when, and to whom ads appear — are still important and still changing, some think for the worse.

The old ad controls — exact match, manual bids, network selection, and device modifiers — gave advertisers direct influence over where ads appeared and what they paid. 

However, the new controls are indirect. Control now lives in data quality, density, and selectivity. These inputs influence the algorithm, but the algorithm makes the final call.

An audit should focus on three questions:

  • Quality: Are you importing revenue, pipeline stage, or qualified lead status, or only surface conversions?
  • Density: Is there enough high-quality data for the model to learn from, or is it sparse and noisy?
  • Selectivity: Are you intentionally limiting what Google can see, or are you passing everything indiscriminately?

Low prediction, high density

Selective tactics, such as passing only net-new customers or high-value customers, shrink the dataset the model can learn from. The majority of the time, it is better to simply pass the densest and most predictive conversion set.

Incrementality

Google optimizes toward reported conversions, not incremental conversions. Brand search often captures existing demand. Retargeting often captures users already in motion. PMax/AI Max frequently blends these signals.

Ervin was asked: Are AI-driven campaigns over-indexing on warm brand traffic to inflate blended ROAS (return on ad spend)?

He doesn’t dispute the problem, but points to partial solutions, including using brand controls, theming your account better, and multi-campaign A/B testing.

If incrementality is not measured, automation amplifies non-incremental signals.

Marginal returns

Google uses a blended cost-per-action (CPA). For example, the first $50K of spend might return a $30 CPA, while the next $50K might return $120. 

With automation, money is spent until the blended metric falls within tolerance, meaning the last dollar is not spent efficiently. The vast majority of advertisers are bidding far beyond what they should be and have no idea it is happening.

An audit must:

  • Plot spend against incremental conversions.
  • Estimate marginal CPA at each spend tier.
  • Identify diminishing return curves.
  • Compare marginal CPA to lifetime value.
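To make the blended-vs-marginal distinction concrete, here is a minimal sketch using the $30/$120 illustration above. The cumulative spend and conversion numbers are invented for the example, not from the article:

```python
# Sketch: estimating marginal CPA per spend tier (illustrative numbers only).
# Each tuple is (cumulative spend, cumulative conversions) at a tier boundary.
tiers = [
    (50_000, 1_667),   # first $50K: ~$30 CPA
    (100_000, 2_084),  # next $50K adds only ~417 conversions
]

prev_spend, prev_conv = 0, 0
for spend, conv in tiers:
    marginal_cpa = (spend - prev_spend) / (conv - prev_conv)
    blended_cpa = spend / conv
    print(f"at ${spend:,}: blended CPA ${blended_cpa:.0f}, marginal CPA ${marginal_cpa:.0f}")
    prev_spend, prev_conv = spend, conv
# at $50,000: blended CPA $30, marginal CPA $30
# at $100,000: blended CPA $48, marginal CPA $120
```

The blended CPA at $100K still looks acceptable at $48, while the marginal CPA of the second tier is already $120, which is exactly the gap automated bidding hides.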

A lower target makes the algorithm more selective, competing in fewer, higher-value auctions. Google doesn’t suggest this because it would mean less spend and lower bids, which is worse for Google in general.

Query resolution and ability to lower targets

On the podcast, Ervin acknowledges that some AI Max matches can “look a little wonky” and says his team is working on exposing the model’s reasoning. 

Query mapping has gotten meaningfully worse over the past several years: queries landing in the wrong ad groups, matching to keywords with different intent, and broad match pulling in traffic unrelated to the keyword.

AI Max has accelerated this — there’s been an increase in the volume of irrelevant queries flowing through AI Max campaigns, with no connection to the advertiser’s business or keywords in the account. 

Meanwhile, Google’s recommendations consistently push toward broad matching and large themed ad groups.  

The issue is not whether broad match works, but whether high-value intent is being diluted in larger, broader ad groups. Fewer ad groups mean you cannot effectively or meaningfully lower targets without a massive structural negative keyword schema, so performance differences have to be large enough to justify the new structure.

An audit should:

  • Extract full search term reports.
  • Classify queries by intent tier.
  • Compare CPA and lifetime value by query type.
  • Quantify irrelevant or weakly related matches.
  • Measure performance drift across match types.
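The classification and comparison steps above can be sketched as a small script. The rows and the keyword-based tier rules are hypothetical placeholders, assuming you have exported search terms with cost and conversions; a real audit would substitute your own intent taxonomy:

```python
# Sketch: classify search terms by intent tier and compare CPA (hypothetical data).
from collections import defaultdict

# Each row: (query, match_type, cost, conversions).
rows = [
    ("emergency plumber near me", "broad", 420.0, 6),
    ("what causes a leaky faucet", "broad", 310.0, 1),
    ("plumber quote [city]", "phrase", 280.0, 5),
    ("free diy pipe repair", "broad", 150.0, 0),
]

def intent_tier(query: str) -> str:
    """Crude keyword-based intent classifier; replace with your own taxonomy."""
    if any(t in query for t in ("near me", "quote", "pricing", "hire")):
        return "high"
    if any(t in query for t in ("what", "how", "why", "diy", "free")):
        return "low"
    return "mid"

agg = defaultdict(lambda: {"cost": 0.0, "conv": 0})
for query, match, cost, conv in rows:
    tier = intent_tier(query)
    agg[tier]["cost"] += cost
    agg[tier]["conv"] += conv

for tier, m in sorted(agg.items()):
    cpa = m["cost"] / m["conv"] if m["conv"] else float("inf")
    print(f"{tier}: cost=${m['cost']:.0f}, conversions={m['conv']}, CPA=${cpa:.2f}")
```

Even at this crude level, the output surfaces the pattern the audit is looking for: low-intent queries absorbing spend at a CPA far above the high-intent tier.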

Network economics

Performance Max and Demand Gen bundle multiple networks into single campaigns, but offer limited visibility into which networks drive results. This makes it hard to cut the underperforming ones. The slow rollout of network-level controls systematically benefits Google’s less competitive inventory.

An audit must:

  • Break out performance by network.
  • Compare CPA and lifetime value by placement.
  • Identify cross-subsidization.
  • Determine whether weaker networks are relying on surplus from strong search inventory.

Value redistribution

Combining these elements in your audit will help you succeed in this new world of ad search: 

  • Non-incremental traffic inflates conversion counts, making performance look better than it is.
  • Looser match types expand where ads appear, diluting intent precision and forcing fewer ad groups with blanket-level targets and bids.
  • No clean marginal return visibility makes it much harder to find the point of negative return.
  • Network bundling hides which channels actually perform.

The cumulative effect is that the surplus value generated by your best inventory and high-intent, high-converting search queries gets redistributed across Google’s weaker inventory (e.g., Display, YouTube, Discover, Gmail, far-tail queries).

This is how a dwindling supply of valuable search queries ends up inflating the cost-per-click (CPC) of low-quality inventory.

The Ads Decoded episode: Is your campaign structure holding you back in the era of AI?

 


Google leaves door open to ads in Gemini

12 March 2026 at 23:58
How to use Google Gemini for better SEO

Google is leaving the door open to advertising in its Gemini AI app, with a senior executive telling WIRED the company is “not ruling them out” — a notable shift from the flat denials made just months ago.

What’s changed: In January, Google DeepMind CEO Demis Hassabis told reporters at Davos that Google had no plans to put ads in Gemini. Now, SVP Nick Fox is saying otherwise — noting that learnings from ads in AI Mode will “likely carry over” to Gemini down the road.

The current strategy. Rather than rushing into Gemini, Google is using AI Mode — its Gemini-powered Search product — as a testing ground for ad formats in AI experiences.

  • Ads are kept separate from organic results and clearly labeled
  • Google says it only shows ads when they’re relevant — if nothing fits, nothing runs
  • The company is drawing on 20-plus years of Search ad experience to inform the approach

Why we care. Google’s entire business is built on advertising. How, and whether, Google brings ads into AI products will shape the future of the industry and set the tone for every AI company trying to figure out how to monetize free users. The brands that figure out how to show up relevantly in conversational AI environments now, before the auction gets competitive, will have a significant first-mover advantage.

The bigger picture. Google is in a stronger position than its rivals to take its time. The company crossed $400 billion in revenue in 2025, giving it the luxury of patience. OpenAI, by contrast, is under pressure to more than double its $30 billion in revenue this year — and has already started testing ads in ChatGPT’s free tier.

Between the lines: Fox’s framing is careful but revealing. By positioning Gemini ads as a “prioritization question” rather than a values question, Google is signaling it’s a matter of when — not if.

What to watch: Personal Intelligence — Gemini’s feature that pulls from a user’s Gmail, Photos, and Calendar — is the sleeper story here. Fox called personalization his “holy grail” for Search, and hinted it could eventually roll into the broader Search experience. If it does, advertisers would gain access to an entirely new layer of contextual targeting — though Fox was quick to add that user data will not be sold or shared.

What’s next. Advertisers should start preparing now. As Google refines its AI ad formats in AI Mode, those learnings will eventually migrate to Gemini. Brands that understand how to show up relevantly in conversational, context-rich AI environments will have a significant head start when the floodgates open.

Dig deeper. Google Is Not Ruling Out Ads in Gemini (registration needed)

Google AI Overviews cut search clicks 42%: Report

12 March 2026 at 23:06
Google traffic redistribution

Google’s AI Overviews may be reducing traditional search clicks, but publishers still have meaningful growth opportunities in breaking news and Google Discover, according to new data from Define Media Group.

  • Organic search clicks have fallen 42% since AI Overviews began expanding in Google Search, according to Define Media Group’s analysis of Google Search Console data across its portfolio of 64 sites.

Why we care. AI-generated answers are reshaping search traffic. Evergreen content is losing clicks, while real-time news coverage and Discover distribution are emerging as stronger traffic channels for publishers.

By the numbers. Across Google Search, Discover, and Google News, breaking news traffic grew 103% from November 2024 through early 2026 in the company’s dataset. Losses were concentrated in informational and evergreen content:

  • Organic search traffic averaged 1.7 billion clicks per quarter from Q1 2023 through Q1 2024.
  • After AI Overviews launched, traffic fell 16% immediately and never recovered.
  • As Google expanded AI Overviews in May 2025, declines accelerated.
  • By Q4 2025, search traffic was down 42% from the pre-AI Overviews baseline.

Discover’s role: Google Discover, which grew 30% across the portfolio, is now the main growth engine for breaking news distribution. Discover traffic rose steadily as web search traffic fell. For the first time in the dataset, Discover and web search now drive roughly equal traffic.

Why is this happening? AI Overviews appear less often for news queries than for other topics. AI Overviews appeared for about 15% of news queries, roughly a third as often as in categories such as health and science, according to Ahrefs data cited in the report.

  • News queries often trigger the Top Stories carousel, which links directly to publisher articles. Searches for major developing events, such as international conflicts, typically show Top Stories rather than AI summaries.
  • Define Media Group suggests Google may be avoiding AI-generated summaries for breaking news because events change rapidly, accuracy stakes are high, and generative systems can still hallucinate.

The report. BREAKING! News Thrives in the Age of AI

The latest jobs in search marketing

13 March 2026 at 23:36
Search marketing jobs

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • Job Description Salary: $47-$52 Annually Digital Marketing Specialist The Digital Marketing Specialist (DMS) is responsible for the execution and ongoing support of the company and client digital marketing initiatives. This role is hands-on and production-focused, working closely with senior leadership, account managers, and developers to implement marketing strategies across owned, organic, and paid channels. The […]
  • Job Description Hi there! We’re WebFX, a full-service digital marketing agency based in the US. We’ve been 10x named the Best Place To Work in Pennsylvania, and we’d love to meet you! We are a fast-growing company that has doubled in size over the past 5 years, with talented team members now based around the […]
  • SEO Specialist  Location: [Remote/Hybrid] Department: Digital Marketing Reports to: SEO Director About the Role We are looking for an analytical and detail-oriented SEO Specialist to help improve organic visibility and performance for our clients. This role works closely with the SEO Director and SEO team to analyze websites, identify optimization opportunities, and support the execution […]
  • Location: Remote (Full-Time, Work-from-home position) One Firefly is seeking a client-facing SEO Strategist to join our growing SEO team. This role is designed for an agency-experienced professional who enjoys owning client relationships, leading strategic conversations, and translating SEO performance into clear, business-aligned insights. If you thrive in a multi-client environment, are comfortable leading calls and […]
  • Description: As the Digital Account Marketing Manager, your role involves guiding and implementing Island Hospitality’s defined digital strategy across our hotels, restaurants, and other channels. This position is accountable for overseeing Island Hospitality’s e-commerce presence, boosting e-commerce revenue, and delivering top-notch support. This role is designated as an in-office position, at our West Palm Beach […]
  • Job Description Salary: $55k-$65k Digital Marketing Specialist Location: Oberlin, Ohio Full-Time About AdeptAg AdeptAg LLC is a North American leader in controlled environment agriculture, integrating innovative growing, automation, and irrigation solutions for customers both domestic and international. We support today’s growers with forward-thinking, cost-efficient systems designed to meet the evolving challenges of modern agriculture. Our […]
  • Benefits: 401(k) matching Dental insurance Health insurance Vision insurance Digital Marketing & Listing Specialist. Position Title: Digital Marketing & Listing Specialist Department: Marketing & Revenue Reports To: General Manager (Victoria Swinford) Location: Santa Rosa Beach, FL (On-site preferred; hybrid considered) Employment Type: Full-Time, Salaried Position Summary Southern Holiday Homes is seeking a highly creative, detail-oriented Digital […]
  • Job Description Biointron is a global antibody services CRO seeking a client-facing, detail-oriented, and self-starting Marketing Associate to join our fast-growing team. Reporting directly to the Marketing Manager, this role is responsible for implementing marketing initiatives and collaborating with the global Biointron marketing team and regional business development teams to support company objectives. The ideal […]
  • Job Description Hi, we’re TechnologyAdvice. At TechnologyAdvice, we pride ourselves on helping B2B tech buyers manage the complexity and risk of the buying process. We are a trusted source of information for tech buyers, delivering advice and facilitating connections between our buyers and the world’s leading sellers of business technology. Headquartered in Nashville, Tennessee, we […]
  • About Haven Services Haven Services LLC is a $100MM residential and commercial plumbing, HVAC, and electrical services and contracting company. Haven Services is executing a growth strategy targeting $200MM in revenue by 2031. We are committed to delivering exceptional service to the homeowners and businesses we serve, and we’re looking for a results-driven digital marketing […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • Butler/Till is a results-driven marketing agency offering deeply collaborative client experiences, proprietary technology, and world-class partnerships. At Butler/Till, we take immense pride in our independent, women-owned and led status, our unwavering commitment to a purpose-driven approach, our B-Corp status, and our unique structure as a 100% employee owned company (ESOP). SUMMARY The Channel Manager, Paid […]
  • The Role We’re looking for a bold, inspiring Head of Paid Search to lead our largest and most influential discipline and help shape the future of our agency. This is a high-impact leadership role for someone who commands the room at the executive level and shows up every day as a coach, mentor, and champion […]
  • Business Overview KINESSO is the technology-driven performance marketing agency that sits at the very heart of IPG Mediabrands, providing actionable growth for both our agency partners and clients. We turn ‘action’ into ‘outcome’ for our clients, leveraging our unique capabilities in optimization, analytics, AI, and experimentation. KINESSO has brought together the collective power of what […]
  • Stripe is a financial infrastructure platform for businesses. Millions of companies – from the world’s largest enterprises to the most ambitious startups – use Stripe to accept payments, grow their revenue, and accelerate new business opportunities. Our mission is to increase the GDP of the internet, and we have a staggering amount of work ahead. […]
  • Executive Alliance is pleased to represent our client, a boutique integrated advertising and media company that specializes in developing campaigns and brand strategy for their clients. They are headquartered in Melville, Long Island, New York. They are seeking a results-driven and analytical Paid Search Specialist with 2–3 years of experience managing paid search campaigns […]

Other roles you may be interested in

Search Engine Optimization Manager, Colling Media (Hybrid, Phoenix, Arizona)

  • Salary: $73,000 – $83,000
  • Develop and maintain strategic keyword and topic targeting plans for client campaigns
  • Monitor keyword rankings, search visibility, and performance trends to inform optimization strategies

Paid Ads/Growth Manager, Robert Half (Hybrid, Atlanta Metropolitan Area)

  • Salary: $65,000 – $85,000
  • Manage, optimize, and scale paid campaigns across Google Ads (Search, Display, YouTube) and Meta Ads (Facebook/Instagram).
  • Continuously refine targeting, bidding strategies, and creative to improve CPL, conversion rates, and overall ROAS.

SEO Manager, Clutch (Remote)

  • Salary: $60,000 – $75,000
  • Execute day-to-day SEO tactics across multiple client accounts, ensuring alignment with predefined campaign objectives.
  • Implement optimization strategies, including technical SEO audits and recommendations.

Marketing Manager – SEO & GEO, Care.com (Hybrid, Austin, Texas)

  • Salary: $85,000 – $95,000
  • Organic Growth: Build and execute the SEO roadmap across technical, content, and off-page. Own the numbers: traffic, rankings, conversions. No handoffs, no excuses.
  • AI-Optimized Search (AIO): Define and drive CARE.com’s strategy for visibility in AI-generated results — Google AI Overviews, ChatGPT, Perplexity, and whatever comes next. Optimize entity coverage, content structure, and schema to ensure we’re the answer, not just a result.

Digital Marketplace Manager, Venchi (Hybrid, New York, NY)

  • Salary: $120,000 – $130,000
  • Define and execute channel-specific and cross-marketplace strategies, balancing brand positioning, commercial performance, and operational efficiency.
  • Manage Amazon advertising across Sponsored Products, Brands, and Display campaigns.

Advertising Media Manager, Vetoquinol USA (Remote)

  • Salary: $100,000 – $110,000
  • Develop and implement strategic advertising plans for Etail (Ecomm/Retail) accounts.
  • Analyzing advertising performance data with related ROAS & TACoS evaluations.

Programmatic Advertising Manager, We Are Stellar (Remote)

  • Salary: $75,000
  • Manage the day-to-day programmatic campaign approach, execution, trafficking optimization, and reporting across the relevant DSPs for your clients.
  • Build and present directly to client stakeholders programmatic campaign performance, analysis, and insights.

Marketing Manager, Backstage (Remote)

  • Salary: $100,000 – $140,000
  • Manage and optimize campaigns daily across Meta Ads, Google Ads, and other key partners
  • Own forecasting, pacing, budget allocation, and optimization for high-scale monthly budgets.

Demand Generation Manager, Shoplift (Remote)

  • Salary: $100,000 – $110,000
  • Design and execute inbound-led outbound campaigns—reaching prospects who’ve shown intent (visited pricing page, downloaded resources, engaged with content) at precisely the right moment
  • Build and optimize Apollo sequences, LinkedIn outreach, and multi-touch campaigns that book qualified demos for AEs

Search Engine Optimization Manager, Confidential (Hybrid, Miami-Fort Lauderdale Area)

  • Salary: $75,000 – $105,000
  • Serve as a strategic SEO partner for client accounts, translating business goals into actionable search initiatives
  • Communicate SEO insights, priorities, and performance clearly to clients and internal stakeholders

Meta Ads Manager, Cardone Ventures (Scottsdale, AZ)

  • Salary: $85,000 – $100,000
  • Develop, execute, and optimize cutting-edge digital campaigns from conception to launch
  • Provide ongoing actionable insights into campaign performance to relevant stakeholders

Paid Search Marketing Manager, LawnStarter (Remote)

  • Salary: $90,000 – $125,000
  • Manage and optimize large-scale, complex SEM campaigns across Google Ads, Bing Ads, Meta Ads and other search platforms
  • Activate, optimize and make efficient Local Services Ads (LSA) at scale

Note: We update this post weekly. So make sure to bookmark this page and check back.
