
Today — 14 February 2026 · Search Engine Land

Andrea Cruz talks about turning client pressure into growth

13 February 2026 at 23:46


On episode 341 of PPC Live The Podcast, I speak to Andrea Cruz, Head of B2B at Tinuiti, to unpack a mistake many senior marketers quietly struggle with: freezing when clients demand answers you don’t immediately have.

The conversation explored how communication missteps can escalate client tension — and how the right mindset, preparation, and culture can turn those moments into career-defining growth.

From hands-on marketer to team leader

As Cruz advanced in her career, she shifted from managing campaigns directly to leading teams running large, complex accounts. That transition introduced a new challenge: representing work she didn’t personally execute day to day.

When clients pushed back — questioning performance or expectations — Cruz sometimes froze. Saying “I don’t know” or delaying a response could quickly erode trust and escalate frustration.

Her key realization: senior leaders are expected to provide perspective in the moment. Even without every detail, they must guide the conversation confidently.

How to buy time without losing trust

Through mentorship and experience, Cruz developed a practical technique: asking clarifying questions to gain thinking time while deepening understanding.

Examples include:

  • Asking clients to clarify expectations or timelines
  • Requesting additional context around their concerns
  • Confirming what the client already knows about the situation

These questions serve two purposes: they slow down emotionally charged moments and ensure responses address the real issue, not just the surface complaint.

For Cruz, this approach was especially important as a non-native English speaker, giving her space to process complex conversations and respond clearly.

A solutions-first culture beats blame

Cruz emphasized that mistakes are inevitable — but how teams respond defines long-term success.

At Tinuiti, the focus is not on assigning blame but on answering two questions:

  1. Where are we now?
  2. How do we get to where we want to be?

This solutions-oriented mindset creates psychological safety. Teams can openly acknowledge errors, run post-mortems, and identify patterns without fear. Cruz argues that leaders must model this behavior by sharing their own mistakes, not just scrutinizing others’.

That transparency builds trust internally and with clients.

Proactive communication builds stronger client relationships

Rather than waiting for clients to surface problems, Cruz encourages teams to raise issues first. Acknowledging underperformance — even when clients haven’t noticed — demonstrates accountability and strengthens partnerships.

She also recommends tailoring communication styles to each client. Some prefer concise updates; others want detailed explanations. Documenting these preferences helps teams deliver information in ways that resonate.

Regular check-ins about business roadblocks — not just campaign metrics — position agencies as strategic partners, not just media operators.

Common agency mistakes in B2B advertising

Cruz didn’t hold back on recurring issues she sees in audits:

  • Budgets spread too thin: Running too many channels with insufficient spend leads to meaningless data and weak performance.
  • Underfunded campaigns: B2B CPCs are inherently high. Campaigns generating only a few clicks per day rarely produce actionable results.

Her advice is blunt: if the budget can’t support a channel properly, it’s better not to run it.

AI is more than a summarization tool

On AI, Cruz cautioned against shallow usage. Treating AI as a simple spreadsheet summarizer misses its broader potential.

Her team is experimenting with advanced applications — automated audits, workflow integrations, and operational efficiencies. She compares AI’s role to medical diagnostics: a powerful assistant that augments expert judgment, not a replacement for it.

For marketers, that means staying curious and continuously exploring new use cases.

The takeaway: preparation and passion drive resilience

Cruz’s central message is simple: mistakes will happen. What matters is preparation, adaptability, and maintaining a solutions-first mindset.

By anticipating client needs, personalizing communication, and embracing experimentation, marketers can transform stressful moments into opportunities to build credibility.

The latest jobs in search marketing

13 February 2026 at 23:26

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • Digital Marketing Manager The Digital Marketing Manager will be expected to lead a team that effectively crafts and implements digital marketing initiatives including search marketing, social media, email marketing and lead management for clients in a variety of industries. Candidates should expect to be engaged in managing multiple team members, clients and simultaneous projects, assisting […]
  • About Us HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. We are proud to support a global and growing community of over 2 million businesses, from marketing agencies to entrepreneurs to small businesses and beyond. Our platform empowers […]
  • Omniscient Digital is an organic growth agency that partners with ambitious B2B SaaS companies like SAP, Adobe, Loom, and Hotjar to turn SEO and content into growth engines. About this role We’re hiring an SEO Outreach Specialist to partner with high-authority brands and build high-quality backlinks to support our clients’ growth and authority. You will […]
  • SUMMARY The Digital Marketing Manager is a growth-oriented role that will evolve into a strategic marketing leadership position. You will work closely with the CCO and leadership team to shape our go-to-market strategy while executing high-impact marketing programs today. JOB RESPONSIBILITIES ESSENTIAL FUNCTIONS: Strategic Marketing & Positioning Collaborate with CCO and commercial leadership to evolve […]
  • Job Description We are a highly motivated bunch who seek to create a space where you know you are going to have a good time. We are known for our art-inspired spaces that are great for social gatherings. Our restaurants are wall to wall with lights, murals, and vignettes. We are the marinara-muddled minds behind […]
  • Benefits: Bonus based on performance Competitive salary Training & development Fischetti Law Group, a fast-growing Personal Injury and Estate Planning law firm, is seeking a creative, results-driven Digital Marketing Manager to lead our digital presence and community outreach efforts. This is a full-time, in-office position working directly with our Management team to expand our brand […]
  • Who We Are Oncourse Home Solutions (OHS) is a people-centric, $500M organization that is owned by private equity firm, Apax Partners operating under the brands American Water Resources, Pivotal Home Solutions and American Home Solutions. We do what is right for our people so they can do their best when serving our 1.9+ million customers […]
  • AppFolio is more than a company. We’re a community of dreamers, big thinkers, problem solvers, active listeners, and multipliers. At every opportunity, we set the pace while delivering innovation built to carry real estate into the future. One in which every experience feels effortless, yet meaningful. Where customers are empowered to take on any opportunity. […]
  • The Company: VeSync is a portfolio company with brands that cover different categories of health & wellness products. We wouldn’t be surprised if you have one of our Levoit air purifiers in your living room or a COSORI air fryer whipping up healthy and delicious meals for you every night. We’re a young and energetic […]
  • We’re looking for a Senior SEO Strategist to lead enterprise-level organic growth strategies across traditional search and modern discovery channels, including AI-powered SERPs, Google AI Overviews, and large language models (LLMs). In this role, you’ll own both strategy and execution for a portfolio of enterprise and high-growth clients. You’ll act as a trusted, client-facing advisor—translating complex technical […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

Other roles you may be interested in

Senior Manager, SEO, Kennison & Associates (Hybrid, Boston MA)

  • Salary: $150,000 – $180,000
  • You’ll own high-visibility SEO and AI initiatives, architect strategies that drive explosive organic and social visibility, and push the boundaries of what’s possible with search-powered performance.
  • Every day, you’ll experiment, analyze, and optimize, elevating rankings, boosting conversions across the customer journey, and delivering insights that influence decisions at the highest level.

Backlink Manager (SEO Agency), SEOforEcommerce (Remote)

  • Salary: $60,000
  • Managing and overseeing backlink production across multiple campaigns
  • Reviewing and approving backlink opportunities (guest posts, niche edits, outreach-based links, etc.)

Senior Content Marketing Manager / Director, ClarityPay (Hybrid, New York, NY)

  • Salary: $95,000 – $135,000
  • Create high-quality content for core channels: website, LinkedIn, email, SMS, and internal communications
  • Write clear, compelling, and on-brand copy—from lifecycle messaging and short-form updates to long-form pages and narratives

PPC Specialist, BrixxMedia (Remote)

  • Salary: $80,000 – $115,000
  • Manage day-to-day PPC execution, including campaign builds, bid strategies, budgets, and creative rotation across platforms
  • Develop and refine audience strategies, remarketing programs, and lookalike segments to maximize efficiency and scale

Performance Marketing Manager, Mailgun, Sinch (Remote)

  • Salary: $100,000 – $125,000
  • Manage and optimize paid campaigns across various channels, including YouTube, Google Ads, Meta, Display, LinkedIn, and Connected TV (CTV).
  • Drive scalable growth through continuous testing and optimization while maintaining efficiency targets (CAC, ROAS, LTV)

SEO and AI Search Optimization Manager, Big Think Capital (New York)

  • Salary: $100,000
  • Own and execute Big Think Capital’s SEO and AI search (GEO) strategy
  • Optimize website architecture, on-page SEO, and technical SEO

Senior Copywriter, Viking (Hybrid, Los Angeles Metropolitan Area)

  • Salary: $95,000 – $110,000
  • Editorial features and travel articles for onboard magazines
  • Seasonal web campaigns and themed microsites

Digital Marketing Manager, DEPLOY (Hybrid, Tuscaloosa, AL)

  • Salary: $80,000
  • Strong knowledge of digital marketing tools, analytics platforms (e.g., Google Analytics), and content management systems (CMS).
  • Experience managing Google Ads and Meta ad campaigns.

Paid Search Marketing Manager, LawnStarter (Remote)

  • Salary: $90,000 – $125,000
  • Manage and optimize large-scale, complex SEM campaigns across Google Ads, Bing Ads, Meta Ads and other search platforms
  • Activate, optimize and make efficient Local Services Ads (LSA) at scale

Senior Manager, SEO, Turo (Hybrid, San Francisco, CA)

  • Salary: $168,000 – $210,000
  • Define and execute the SEO strategy across technical SEO, content SEO, on-page optimization, internal linking, and authority building.
  • Own business and operations KPIs for organic growth and translate them into clear quarterly plans.

Search Engine Optimization Manager, NoGood (Remote)

  • Salary: $80,000 – $100,000
  • Act as the primary strategic lead for a portfolio of enterprise and scale-up clients.
  • Build and execute GEO/AEO strategies that maximize brand visibility across LLMs and AI search surfaces.

Senior Content Manager, TrustedTech (Irvine, CA)

  • Salary: $110,000 – $130,000
  • Develop and manage a content strategy aligned with business and brand goals across blog, web, email, paid media, and social channels.
  • Create and edit compelling copy that supports demand generation and sales enablement programs.

Note: We update this post weekly, so be sure to bookmark this page and check back.

Yesterday — 13 February 2026 · Search Engine Land

Cloudflare’s Markdown for Agents AI feature has SEOs on alert

13 February 2026 at 22:25

Cloudflare yesterday announced its new Markdown for Agents feature, which serves machine-friendly versions of web content alongside traditional human-facing pages.

  • Cloudflare described the update as a response to the rise of AI crawlers and agentic browsing.
  • When a client requests text/markdown, Cloudflare fetches the HTML from the origin server, converts it at the edge, and returns a Markdown version.
  • The response also includes a token estimate header intended to help developers manage context windows.
  • Early reactions focused on the efficiency gains, as well as the broader implications of serving alternate representations of web content.

What’s happening. Cloudflare, which powers roughly 20% of the web, said Markdown for Agents uses standard HTTP content negotiation. If a client sends an Accept: text/markdown header, Cloudflare converts the HTML response on the fly and returns Markdown. The response includes Vary: accept, so caches store separate variants.

  • Cloudflare positioned the opt-in feature as part of a shift in how content is discovered and consumed, with AI crawlers and agents benefiting from structured, lower-overhead text.
  • Markdown can cut token usage by up to 80% compared to HTML, Cloudflare said.
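
The negotiation itself is plain HTTP. Here's a minimal sketch of requesting the Markdown variant, assuming the feature is enabled for the zone (example.com is a placeholder):

```python
import requests

# Minimal sketch: ask for the Markdown representation of a page served
# through Cloudflare's Markdown for Agents. Assumes the zone has opted in;
# example.com is a placeholder.
resp = requests.get(
    "https://example.com/article",
    headers={"Accept": "text/markdown"},
)

print(resp.headers.get("Content-Type"))  # text/markdown when converted
print(resp.headers.get("Vary"))          # should include "accept" for caches
# Cloudflare also returns a token-estimate header; the exact name isn't
# reproduced here, so inspect resp.headers to locate it.
print(resp.text[:300])                   # start of the Markdown body
```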

Security concern. SEO consultant David McSweeney said Cloudflare’s Markdown for Agents feature could make AI cloaking trivial because the Accept: text/markdown header is forwarded to origin servers, effectively signaling that the request is from an AI agent.

  • A standard request returns normal content, while a Markdown request can trigger a different HTML response that Cloudflare then converts and delivers to the AI, McSweeney showed on LinkedIn.
  • The concern: sites could inject hidden instructions, altered product data, or other machine-only content, creating a “shadow web” for bots unless the header is stripped before reaching the origin.
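
To make the mechanism concrete, here is a hypothetical sketch of the pattern McSweeney describes, an origin server branching on the forwarded header (illustration of the risk only, not a recommendation):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical origin behind Cloudflare. Because Accept: text/markdown is
# forwarded, the origin can infer an AI agent is asking and serve different
# HTML, which Cloudflare would then convert at the edge. Illustration only.
class Origin(BaseHTTPRequestHandler):
    def do_GET(self):
        is_agent = "text/markdown" in self.headers.get("Accept", "")
        body = (b"<html><body><p>Machine-only copy.</p></body></html>"
                if is_agent else
                b"<html><body><p>What humans see.</p></body></html>")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), Origin).serve_forever()
```

Stripping or normalizing the Accept header before it reaches the origin would close exactly this gap.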

Google and Bing’s markdown smackdown. Recent comments from Google and Microsoft representatives discourage publishers from creating separate markdown pages for large language models. Google’s John Mueller said:

  • “In my POV, LLMs have trained on – read & parsed – normal web pages since the beginning, it seems a given that they have no problems dealing with HTML. Why would they want to see a page that no user sees? And, if they check for equivalence, why not use HTML?”

And Microsoft’s Fabrice Canel said:

  • “Really want to double crawl load? We’ll crawl anyway to check similarity. Non-user versions (crawlable AJAX and like) are often neglected, broken. Humans eyes help fixing people and bot-viewed content. We like Schema in pages. AI makes us great at understanding web pages. Less is more in SEO !”
  • Cloudflare’s feature doesn’t create a second URL. However, it generates different representations based on request headers.

The case against markdown. Technical SEO consultant Jono Alderson said that once a machine-specific representation exists, platforms must decide whether to trust it, verify it against the human-facing version, or ignore it:

  • “When you flatten a page into markdown, you don’t just remove clutter. You remove judgment, and you remove context.”
  • “The moment you publish a machine-only representation of a page, you’ve created a second candidate version of reality. It doesn’t matter if you promise it’s generated from the same source or swear that it’s ‘the same content’. From the outside, a system now sees two representations and has to decide which one actually reflects the page.”

Dig deeper. Why LLM-only pages aren’t the answer to AI search

Why we care. Cloudflare’s move could make AI ingestion cheaper and cleaner. But could it be considered cloaking if you’re serving different content to humans and crawlers? To be continued…

Google Ads adds ROAS-based tool for valuing new customers

13 February 2026 at 22:08

Google Ads is rolling out a feature that lets advertisers calculate conversion value for new customers based on a target return on ad spend (ROAS), automatically generating a suggested value instead of relying on manual estimates.

The update is designed for campaigns using new customer acquisition goals, where advertisers want to bid more aggressively to attract first-time buyers.

How it works. Advertisers enter their desired ROAS target for new customers, and Google Ads proposes a conversion value aligned with that goal. The system removes some of the guesswork involved in estimating how much a new customer should be worth in bidding models.

The feature doesn’t yet adjust dynamically at the auction, campaign, or product level. Advertisers still apply the value at a broader level rather than letting the system vary bids based on context.
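
Google hasn’t published the exact formula, but the arithmetic of target ROAS bidding suggests why the value matters: the system aims for conversion value divided by cost to equal the target, so the value assigned to a new customer sets how much extra it can pay to win one. A hedged sketch:

```python
# Illustrative only: Google hasn't published the exact calculation.
# tROAS bidding aims for (conversion value / cost) = target ROAS, so the
# extra value assigned to a new customer implies extra spend headroom.
target_roas = 4.0            # 400% target ROAS for new-customer conversions
new_customer_value = 200.0   # suggested extra conversion value (hypothetical)

extra_allowable_spend = new_customer_value / target_roas
print(f"Extra spend headroom per new customer: ${extra_allowable_spend:.2f}")
# -> Extra spend headroom per new customer: $50.00
```

In other words, a higher new-customer value tells the bidder it may pay proportionally more to acquire first-time buyers while still hitting the target.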

Why we care. Assigning the right value to a new customer is a weak spot in performance bidding. Many advertisers manually set a flat value that doesn’t always reflect profitability or long-term goals.

By tying suggested conversion values to a target ROAS, advertisers can now optimize toward more strategy-driven bidding, potentially improving how acquisition campaigns balance growth and efficiency.

What advertisers are saying. Early reactions suggest the feature is a meaningful improvement over static manual inputs. Savvy Revenue founder Andrew Lolk argues the next step would be auction-level intelligence that adjusts values depending on campaign or product performance.

What to watch. If Google expands the feature to support more granular adjustments, it could further reshape how advertisers structure acquisition strategies and value lifetime customer growth.

For now, the tool offers a more structured way to calculate new customer value.

First seen. This update was first spotted by Andrew Lolk, who showed the new setting on LinkedIn.

SEO leaders: stop chasing rankings, start building visibility systems

13 February 2026 at 19:00
AI search is forcing SEO to become organizational infrastructure

SEO is moving out of the marketing silo into organizational design. Visibility now depends on how information is structured, validated, and aligned across the business.

When information is fragmented or contradictory, visibility becomes unstable. The risk isn’t just ranking volatility – it’s losing control of how your brand is interpreted and cited.

For SEO leaders, the choice is unavoidable: remain a channel optimizer or shape the systems that govern how your organization is understood and cited. That shift isn’t happening in a vacuum. AI systems now interpret, reconcile, and assemble information at scale.

The visibility shift beyond rankings

The future of organic search will be shaped by LLMs alongside traditional algorithms. Optimizing for rankings alone is no longer enough. Brands must optimize for how they are interpreted, cited, and synthesized across AI systems.

Clicks may fluctuate and traffic patterns may shift, but the larger change is this: visibility is becoming an interpretation problem, not just a positioning problem. AI systems assemble answers from structured data, brand narratives, third-party mentions, and product signals. When those inputs conflict, inconsistency becomes the output.

In the AI era, collaboration can’t be informal or personality-driven. LLMs reflect the clarity, consistency, and structure of the information they ingest. When messaging, entity signals, or product data are fragmented, visibility fragments with them.

This is a leadership challenge. Visibility can’t be achieved in a silo. It requires redesigning the systems that govern how information is created, validated, and distributed across the organization. That’s how visibility becomes structural, not situational.

If visibility is structural, it needs a system.

Building the visibility supply chain

Collaboration shouldn’t depend on whether the SEO manager and PR manager get along. It must be built into the content supply chain.

To move from a marketing silo to an operational design, we must treat content like an industrial product that requires specific refinement before it’s released into the ecosystem.

This is where visibility gates come in: a series of nonnegotiable checkpoints that filter brand data for machine consumption.

Implementing visibility gates

Think of your content moving through a high-pressure pipe. At each joint, a gate filters out noise and ensures the output is pure:

  • The technical gate (parsing)
    • The filter: Does the new product page template use valid schema.org markup (product, FAQ, review)? A minimal automated check is sketched after this list.
    • The goal: Ensuring the raw material is structured so LLMs can ingest the data without friction.
  • The brand signal gate (clustering)
    • The filter: Does the PR copy align with our core entities? Are we using terminology that helps LLMs cluster our brand correctly?
    • The goal: Removing linguistic drift that confuses an LLM’s understanding of who we are.
  • The accessibility/readability gate (chunking)
    • The filter: Is the content structured for RAG (retrieval-augmented generation) systems?
    • The goal: Moving away from fluff and towards high-information-density prose that can be easily chunked and retrieved by an AI.
  • The authority and de-duplication gate (governance)
    • The filter: Does this asset create “knowledge cannibalization” or internal noise?
    • The goal: Acting as a final sieve to remove conflicting information, ensuring the LLM sees only one single source of truth.
  • The localization gate (verification)
    • The filter: Is the entity information consistent across global regions?
    • The goal: Ensuring cross-referenced data points align perfectly to build model trust.
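
As promised above, here is a minimal sketch of an automated technical-gate check, assuming JSON-LD markup and a hypothetical page URL; a production gate would use a full structured-data validator:

```python
import json
import re
import urllib.request

# Minimal technical-gate check: fetch a page (hypothetical URL) and confirm
# it declares the schema.org types the gate requires via JSON-LD. A
# production gate would use a full structured-data validator instead.
REQUIRED_TYPES = {"Product", "FAQPage", "Review"}

def jsonld_types(url):
    html = urllib.request.urlopen(url).read().decode("utf-8", "ignore")
    blocks = re.findall(
        r'<script[^>]*application/ld\+json[^>]*>(.*?)</script>',
        html, re.DOTALL | re.IGNORECASE,
    )
    found = set()
    for block in blocks:
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue  # malformed JSON-LD should fail the gate too
        items = data if isinstance(data, list) else [data]
        for item in items:
            if not isinstance(item, dict):
                continue
            t = item.get("@type")
            if isinstance(t, list):
                found.update(t)
            elif t:
                found.add(t)
    return found

types = jsonld_types("https://example.com/new-product-page")
print("Gate passes:", bool(types & REQUIRED_TYPES), "| types found:", types)
```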

If gates protect what enters the ecosystem, accountability ensures that behavior changes.

Embedding visibility into cross-functional OKRs

But alignment without visibility into results won’t sustain change.

The most sophisticated infrastructure will fail if it relies on the SEO team’s influence alone.

To move beyond polite collaboration, visibility must be codified into the organization’s performance DNA.

We need to shift from SEO-specific goals to shared visibility OKRs.

When a product owner is measured on the machine-readability of a new feature, or a PR lead is incentivized by entity citation growth, SEO requirements suddenly migrate from the bottom of the backlog to the top of the sprint.

What shared OKRs look like in an operational design:

  • For product teams: “Achieve 100% schema validation and <100ms time-to-first-byte for all top-tier entity pages.”
  • For PR and communications: “Increase ‘brand-as-a-source’ citations in LLM responses by 15% through high-authority, entity-aligned placements.”
  • For content teams: “Ensure 90% of new assets meet the ‘high information density’ threshold for RAG retrieval.”
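
As a sketch of how a product team might self-monitor the first OKR above, here is a rough probe (hypothetical URLs; production monitoring belongs in RUM or synthetic tooling):

```python
import time
import urllib.request

# Rough probe for the product-team OKR above. urlopen() returns once the
# status line and headers arrive, so elapsed time approximates TTFB (plus
# connection setup). URLs are hypothetical; use RUM/synthetic tools in prod.
def ttfb_ms(url):
    start = time.perf_counter()
    urllib.request.urlopen(url).close()
    return (time.perf_counter() - start) * 1000

for url in ["https://example.com/entity/widget-pro"]:
    ms = ttfb_ms(url)
    print(f"{url}: {ms:.0f} ms {'PASS' if ms < 100 else 'FAIL'}")
```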

When stakeholders’ KPIs are tied to the brand’s digital footprint, visibility is no longer “the SEO team’s job.” Instead, it becomes a collective business imperative. 

This is where the magic happens: the organizational structure finally aligns with the way modern search engines actually work.

Measuring visibility across the organization

The gates ensure the quality of what we put into the digital ecosystem; the unified visibility dashboard measures what we get out. Breaking down silos starts with transparent data.

If the PR team can see which mentions drive AI citations and source links in AI Overviews, they’re more likely to shift toward high-authority, contextually relevant publications instead of chasing any media outlet.

We need to shift from reporting rankings to reporting entity health and Share of Model (SoM). This dashboard is the organization’s single source of truth, showing that when we pass the visibility gates correctly, our brand authority grows with humans and machines.

Systems and incentives matter, but they don’t operate on their own.

Dig deeper: Why most SEO failures are organizational, not technical

Hiring for AI-era visibility

Having the right infrastructure isn’t enough. We need a specific set of qualities in the workforce to drive this model. To navigate the visibility transformation, we need to move away from hiring generalists and start hiring for the two distinct pillars of an operational search strategy.

In my experience, this requires a strategic duo: the hacker and the convincer.

| Feature | The hacker (technical architect) | The convincer (visibility advocate) |
| --- | --- | --- |
| Core mission | Ensuring the brand is discoverable by machines. | Ensuring the brand is supported by humans. |
| Primary domain | RAG architecture, schema, vector databases, and LLM testing. | Cross-departmental OKRs, C-suite buy-in, and PR/brand alignment. |
| Success metric | Share of model (SoM) and information density. | Resource allocation and budget growth. |
| The gate focus | Technical, accessibility, and authority gates. | Brand signal and localization gates. |

The hacker: The engine room

Deeply technical, driven, and a relentless early adopter. They don’t just “do SEO.” They reverse-engineer how Perplexity attributes trust and how Google’s knowledge vault weighs brand entities. 

They find the “how.” They aren’t just optimizing for a search bar, but are optimizing for agentic discovery, ensuring your brand is the path of least resistance for an LLM’s reasoning engine.

The convincer: The social butterfly of data

This is the visionary who brings people together and talks the language of business results. They act as the social glue, ensuring the hacker’s technical insights are actually implemented by the brand, tech, and PR teams. They translate schema validation into executive visibility, ensuring that the budget flows where it’s needed most.


How AI visibility reshapes in-house and agency roles

As roles evolve, the brand-agency relationship shifts with them. If you’re an in-house SEO manager today, you’re likely evolving into a chief visibility officer, focusing on the “convincer” role of internal politics and resource allocation.

Historically, agencies were the training ground for talent, and brands hired them for execution. That dynamic may flip. In this new era, brands could become training grounds for junior specialists who need to understand a single entity deeply and manage its internal gates. 

Meanwhile, agencies may evolve into elite strategic partners staffed by seasoned visibility hackers who help brands navigate high-level visibility transformation that in-house teams are often too siloed or time-constrained to see.

Dig deeper: Why governance maturity is a competitive advantage for SEO

Leading the transition in the first 90 days

To prepare your team for the shift to SEO as an operational approach, take these steps:

  • Set the vision: Do you want to be part of the change? Define what visibility-first looks like for your business.
  • Take stock of talent: Do you have hackers and convincers? Audit your team not just for skills, but for mindset.
  • Audit the gaps: Where does communication break down? Find friction points between SEO and PR, or SEO and product, and fix them quickly.
  • Shift the KPIs: Move away from rankings and toward channel authority, impressions, sentiment share, and, most importantly, revenue and leads.
  • Be radically transparent: Clarity is key. You’ll need new templates, job descriptions, and responsibilities. Data should be shared in real time. There’s no room for siloed thinking.

What the first 90 days should look like:

  • Days 1-30 (Audit): Map your brand’s entity footprint. Where does your brand data live, and where is it conflicting?
  • Days 31-60 (Infrastructure): Embed visibility gates into your CMS or project management tool, such as Jira or Asana.
  • Days 61-90 (Incentives): Tie 10% of the PR and product teams’ bonuses to information integrity or AI citation growth.

The SEO leader as a systems architect

As we move further into the age of AI, the successful SEO leader will no longer be the person who simply moves a page from position four to position one. They’ll be the systems architect who builds the infrastructure that allows a brand to be seen, understood, and recommended by machines and humans alike.

This transition is messy. It requires challenging old thought patterns and communicating transparently and directly to secure buy-in. But by redesigning the structures that create silos, we don’t just “do SEO.” We build a resilient organization that is visible by default, regardless of what the next algorithm or LLM brings.

The future of search isn’t just about keywords. It’s about how your organization’s information flows through the digital ecosystem. It’s time to stop optimizing pages and start optimizing organizations.

Dig deeper: AI governance in SEO: Balancing automation and oversight

Why creative, not bidding, is limiting PPC performance

13 February 2026 at 18:00

For a long time, PPC performance conversations inside agencies have centered on bidding – manual versus automated, Target CPA versus Maximize Conversions, incrementality debates, budget pacing and efficiency thresholds.

But in 2026, that focus is increasingly misplaced. Across Google Ads, Meta Ads, and other major platforms, bidding has largely been solved by automation. 

What’s now holding performance back in most accounts isn’t how bids are set, but the quality, volume, and diversity of creative being fed into those systems. Recent platform updates, particularly Meta’s Andromeda system, make this shift impossible to ignore.

Bidding has been commoditized by automation

Most advertisers today are using broadly similar bidding frameworks.

Google Smart Bidding uses real-time signals across device, location, behavior, and intent that humans can’t practically manage at scale. Meta’s delivery system works in much the same way, optimizing toward predicted outcomes rather than static audience definitions.

In practice, this means most advertisers are now competing with broadly the same optimization engines.

Google has been clear that Smart Bidding evaluates millions of contextual signals per auction to optimize toward conversion outcomes. Meta has likewise stated that its ad system prioritizes predicted action rates and ad quality over manual bid manipulation.

The implication is simple. If most advertisers are using the same optimization engines, bidding is no longer a sustainable competitive advantage. It’s table stakes.

What differentiates performance now is what you give those algorithms to work with – and the most influential input is creative.

Andromeda makes creative a delivery gate

Meta’s Andromeda update is the clearest evidence yet that creative is no longer just a performance lever. It’s now a delivery prerequisite. This matters because it changes what gets shown, not just what performs best once shown.

Meta published a technical deep dive explaining Andromeda, its next-generation ads retrieval and ranking system, which fundamentally changes how ads are selected.

Instead of evaluating every eligible ad equally, Meta now filters and ranks ads earlier in the process using AI models trained heavily on creative signals, improving ad quality by more than 8% while increasing retrieval efficiency.

What this means in practice is critical for marketers. Ads that don’t generate strong engagement signals may never meaningfully enter the auction, regardless of targeting, budget, or bid strategy.

If your creative doesn’t perform, the platform doesn’t just charge you more. It limits your reach altogether.

Dig deeper: Inside Meta’s AI-driven advertising system: How Andromeda and GEM work together

Creative is now the primary optimization input on Meta

Meta has repeatedly stated that creative quality is one of the strongest drivers of auction outcomes.

In its own advertiser guidance, Meta highlights creative as a core factor in delivery efficiency and cost control. Independent analysis has reached the same conclusion.

A widely cited Meta-partnered study showed that campaigns using a higher volume of creative variants saw a 34% reduction in cost per acquisition, despite lower impression volume.

The reason is straightforward. More creative gives the system more signals. More signals improve matching. Better matching improves outcomes.

Andromeda accelerates this effect by learning faster and filtering harder. This is why many advertisers are experiencing plateaus even with stable bidding and budgets. Their creative inputs are not keeping pace with the system’s learning requirements.

Google Ads is quietly making the same shift

While Google has not branded its changes as dramatically as Meta, the direction is the same. Performance Max, Demand Gen, Responsive Search Ads, and YouTube Shorts all rely heavily on creative assets to unlock inventory.

Google has explicitly stated that asset quality and diversity influence campaign performance. Accounts with limited creative assets consistently underperform those with strong asset coverage, even when bidding strategies and budgets are otherwise identical.

Google has reinforced this by introducing creative-focused tools such as Asset Studio and Performance Max experiments that allow advertisers to test creative variants directly. As with Meta, the algorithm can only optimize what it is given.

Strong creative expands reach and efficiency. Weak creative constrains both.

Dig deeper: A quiet Google Ads setting could change your creative

The plateau problem agencies keep hitting

Many agencies are seeing the same pattern across accounts. Performance improves after structural fixes or bidding changes. Then it flattens.

Scaling spend leads to diminishing returns. The instinct is often to revisit bids or efficiency targets. But in most cases, the real constraint is creative fatigue.

Audiences have seen the same hooks, visuals, and messages too many times. Engagement drops. Estimated action rates fall. Delivery becomes more expensive.

This isn’t a platform issue. It’s a creative cadence issue. Creative testing is the missing optimization lever in mature accounts.

The agency bottleneck: Creative production

Most agencies are structurally set up to optimize bids, budgets, and structure faster than they can produce new creative.

Creative takes time. It requires strategy, copy, design, video, approvals, and iteration. Many retainers still treat creative as a one-off or an add-on rather than a core performance input. The result is predictable. Accounts are technically sound but creatively starved.

If your account has had the same core ads running for three months or more, performance is almost certainly being limited by creative volume, not optimization skill.

High-performing accounts today look messy on the surface with dozens of ads, multiple hooks, frequent refreshes, and constant testing. That isn’t inefficiency. That’s how modern PPC works.

Creative testing is a process, not a campaign

One of the biggest mistakes agencies make is treating creative testing as episodic. Launch new ads. Wait four weeks. Review results. Declare winners and losers. That approach is too slow for how fast platforms learn and audiences fatigue.

High-performing teams treat creative like a product roadmap. There’s always something new in development. Always something learning. Always something being retired.

Effective creative testing focuses on one variable at a time: hook, opening line, visual style, offer framing, social proof, or call to action.

It’s not about finding “the best ad.” It’s about building a library of messages the algorithm can deploy to the right people at the right time.

Dig deeper: Your ads are dying: How to spot and stop creative fatigue before it tanks performance

What agencies should do differently

Once you accept that creative is the constraint, the operational implications are unavoidable. If creative is the main constraint, agency processes need to change.

Creative should be planned alongside media, not after it. Retainers should include ongoing creative production, not just optimization time. Testing frameworks should be explicit and documented.

At a minimum, agencies should be asking:

  • How often are we refreshing creative by platform?
  • Are we testing new hooks or just new designs?
  • Do we have enough volume for the algorithm to learn?
  • Are we feeding performance insights back into creative strategy?

The best agencies now operate closer to content studios than optimization factories. That’s where the value is.

Creative is the performance lever

Bidding, tracking, and structure still matter. But in 2026, those are table stakes.

If your PPC performance is stuck, the answer is rarely another bidding tweak. It’s almost always better creative. More of it. Faster iteration. Smarter testing.

The platforms have told us this. The data supports it. The accounts prove it.

Creative is no longer a nice-to-have. It’s the performance lever. The agencies that recognize that will be the ones that continue to grow.

Dig deeper: Cross-platform, not copy-paste: Smarter Meta, TikTok, and Pinterest ad creative

How to optimize news content for today’s social-first Google SERP

13 February 2026 at 17:00

We’re in a new era where web content visibility is fragmenting across a wide range of search and social platforms.

While still a dominant force, Google is no longer the default search experience. Video-based social media platforms like TikTok and community-based sites like Reddit are becoming popular search engines with dedicated audiences. 

This trend is impacting how news content is consumed. Google’s current news SERP evolution is directly influenced by the personalization of query responses offered by LLMs and the rise in influencer authority enabled by social media platforms. 

Google has responded by creating its own AI-powered SERP features, such as AI Overviews and AI Mode, and surfacing more content from social media platforms that provide the “helpful, reliable, people-first content” that Google’s ranking systems prioritize.

Now that search and social are more intertwined than ever, a new paradigm is needed – one in which newsroom audience teams made up of social media, SEO, and AI specialists work holistically on a daily basis toward a cohesive content visibility goal. 

When optimizing news content for social platforms, publishers should also consider how those posts may perform in the Google SERP. I’ll cover optimizing for specific SERP features below, but first, you’ll want to think about making your news content social-friendly.

Optimize news content for social media platforms

First, a dose of sanity. Publishers should resist the temptation to optimize content for every social media platform.

It’s better to pick one or two social platforms – where an audience is already established and that offer the best opportunity for growth – than to create accounts on every social platform and let them languish.

Review analytics and conduct audience surveys to gain insights into which platforms your audience already consumes news content. 

Here’s a breakdown by platform of which content types work best and how content from each platform can appear on Google.

YouTube

If you’re producing YouTube video content, make sure to follow video SEO best practices. This comprehensive YouTube SEO guide will help you develop a successful video strategy and ensure video titles align with your content.

Per Google, YouTube’s search ranking system prioritizes three elements: 

  • Relevance: Metadata needs to accurately represent video content to be surfaced as relevant for a search query.
  • Engagement: Includes factors such as a video’s watch time for a specific user query.
  • Quality: Video content should show topic expertise, authoritativeness, and trustworthiness.

One trend I’ve noticed in YouTube videos on the Google SERP is that older event content can continue to drive visibility long after the event has ended and well after the related article has faded in search rankings.

Explainer videos also demonstrate longevity on the Google SERP. In this government shutdown explainer video, Yahoo Finance includes the expert’s credentials in the description box, further emphasizing the topic expertise element that YouTube’s ranking system prioritizes. 

YouTube can also help your visibility in AI Overviews. Nearly 30% of Google AI Overviews cite YouTube, according to BrightEdge. YouTube was cited most often for tutorials, reviews, and shopping-related queries.

Dig deeper: YouTube is no longer optional for SEO in the age of AI Overviews

Facebook

While Facebook may not be the cool kid on the block anymore, the social platform has served a diverse set of users over its long history, from its initial audience of college kids to now attracting an older, majority-female audience, per Pew Research Center data.

Community-based content and entertainment news that sparks conversation is key to engagement success on Facebook. 

Meta removed Facebook’s dedicated news tab in 2023-2024, cratering Facebook referrals for news publishers. Even so, Facebook posts have been rising in Google SERP visibility over the last year, so it may be time to reconsider the platform from a search perspective.

In my review of Google search visibility, Facebook posts about holidays and the full moon appear consistently, and the short-form video format is popular. 

X

Since Elon Musk took over the platform in 2022, the audience has shifted to the political right. While the left’s exodus made headlines, usage of X for news is stable or increasing, especially in the U.S., according to the 2025 Digital News Report from the Reuters Institute. 

Breaking news, live updates, and political news dominate X feeds and Google visibility, but don’t overlook sports content, where X posts perform well on both the Google SERPs and Discover. 

Instagram

This platform emphasizes stylish, visually driven stories and topics, such as red-carpet fashion at award shows. Health topics, especially nutrition and self-care, are also popular. 

Sports posts from Instagram, especially game highlights, often surface on the Google SERP as part of a dedicated publisher carousel or in “What people are saying.” 

Reddit

A unique aspect of Reddit is that its user base is often not on other social platforms. For news publishers, this can mean a golden opportunity for niche community engagement, but also requires a dedicated strategy that may not translate well to other platforms.

A wide range of news content can perform well on Reddit, from trending topics to health explainers to live sports coverage, but having a deep understanding of the platform’s audience is critical, as is following the Reddit rules of conduct.

Publishers should spend time studying the types of news articles and conversations that drive strong engagement on subreddits before posting anything. Per Reddit, the platform’s largest audiences gravitate toward the following topics:

  • Technology.
  • Health.
  • Direct to consumer (DTC).
  • Gaming.
  • Parenting.

The community discussion forum content from Reddit makes it a natural fit for the “What people are saying” carousel on the Google SERP. The Reddit posts I see most often surfaced by Google are related to sports, entertainment, and business.

Dig deeper: A smarter Reddit strategy for organic and AI search visibility

TikTok

The TikTok user base leans female and has a greater share of people of color. Approximately half of 18- to 29-year-olds in the U.S. self-report going on TikTok at least once daily, per Pew Research data.

Visual, conversational, and opinion-based content for younger audiences performs best on TikTok. Niche community content also works well; think fashion, #BookTok, etc.

Remember that short-form video requires a dedicated strategy to maximize engagement and reach, and it’s important to keep in mind that TikTok audiences value authenticity over the polish of a professional newsroom production.

Entertainment and shopping content (sales, product reviews) are the categories in which TikTok demonstrates the most Google visibility.

Pinterest

While Pinterest may feel like an old-school social platform, Gen Z is its fastest-growing audience. That being said, Pinterest attracts users from across a wide range of age groups. According to Pinterest’s global data, its audience is 70% women and 30% men.

Don’t overlook the power of Pinterest for lifestyle content niches. Trends around fashion, home decor, DIY, crafts, recipes, and celebrity content are top performers on this visual social platform. 

News publishers interested in this platform should have robust lifestyle content that is actionable and delivered with a motivational tone.

How-to and before/after formats are popular. Excellent quality visuals in a vertical format with a 2:3 aspect ratio and text overlays are recommended. Pinterest supports a more relaxed posting schedule compared to other social platforms. Weekly posting is ideal, since much of the content on Pinterest is evergreen.

Similar to Google Trends, Pinterest Trends can help news publishers stay on top of trending topics on the platform. 

Social content opportunities by Google SERP feature

If you’re looking to appear in a particular SERP feature, it’s helpful to know how social platform content appears in each type.

Top Stories (or News Box)

The crown jewel of the Google SERP for news publishers, this feature is dedicated to breaking news and developing news stories as well as capturing updates for the big news stories and trends of the moment.

Thumbnail selection is critical for Top Stories. Publishers should pay close attention to the News Box descriptive labels to ensure content is optimized to match the specific intent or angle Google is seeking.

While historically a SERP feature that showcased traditional news publishers, Google is now including relevant social media content in the mix. The Instagram post in Top Stories below is an Instagram Reel from the Detroit Free Press.

Top stories - 2026 Detroit Auto Show

Live update articles are often featured in the News Box and are a great format to embed social media posts.

Embedded posts help break up walls of text and serve as a showcase for a news publisher’s live, original reporting from the scene, eyewitness accounts, and related social content that demonstrates subject expertise.

What people are saying

This Google SERP feature is ideal for capturing audience reaction and user-generated content from a variety of social platforms. Short-form video is often featured in this space.

It’s a showcase for any story or topic that drives emotional engagement, including reactions to everything from a celebrity death to a sporting event outcome to a viral trend. Severe weather is also a recurring topic.


Knowledge Panel

There’s a growing interest in this Google SERP feature among news publishers, especially those that produce entertainment content.

Depending on the configuration, publishers have the opportunity to earn a ranking for an image, social post, or article, such as a celebrity biography.

While content opportunities are limited in the Knowledge Panel, they offer more exclusivity, which can increase CTR. YouTube and Instagram are commonly cited here, but X and TikTok have also been growing in visibility.

Google knowledge panel - Tom Holland

Google Discover

This social-search hybrid product, which features trending, emotionally engaging content based on a user’s web and app activity, requires a separate optimization strategy.

The keys to Discover visibility are identifying topics that spark curiosity and ensuring articles are formatted for frictionless consumption. 

Discover has been considered a “black box” when it comes to content optimization, but there are several basic elements to implement that can increase visibility.

Viral hits may spike a news publisher’s Discover performance temporarily, but as Harry Clarkson-Bennett outlines, publishers need to analyze their Discover performance over time at the entity level to build a smart optimization strategy.

Google’s official Discover optimization tips discourage clickbait practices that actually work quite well on the platform, such as salacious quotes in headlines and content about controversial topics and strong opinion perspectives.

I would never recommend a publisher produce clickbait, but for tabloid publishers, content with a strong, contentious perspective overperforms on Discover, regardless of the official Google guidance.

Headlines and images require serious consideration. While Google is running an experiment in which its AI tool rewrites headlines for Discover, direct, action-oriented, and emotion-driven headlines traditionally perform best. There’s no specific character count recommendation, but at a certain point (typically 100+ characters), the headline will be truncated with an ellipsis.

Images must be formatted to Discover specifications (at least 1,200 pixels wide) and should be eye-catching to make people stop and click. Keep articles short or include a summary box at the top of longer articles. Format articles for scanability.

This Forbes X post featured on my Discover feed nails the elements essential for inclusion.

Politics, sports, and entertainment topics that favor an opinion-driven perspective can drive strong engagement on Discover. For YMYL (Your Money Your Life) content, which can also perform well on Discover, focus on accuracy and expert sources, and lean into the curiosity gap.

YouTube and X are the dominant social platforms featured on Discover, according to a Marfeel study.

This was further confirmed by Clara Soteras, who shared insights from Andy Almeida of Google’s Trust and Safety team as presented at Google Search Central Live in Zurich in December 2025.

Almeida noted that Discover’s algorithm has been updated to “include content from YouTube, Instagram, TikTok, or X published by content creators.”

Threat or opportunity?

Instead of feeling dismayed by the increased competition from social media platform content appearing on Google’s SERPs and Discover, news publishers should welcome the additional opportunities for their content to be seen.

In a social and AI-powered search landscape, brand visibility is the key metric. Whether that visibility comes from a news publisher article, video, or social post, it still counts toward brand engagement.

While search strategies have long focused on algorithms, optimizing content for a social-forward SERP requires a different focus. The merging of social and search will spark a holistic audience team revolution in newsrooms, reduce redundant practices, and inspire a content strategy powered by people over algorithms.

Before yesterday · Search Engine Land

The real story behind the 53% drop in SaaS AI traffic

12 February 2026 at 22:30

As the SaaS market reels from a sell-off sparked by autonomous AI agents like Claude Cowork, new data shows a 53% drop in AI-driven discovery sessions. Wall Street dubbed it the “SaaSpocalypse.”

Whether AI agents will replace SaaS products is a bigger question than this dataset can answer. But the panic is already distorting interpretation, and this data cuts through the noise to show what SEO teams should actually watch.

Copilot went from 0.3% to 9.6% of SaaS AI traffic in 14 months

From November 2024 to December 2025, SaaS sites logged 774,331 LLM sessions. ChatGPT drove 82.3% of that traffic, but Copilot’s growth tells a different story:

SaaS AI Traffic by Source (Nov 2024 – Dec 2025)

| Source | Sessions | Share |
| --- | --- | --- |
| ChatGPT | 637,551 | 82.3% |
| Copilot | 74,625 | 9.6% |
| Claude | 40,363 | 5.2% |
| Gemini | 15,759 | 2.0% |
| Perplexity | 6,033 | 0.8% |

Starting with just 148 sessions in late 2024, Copilot grew more than 20x by May 2025. From May through December, it averaged 3,822 sessions per month, making it the second-largest AI referrer to SaaS sites by year-end 2025.

Investors erased $300 billion from SaaS market caps over fears that AI agents will replace enterprise software. But this data points to a less dramatic force: proximity.

Copilot thrives because it captures intent inside the workflow. Standalone tools saw a 53% traffic drop while workplace-embedded AI grew 20x.

Software evaluation is work, and Copilot sits where that work happens.

When someone asks, “What CRM should we use for a 20-person sales team?” while building a business case in Excel, that moment is captured—one ChatGPT never sees. The May surge reflects that activation: Microsoft 365 users realizing they could research software without opening a new tab.

41.4% of SaaS AI traffic lands on internal search pages

SaaS AI discovery sends users to internal search results first, not product pages.

Top SaaS Landing Pages by LLM Volume

| Page Type | LLM Sessions | % of AI Traffic | Penetration vs Site Avg |
| --- | --- | --- | --- |
| Search | 320,615 | 41.4% | 8.7x |
| Blog | 127,291 | 16.4% | 8.1x |
| Pricing | 40,503 | 5.2% | 3.2x |
| Product | 39,864 | 5.1% | 2.0x |
| Support | 34,599 | 4.5% | 2.1x |

Despite capturing 320,615 sessions — more than blog, pricing, and product pages combined — this dominance likely reflects LLM limitations, not superior content. LLMs route users to search when they lack a specific answer.

For SaaS companies watching their stock crater, that’s useful news: there’s a concrete technical fix. The 41.4% isn’t an existential threat. It’s a crawlability problem.

When an LLM can’t find a direct answer, it defaults to the site’s internal search. The AI treats your search bar as a trusted backup, assuming the search schema will generate a relevant page even if a specific product page isn’t indexed.

At 1.22%, search page penetration is 8.7x the site average. The cause is a “safety net” effect, not optimization.

When more specific pages — like Product or Pricing — lack the data an LLM needs, it falls back to broader search results. LLMs recognize the search URL structure and trust it will return something relevant, even if they can’t predict what.

Blog pages follow with 127,291 sessions and 1.13% penetration. These are structured comparison posts — “best CRM for small teams” or “Salesforce alternatives” — that LLMs cite when they have specific recommendations.

Pricing pages show 0.45% penetration; product pages, 0.28%. When users ask about software selection, LLMs route to comparison surfaces — search and blog — first. Direct product or pricing pages get cited only when the query is already vendor-specific.

The July peak and Q4 decline reflect corporate work cycles

SaaS AI traffic peaked in July at 146,512 sessions, then trended downward through Q4:

| Month | Sessions | Change |
| --- | --- | --- |
| July 2025 | 146,512 | Peak |
| August 2025 | 120,802 | -17.5% |
| September 2025 | 134,162 | +11.1% |
| October 2025 | 135,397 | +0.9% |
| November 2025 | 107,257 | -20.8% |
| December 2025 | 68,896 | -35.8% |

Every platform declined. ChatGPT’s volume fell by more than half, dropping from 127,510 sessions in July to 56,786 by year-end. Copilot fell from 4,737 to 2,351. Perplexity dropped from 7,475 to 3,752.

Two factors drove the slide:

  • People weren’t working. August is vacation season, November includes Thanksgiving, and December is the holidays. Software research happens during work hours; when offices close, discovery drops.
  • Q4 ends the fiscal “buying window.” Most teams have spent their annual budgets or are deferring contracts until Q1 funding opens. Even teams still working aren’t evaluating tools because there’s no budget left until the new fiscal year.

The July peak reflects midyear momentum: people are working, and Q3 budgets are still available. The Q4 decline reflects both fewer researchers and fewer active buying cycles.

This is where the sell-off narrative breaks down.

Investors treat a 53% traffic drop as proof that AI discovery is stalling. But the data aligns with standard B2B fiscal cycles.

AI isn’t failing as a discovery channel. It’s settling into the same seasonal rhythms as every other B2B buying behavior.

What this data means for SEO teams

Raw traffic numbers don’t show where to invest. Penetration rates and landing page distribution reveal what matters.

Track penetration by page type, not site-wide averages

SaaS shows 0.41% sitewide AI penetration, but that average hides concentration. Search pages reach 1.22%—8.7x higher. Blog pages hit 1.13%. Pricing pages are at 0.45%. Product pages lag at 0.28%.

If you’re only tracking total AI sessions, you’re measuring the wrong metric. AI traffic could grow 50% while penetration on high-value pages declines. Volume hides what matters: where AI users concentrate when they arrive with intent.

Action:

  • Segment AI traffic by page type in GA4 or your analytics platform.
  • Track penetration (AI sessions ÷ total sessions) by page category monthly, as sketched after this list.
  • Identify pages with elevated concentration, then optimize those surfaces first.
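
To make the arithmetic concrete, here is a minimal sketch of the penetration calculation, assuming a hypothetical CSV export with page_type, ai_sessions, and total_sessions columns (your GA4 or BI export will name these differently):

```python
# Minimal sketch: compute AI penetration by page category from an
# analytics export. Column and file names are assumptions -- adjust
# them to match your own GA4 / BI export.
import pandas as pd

df = pd.read_csv("sessions_by_page_type.csv")  # hypothetical export

# Penetration = AI sessions / total sessions, per page category.
df["penetration"] = df["ai_sessions"] / df["total_sessions"]

# Concentration relative to the sitewide average, mirroring the
# "8.7x site average" framing used above.
site_avg = df["ai_sessions"].sum() / df["total_sessions"].sum()
df["vs_site_avg"] = df["penetration"] / site_avg

print(df.sort_values("penetration", ascending=False)
        [["page_type", "penetration", "vs_site_avg"]])
```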

Search results pages are now a primary discovery surface

Internal search captures 41.4% of SaaS AI traffic. If those results aren’t crawlable, indexable, or structured for comparison, you’re invisible to the largest segment of AI-driven buyers.

Most SaaS sites treat internal search as navigation, not content. Results return paginated lists with minimal product detail, no filter signals in URLs, and JavaScript-rendered content LLMs can’t parse.

Action:

  • With 41.4% of traffic hitting internal search, treat your search bar as an API for AI agents.
  • Make search pages crawlable (check robots.txt and indexability).
  • Add structured data using SoftwareApplication or Product schema (see the sketch after this list).
  • Surface comparison data — pricing, key features, user count — directly in results, not just product names.
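
As a sketch of the structured-data item, the snippet below emits SoftwareApplication JSON-LD for one product row in a search results template. Every field value is a placeholder; map them to real catalog data before shipping anything:

```python
# Minimal sketch: build SoftwareApplication JSON-LD for a product row
# surfaced in internal search results. All values are placeholders.
import json

product = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "ExampleCRM",  # hypothetical product name
    "applicationCategory": "BusinessApplication",
    "operatingSystem": "Web",
    "offers": {
        "@type": "Offer",
        "price": "49.00",  # placeholder pricing
        "priceCurrency": "USD",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "ratingCount": "1024",
    },
}

# Embed the result in the search results template as a JSON-LD script tag.
print(f'<script type="application/ld+json">{json.dumps(product)}</script>')
```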

Make your data legible to LLMs — pricing and content both

The sell-off is pricing in obsolescence, but for most SaaS companies the real risk is invisibility. Pricing pages show 0.45% AI penetration—below the 0.46% cross-industry average. Blog pages captured 127,291 sessions at 1.13% penetration, but only when content directly answered selection queries. The pattern is clear: LLMs cite what they can read and parse. They skip what they can’t.

Many SaaS sites still gate pricing behind contact forms. If pricing requires a sales conversation, AI won’t recommend you for “tools under $100/month” queries. The same applies to blog content. When someone asks, “What CRM should I use?” the LLM looks for posts that compare options, define criteria, and explain tradeoffs. Generic thought leadership on CRM trends doesn’t get cited.

Action:

  • Publish pricing on a dedicated, crawlable page. Include representative examples, seat minimums, contract terms, and exclusions.
  • Keep pricing transparent. Transparent pages get cited; gated pages don’t.
  • Replace generic blog posts with structured comparison pages. Use tables and clear data points.
  • Remove fluff. Provide grounding data that lets AI verify compliance and integration capabilities in seconds, not minutes.

Workplace-embedded AI is growing 10x faster than standalone LLMs

Copilot grew 15.89x year over year. Claude grew 7.79x. ChatGPT grew 1.42x. The fastest growth is in tools embedded in existing workflows.

Workplace AI shifts discovery context. In ChatGPT, users are explicitly researching. In Copilot, they’re asking questions mid-task—drafting a proposal, building a comparison spreadsheet, or reviewing vendor options with their team.

Action:

  • Track Copilot and Claude referrals separately from ChatGPT, and monitor which pages these sources favor (see the sketch after this list).
  • Recognize intent: these users aren’t browsing — they’re mid-task, deeper in evaluation, and closer to a purchase decision.
  • Show up in workplace AI discovery to support real-time purchase justification.
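
A minimal sketch of that segmentation, assuming you can read referrer domains out of your analytics or server logs (the domain list is an assumption; verify it against what actually shows up in your own referral data):

```python
# Minimal sketch: bucket referral traffic by AI platform using the
# referrer domain. The domain list is an assumption -- check your logs.
AI_SOURCES = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "copilot.microsoft.com": "Copilot",
    "claude.ai": "Claude",
    "www.perplexity.ai": "Perplexity",
}

def classify_referrer(referrer_domain: str) -> str:
    """Map a referrer domain to an AI platform, else 'Other'."""
    return AI_SOURCES.get(referrer_domain.lower(), "Other")

print(classify_referrer("copilot.microsoft.com"))  # -> Copilot
```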

Survival favors the findable

The 53% drop from July to December reflects AI usage settling into the software buying process. Buyers are learning which decisions benefit from AI synthesis and which don’t. The remaining traffic is more deliberate, concentrated on complex evaluations where comparison matters.

For SaaS companies, the window for early positioning is closing. The $300 billion sell-off is hitting the sector broadly, but the companies that survive the repricing will be those buyers can find when they ask an AI agent, “Should we renew this contract?”

Teams investing now in transparent pricing, crawlable data, and comparison-focused content are building that findability while competitors debate whether AI discovery matters.

If SEO is rocket science, AI SEO is astrophysics

12 February 2026 at 19:00

In Google AI Overviews and LLM-driven retrieval, credibility isn’t enough. Content must be structured, reinforced, and clear enough for machines to evaluate and reuse confidently.

Many SEO strategies still optimize for recognition. But AI systems prioritize utility. If your authority can’t be located, verified, and extracted within a semantic system, it won’t shape retrieval.

This article explains how authority works in AI search, why familiar SEO practices fall short, and what it takes to build entity strength that drives visibility.

Why traditional authority signals worked – until they didn’t

For years, SEOs liked to believe that “doing E-E-A-T” would make sites authoritative.

Author bios were optimized, credentials showcased, outbound links added, and About pages polished, all in hopes that those signals would translate into authority.

In practice, we all knew what actually moved the needle: links.

E-E-A-T never really replaced external validation. Authority was still conferred primarily through links and third-party references.

E-E-A-T helped sites appear coherent as entities, while links supplied the real gravitas behind the scenes. That arrangement worked as long as authority could be vague and still rewarded.

It stops working when systems need to use authority, not just acknowledge it. In AI-driven retrieval, being recognized as authoritative isn’t enough. Authority still has to be specific, independently reinforced, and machine-verifiable, or it doesn’t get used.

Being authoritative but not used is like being “paid” with experience. It doesn’t pay the bills.

How AI systems calculate authority

Search no longer operates on a flat plane of keywords and pages. AI-driven systems rely on a multi-dimensional semantic space that models entities, relationships, and topical proximity.

In that semantic space, entities function much like celestial bodies in physical space: discrete objects whose influence is defined by mass, distance, and interaction with others.

E-E-A-T still matters, but the framework version is no longer a differentiator. Authority is now evaluated in a broader context that can’t be optimized with a handful of on-page tasks.

In AI Overviews, ChatGPT, Claude, and similar systems, visibility doesn’t hinge on prestige or brand recognition. Those are symptoms of entity strength, not its source.

What matters is whether a model can locate your entity within its semantic environment and whether that entity has accumulated enough mass to exert influence.

That mass isn’t decorative. It’s built through third-party citations, mentions, and corroboration, then made machine-legible through consistent authorship, structure, and explicit entity relationships.

Models don’t trust authority. They calculate it by measuring how densely and consistently an entity is reinforced across the broader corpus.

Smaller brands don’t need to shine like legacy publishers. In a semantic system, apparent size and visibility don’t determine influence. Density does.

In astrophysics, some planets appear enormous yet exert surprisingly weak gravity because their mass is spread thinly. Others are much smaller, but dense enough to exert stronger pull.

AI visibility works the same way. What matters isn’t how large your brand appears to humans, but how concentrated and reinforced your authority is in machine-readable form.

Dig deeper: From SEO to algorithmic education: The roadmap for long-term brand authority

The E-E-A-T misinterpretation problem

The problem with E-E-A-T was never the concept itself. It was the assumption that trustworthiness could be meaningfully demonstrated in isolation, primarily through signals a site applied to itself.

Over time, E-E-A-T became operationalized as visible, on-page indicators: author bios, credentials, About pages, and lightweight citations.

These signals were easy to implement and easy to audit, which made them attractive. They created the appearance of rigor, even when they did little to change how authority was actually conferred.

That compromise held when search systems were willing to infer authority from proxies. It breaks down in AI-driven retrieval, where authority must be explicitly reinforced, independently corroborated, and machine-verifiable to carry weight.

Surface-level trust markers don’t fail because models ignore them. They fail because they don’t supply the external reinforcement required to give an entity real mass.

In a semantic system, entities gain influence through repeated confirmation across the broader corpus. On-site signals can help make an entity legible, but they don’t generate density on their own. Compliance isn’t comprehension, and E-E-A-T as a checklist doesn’t create gravitational pull.

In human-centered search, these visible trust cues acted as reasonable stand-ins. In LLM retrieval, they don’t translate. Models aren’t evaluating presentation or intent. They’re evaluating semantic consistency, entity alignment, and whether claims can be cross-verified elsewhere.

E-E-A-T isn’t outdated. It’s incomplete. It explains why humans might trust you.

Applying E-E-A-T principles only within your own site won’t create the mass that machines need to recognize, align with, and prioritize your entity in a retrieval system.

AI doesn’t trust, it calculates

Human trust is emotional. Machine trust is statistical.

In practice:

  • LLMs prioritize clarity. Ambiguous writing reduces confidence.
  • They reward clean extraction. Lists, tables, and focused paragraphs are easiest to reuse.
  • They cross-verify facts. Redundant, consistent statements across multiple sources appear more reliable than a single sprawling narrative.

Retrieval models evaluate confidence, not charisma. Structural decisions such as headings, paragraph boundaries, markup, and lists directly affect how accurately a model can map content to a query.

This is why ChatGPT and AI Overview citations often come from unfamiliar brands.

It’s also why brand-specific queries behave differently. When a query explicitly names a brand or entity, the model isn’t navigating the galaxy broadly. It’s plotting a short, precise trajectory to a known body. 

With intent tightly constrained and only one plausible source of truth, there’s far less risk of drifting toward adjacent entities.

In those cases, the system can rely directly on the entity’s own content because the destination is already fixed. The models aren’t “discovering” hidden experts. They’re rewarding content whose structure reduces uncertainty.

The semantic galaxy: How entities behave like bodies

LLMs don’t experience topics, entities, or websites. They model relationships between representations in a high-dimensional semantic space.

That’s why AI retrieval is better understood as plotting a course through a system of interacting gravitational bodies rather than “finding” an answer. Influence comes from mass, not intention.

In the embedding-based retrieval paradigm formalized by Karpukhin et al. in their 2020 EMNLP paper on dense passage retrieval, entities behave like bodies in space.

Over time, citations, mentions, and third-party reinforcement increase an entity’s semantic mass. Each independent reference adds weight, making that entity increasingly difficult for the system to ignore.

Queries move through this space as vectors shaped by intent. As they pass near sufficiently massive entities, they bend. The strongest entities exert the greatest gravitational pull, not because they are trusted in a human sense, but because they are repeatedly reinforced across the broader corpus.

Extractability doesn’t create that gravity. It determines what happens after attraction occurs. An entity can be massive enough to warp trajectories and still be unusable if its signals aren’t machine-legible, like a planet with enough gravity to draw a spacecraft in but no viable way to land.

Authority, in this context, isn’t belief. It’s gravity, the cumulative pull created by repeated, independent reinforcement across the wider semantic system.

Entity strength vs. extractability

Classic SEO emphasized backlinks and brand reputation. AI search rewards entity strength for discovery, but demands clarity and semantic extractability for inclusion.

Entity strength – your connections across the Knowledge Graph, Wikidata, and trusted domains – still matters and arguably matters more now. Unfortunately, no amount of entity strength helps if your content isn’t machine-parsable.

Consider two sites featuring recognized experts:

  • One uses clean headings, explicit definitions, and consistent links to verified profiles.
  • The other buries its expertise inside dense, unstructured paragraphs.

Only one will earn citations.

LLMs need:

  • One entity per paragraph or section.
  • Explicit, unambiguous mentions.
  • Repetition that reinforces relationships (“Dr. Jane Smith, cardiologist at XYZ Clinic”).

Precision makes authority extractable. Extractability determines whether existing gravitational pull can be acted on once attraction has occurred, not whether that pull exists in the first place.

Structure like you mean it: Abstract first, then detail

LLM retrieval is constrained by context windows and truncation limits, as outlined by Lewis et al. in their 2020 NeurIPS paper on retrieval-augmented generation. Models rarely process or reuse long-form content in its entirety.

If you want to be cited, you can’t bury the lede.

LLMs read the beginning, but then they skim. After a certain number of tokens, they truncate. Basically, if your core insight is buried in paragraph 12, it’s invisible.

To optimize for retrieval:

  • Open with a paragraph that functions as its own TL;DR.
  • State your stance, the core insight, and what follows.
  • Expand below the fold with depth and nuance.

Don’t save your best material for the finale. Neither users nor models will reach it.

Dig deeper: Organizing content for AI search: A 3-level framework

Stop ‘linking out,’ start citing like a researcher

The difference between a citation and a link isn’t subtle, but it’s routinely misunderstood. Part of that confusion comes from how E-E-A-T was operationalized in practice.

In many traditional E-E-A-T playbooks, adding outbound links became a checkbox, a visible, easy-to-execute task that stood in for the harder work of substantiating claims. Over time, “cite sources” quietly degraded into “link out a few times.”

A bad citation looks like this:

A generic outbound link to a blog post or company homepage offered as vague “support,” often with language like “according to industry experts” or “SEO best practices say.”

The source may be tangentially related, self-promotional, or simply restating opinion, but it does nothing to reinforce your entity’s factual position in the broader semantic system.

A good citation behaves more like academic referencing. It points to:

  • Primary research.
  • Original reporting.
  • Standards bodies.
  • Widely recognized authorities in that domain.

It’s also tied directly to a specific claim in your content. The model can independently verify the statement, cross-reference it elsewhere, and reinforce the association.

The point was never to just “link out.” The point was to cite sources.

Engineering retrieval authority without falling back into a checklist

The patterns below aren’t tasks to complete or boxes to tick. They describe the recurring structural signals that, over time, allow an entity to accumulate mass and express gravity across systems.

This is where many SEOs slip back into old habits. Once you say “E-E-A-T isn’t a checklist,” the instinct is to immediately ask, “Okay, so what’s the checklist?”

But engineering retrieval authority isn’t a list of tasks. It’s a way of structuring your entire semantic footprint so your entity gains mass in the galaxy the models navigate.

Authority isn’t something you sprinkle into content. It’s something you construct systematically across everything tied to your entity.

  • Make authorship machine-legible: Use consistent naming. Link to canonical profiles. Add author and sameAs schema (see the sketch after this list). Inconsistent bylines fragment your entity mass.
  • Strengthen your internal entity web: Use descriptive anchor text. Connect related topics the way a knowledge graph would. Strong internal linking increases gravitational coherence.
  • Write with semantic clarity: One idea per paragraph. Minimize rhetorical detours. LLMs reward explicitness, not flourish.
  • Use schema and llms.txt as amplifiers: They don’t create authority. They expose it.
  • Audit your “invisible” content: If critical information is hidden in pop-ups, accordions, or rendered outside the DOM, the model can’t see it. Invisible authority is no authority.
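
As a sketch of the first item, here is what machine-legible authorship might look like as Person JSON-LD with sameAs links, reusing the “Dr. Jane Smith, cardiologist at XYZ Clinic” example from earlier (all names and URLs are placeholders):

```python
# Minimal sketch: machine-legible authorship as Person JSON-LD with
# sameAs links to canonical profiles. Names and URLs are placeholders.
import json

author = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Smith",  # hypothetical author
    "jobTitle": "Cardiologist",
    "affiliation": {"@type": "Organization", "name": "XYZ Clinic"},
    "sameAs": [
        "https://www.linkedin.com/in/janesmith",  # placeholder profile URLs
        "https://scholar.google.com/citations?user=janesmith",
    ],
}

print(f'<script type="application/ld+json">{json.dumps(author)}</script>')
```

Keep the same byline string everywhere this author appears; the schema amplifies consistency, it doesn’t substitute for it.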

From rocket science to astrophysics

E-E-A-T taught us to signal trust to humans. AI search demands more: understanding the forces that determine how information is pulled into view.

Rocket science gets something into orbit. Astrophysics navigates and understands the systems it moves through once there.

Traditional SEO focused on launching pages—optimizing, publishing, promoting. AI SEO is about mass, gravity, and interaction: how often your entity is cited, corroborated, and reinforced across the broader semantic system, and how strongly that accumulated mass influences retrieval.

The brands that win won’t shine brightest or claim authority loudest, nor will they be no-name sites simulating credibility with artificial corroboration and junk links.

They’ll be entities that are dense, coherent, and repeatedly confirmed by independent sources—entities with enough gravity to bend queries toward them.

In an AI-driven search landscape, authority isn’t declared. It’s built, reinforced, and made impossible for machines to ignore.

Dig deeper: User-first E-E-A-T: What actually drives SEO and GEO

How social discovery shapes AI search visibility in beauty

12 February 2026 at 18:00

AI search visibility in beauty is increasingly shaped before a prompt is ever entered.

Brands that appear in generative answers are often those already discussed, validated, and reinforced across social platforms. By the time a user turns to AI search, much of the groundwork has been laid.

Using the beauty category as a lens, this article examines how social discovery influences brand visibility – and why AI search ultimately reflects those signals.

Discovery didn’t move to AI – it fragmented

Brand discovery has fragmented across platforms. AI tools influence mid-funnel consideration, but much discovery happens before a user enters a prompt.

The signals that determine AI visibility are formed upstream. By the time a user reaches generative search, preferences and perceptions may already be set. If brands wait until AI search to influence demand, the window to shape consideration has narrowed.

That upstream influence is increasingly social. Roughly two-thirds of U.S. consumers now use social platforms as search engines, per eMarketer research. 

This shift extends beyond Gen Z and reflects how people validate information and discover brands. These same platforms consistently appear among the top citation sources in AI results. The dynamic is especially visible in the beauty category.

In a study our agency conducted with a beauty brand partner, we found that Reddit, YouTube, and Facebook ranked among the top cited domains in both AI Overviews and ChatGPT.

[Image: Stella beauty prompt study]

While Reddit is often viewed as an anti-brand environment, YouTube appears nearly as frequently in citation data, making it a logical and underutilized target for citation optimization.

Dig deeper: Social and UGC: The trust engines powering search everywhere

The volume reality: Social behavior still outpaces AI

It’s easy to focus on headline figures around AI usage, including the billions of prompts processed daily. But when measured against business outcomes such as traffic and transactions, the scale looks different.

Social platforms are already embedded in mainstream search behavior. For many users, search-like activity on platforms such as TikTok and YouTube is habitual. Nearly 40% of TikTok users search the platform multiple times per day, and 73% search at least once daily.

Referral data reinforces the contrast. ChatGPT referral traffic accounted for roughly 0.2% of total sessions in a 12-month analysis of 973 ecommerce sites, a University of Hamburg and Frankfurt School working paper found. In the same dataset, Google’s organic search traffic was approximately 200 times larger than organic LLM referrals.

AI search is growing and strategically important. But in terms of repeat behavior, measurable sessions, and downstream transactions, social platforms and traditional search continue to operate at a substantially larger scale.

The validation loop: Why AI needs social

The most critical contrarian point for 2026 is that optimizing for social is also optimizing for AI. Large language models are not primary sources of truth. They function as mirrors, reflecting the consensus formed through human conversations in the data they are trained on.

AI systems also demonstrate skepticism toward brand-owned properties. One study found that only 25% of sources cited in AI-generated answers were brand-managed websites.

At the same time, AI engines prioritize third-party validation. Up to 6.4% of citation links in AI responses originated from Reddit, an analysis by OtterlyAI found. This outpaces many traditional publishers.

There’s also a measurable relationship between sentiment and visibility. Research shows a moderate positive correlation between positive brand sentiment on social media and visibility in AI search results.

Dig deeper: The social-to-search halo effect: Why social content drives branded search

Video and expert authority shape AI visibility

Treating video as a “brand channel” or a social-first effort rather than a search surface is a strategic failure.

On platforms such as TikTok and YouTube, ranking signals are shaped by spoken language, on-screen text, and captions – signals AI crawlers increasingly use to “triangulate trust.”

In the beauty category, for example, ChatGPT accounts for about 4.3% of searches, while Google processes roughly 14 billion searches per day. However, for “how-to” and technique-based queries, consumers favor the detailed, personalized guidance of social-first video content.

At the same time, the beauty sector has fractured into two universes, according to Yotpo’s GEO for Beauty Brands analysis.

Science-backed brands such as Paula’s Choice and CeraVe dominate AI-generated results because they publish deep, structured educational content. Meanwhile, more traditional marketing-led brands are significantly less visible.

The phrase “dermatologist recommended” correlates with high visibility in AI results because large language models treat expert social proof as a primary ranking signal, according to the same report.

Breaking the high-production barrier: Creating content at scale

One of the biggest hurdles brands cite is budget. Many believe they need a Hollywood production crew to compete in video environments. That is a legacy mindset. 

In today’s environment, high-gloss production can be a deterrent. The current landscape rewards authenticity over polish. Consumers are looking for real people with real skin concerns, not highly filtered commercials.

Optimizing for video discovery doesn’t require filmmaking expertise. Brands can leverage internal talent without adding headcount.

  • Partner with creator platforms: Platforms such as Billow or Social Native allow brands to work with creators for as little as $500 per video. When mapped to a high-intent query, that investment can drive measurable search visibility outcomes.
  • Leverage social natives on staff: Often, the strongest asset is internal. Identify team members who are active on platforms such as TikTok and understand platform dynamics. Creating internal incentives or challenges to produce content can generate a steady stream of authentic assets while contributing to culture.
  • Make strategy the differentiator: A large following is not a prerequisite for visibility. In one case, a TikTok profile built from scratch with one part-time creator at $2,500 per month generated hundreds of thousands of views within 90 days. The focus was not on viral trends, but on meaningful transactional terms that drive revenue.

If a new profile can reach more than 100,000 views per video within three months on a limited budget, the barrier isn’t equipment. It’s clarity on the business case and disciplined execution.

Dig deeper: How to optimize video for AI-powered search

The new beauty SEO playbook for 2026

The data is clear. Brands can’t win the generative engine if they’re losing the social conversation.

AI models function as mirrors, reflecting web consensus. If real users on Reddit, YouTube, and TikTok aren’t discussing a brand, AI systems have little to surface.

If marketers wait until a user reaches a ChatGPT prompt to shape perception, the opportunity has already narrowed.

Discovery happens upstream. Validation occurs in the loop between social proof and algorithmic citation.

Translating this into action requires rethinking team structure and priorities:

  • Stop the silos: Your SEO and social teams shouldn’t speak different languages. Both must focus on search surfaces.
  • Prioritize the “why” before the “what”: Don’t just fix a technical tag. Build the business case for how social sentiment and expert validation drive market share.
  • Embrace scrappy execution: Whether through $500 creator partnerships or internal social-native talent, start building authentic assets now.

We’re witnessing a shift from algorithm-driven discovery to community-driven discovery.

It’s agile and multidisciplinary, and when executed well, it can meaningfully impact the bottom line.

Local SEO sprints: A 90-day plan for service businesses in 2026

12 February 2026 at 17:00

Local search remains one of the strongest drivers of consistent lead flow for service businesses.

Outdated SEO tactics are losing impact as Google’s algorithm updates reshape local visibility. Success now depends on disciplined tracking and consistent execution.

This 90-day sprint plan shows how to do both.

Why local visibility is more volatile in 2026

Many service businesses aren’t current on how local search has changed or how Google Maps now determines visibility. They have a Google Business Profile (GBP) and a website, yet the phone is quiet.

If a GBP isn’t visible, local prospects won’t find the business when they search for its services. That may sound obvious, but the rules behind that visibility have changed.

Much of that shift traces back to Google’s 2025 spam updates, which significantly cleaned up map results and tightened enforcement.

Review spam, keyword-stuffed business names, fake addresses, and profiles that don’t match real-world details are being filtered more aggressively. At the same time, Google is testing sponsored placements in the map pack, and AI-driven features are shaping how results appear.

The result? Volatility.

Rankings move even when nothing obvious has changed on the site. In public forums, business owners and SEOs regularly report drops in GBP impressions and map visibility. One thread doesn’t prove causation, but it reinforces a broader pattern: local search is less stable than many assume.

Shortcuts that once produced temporary lifts now carry long-term risk. Buying reviews, stuffing keywords into a business name, or stretching service areas beyond reality can lead to suspensions or lost visibility — often just as momentum begins to build.

That is why local SEO sprints matter.

Local performance isn’t driven by one-time actions. Reviews, content, citations, links, and customer experience signals build over time.

The businesses that win in 2026 aren’t chasing hacks. They execute consistently.

This 90-day sprint plan provides the structure to do exactly that.

Dig deeper: Why local SEO is thriving in the AI-first search era

3 lead levers that matter most for local search

If local visibility feels unstable, one of three core levers is usually weak. These levers form the foundation of any effective sprint plan and must work together.

Fix only one, and results will be inconsistent. Strengthen all three, and you create stability and sustained lead flow.

Lead lever | What it means | What it changes
Relevance | Google clearly understands your services and service area. | More map pack visibility.
Prominence | Reviews, links, mentions, and local trust signals. | More stability, more clicks.
Conversion | Your site and GBP make contacting you frictionless. | More leads from the same traffic.

Google evaluates local businesses across multiple signals, from proximity and service clarity to reputation and user behavior.

Durable relevance comes from real local authority – accurate categories, consistent citations, strong service pages, and steady review growth.

The 90-day sprint plan

Here’s a structured way to strengthen each of the three lead levers.

Sprint warm-up (Days 1-3): Establish your measurement baseline

If you don’t track from day one, local SEO becomes guesswork — and guesswork doesn’t generate consistent leads. Without clear attribution, you can’t fix what’s broken or scale what’s working.

When you begin working with a service business, start with attribution. Can you trace every call, form fill, and booking to its source? If not, optimization becomes trial and error.

Use the table below as a stop sign. If the core tracking elements aren’t in place, pause and fix them before moving forward.

Tracking checklist: Mark “yes” or “no.” This is your baseline.

Item | What “done” means | Yes / No | Notes
GA4 setup | GA4 installed and collecting data. | |
Search Console | Verified and connected. | |
GBP Insights | Baseline saved. | |
UTM on GBP link | UTM added in GBP website field (see example below). | |
Call tracking | Tracking number. Source known. | | CallRail is a solid option.
Form tracking | Form submit tracked. Source captured. | |
Booking tracking | Bookings tracked and attributed. | |
Weekly numbers | Weekly tracking routine set. | |
Monthly numbers | Monthly summary routine set. | |
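
For the UTM row, a tagged GBP website link might look like the example below (parameter values are illustrative; pick a naming convention once and keep it consistent):

https://www.example.com/?utm_source=google&utm_medium=organic&utm_campaign=gbp-listing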

Baseline snapshot: Complete the table below before making any changes. Save a monthly screenshot as a clear baseline as you run your 90-day sprint.

Metric | Last 7 days | Last 28 days
GBP calls | |
GBP website clicks | |
Form submissions | |
Booked jobs | |
GSC impressions | |
GSC clicks | |

Phase 1 (Days 4-10): Fix GBP fundamentals

Start by fixing issues with your GBP. It’s where Google gathers local signals and evaluates what your business offers. If your profile lacks clarity, even a strong website won’t compensate.

One basic element people often get wrong is the primary category. If you’re an HVAC contractor, your primary category should be “HVAC contractor,” not “Furnace repair service” or “Contractor.” Be exact.

Secondary categories should reflect allied services only. Many businesses add long lists of secondary categories, believing it will generate more calls. In reality, it can dilute relevance and weaken the primary category.

What about posts, geotagged images, inflated service areas, or keyword-stuffed business names? These tactics create activity, not impact.

GBP area | What to do | What to avoid
Primary category | Pick the closest match to your main money service. | Picking a vague category “because it ranks.”
Secondary categories | Only true supporting services. | Adding everything under the sun.
Services | Add real services you sell. | Made-up services to chase traffic.
Description | Keep it simple: service + areas + proof. | Keyword soup.
Photos | Real photos. Real jobs. | Stock images and fake “before and after” shots.

Address and service area reality

Don’t try to cover an entire metro area if you can’t serve it. Set service areas based on reality and Google’s rules. If you’re not compliant, your profile faces a higher risk of suspension and video verification.

If you’re a service area business, be conservative. Focus on the radius you can serve well. It’s better to rank and convert strongly within your true radius than to look “bigger” on paper and struggle to build real signals.

Dig deeper: The local SEO gatekeeper: How Google defines your entity

Phase 2 (Days 11-35): Build service and location pages

This is core relevance work. Your GBP can be perfect, but if your website is thin, you’ll struggle to hold positions long term.

Many businesses have only a homepage and a contact page, yet expect Google to understand everything about what they offer.

Google needs clear service pages, and so do customers. Each page should focus on one service and explain the process, benefits, and expectations in depth. These pages aren’t just for rankings—they answer questions, reduce hesitation, and drive calls.

Start with your highest-value pages:

  • Top 2-3 services you sell most.
  • Top 2-4 areas you truly serve within a two-hour drive.

Focus on your actual location and radius. That’s where you can build the right signals.

For example, if you’re a plumber in Mississauga, Ontario, and you create thin location pages for every city in the Greater Toronto Area, you may get impressions. But without real proof, real jobs, and real conversion strength, those pages rarely hold. You end up with a bloated site that’s hard to maintain and easy for Google to ignore.

What a money service page must include: This isn’t “SEO copy.” This is how you win calls.

Block | What to include
Pricing range | A range, not “call for quote.” Explain why your pricing differs.
Process | How you do the service, step by step.
Proof | Licenses. Accreditations. Awards. Local reviews.
FAQs | Real answers to real questions customers ask.
CTA | Call. Form. Booking. Make it easy for potential customers.

On pricing, don’t overthink it. You don’t need a perfect quote on the page — just a range and a reason for that range.

  • Why is your pricing different? 
  • What is included? 
  • What changes the price? 
  • What does “emergency” mean? 

These details turn tire-kicking visitors into qualified calls.

Location pages: Do them right or don’t do them at all

Copy-paste location pages are a common mistake. You can’t just swap the city name and call it a strategy.

Use this checklist to ensure each location page is unique and robust:

Location page element | What makes it real
Local proof | Photos, projects, and neighborhood references you actually serve.
Service fit | Only services you provide in that area.
Local FAQs | “Do you serve X?” “What’s the travel fee?” “Is same-day service available?”
Contact | Phone and booking paths that work on mobile.

A simple and effective internal linking structure

Internal links work like a map, for both site visitors and Google. If you leave pages disconnected, you waste the work you put into them. Check that:

  • Service pages link to relevant location pages.
  • Location pages link to top services.
  • Relevant blog posts link to money pages.

Phase 3 (Days 36-70): Strengthen reviews and local authority

Phase 3 is about cadence. Continuity beats bursts. At this point, many feel tempted to “go hard for two weeks” and then move on to something else. 

That’s the wrong pattern for reviews and trust signals. A steady flow is safer and more believable.

Reviews. Weekly. Forever.

Collect reviews every week, not all at once and then radio silence. Put into place practices that regularly solicit reviews from recent customers.

Also, make customers aware of what they can mention in reviews.

  • The service you provided.
  • Their location (neighborhood/city).

Joy Hawkins has published case studies on review recency and performance, and continues to reinforce the idea that fresh reviews matter. The bigger point: recency requires a complete, ongoing review strategy, not a one-time push.

Consider this review cadence plan:

Step | Frequency
Build list of satisfied customers | Weekly
Send SMS review ask | Weekly
Send email follow-up | Weekly
Respond to reviews | 2-3x weekly

Dig deeper: Want to win at local SEO? Focus on reviews and customer sentiment

NAP consistency and citations

Clear, consistent citations won’t fix a bad business. But they reduce confusion and strengthen local trust signals. The goal here is not “more listings.” The goal is “no contradictions.”

Your name, address, and phone number (NAP) should match across:

  • GBP.
  • Website.
  • Local citations.

Local links that make sense

Don’t buy backlinks. Build local authority that is real. What might this look like?

  • Your city’s Chamber of Commerce membership and listing.
  • Supplier and partner pages (real ones).
  • Sponsoring local teams and events.
  • Local causes.
  • PR-worthy local stories.
  • Partner pages built through real value.

Spammy link tactics might give your site a short boost. But they’re harmful in the long run.

Also, make certain that links are geographically sensible. If you’re a business in Canada, focus on links from Canada and not from random overseas sites. Relevance matters, and locality matters the most.

Phase 4 (Days 71-90): Scale what’s working and report results

By the end of Month 3, your GSC queries should start to look up. Higher impressions. Better clicks. 

If not, take a look at your pages that are in Positions 6-20. That’s where you’re getting impressions, but you’re not getting clicks.
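
If you want to pull that list systematically, here is a minimal sketch against a hypothetical Search Console performance export (column names are assumptions; match them to your actual export):

```python
# Minimal sketch: surface "striking distance" queries (positions 6-20)
# from a Search Console performance export. Column names are assumptions.
import pandas as pd

gsc = pd.read_csv("gsc_performance.csv")  # hypothetical export

striking = gsc[(gsc["position"] >= 6) & (gsc["position"] <= 20)]
striking = striking.sort_values("impressions", ascending=False)

# High impressions at mid positions = pages worth improving first.
print(striking[["query", "page", "impressions", "clicks", "position"]].head(20))
```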

This is where many businesses make mistakes. A big one is that they keep publishing new pages instead of improving pages that are already close to winning.

When you see queries and pages with Positions 6-20 in GSC

If you have pages that are ranking in these positions, here are some things you can fix to help them move up:

  • Update page titles to make certain they are relevant.
  • Add answers on those pages to the questions your customers usually ask.
  • Chunk the Q&A so that it’s easier for the crawler to scan.

This matches how people consume information today: fast, on mobile, and looking for direct answers.

Simple reporting dashboard

Here’s a simple dashboard to help you keep track of how you’re doing during the 90-day sprint and beyond. Use it consistently to track growth.

Metric | This month | Last month | Notes
Organic leads | | |
GBP calls | | |
New reviews | | |
New links | | |
Top queries growth (GSC) | | |

Dig deeper: GEO x local SEO: What it means for the future of discovery

Useful tools for the 90-day sprint

There are countless SEO tools available, but this sprint does not require a complex stack. Keep it simple and focused:

  • Tracking: GA4 and Google Search Console for performance data and attribution. Proof, not opinions.
  • Call tracking: CallRail to track GBP-driven calls and clarify lead sources.
  • Local grid tracking: Local Falcon or Whitespark to measure visibility by neighborhood.
  • Citations: BrightLocal Citation Builder and data aggregators, if needed, to ensure consistency.
  • Speed testing: PageSpeed Insights to benchmark and improve mobile performance.

An ongoing local SEO plan outperforms one-time optimization

Local SEO is no longer something you “set up” and revisit later. Rankings shift. Reviews age. Competitors publish new pages. Google adjusts the map pack. One-time optimization fades faster than most teams expect.

A 90-day sprint enforces consistency—tracking before changing anything, fixing core GBP issues, building real service pages, collecting reviews weekly, and improving pages already close to ranking instead of chasing new ones. The gains compound.

It also keeps you away from the shortcuts that create problems in the first place. No:

  • Keyword-stuffed business names.
  • Fake addresses.
  • Bought reviews.
  • Copy-paste location pages.
  • Random secondary categories.
  • Purchased backlinks.

Just as important, no operational gaps. If calls go unanswered or booking paths break, prospects move to the next listing. Over time, that lost engagement shows up in performance.

Local SEO in 2026 rewards businesses that operate like real businesses—clear, consistent, responsive. A 90-day sprint builds that rhythm. One-time optimization doesn’t.

Why video is the canonical source of truth for AI and your brand’s best defense

11 February 2026 at 22:00

The Wild West of web scraping is changing, due in large part to OpenAI’s deal with Disney. The deal allows OpenAI to train on high-fidelity, human-verified cinematic content – intended to combat AI slop fatigue. 

This is how most of us feel when dealing with AI slop. Video production by Impolite. 

This deal opens up new opportunities to reinforce your brand’s visibility and recall. AI models are hungry for high-quality data, and this shift turns video into an essential asset for your brand.

Here’s a breakdown of why video is the new source of truth for AI and how you can use it to protect your brand’s identity.

How AI brand drift happens

When a large language model’s training set lacks data on a specific brand, the LLM doesn’t admit that it doesn’t know. Instead, it interpolates, filling the gaps in your brand’s story. It makes guesses about your brand identity based on patterns from similar brands or general industry information. 

This interpolation can lead to brand drift. Here’s what it looks like when an AI model narrates an inaccurate version of your business.

Say you represent a SaaS company. A user asks ChatGPT about one of your product’s features. But the model doesn’t have information about that specific feature.

So, the model constructs elaborate setup instructions, pricing tiers, and integration requirements for the phantom feature.

This has surfaced for companies like Streamer.bot, where users regularly arrive with confidently wrong instructions generated by ChatGPT – forcing teams to correct misinformation that the product never published. 

[Screenshot: A Streamer.bot team member describes how AI-generated setup instructions regularly misrepresent product behavior, creating confusion and additional support burden.]

AI brand drift happens to local businesses, too. As one restaurant owner told Futurism, Google AI Overviews repeatedly shared false information about both specials and menu items.

To correct brand drift and prevent AI from distorting your brand message, your company must provide a canonical source of truth.

Video as a source of truth

By producing authoritative videos (e.g., a demo that explicitly clarifies pricing), you provide strong semantic information through the transcript and visual proof. The video becomes the canonical source of truth that makes things clear, overriding opinions from Reddit and other sources.

In contrast, a text file contains low entropy. A statement like “50% off” is identical whether it was written in 2015 or 2025. Text often lacks the timestamp of reality, making it easy for AI to manipulate or lose the context of the real world.

To fix this, you need a medium with more data packed into every second. A five-minute video at 60 frames per second contains 18,000 frames of visual evidence, a nuanced audio track, and a text transcript.

Video enables LLMs to capture non-verbal, high-fidelity cues, creating a validation layer that preserves the visual evidence often flattened or lost in written content.

Creative studios like Berlin-based Impolite specialize in high-production-value video that provides the chaotic, non-repetitive entropy that AI needs to verify. The studio’s work for global brands serves as the high-density data source that prevents brand drift.

For example, Karman’s “The Space That Makes Us Human” project is a masterclass in creating a canonical source of truth, using high-fidelity, expert-led video to anchor brand identity.

Dig deeper: How to optimize video for AI-powered search

Authenticity as a signal

As deepfakes proliferate, authenticity is shifting from a vague moral concept to a hard technical signal. Search engines and AI agents need a way to verify the provenance.

Is this video real? Is it from the brand it claims to be?

For AI models, real-world human footage is the ultimate high-trust data source. It provides physical evidence, such as a person speaking, a product in motion, or a specific location. In contrast, AI-generated video often lacks the chaotic, non-repetitive entropy of real-world light and physics. 

The Coalition for Content Provenance and Authenticity (C2PA) is developing a new provenance standard to verify authenticity. The organization, which includes members such as Google, Adobe, Microsoft, and OpenAI, provides the technical specifications that enable this data to be cryptographically verifiable.

At the same time, the Content Authenticity Initiative (CAI), spearheaded by Adobe, drives the adoption of open-source tools for digital transparency.

Together, the two organizations go beyond simple watermarking. They allow brands to sign videos the moment they begin recording, providing a signal that AI models can prioritize over unverified noise.

How media verification works: From lens to screen

Ever notice that tiny “CR” mark in the corner of certain media on LinkedIn? This label stands for content credentials. It appears on images and videos to indicate their origin and whether the creator used AI to produce or edit them. 

When you click or hover over the “CR” icon on a LinkedIn post, a sidebar or pop-up appears that shows:

  • The creator: The name of the person or organization that produced the media
  • The tools used: Which software (e.g., Adobe Photoshop) the creator used to edit or generate the media
  • AI disclosure: A specific note if the content was generated with AI
  • The process: A history of edits made to the file to ensure the image hasn’t been deceptively altered

Some creators are already looking to circumvent the icon and have shared tips for hiding the tag.

While some call it LinkedIn shaming, its presence signals authority. It’s also gaining traction. 

Google has begun integrating C2PA signals into search and ads to help enforce policies regarding misrepresentation and AI disclosure. The search giant has also updated its documentation to explain how C2PA metadata is handled in Google Images.

Dig deeper: The SEO shift you can’t ignore: Video is becoming source material

How verified media maintains its integrity

For content marketers, adopting C2PA is a defensive moat against misinformation and a proactive signal of quality.

If a bad actor deepfakes your CEO, the absence of your corporate cryptographic signature acts as a silent alarm. Platforms and AI agents will immediately detect that the content lacks a verified origin seal and de-prioritize it in favor of authenticated assets.

Here’s how it works in practice.

1. Capture: The hardware root of trust

Select Sony cameras use the brand’s camera authenticity solution to embed digital signatures in real time, with keys held in a secure hardware chipset. Sony also records 3D depth data alongside the C2PA manifest to verify that a real three-dimensional subject was filmed, not a 2D screen or projection.

Similarly, select Qualcomm products support a cryptographic seal that proves a photo’s authenticity. In addition, apps like Truepic and ProofMode can sign footage on standard devices.

2. Edit: The editorial ledger

C2PA-aware software, such as Adobe Premiere Pro, integrates content credentials. This allows brands to embed a manifest listing the creator, edits, and software.

Think of it as a content ledger. Content credentials act as a digital paper trail, logging every hand that touches the file:

  • When an editor exports a video, the software preserves the original camera signature and appends a manifest of every cut and color grade.
  • If generative AI tools are used, relevant frames are tagged as AI-generated, preserving the integrity of the remaining human-verified footage.

3. Verify: Tamper-proof evidence in action

If the content is altered outside of a C2PA-compliant tool, the cryptographic link is severed.

When an AI model performs an evidence-weighting calculation to decide which information to show a user, it will see this broken signature.

Dig deeper: How to dominate video-driven SERPs

The expert content workflow

Information overload is constant nowadays. Traditional gatekeepers are struggling because AI generates content faster than humans can verify it. Authenticity becomes scarce online as audiences increasingly strive to distinguish signal from noise.

From LLMs to search engines like Google, AI systems struggle with the same challenge. Verified subject matter experts (SMEs) are emerging as critical differentiators and as guarantors of credibility and pertinence.

An SME is a human anchor point of credibility for both humans and machines. When brands pair expertise with verifiable video documentation, they create something AI can’t replicate: authentic authority that audiences can see, hear, and trust.

Why expert video should be the source material 

Content repurposing engine

A video transcript of an expert explaining a complex topic often captures colloquial, nuanced details that polished, static blog posts miss. Here’s how to use expert-led videos as the starting point of your content flywheel: 

  • Text stream: Extract the transcript to create authoritative, long-form blogs, FAQs, and social captions. This provides the semantic foundation for text-based retrieval.
  • Visual stream: Pull high-quality frames for infographics and thumbnails. This provides visual proof that anchors the text.
  • Audio stream: Repurpose the audio for podcast distribution, capturing your expert’s tonal authority.
  • Discovery stream: Cut vertical TikTok and YouTube clips. These act as entry points that lead AI agents back to your canonical source.

By repurposing a single high-density video asset across these formats, you create a self-reinforcing loop of authority.

This increases the probability that an AI model will encounter and index your brand’s expertise in the format that the model prefers. For example, Gemini might index the video, while Perplexity might index the transcript.

It doesn’t have to be fancy, as this clip from Search with Sean shows.

What to look out for

Before you hit record, identify where your brand is most vulnerable to AI drift. To maximize the surface area for AI retrieval, work through the following:

  • Identify the gap: Where is AI hallucinating elements of your story? Find the topics where your brand voice is missing or being misrepresented by outdated Reddit posts or competitor noise.
  • Anchor with verified experts: Use real people with verifiable credentials. AI agents now cross-reference experts against LinkedIn data and professional knowledge graphs to weigh the authority of the content.
  • Preserve the nuance: Marketing and legal departments often strip it from blog posts, making them generic. Video preserves the colloquial, detailed explanations that signal true expertise. 

For a concrete example, see the walkthrough recorded with Semrush’s Brand Control Quadrant framework.

Dig deeper: The future of SEO content is video – here’s why

Context still beats compliance

With infinite, low-cost AI slop cropping up, it’s going to get harder and harder to fight deepfakes. But it’s harder for an AI to hallucinate a real physical event than a sentence.

The most valuable asset a brand owns is its verifiable expertise. By anchoring your brand in expert-led, multimodal video, you ensure that your identity remains consistent, protected, and prioritized.

A clear hierarchy of data is emerging: high-fidelity, cryptographically signed video is the premium currency. For every other brand, the mandate is simple: Record reality. If you don’t provide a signed, high-density video record of your business, the AI will hallucinate one for you.


Generative engine optimization (GEO): How to win AI mentions

11 February 2026 at 21:02
What is generative engine optimization (GEO)?

Generative engine optimization (GEO) is the practice of positioning your brand and content so that AI platforms like Google AI Overviews, ChatGPT, and Perplexity cite, recommend, or mention you when users search for answers.

If that sounds abstract, the results aren’t.

For Tally, a bootstrapped form builder tool, ChatGPT became the #1 referral source.

They’re not alone. Across industries, the shift is already measurable.

ChatGPT reaches over 800 million weekly users. Google’s Gemini app has surpassed 750 million monthly users. And AI Overviews are appearing in at least 16% of all searches (significantly higher for comparison and high-intent queries). 

The question isn’t whether AI is changing discovery. It’s whether your brand is showing up when it happens.

So GEO is real. But is it stable enough to invest in seriously?

That’s a fair question. 

When we tracked 2,500 prompts across Google AI Mode and ChatGPT through the Semrush AI Visibility Index, the first thing we noticed was volatility. 

Between 40 and 60% of cited sources change from month to month.

But underneath the variances, patterns emerged. 

The brands showing up consistently shared specific structural characteristics. Entity clarity, content extractability, and multi-platform presence made them easier for AI systems to find, trust, and reference.

In this guide, I’ll share what we’ve found about what GEO requires, how it differs from SEO, and the framework for increasing your visibility in AI-driven discovery.

What GEO looks like in practice

GEO helps your brand appear in AI-generated answers.

For example, when someone asks an AI tool “What is the best whey protein powder for a mom in her 50s,” the response typically evaluates brands and recommends options based on ingredients, reviews, and credibility signals.

If your content or brand is included in that response, it’s an example of GEO in action.

Getting there requires coordinated effort across several areas:

  • Content strategy: Publishing information that AI systems can discover, understand, and extract for answers
  • Brand presence: Establishing your authority across platforms where AI tools pull information (not just your website)
  • Technical optimization: Ensuring AI crawlers can access and process your content
  • Reputation building: Earning mentions and associations that signal credibility to AI systems

These activities overlap with traditional SEO, but the emphasis shifts.

How GEO differs from traditional SEO

GEO builds on the same SEO fundamentals you already use. But it shifts the focus from rankings and clicks to how your brand is mentioned and cited inside AI-generated answers.

Here’s a snapshot of some key differences between GEO and traditional SEO:

What changes | Traditional SEO | GEO
Primary goal | Rank in top search positions | Be referenced or mentioned in AI answers
Success metrics | Rankings, clicks, traffic | Citations, mentions, share of voice
How users find you | Click through to your site | AI includes you in generated responses
Key platforms | Google, Bing | Google AI Overviews and AI Mode, ChatGPT, Perplexity
How you optimize content | Title tags, keywords, site speed, content quality | Self-contained paragraphs, clear facts, structured data
How you build credibility | Backlinks, author credentials, reviews, domain authority | Positive mentions across trusted platforms and communities

Use this table to update your mental model. 

Traditional SEO fundamentals still matter. We’re just adapting how we apply them as AI systems change how people discover information.

Now, let’s break down what this means in practice.

What stays the same

The core principles behind effective SEO still apply to GEO.

You still need to publish high-quality, authoritative content for real users. Your site still needs to be technically accessible. You still need credible signals of trust and expertise. And you still need to understand user intent and deliver clear value.

AI systems tend to reference content that is authoritative, well-structured, and easy to interpret. Those are the same qualities that support strong SEO performance. 

If you already have a solid SEO foundation, GEO builds on it rather than replacing it.


Further reading: SEO vs. GEO, AEO, LLMO: What Marketers Need to Know


What changes

Where GEO diverges is in how that foundation is applied.

1. Where you need presence

Traditional SEO focuses primarily on your owned properties, i.e. your website and blog.

GEO benefits from strategic presence across platforms where AI tools discover information, including:

  • Reddit threads where your target audience asks questions
  • YouTube videos demonstrating your expertise
  • Industry publications that establish your authority
  • Review sites where customers discuss solutions
  • Social platforms where conversations happen

2. How you structure information

AI systems extract specific passages from your content to construct answers. They pull a paragraph here, a statistic there, and weave them together.

This changes how you need to structure information. 

When you’re explaining a concept, defining a term, or sharing data, that paragraph should ideally work on its own. AI systems often extract these substantive passages without the conversational setup around them. (We’ll cover the mechanics of how this works in the strategic framework later.)

You need clear headings to help AI identify which section answers which question.

Also, putting answers early in sections may make them easier for AI to find and extract.

Traditional SEO often rewards comprehensive coverage. GEO places more emphasis on content that’s easy to extract and reassemble. We’re still learning exactly how different AI systems prioritize structure, but clarity consistently helps.

3. What you measure

Traditional SEO metrics like rankings, clicks, and bounce rate tell part of the story.

GEO adds new measurements, like:

  • AI visibility score: A benchmark of how often and where your brand appears in AI-generated answers
  • Share of voice: Your visibility compared to competitors in AI responses
  • Sentiment: Whether mentions are positive, neutral, or negative
  • Context or prompt: What questions or topics trigger mentions of your brand

Together, these metrics help you understand not just whether you’re visible, but how your brand is being positioned inside AI-generated responses.

You need both traditional SEO metrics and AI visibility metrics to understand your full organic search presence in 2026.


Note: You can track these metrics using Semrush’s Enterprise AIO, which monitors your brand’s visibility across AI platforms like ChatGPT, Google AI Mode, and Perplexity. 

It provides granular tracking of mentions, sentiment, share of voice, and competitive benchmarking to help you optimize your AI visibility strategy.


5 principles for AI visibility: A strategic framework

An effective GEO strategy rests on five connected principles that work together to maximize your AI visibility.

(As AI systems evolve, specific patterns may shift, but these underlying principles provide a stable foundation.)

Each one addresses how AI systems discover, evaluate, and reference your brand.

Let’s look at them in detail.

1. SEO fundamentals are the foundation

SEO fundamentals still matter for GEO, but for a different reason than in traditional search.

In AI-driven discovery, these fundamentals still function as optimization levers, but they influence retrieval, interpretation, and attribution rather than rankings alone. 

They create the baseline conditions that allow AI systems to retrieve information, interpret it accurately, and attribute it to a source with confidence.

For instance, AI-generated answers are assembled from content that is accessible, readable, and attributable. 

When accessibility, readability, or clear attribution are weak, even strong content becomes harder for AI systems to surface or reference reliably.

This is why many sources cited by AI platforms share characteristics long associated with solid SEO foundations. 

The overlap exists because clarity and reliability still matter across discovery systems, even as the surfaces change.

Technical accessibility plays a role here. 

Content that cannot be consistently crawled, indexed, or rendered introduces uncertainty at the retrieval layer. 

Page performance has a similar effect. Slower or unstable experiences don’t block inclusion outright. But they reduce how dependable a source appears when answers are assembled.

JavaScript-heavy implementations highlight this dynamic. 

Many AI crawlers still struggle to consistently process client-side rendered content, which can make core information harder to extract or interpret. 

When that happens, AI systems have less certainty about using the content as a reference point.
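
One quick way to gauge your exposure is to check whether key facts exist in the raw HTML that a non-rendering crawler receives. Below is a minimal TypeScript sketch (assumes Node 18+ run as an ES module; the URL and phrases are invented placeholders):

    // Sanity check: does the raw HTML (what a non-rendering crawler sees)
    // contain your key facts, or do they only appear after JavaScript runs?
    // Assumes Node 18+ (built-in fetch), run as an ES module.
    const url = "https://example.com/product/trailside-backpack"; // placeholder URL
    const mustAppear = ["Trailside Laptop Backpack", "$89.00", "15.6"]; // placeholder facts

    const html = await (await fetch(url)).text(); // no JavaScript is executed here

    for (const phrase of mustAppear) {
      console.log(`${html.includes(phrase) ? "OK     " : "MISSING"}  ${phrase}`);
    }
    // Anything MISSING is invisible to crawlers that never render JS.

If a fact only appears after client-side rendering, some AI crawlers may never see it.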

But technical setup is only part of the equation.

AI systems also assess content quality and credibility. Information that reflects real experience, clear expertise, and identifiable authorship is easier to contextualize and trust. 

Signals associated with E-E-A-T (Experience, Expertise, Authoritativeness, and Trust) influence not just whether content is referenced, but how it is framed within an answer.

Taken together, these foundations explain why SEO still underpins GEO. Not as a ranking system, but as the infrastructure that makes AI visibility possible.


Further reading: A technical SEO blueprint for GEO: Optimize for AI-powered search


2. Entity clarity shapes AI understanding

Entities help AI systems understand and categorize information on the web. This includes distinguishing your brand from similar names, identifying what category you belong to, and understanding which topics you’re credible for.

AI systems don’t just read words. They interpret structure.

Before schema ever comes into play, they look for clear signals about:

  • What your brand is
  • What category it belongs to
  • What it offers
  • What it’s authoritative for

The most reliable way to provide those signals is through well-structured information.

If those signals are unclear or inconsistent, AI systems have less confidence when deciding whether and how to reference you.

Take monday.com as an example. When AI systems crawl websites and process information, they see “monday” mentioned in many different contexts. 

Clear, consistent descriptions across the site and supporting sources help AI understand that monday.com refers to project management software. Not the day of the week.

The same principle applies to category clarity. If you sell organic dog food, AI needs to categorize your brand under pet nutrition, not general groceries or pet accessories.

When someone asks “what’s the best grain-free dog food,” AI is more likely to consider brands it can clearly place in the correct category.

On a product page, it should be unambiguous what each element represents — the product name, the description, the price, the attributes, availability and variants.

That clarity needs to exist in the visible page content first. 

Schema markup can then mirror that structure in a machine-readable format (typically JSON-LD). And that same structured understanding should also be reflected in downstream systems, like your product feed submitted to Google Merchant Center.

In other words, the page structure, the schema markup, and the commerce feed should all describe the same thing in the same way.
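
As a hedged sketch of that alignment, here is what hypothetical Product JSON-LD could look like when it simply mirrors the visible page. All names and values are invented placeholders, not a prescribed template:

    // Hypothetical JSON-LD for a product page, built as a TypeScript object
    // and injected as a script tag. Every value is a placeholder and should
    // repeat exactly what the visible page already says.
    const productJsonLd = {
      "@context": "https://schema.org",
      "@type": "Product",
      name: "Trailside Laptop Backpack",
      description: "Water-resistant 15.6-inch laptop backpack for commuters.",
      brand: { "@type": "Brand", name: "ExampleBrand" },
      offers: {
        "@type": "Offer",
        price: "89.00",
        priceCurrency: "USD",
        availability: "https://schema.org/InStock",
      },
    };

    const script = document.createElement("script");
    script.type = "application/ld+json";
    script.textContent = JSON.stringify(productJsonLd);
    document.head.appendChild(script);

The same names, price, and availability should then flow unchanged into your commerce feed.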

The goal isn’t to “add schema.” The goal is to make your information logically structured so machines can consistently understand it across systems.

This is important because we don’t know how structured data is used inside large language models. Or how exactly schema influences training, retrieval, or real-time answer generation.

But we do know this: AI systems cross-reference signals from multiple sources and formats.

Your brand description on LinkedIn should align with what appears on your site. Profiles on Crunchbase, review platforms, or industry directories should reinforce the same category, positioning, and value proposition.

When these signals are consistent across sources, AI systems can categorize and reference your brand with greater confidence. When they conflict, confidence drops, and your brand is less likely to be mentioned.

This is why entity clarity isn’t just about a single markup tactic. It comes from designing your content and presence so machines can reliably understand who you are, what you offer, and where you belong wherever your brand appears.


Further reading: How Ecommerce Brands Actually Get Discovered In AI Search



Tip: You can check if your site has missing structured data that makes entity relationships unclear — along with other issues that could potentially be hurting your AI search visibility — using Semrush’s Site Audit.


3. Content must be easy to extract and reuse

If entity clarity determines whether AI systems consider your content at all, extractability determines which specific parts get pulled into AI-generated answers.

This principle operates at the retrieval layer.

AI systems don’t consume pages the way humans do. When generating answers, they retrieve specific passages from across the web and assemble them into a response.

Here’s how it works mechanically:

LLMs break content into chunks, convert those chunks into numerical representations (vectors), and retrieve the most relevant passages when assembling an answer.

Those retrieved chunks are then synthesized into a response — often without the surrounding context from your original page.
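
A toy sketch can make the chunk-embed-retrieve loop concrete. The bag-of-words “embedding” below is a deliberately crude stand-in for a real embedding model, and nothing here claims to mirror any specific AI system:

    // Toy retrieval pipeline: chunk a page, embed each chunk, then pull the
    // chunks closest to a query. The embedding is a crude bag-of-words hash.
    type Chunk = { text: string; vector: number[] };

    function embed(text: string, dims = 64): number[] {
      const v = new Array(dims).fill(0);
      for (const word of text.toLowerCase().match(/[a-z0-9']+/g) ?? []) {
        let h = 0;
        for (const ch of word) h = (h * 31 + ch.charCodeAt(0)) % dims;
        v[h] += 1; // each word bumps one dimension
      }
      return v;
    }

    function cosine(a: number[], b: number[]): number {
      let dot = 0, na = 0, nb = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na += a[i] ** 2;
        nb += b[i] ** 2;
      }
      return na && nb ? dot / Math.sqrt(na * nb) : 0;
    }

    // 1. Break a page into passage-level chunks (here: split on blank lines).
    function chunkPage(pageText: string): Chunk[] {
      return pageText.split(/\n\s*\n/).map((text) => ({ text, vector: embed(text) }));
    }

    // 2. At answer time, retrieve the k chunks closest to the query.
    function retrieve(query: string, index: Chunk[], k = 3): Chunk[] {
      const q = embed(query);
      return [...index]
        .sort((x, y) => cosine(q, y.vector) - cosine(q, x.vector))
        .slice(0, k);
    }
    // Each retrieved chunk is read in isolation, which is why passages that
    // depend on "as mentioned above" lose their meaning at this stage.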

This has practical implications. 

Based on what we’ve observed, passages that retain meaning when read in isolation are more likely to be retrieved and used accurately. Passages that rely on conversational setup or references like “as mentioned above” or “this is why” tend to lose clarity when extracted.

Now this may not apply to every paragraph on a page. 

But paragraphs that contain definitions, explanations, comparisons, or key facts should ideally stand on their own. These are the passages AI systems are most likely to extract without the surrounding narrative.

So what makes content extractable?

  • Self-contained paragraphs: Each paragraph expresses one complete idea that makes sense on its own, without vague references to surrounding text
  • Specific facts and statistics: Concrete numbers and clear statements are easier for AI to extract than vague generalizations
  • Clear, descriptive headings: Headings signal what each section covers, helping AI understand content organization
  • Front-loaded information: The main point appears at the start of paragraphs rather than at the end

One important distinction: This principle mainly applies to retrieval-augmented systems — like Google AI Mode and Perplexity with grounding, and ChatGPT with browsing enabled. These systems retrieve content in real time.

For base model knowledge (what the LLM learned during training), content structure is less important. That knowledge comes from training, not from retrieving per-query. Building presence in training data takes time and requires consistent, authoritative publishing.


Consider an example of self-contained content that AI systems can easily extract and reference (say, a short data passage on which sources AI platforms rely on for finance-related queries):

  • It answers a single, well-defined question: which sources AI platforms rely on for finance-related queries
  • The main takeaway is stated immediately, without setup
  • Supporting context (platforms, percentages, category) is included within the same frame
  • The insight makes sense on its own, even if quoted or summarized elsewhere

The same extractability principle shows up in everyday writing as well.

For example, compare these two ways of explaining the same cooking technique:

Hard to extract: “There are several reasons this method works. After trying it, most people find their eggplant tastes better. That’s why many chefs use it.”

Easy to extract: “Salting eggplant for 15 minutes before cooking removes bitterness and excess moisture. This technique improves the final texture.”

Both explain the same idea. But the second version states the technique, timing, benefit, and result clearly, which makes it easy for AI to extract as a standalone passage.

When content is structured this way, AI systems can reliably retrieve relevant passages and include them in answers. 

Over time, that increases the likelihood that your expertise is surfaced accurately when users ask questions related to your domain.

4. AI visibility extends beyond your website

AI systems don’t just pull from your website when building answers. They gather information from YouTube, Reddit, review sites, industry publications, social platforms, and more.

This creates two opportunities for visibility: 

Your owned presence

Owned presence is content you or your team create on platforms beyond your website.

  • Your YouTube channel showing product features gives AI video content to reference
  • Your company’s participation in relevant subreddit discussions shows expertise in action
  • Your executives’ LinkedIn newsletters establish thought leadership

Podcasts, webinars, conference presentations, and educational platforms provide additional long-form content AI systems can extract from.

These platforms often play an important role in AI discovery.

In fact, Reddit, LinkedIn, and YouTube were among the top cited sources by the top LLMs in October 2025.

When your brand creates valuable content on these platforms, you give AI systems more material to draw from.

But the key is creating substantive, helpful content that addresses real problems in your industry.

Earned mentions

Earned mentions are references to your brand that you don’t directly control.

  • Customer reviews on G2, Capterra, or Trustpilot describe real experiences with your product
  • Industry journalists mentioning your company in news articles provide third-party validation
  • Community discussions on Reddit or Quora where users recommend your solution show authentic sentiment

When multiple independent sources discuss your brand in relevant contexts, AI systems have clearer signals to interpret your credibility.


Further reading: 7 ways to grow brand mentions, a key metric for AI Overviews visibility



Side note: Tools like Semrush’s AI PR Toolkit make this easier to evaluate at scale. Beyond counting earned mentions, it shows how your brand is framed across sources, including whether mentions skew positive, neutral, or negative. 

This metric can be very important as you work to extend brand visibility beyond your website, because sentiment influences how AI systems frame your brand in answers, not just whether they mention you at all.


Why both matter

Owned presence and earned mentions work together.

Your owned content demonstrates expertise and provides detailed information AI can reference. Earned mentions from customers and industry sources validate your credibility.

When AI systems encounter both, they build a comprehensive understanding of what you offer.

This owned and earned content may also become part of LLM training data in the future, shaping how AI systems learn about and reference your brand long-term.

5. Visibility is measured differently in AI search

Traditional SEO metrics (like rankings, clicks, and traffic) only tell part of the story. But they had one major advantage: the attribution path was clear. 

A user clicked, landed on your site, and either converted or didn’t. You could tie that traffic directly to revenue.

AI search breaks that path. When an AI tool recommends your product to a user, they might never click through to your site. The conversion may still happen — they Google your brand name later, sign up the following week — but your analytics won’t connect it back to the AI mention that started it.

That’s the real measurement challenge. It’s not just that the metrics are different. It’s that the link between visibility and revenue becomes harder to trace.

The value here isn’t just the click. It’s being part of the answer.

This requires measuring your visibility differently.

Here are the key metrics to consider (a toy share-of-voice calculation follows this list):

  • Citation frequency: This measures how often AI platforms mention your brand when answering questions
  • Share of voice: Your mention rate compared to competitors. If an AI answers 100 questions about “best CRM,” how many times do you appear vs. your rivals? This reveals your true competitive position.
  • Context tracking: Where do you appear? Understanding which specific prompts or topics trigger your brand mentions helps you identify the subjects you own versus where you’re invisible.
  • Sentiment: Are the mentions positive, neutral, or negative? A high share of voice means nothing if the AI is telling users your product is “overpriced” or “buggy.”
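
To make share of voice concrete, here is a toy calculation. It assumes you can export per-prompt mention data from a tracking tool; the TrackedAnswer shape is invented for illustration:

    // Toy share-of-voice calculation over tracked AI answers.
    type TrackedAnswer = { prompt: string; mentionedBrands: string[] };

    function shareOfVoice(answers: TrackedAnswer[], brands: string[]) {
      const counts = new Map<string, number>(
        brands.map((b) => [b, 0] as [string, number])
      );
      for (const answer of answers) {
        for (const brand of brands) {
          if (answer.mentionedBrands.includes(brand)) {
            counts.set(brand, (counts.get(brand) ?? 0) + 1);
          }
        }
      }
      // Share = fraction of tracked answers that mentioned each brand
      const total = answers.length || 1;
      return brands.map((b) => ({ brand: b, share: (counts.get(b) ?? 0) / total }));
    }

    // e.g., across 100 tracked "best CRM" prompts:
    // shareOfVoice(answers, ["YourBrand", "RivalA", "RivalB"])
    //   -> [{ brand: "YourBrand", share: 0.32 }, ...]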

The challenge is that traditional analytics platforms (like GA4 or Google Search Console) cannot track these signals. They only see what happens after a click.

This creates a “measurement blind spot.” You might be the most mentioned brand in ChatGPT, but your standard dashboards would show zero activity.

Platforms like Semrush’s AI Visibility Toolkit are built to solve this specific problem. They help quantify these “invisible” GEO metrics, turning qualitative data (like sentiment and mention frequency) into trackable numbers.

Its Brand Performance report shows how visible your brand is in AI answers, how you compare to competitors, and whether mentions skew positive, neutral, or negative. 

The toolkit also highlights AI visibility insights, helping you understand how your brand is currently interpreted in AI answers and where adjustments may improve visibility.

Ultimately, a modern search strategy requires monitoring two distinct dashboards:

One for your website’s performance (rankings and traffic) in traditional search, and one for your brand’s mentions across AI search.

You need both to see the full picture.

What this framework doesn’t guarantee

These principles increase your probability of appearing in AI answers. They don’t guarantee it.

The volatility in AI citations means even well-optimized brands experience fluctuation. 

Different AI platforms weigh signals differently. User context and conversation history affect what gets cited. And AI systems are evolving rapidly — what works today may shift as models update.

Think of GEO like brand building: you’re increasing your odds across many moments of potential visibility, not securing a fixed position. 

The brands that do this well show up more often, more accurately, and in better context. But there’s no “rank #1” equivalent to chase.

That realism isn’t a reason to ignore GEO. It’s a reason to approach it as an ongoing discipline. Showing up consistently, across surfaces, over time, is how you build trust with AI systems.


Frequently asked questions

What’s the biggest misconception about GEO right now?

The biggest misconception is that AI-generated answers are too volatile to optimize for.

While individual responses change, the underlying inputs do not. AI systems consistently rely on durable signals like authority, clarity, and trust. Brands with strong entity clarity and credible sources appear repeatedly, even as surface-level outputs fluctuate. The patterns are stable enough to act on.

Is GEO replacing SEO?

No, GEO builds on SEO fundamentals.

Traditional SEO optimizes for rankings and clicks. GEO optimizes for mentions, citations, and recommendations inside AI-generated answers.

They work together. Strong SEO creates the foundation (technical accessibility, quality content, credibility signals) that AI systems rely on when deciding which brands to reference.

How should we think about GEO in the bigger AI search shift?

The clearest way to frame it is as a hierarchy.

  • AI search is the environment
  • AI SEO is the practice
  • AI visibility is the outcome

GEO sits inside AI SEO as one way to improve visibility within generative systems. The goal is not optimizing for a single model or interface. The goal is being seen, trusted, and reused wherever people search for answers.


Further reading: How to Rank in AI Search (New Strategy & Framework)


What types of content are more likely to appear in generative AI responses?

Content that is easy for AI systems to retrieve, understand, and reuse is most likely to appear in generative AI responses.

In practice, this means clear, direct answers to specific questions, self-contained explanations, fact-based comparisons, and concise definitions that make sense without surrounding context. AI systems tend to pull individual passages, not entire pages, so structure and clarity matter more than length.

Does AI search favor large, well-known brands, or does GEO level the playing field?

Well-known brands often start with more authority, but they don’t automatically win. Smaller publishers can compete when they own a clearly defined topic, show up consistently across platforms, and are easy for AI systems to understand and trust. 

In practice, focused niche sites may outperform larger brands when their expertise is clearer, better structured, and tightly aligned with specific audience needs.

What’s the right way to think about GEO moving forward?

The right way to think about GEO is as a long-term visibility discipline, not a short-term optimization tactic.

Success comes from making your expertise clear, consistent, and reusable wherever AI systems look for answers. That requires strong alignment across content, SEO, brand, PR, product, and customer touchpoints. 

AI search does not change the goal of helping users. It raises the standard for coherence, accuracy, and trust across the entire web.

Google previews WebMCP, a new protocol for AI agent interactions

11 February 2026 at 20:50

Google today announced an early preview of WebMCP, a new protocol that defines how AI agents interact with websites.

  • “WebMCP aims to provide a standard way for exposing structured tools, ensuring AI agents can perform actions on your site with increased speed, reliability, and precision,” wrote André Cipriani Bandarra from Google.

WebMCP lets developers tell large language models exactly what each button or link on a website does, allowing websites to explicitly publish a clear “Tool Contract” that defines available actions.

It runs on a new browser API, navigator.modelContext. Through that API, the website shares a structured list of tools — such as buyTicket(destination, date). The AI can then call those functions directly, making interactions faster, more accurate, and far more reliable.
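
Because WebMCP is an early preview, the exact API surface may still change. The sketch below extrapolates from the buyTicket example; the registerTool method name, schema format, handler signature, and /api/tickets endpoint are all assumptions, not confirmed API:

    // Rough sketch of WebMCP-style tool registration (assumed API shape).
    const modelContext = (navigator as any).modelContext;

    modelContext?.registerTool?.({
      name: "buyTicket",
      description: "Book a ticket for a given destination and travel date.",
      inputSchema: {
        type: "object",
        properties: {
          destination: { type: "string" },
          date: { type: "string", format: "date" },
        },
        required: ["destination", "date"],
      },
      // The agent calls this function directly instead of guessing which
      // buttons to click. The endpoint below is invented for illustration.
      async execute(args: { destination: string; date: string }) {
        const res = await fetch("/api/tickets", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify(args),
        });
        return res.json();
      },
    });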

Structured interactions for the agentic web. WebMCP introduces two new APIs that let browser agents act on a user’s behalf:

  • Declarative API: Handles standard actions defined directly in HTML forms.
  • Imperative API: Supports complex, dynamic interactions that require JavaScript execution.

These APIs act as a bridge, making your website agent-ready. They enable faster, more reliable agent workflows than raw DOM manipulation.

Use cases. Google shared use cases that show how an AI agent can handle complex tasks for your users with speed and confidence:

  • Travel: Users can get the exact flights they want. Agents can search, filter results, and complete bookings using structured data that delivers accurate results every time.
  • Customer support: Users can create detailed support tickets faster. Agents can automatically fill in the required technical details.
  • Ecommerce: Users can shop more efficiently. Agents can find products, configure options, and move through checkout with precision.

How to access the preview. You can apply for the WebMCP preview here.

Why we care. Agentic experiences are shaping the future of search—and possibly SEO. Dan Petrovic called it the biggest shift in technical SEO since structured data. Glenn Gabe called this a big deal. It’s worth exploring these new protocols now.

Google outlines AI-powered, agent-driven future for shopping and ads in 2026

11 February 2026 at 20:25

Google is redesigning shopping and advertising around AI-powered, agent-driven experiences, and said speed and certainty will converge for consumers and brands in 2026.

In her third annual letter, Vidhya Srinivasan, Google’s VP and GM of Ads and Commerce, outlined how Search, YouTube, and its shopping infrastructure are being rebuilt for the agentic era — where AI doesn’t just surface information but actively assists, recommends, and completes transactions.

Key trends. Google is redefining commercial intent across Search, YouTube, and AI interfaces. Ads are moving deeper into conversational experiences like AI Mode, creative production is becoming AI-native, and checkout is embedding directly into Search. Here are key takeaways from Srinivasan’s letter:

  • Creators to commerce: YouTube remains a discovery hub, with creators serving as trusted tastemakers. AI helps match brands with the right creators, turning influence into measurable business impact.
  • Search ads evolve: As conversational and visual queries rise, AI Mode reimagines ads as part of the discovery journey. New formats (e.g., sponsored retail listings, Direct Offers) aim to help users find products and services while giving brands meaningful ways to convert interest into sales.
  • Agentic commerce arrives: Google is standardizing AI-driven shopping through the Universal Commerce Protocol (UCP), enabling consumers to browse, pay, and complete purchases seamlessly in AI Mode. Early rollouts include Etsy and Wayfair, with Shopify, Target, and Walmart to follow.
  • AI-powered creative and performance: Gemini 3 powers ad tools that automate creative production and campaign optimization. Generative tools like Nano Banana and Veo 3 help advertisers create studio-quality assets in minutes, while AI Max expands reach and drives performance.

Why we care. Adapting to AI-mediated commerce is increasingly necessary to stay competitive. Buying decisions are shifting — more often happening inside AI-driven search, creator content, and agent-powered checkout flows that could reshape traffic and conversion paths. These changes may create new ways to reach high-intent shoppers, but they also signal growing platform control over discovery, measurement, and transactions, potentially affecting competition, costs, and brand visibility.

Google’s blog post. What to expect in digital advertising and commerce in 2026

How AI-driven shopping discovery changes product page optimization

11 February 2026 at 20:00

As consumers lean into AI search, the industry has focused on the technical “how” – tracking everything from Agentic Commerce Protocols (ACP) to ChatGPT’s latest shopping research tools. In doing so, it often misses the larger shift: conversational search, which is changing how visibility is earned.

There’s a common argument that big brands will always win in AI. I disagree. When you move beyond the “best running shoes” shorthand and look at the deep context users now provide, the playing field levels. AI is trying to match user needs to specific solutions, and it’s up to your brand to provide the details.

This article explains how conversational search changes product discovery and what ecommerce teams need to update on product detail pages (PDPs) to remain visible in AI-driven shopping experiences.

How conversational search builds on semantic search

While semantic search is critical for understanding the meaning and context of words, conversational search is the ability to maintain a back-and-forth dialogue with a user over time.

Semantic search is the foundation for conversational visibility. Think of it like a restaurant: If semantic search is the chef who knows exactly what you mean by “something light,” conversational search is the waiter who remembers that you’re ordering for dinner.

Feature | Semantic search | Conversational search
Goal | To understand intent and context | To handle a flow of questions
How it thinks | It knows “car” and “automobile” are the same thing | It knows that when you say “how much is it?”, “it” refers to the car you just mentioned
The interaction | Searching with a phrase instead of keywords | Having a chat where the computer remembers what you were asking about before
Example | Asking “What is a healthy meal?” and getting results for “nutritious recipes.” | Asking “What is a healthy meal?” followed by “give me a recipe for that.”

AI blends them together. It uses semantic understanding to decode your complex intent and conversational logic to keep the thread of the story moving. For brands, this means your content has to be clear enough for the “chef” to interpret and consistent enough for the “waiter” to follow.

What conversational search and AI discovery mean for ecommerce

I recently shared how my mom was using ChatGPT to remodel her kitchen. She didn’t start by searching for “the best cabinets.” Instead, she leveraged ChatGPT as her pseudo-designer and contractor, using AI to solve specific problems.

Product discovery happened naturally through constraint-based queries:

  • “Find cabinets that fit these dimensions and match this specific wood type.”
  • “Are these cabinets easy for a DIY installation?”

Her conversations built on one another, allowing her to work toward multiple solutions at once. Her discovery journey was layered. When ChatGPT recommended products to complete her tasks, she simply followed up with, “Where can I buy those?”

Brands and marketers need to stop optimizing for keywords and start optimizing for tasks. Identify the specific conversations where your product becomes the solution. If your data can’t answer the “Will this fit?” or “Is this easy?” questions, you won’t be part of the final recommendation.

“Recommend products” is the top task users trust AI to handle, highlighting a clear opportunity for brands, according to Tinuiti’s 2026 AI Trends Study. (Disclosure: I am the Sr. Director of AI SEO Innovation at Tinuiti.) 

For your brand to be the one recommended, your PDPs must provide the “ground truth” details these assistants need to make a confident selection.  

Dig deeper: How to make ecommerce product pages work in an AI-first world

What to do before you start changing every PDP

Step away from the keyword research tools and stop asking for “prompt volumes.” In an AI-driven world, intent is more important than volume. Before changing a single page, you need to understand the high-intent journeys your personas are actually taking.

To identify your high-intent semantic opportunities:

  • Audit your personas: Who is your buyer, and what are their non-negotiable questions? If you haven’t mapped these lately, start there.
  • Bridge the team gap: Talk to your product and sales teams. They know the specific attributes and “deal-breaker” details that actually drive conversions.
  • Listen to the market: Use sentiment analysis and social listening to find hidden use cases or brand problems. How are people actually using, or struggling with, your product in ways your brand team hasn’t considered?
  • Map constraints, not keywords: Identify the specific constraints (size, compatibility, budget) that AI agents use to filter recommendations.

How to build PDPs for AI search with decision support

Your PDP should operate like a product knowledge document and be optimized for natural language. This helps an AI system decide whether to recommend the product for a specific situation.

Name your ideal buyer and edge cases

Content should support better decision-making. Audit your PDPs to determine whether they provide enough detail on who the product is best for – and not for. Does the page explicitly name your ideal buyer, their skill level, lifestyle constraints, and deal-breakers?

AI shopping queries often include exclusions, and clearly outlining the important parts of your user search journey will help you understand where your products fit best.

Cover compatibility and product specifications

Compatibility feels synonymous with electronics (e.g., “Will my headphones connect to this computer?”). But think beyond one-to-one compatibility and expand into lifestyle compatibility:

  • Is this laptop bag waterproof enough for a 20-minute bike ride in the rain, and does it have a clip for a taillight?
  • Can I fit a Kindle and a book in this purse?
  • Will this detergent work with my HE washer?
  • Will this carry-on suitcase fit in the overhead compartment on every airline?
  • Is this “family-sized” cutting board actually small enough to fit inside a standard dishwasher?

People are searching for how products fit into their lifestyle needs. Highlight and emphasize the features that make your products compatible with their lifestyle.

Dig deeper: How to make products machine-readable for multimodal AI search

Provide vertical-specific product guidance

Breaking down your customer search journey and listening to your customers’ concerns, either through AI sentiment analysis, social listening, or product reviews, will help you understand what you need to be specific about.

  • Apparel brands should add sizing and fit guidance. Maybe you’re comparing your size 10 jeans to competitors’ sizing, or considering sizing changes based on the cut or style of your other jeans.
  • Beauty or skincare brands need ingredient combination details. Is this product compatible with other common formulas? Can I layer it over a vitamin C serum?
  • Toy brands could include important details for parents. Does your product need to be assembled, and how long will it take? Can they assemble it the night before Christmas?

If your biggest customer complaint is understanding when and how to use your products, you’re likely not making it easy enough for them to buy. Better defining your product attributes helps users and LLMs alike better understand your products.

Write for constraint matching instead of browsing

AI shopping discovery is driven by constraints instead of keywords. Shoppers aren’t asking for “the best laptop bag.” They’re asking for a bag that fits under an airplane seat, survives a rainy commute, and still looks professional in a meeting.

PDPs should be written to reflect that reality. Audit your product pages to see whether they answer common “Can I …?” and “Will this work if …?” questions in plain language. These details often live in reviews, FAQs, or support tickets, but rarely surface in core product copy where AI systems are most likely to pull from.

Here’s what transforming your content can look like:

Traditional PDP copy

  • Laptop backpack
    • Water-resistant polyester exterior.
    • Fits laptops up to 15″.
    • Multiple interior compartments.
    • Lightweight design.
    • USB charging port.

PDP copy written for constraints

  • Laptop backpack
    • Best for: Daily commuters, frequent flyers, and students who need to carry tech in unpredictable weather.
    • Not ideal for: Extended outdoor exposure or laptops larger than 15.6″.
    • Weather readiness: Water-resistant coating protects electronics during short walks or bike commutes in light rain, but is not designed for heavy downpours.
    • Travel compatibility: Fits comfortably under most airplane seats and in overhead bins on domestic flights.
    • Capacity and layout: Holds a 15-15.6″ laptop, charger, and tablet, with room for a book or light jacket – but not bulky items.
    • Lifestyle considerations: Integrated USB port supports charging on the go (power bank not included).

LLMs evaluate how well a product satisfies specific constraints in conversational queries or based on predetermined user preference information.

PDPs that clearly articulate those constraints are more likely to be selected, summarized, and recommended. This type of copy should also help your on-site customers better understand your products.

Dig deeper: Why ecommerce SEO audits fail – and what actually works in 30 days

Technical foundations still matter for ecommerce

Just because search platforms change doesn’t mean we should abandon everything we’ve learned in traditional optimization.

Technical SEO fundamentals still heavily apply in AI search:

  • Can crawlers access and index your site?
  • Are your product listing pages (PLPs) and PDPs clearly linked and structured?
  • Do pages load quickly enough for crawlers and users?
  • Is your most critical content accessible?

In conversational shopping, structured data plays a different role than it did in traditional SEO strategies: it’s about verification. 

AI systems use your schema to validate facts before they risk reusing them in an answer. If the AI can’t verify your price, availability, or shipping details through a merchant feed or structured data, it won’t risk recommending you.

Variant clarity is just as important. When differences like size, color, or configuration aren’t clearly defined, AI systems may treat variants as separate products or merge them incorrectly. The result is inaccurate pricing, incompatible recommendations, or missed visibility.
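
Schema.org’s ProductGroup pattern is one standard way to make those variant relationships explicit. Here is a hedged sketch with an invented product; real markup must mirror what each variant visibly shows on the page:

    // Hypothetical variant markup using schema.org's ProductGroup vocabulary.
    // All values are placeholders.
    const backpackGroup = {
      "@context": "https://schema.org",
      "@type": "ProductGroup",
      name: "Commuter Laptop Backpack",
      productGroupID: "CLB-200",
      variesBy: ["https://schema.org/color"],
      hasVariant: [
        {
          "@type": "Product",
          sku: "CLB-200-BLACK",
          color: "Black",
          offers: {
            "@type": "Offer",
            price: "89.00",
            priceCurrency: "USD",
            availability: "https://schema.org/InStock",
          },
        },
        {
          "@type": "Product",
          sku: "CLB-200-GRAY",
          color: "Gray",
          offers: {
            "@type": "Offer",
            price: "89.00",
            priceCurrency: "USD",
            availability: "https://schema.org/InStock",
          },
        },
      ],
    };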

Most importantly, structured data must match what’s visibly true on the page. When schema contradicts on-page content, AI systems avoid recommending uncertain information.

Dig deeper: How SEO leaders can explain agentic AI to ecommerce executives

Owning the digital shelf in 2026

Success on the digital shelf has moved beyond high-volume keywords. In this new era, your visibility depends on how well you satisfy the complex constraints users can provide in a single search. AI models are scanning your pages to see if you meet specific, nuanced requirements, like “gluten-free,” “easy to install,” or “fits a 30-inch window.”

The shift to conversational discovery means your product data must be ready to sustain a dialogue. The goal is simple: provide the density of information necessary for an AI to confidently transact on a user’s behalf. Those who build for these multi-layered journeys will own the future of discovery.

OpenAI details how ads will work in ChatGPT

11 February 2026 at 19:47
OpenAI ChatGPT iOS app

On the OpenAI podcast, OpenAI executive Assad Awan talked about how ads will roll out in ChatGPT, who will see them, and how the company plans to protect user trust.

Who will see ads. Ads will appear for Free and Go tier users.

  • Plus, Pro and Enterprise subscribers won’t see ads
  • Enterprise workspaces will remain fully ad-free

The guardrails: Awan emphasized that OpenAI is structuring ads around strict trust principles:

  • Separation: Ads are visually and technically separate from model answers.
  • Privacy: Conversations aren’t shared with advertisers.
  • Sensitive topics: Health, politics, and other sensitive chats won’t show ads.
  • Controls: Users can adjust or turn off personalization, or upgrade to remove ads.

The model doesn’t know when ads are present and can’t reference them unless a user explicitly asks about one, according to Awan.

Zoom in. OpenAI prioritizes user trust over user value, advertiser value, and revenue — a framework meant to prevent ads from shaping the model’s responses, Awan said.

For small businesses. Awan described a future where AI acts as an advertising agent, helping small businesses run campaigns by describing goals in plain language instead of managing complex dashboards.

Why we care. ChatGPT ads could create a high-intent channel for reaching users during active conversations and decision-making moments. Its focus on relevance, AI-driven matching, and agent-style campaign tools could lower barriers for small and midsize advertisers while improving performance for larger brands. If OpenAI builds a trusted ad environment, it could reshape how advertisers approach discovery and customer engagement in AI-driven interfaces.

What’s next. Early ad tests will stay conservative, prioritizing usefulness and relevance over volume as OpenAI refines formats and placement.

The big picture. Through advertising, OpenAI is aiming to scale ChatGPT access while maintaining a trust-first design — a balance the company says is central to its long-term strategy.

The OpenAI podcast. Episode 13 – The Thinking Behind Ads in ChatGPT


Google Ads shows recommended experiments

11 February 2026 at 19:34

Google Ads is rolling out recommended experiments on the Experiments page, highlighting test ideas based on an account’s setup and performance data.

How it works. The platform suggests experiment opportunities—such as testing bidding strategies, creative variations, or new campaign features—and displays them in the Experiments dashboard.

  • Each recommendation includes a preconfigured experiment setup.
  • Advertisers can launch immediately or customize settings.
  • Suggestions appear alongside the standard Create Experiment workflow.

Why we care. By removing the need to build tests from scratch, Google lowers the barrier to experimentation. You can act on optimization ideas more quickly and consistently. Just make sure you’re launching the right tests and configurations to avoid wasted time and budget.

Zoom in. Examples include suggestions like enabling final URL expansion to improve campaign performance, displayed as in-dashboard popups within the Experiments interface.

The big picture. Google is increasingly embedding automated guidance into Ads workflows, nudging advertisers toward continuous testing and data-driven optimization.

First seen. This update was spotted by Hana Kobzová, owner of PPC News Feed.

Google Ads simplifies product campaign tracking

11 February 2026 at 19:16

Google Ads rolled out a new feature that shows advertisers which campaigns their products are eligible for, directly in the Products section.

How it works. A new dashboard in the Products section includes:

  • A table showing product details, status, issues, and priority flags
  • A line graph summarizing campaign status trends
  • Filters to segment eligibility views
  • A pop-up panel that lists “Eligible” and “Not eligible” campaigns per product

Why we care. Advertisers can now quickly identify products that are missing from key campaigns or unintentionally overlapping across Shopping and Performance Max. The added visibility reduces the need to jump between campaign views to diagnose eligibility gaps.

The big picture. The changes help advertisers see whether products are running in expected campaigns, spot campaign overlap before it becomes a budget problem, and save time troubleshooting product-level issues.

Between the lines. This is Google’s latest move to give advertisers more granular control over Shopping campaigns, where product-level optimization can make or break profitability.

When. Available now in Google Ads.

First seen. This update was spotted by PPC News Feed owner Hana Kobzová.

What 4 AI search experiments reveal about attribution and buying decisions

11 February 2026 at 19:00

AI search influence didn’t show up in our SEO reports or AI prompt tracking tools. It showed up in sales calls.

“Found you via Grok, actually,” a new lead said.

That comment stopped us cold. We hadn’t tried to rank in Grok. We weren’t tracking it. Yet it was influencing how buyers discovered and evaluated us.

That disconnect kept appearing in client conversations, too. Everyone was curious about AI search, but no one trusted the data. 

Teams wanted visibility in ChatGPT and other AI tools, then asked the same question: “Why invest in a channel that doesn’t show up cleanly in attribution?”

To answer that, we ran controlled experiments using assets we could fully control – an agency website, personal sites, an ecommerce brand, and purpose-built test domains.

The goal wasn’t to win AI rankings. It was to understand what still matters once AI enters the decision process:

  • Does AI search change what people buy, or just where brands appear?
  • Can something influence revenue without ever appearing in analytics?
  • Does AI recommendation affect performance across other channels?

Why we ran the experiments

Most AI search conversations fixate on visibility signals like brand mentions, citations, or visibility screenshots from AI prompt tracking tools.

Search has always had one job: help people make a decision.

We wanted to know if AI search performed the same job and actually changed commercial outcomes.

AI systems now operate at the stage where buyers compare options, shortlist providers, and reduce risk.

If AI mattered, it had to show up at the moment of decision.

On measurement limits: 

  • We didn’t rely on API data because API responses often differ from what real users see. Instead, we observed live interfaces across ChatGPT, Perplexity, Gemini, and Google AI Overviews. 
  • We used prompt tracking to spot patterns, not to declare absolute wins.

Experiment 1: Self-promotional ‘best of’ lists on your own website

A simple tactic became popular over the past year:

  • Create a “best X” list on your site.
  • Put yourself at the top.
  • Let AI systems pick up the list.

I’ve seen agencies do this locally and felt conflicted about it.

It wasn’t spam. But it relied on a blind spot – LLMs struggle to separate independent rankings from self-written ones.

Around the same time, Ahrefs published a large study that helped explain why this works. Glen Allsopp analyzed ChatGPT responses across hundreds of “best X”-style prompts and found that “best” list posts were the most commonly cited page type.

Two things from the study stood out:

  • Format: This included cases where brands ranked themselves first
  • Freshness: Most cited lists had been updated recently

I could have tested these observations on StudioHawk. Instead, I did it on my personal brand website to manage the risk. 

I published a list of the “Best SEO agencies in Sydney” and included my own website among the entries to test whether AI would “take the bait,” so to speak.

Within two weeks, LawrenceHitches.com appeared across AI tools for “best SEO agency Sydney” style searches.

The speed was surprising – traditional SEO rarely moves that fast.

If visibility appears this easily, then visibility alone can’t mean much, so I tested it again.

Experiment 2: Self-promotion of a fake business

Initially, I could have been piggybacking off the already established StudioHawk brand, so I decided to run a self-promotion test on a fake website.

We used a basic landscaping site built only for SEO and AI testing and published the same type of page, a “best X” list.

This time, the topic was “best landscapers in Melbourne.”

Within two weeks, the list appeared in AI responses again. The result repeated almost exactly.

If a brand-new test site can surface this fast, then “appeared in AI” doesn’t mean much on its own.

Visibility vs. trust

These two experiments showed one thing clearly: LLMs are still easy to influence at the surface level.

I ran these tests back in August 2025, but the same pattern still appears today.

A “best SEO agency Sydney” search run in January 2026 shows the same list-driven results.

This creates a real conflict for brands.

On one side, the data says yes – the Ahrefs research shows “Best X” pages attract citations. Large brands like Shopify, Slack, and HubSpot publish self-ranked lists without obvious damage to rankings or AI visibility.

On the other side is buyer trust.

As Wil Reynolds put it, listing yourself first on your own site doesn’t build confidence with buyers. That’s the tension.

When bullish founders ask for the secret sauce to appear in ChatGPT, I’m blunt. List-based “best of X” pages that rank the author first have been a fast way to surface in some AI results.

That doesn’t work everywhere, and it’s unlikely to hold up long term.

Dig deeper: Google may be cracking down on self-promotional ‘best of’ listicles

If a landscaping site with no reputation can surface this quickly, then appearing in AI means very little on its own.

Why prompt tracking can’t be a success metric

A lot of money is flowing into AI prompt tracking tools. Clients ask for them constantly. We use them too, but with a clear warning.

I wouldn’t make major decisions based on screenshots or Reddit threads about where a brand appears in ChatGPT.

Brand overlap between API outputs and real user sessions was as low as 24%, according to recent research from Surfer SEO comparing tracking APIs with scraped user experiences.

That means three times out of four, what the API told you was happening wasn’t what the user was actually seeing.

If a brand can appear in a screenshot but disappear in a real user session, then appearance alone isn’t a metric.

We stopped asking if we showed up.

Instead, we started asking, “Did this change how buyers behaved?”

  • Did leads reference AI tools without prompting?
  • Did sales calls skip education?
  • Did the speed of buying change?
  • Did price resistance soften?

These signals were harder to collect.

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

Experiment 3: Kadi and the limits of digital PR alone

Kadi, an ecommerce brand we invested in that sells luggage, provided insight into our questions about whether AI results were affecting buyer behavior.

Running tests on Kadi has been an eye-opening experience for two reasons: 

  • It’s the difference between running an agency and running ecommerce.
  • It forced us to become our own client.

To move fast, we led with digital PR.

Kadi’s SEO foundation was solid but not perfect. We wanted to see how far off-site mentions could push SEO and AI visibility without heavy technical work or a polished site structure.

We conducted a large number of creative data campaigns and product placements, including:

  • Travel data studies: “Over-touristed destinations,” “Hidden fees,” “Best time to fly,” and “Happy Hour at 30,000 ft.”
  • Advisory pieces: “Airport cybersecurity” and “duty-free shopping” guides
  • Product and feature focus: “Kadi kids carry-on adventure,” “cloud check-in features,” and inclusions in “best suitcase round-ups.”

It worked:

  • Coverage landed.
  • Authority grew without the need for “traditional SEO.”
  • We saw temporary keyword spikes and traffic boosts.

But there was a catch: Digital PR alone wasn’t enough to close the gap with competitors. It created quick traction in search results, but it didn’t resolve the underlying issues.

After launch, SEO foundation work became the priority.

Then, Black Friday made the reality obvious. A customer found Kadi through ChatGPT on a “kids carry-on” query.

We watched it happen on the day of the query and traced the pathway: 

  • They didn’t buy immediately.
  • They checked the shipping policy.
  • They browsed the range.
  • They added three additional products.
  • They debated colour (olive over pink).
  • Attribution later showed Instagram as the source.

That order was the largest of the Black Friday period.

On paper, AI did nothing. In reality, it was part of shaping the decision. 

Digital PR can get you visibility spikes, but it doesn’t address the whole picture. 

While AI traffic does convert, the attribution is inconsistent.

Experiment 4: StudioHawk 

Across 2024 and 2025, StudioHawk underwent a full website rebrand and migration from WordPress to HubSpot.

Our own site sat at the bottom of the priority list for years. It was always the project we would get to later. 

Finally, we paused other priorities and rebuilt the entire site.

The work started in 2023, before terms like “GEO” existed. We were focused only on rebuilding service pages, social proof, and user experience end to end.

After launch, rankings improved and continue to grow.


In 2025, SEO became the agency’s strongest channel by efficiency. It drove 65% of inbound leads and close to 60% of new revenue.


Between July and December 2025, AI search leads began to appear more often.

Initially, these were “Oh, cool, we got a lead from AI” moments around the office.

Sales calls started skipping early education. New leads arrived already aligned on fit and expectations.

Over time, we saw that:

  • SEO inbound leads: Averaged 29 days to close.
  • AI search leads: Closed in roughly 18 days.

That 10-day gap mattered.

It meant less time educating, fewer scope objections, lower price sensitivity, and higher confidence earlier in the process.

Within the first year, AI-influenced conversations contributed over $100,000 in closed revenue from 20+ leads, including deals with direct attribution from tools like ChatGPT, Perplexity, and Grok.

The blind spot remains attribution paths such as Instagram, direct, or organic, where AI influenced the decision but didn’t appear in reporting (as seen in the Kadi example).

Where direct AI attribution existed, buyers were more prepared. That preparedness shortened sales cycles and lifted revenue.

AI compresses consideration

We started by asking where people would search next.

Our key finding? AI search doesn’t replace discovery. It compresses the consideration phase.

Consideration is that messy middle where buyers reduce risk, shortlist vendors, compare tradeoffs, and ask, “Who should I trust?”

AI tools answer these questions before a buyer ever clicks a link. 

It means your website no longer carries the full load – AI summaries and third-party mentions do the pre-selling for you.

This is the shift we now describe as the new consideration era.

We’ve moved from a straight funnel to a complex, AI-influenced pathway where consensus is key.

Because this happens off-site, last-click attribution is broken. 

A buyer might use ChatGPT to create a shortlist but convert later via direct search.

Where traditional SEO still fits

Strong SEO metrics were a constant across all our experiments, but we’ve stopped viewing them as the primary driver of value:

  • Keyword rankings confirm search engines understand your entity.
  • However, those high rankings don’t guarantee effective pre-selling.

Traditional SEO became a supporting signal – proof that the foundation is sound, rather than the end goal.

What this means for brands

After running a variety of AI search experiments, here’s what I think brands should focus on.

1. Measure where AI influence actually lands

Stop obsessing over prompt appearances (e.g., citations, mentions). These are shiny objects that fluctuate too easily. 

Instead, measure:

  • Sales velocity (Did deals close faster?)
  • Lead quality (Did prospects need less education before buying?)
  • Value per lead (Did price friction ease?)

2. Make clarity more important than creativity

AI hates vagueness. Make pages that state clearly what you do and who it’s for.

3. Change the content to help people decide what to buy

Focus on content that answers comparison, risk, and pricing questions. This makes a bigger difference than general category explanations.

4. Make entity consistency a crucial factor

Inconsistency breeds doubt; consistency builds confidence, for buyers and AI systems alike.

Check to see that your website, reviews, and digital PR all talk about your brand in the same way.

AI search compresses consideration, not discovery

In the end, the results were the same across all experiments. What drove our sales pipeline was consistent:

  • Clear intent.
  • Tight positioning.
  • Consistent signals of authority.

AI search isn’t replacing basic SEO. Instead, it exposes weak positioning more quickly than traditional search ever did.

What does that mean? 

Simply put, AI speeds up decisions that were already forming.

Dig deeper: From searching to delegating: Adapting to AI-first search behavior

How to reduce low-quality leads from Performance Max campaigns

11 February 2026 at 18:00

When left to its own devices, there are a couple of things Performance Max is absolutely great at doing for lead gen campaigns:

  • Driving volume.
  • Finding the lowest-quality leads it possibly can.

It’s not inherently surprising that Google is doing what’s best for Google – that is, lining its own pockets – by heavily optimizing toward the cheapest, path-of-least-resistance conversion events.

From experience with campaigns we inherit from new clients, this performance often catches brands off guard – especially those who take Google sales reps’ “helpful advice” at face value.

It can take time for those brands to look past PMax’s shiny, low CPAs and realize the truth: those leads do little to nothing for real pipeline or revenue.

When given the proper guardrails, however, Performance Max can be a good source of incremental, quality leads – the trick is in building those guardrails.

This article covers lead quality tactics that work and how to execute them, tactics that don’t work, and important differences between Performance Max campaigns in Google and Bing. 

How to improve lead quality in PMax campaigns

These are the specific levers that consistently influence lead quality in Performance Max.

  • Use conversion goals focused on metrics that indicate a higher-quality lead than just form fills.
    • Depending on your data density, this could mean closed-won leads, opportunities, or (if you need to go up the funnel to get enough volume) sales-qualified leads. 
    • It’s important to note that the effectiveness of this tactic depends on good offline conversion tracking implementation and a clean CRM instance, so don’t turn on PMax lead gen campaigns until you’re confident in your HubSpot or Salesforce integrity.
  • Use high-value lists for audience signals. This can be based on a certain activity, like “booked a meeting,” instead of simply including all converters.
  • Keep the focus on the right audiences. Exclude irrelevant ones and upload Customer Match lists to help Google’s algorithm find similar users.
  • Be smart with your campaign settings.
    • Use brand exclusions to ensure you’re not letting PMax cannibalize your brand traffic. 
    • Restrict your location targeting to high-performing geos.
    • Set strategic scheduling, such as excluding early-morning hours if those conversions tend to be lower quality.
    • Evaluate search themes and placements, and be aggressive about negative keywords and placement exclusions.
    • Use sitelinks to steer traffic to pages with full, detailed forms.
  • Refine the forms themselves (a minimal validation sketch follows this list).
    • Implement reCAPTCHA or honeypot fields to keep bots from “converting.”
    • Use field validation:
      • Block disposable domains.
      • Block freemails.
    • Add freeform or disqualifying questions.
      • “How did you hear about us?”
      • “Do you have a budget for [solution]?”
      • “How many employees are in your organization?”
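
Here’s a minimal sketch of that validation layer in Python, assuming a hypothetical honeypot field (website_hp) and a budget question (has_budget); real domain blocklists run to thousands of entries:

```python
# Tiny placeholder blocklists -- production lists are far longer.
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com"}
FREEMAIL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com"}

def validate_lead(form: dict) -> tuple[bool, str]:
    # Honeypot: a hidden field humans never fill in, but bots usually do.
    if form.get("website_hp"):
        return False, "honeypot triggered"

    domain = form.get("email", "").rsplit("@", 1)[-1].lower()
    if domain in DISPOSABLE_DOMAINS:
        return False, "disposable email domain"
    if domain in FREEMAIL_DOMAINS:
        return False, "freemail domain on a B2B form"

    # Disqualifying question: keep no-budget leads out of the paid funnel.
    if form.get("has_budget") == "no":
        return False, "no budget"

    return True, "ok"

print(validate_lead({"email": "jane@acme.com", "has_budget": "yes", "website_hp": ""}))
```

Leads that fail validation can still be stored – just don’t send them back to Google as conversions, so the algorithm never learns to chase them.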

Dig deeper: Top Performance Max optimization tips for 2026

Tactics that won’t affect lead quality

On the other hand, some of the usual campaign optimization strategies won’t do much to move the needle on PMax lead quality. If that’s your sole focus, you can de-prioritize:

  • Switching bid strategies (e.g., switching from Max Conversions to tCPA helps a little but doesn’t fix everything).
  • Adding more assets.
  • Adding more budget.
  • Asking Google support (something I’d just stay away from in general these days).

Important (and subtle) differences to know between Google and Bing PMax campaigns

Both Google and Bing have Performance Max campaigns, but there are differences in their offerings.

Google’s Performance Max network spans search, display, YouTube, Discover, and Gmail. It’s an absolutely huge amount of inventory – especially display and YouTube, which can be huge spam drivers if left unchecked.

Microsoft has far less video and display inventory. Its PMax campaigns primarily include Bing search, syndicated search, and the Microsoft Audience Network (which spans display, Outlook, and MSN).

When comparing performance between the two, we haven’t seen any notable differences, but it’s worth monitoring updates to each platform’s reporting and inventory going forward.

Dig deeper: Google and Microsoft: How their Performance Max approaches align and diverge

Performance Max isn’t broken, but it needs control

If you’re considering running PMax for lead gen, you should approach it with a healthy dose of skepticism. 

While PMax has been effective at driving scalable revenue for ecommerce, those campaigns still need considerable guardrails to maintain quality.

For instance, preventing a high-end shoe retailer from racking up tons of conversions on things like replacement laces and shoe polish requires sufficient PMax guardrails.

As Google moves toward more automation and AI in campaigns, keep testing and experimenting so you understand the tools available to analyze and shape PMax campaigns.

Google has shipped some helpful releases of late, including channel-level reporting, more exclusion options, and campaign-level negative keywords.

There’s lead scale out there that can provide a healthy ROAS if you’re willing and able to wrestle the algorithm into submission.

If you’ve tested Performance Max campaigns for lead gen but paused them once it was clear they weren’t driving revenue, do a quick post-mortem on your past efforts. You might find there’s room to whip Google into shape to do better this time around. 

Take note of the tactics you haven’t yet implemented and prioritize putting them in place before you waste another dollar of your 2026 budget on poor-quality leads that just junk up your CRM.

PPC mistakes that humble even experienced marketers

11 February 2026 at 17:00
Marketing mistakes

Every seasoned PPC pro carries a few scars — the kind you earn when a campaign launches too fast, an automation quietly runs wild, or a “small” setting you were sure you checked comes back to bite you.

At SMX Next, we had a candid, refreshingly honest conversation about the mistakes that still trip us up, no matter how long we’ve been in the game. I was joined by Greg Kohler, director of digital marketing at ServiceMaster Brands, and Susan Yen, PPC team lead at SearchLab Digital.

Read on to see the missteps that can humble even the most experienced search marketers.

Never launch campaigns on a Friday

This might be the most notorious mistake in PPC — and yet it keeps happening. Yen shared that campaigns often go live on Fridays, driven by client pressure and the excitement to move fast.

The risk is obvious. If something breaks over the weekend, you either won’t see it or you’ll spend Saturday and Sunday glued to your screen fixing it. One small slip — like setting a $100 daily budget instead of $10 — can burn through spend before anyone notices.

Kohler stressed the value of fresh eyes. Even if you build campaigns on Friday, wait until Monday to review and launch. Experience can breed overconfidence. You start to believe you won’t make mistakes — until a Friday launch proves otherwise.

The lesson: Don’t launch before holidays, before time off, or on Fridays. If clients push back, be the “annoying paid person” who says no. You’ll protect your sanity — and the campaign’s performance.

Location targeting disasters

Kohler shared a mishap where location targeting didn’t carry over correctly while copying campaigns in bulk through Google Ads Editor. By Saturday morning, those campaigns had already racked up 10,000 impressions — because the ads were running in Europe while the intended U.S. audience slept.

The lesson: Some settings, especially location targeting, are safer to configure directly in the Google Ads interface. There, you can explicitly set “United States only,” which reduces the risk of accidental international targeting.

The search term report trap

Yen made it clear: reviewing search term reports isn’t optional. It matters for every campaign type—standard search, Performance Max, and AI-driven campaigns included. Skip this step, and it looks like you’re chasing clicks instead of qualified traffic.

The real damage shows up months later. Explaining to a client where their budget went—when you could’ve caught irrelevant queries early—leads to uncomfortable conversations. Yen recommends reviewing search terms at least once a month. The time required is small compared to the spend it can save.

The lesson: Regular reviews also help you decide what to add as keywords and what to block as negatives. The goal is balance. Too many new keywords create cluttered accounts. Too many negatives often signal deeper issues with match types.

Google Ads Editor vs. interface: A constant battle

The conversation surfaced a familiar frustration: Google Ads Editor and the main interface don’t always play well together. Features roll out to the interface first, then slowly make their way to Editor, which creates gaps and surprises.

Yen explained that her team builds campaigns in Excel first, including character counts for ad copy, before uploading everything into Editor. Even so, they avoid setting most campaign configurations there. Instead, they rely on the interface to visually confirm that every setting is correct.

Kohler added that Editor shines for franchise accounts with dozens — or hundreds — of near-identical campaigns. It’s especially useful for spotting inconsistent settings at scale.

The lesson: For precision work like location targeting or building responsive display ads, the interface offers better control and clearer visibility.

The automatically created assets problem

Kohler called out automatically created assets as a major pain point. These settings default to “on,” and turning them off means clicking through multiple layers — assets, additional assets, then selecting a reason for disabling each one.

The frustration gets worse when Google introduces new automated asset types, like dynamic business names and logos, and automatically applies them to every existing campaign by default. For Kohler’s team, which manages 500 accounts per brand, that meant reopening every account just to turn off the new features.

The lesson: Set recurring calendar reminders to review these settings every few months. Google isn’t slowing down on automation, and most of it requires opting out.

Importing campaigns from Google to Microsoft Ads

Yen warned about the risks of importing Google campaigns into Microsoft Ads without a thorough review. The import tool feels convenient, but it often introduces real problems:

  • Budgets that make sense for Google’s volume can be far too high for Microsoft.
  • Automated bidding strategies don’t always translate correctly.
  • Imports default to recurring schedules instead of one-time transfers.
  • Smaller audience sizes demand different budget assumptions.

Kohler added that Microsoft Ads’ forced inclusion in the audience network makes things worse. Unlike Google, Microsoft doesn’t offer a simple opt-out from display. Advertisers must manually exclude placements as they surface, or work directly with Microsoft support for brands with legitimate placement concerns.

The lesson: Import once to get a starting point, then stop. Treat Microsoft Ads as its own platform, with its own strategy, budgets, and ongoing optimization.

The App placement nightmare

Audience member Jason Lucas shared a painful lesson about forgetting to turn off app audiences for B2B display campaigns. The result was a flood of spend on “Candy Crush” views — completely irrelevant for business marketing.

Yen confirmed this is a common problem, made worse by how well Google hides the settings. To exclude all apps in the interface, advertisers must manually enter mobile app category code 69500 in the app categories section. In Editor, it’s easier — you can exclude all apps in one move.

Kohler added another familiar mistake: forgetting to exclude kids’ YouTube channels. His brands have accidentally spent so much on the Ryan’s World YouTube channel that they joke about helping fund the kid’s college tuition.

The lesson: Build a blanket exclusion list that covers apps, kids’ content, and inappropriate placements, then apply it to every campaign — no exceptions.

Content exclusions and placement control

Beyond app exclusions, the group stressed the need for comprehensive content exclusions across every campaign. Their advice is to apply these exclusions at launch, then review placement reports a few weeks later to catch anything that slips through.

The lesson: Consistency. Even when exclusions are in place, Google doesn’t always honor them. That makes regular placement monitoring essential. Automation can ignore manual rules, so verification is still the only real safeguard.

Call tracking quality issues

When the conversation turned to call tracking, Yen stressed the need for consistent client communication. Many businesses lack a CRM or close alignment with their sales teams, making it hard to evaluate call quality.

The lesson: Hold monthly check-ins that focus specifically on call quality, Yen said. If calls aren’t converting, the problem may be what happens after the phone rings, not marketing.

Kohler added a technical tip for CallRail users. Separate first-time callers from repeat callers in your conversion setup. Send both into Google Ads, but mark return calls as secondary conversions. That way, automated bidding doesn’t optimize for repeat callers the same way it does for new prospects.

The promo date problem

Litner flagged ongoing frustration with scheduled headline assets appearing outside their intended dates, especially for time-sensitive promotions. Although the issue now seems resolved, he still double-checks at both the start and end of each promotional period.

Kohler reported similar problems with automated rules. Scheduled rules sometimes don’t run at all or trigger a day early, which can pause campaigns too soon or activate them late.

The lesson: If you schedule a launch for a specific day, verify it manually that day. Don’t rely on automation alone.

AI Max settings and control

The conversation also touched on Google’s AI Max campaigns. Chad pointed out that all AI Max settings default to “on,” with no bulk way to disable them. The only option is digging into individual campaigns and ad groups.

Kohler suggested checking Google Ads Editor for workarounds. In some cases, Editor makes it easier to control settings like landing page expansion across multiple ad groups at once.

The lesson: While AI Max and Performance Max have improved, Yen noted they still demand close monitoring and manual exclusions to avoid wasted spend.

Account-level settings that haunt you

Yen called out an easy-to-miss issue: account-level auto-apply settings that don’t play nicely with AI Max and Performance Max campaigns. These controls live in three different places in the interface, which makes them easy to overlook unless you’re checking deliberately.

The lesson: Build a standard checklist of account-level settings and run through it whenever you touch a new account or launch automated campaign types.

Final wisdom

Several themes kept surfacing throughout the discussion:

  • Trust issues with ad platforms are justified, so verify everything.
  • Fresh eyes catch mistakes that familiarity glosses over.
  • Clear client communication prevents misplaced blame when performance slips.
  • Manual checks still matter, even as automation expands.
  • Well-maintained exclusion lists prevent repeat problems.
  • Google Ads Editor and the interface serve different roles, so use each for what it does best.

The bigger message: Mistakes happen to everyone, no matter how experienced you are. The real difference between novices and experts isn’t avoiding errors — it’s catching them fast, learning from them, and building systems so they don’t happen again.

As Kohler put it, these platforms will eventually humble everyone. The key is staying alert, questioning automation, and never launching campaigns on Fridays.

Watch: PPC mistakes I’ve made

From Friday launches to sloppy imports, PPC veterans share hard-earned lessons on automation traps, Google Ads Editor quirks, and more.

Google pushes AI Max tool with in-app ads

10 February 2026 at 21:44
Google vs. AI systems visitors

Google is now promoting its own AI features inside Google Ads — a rare move that inserts marketing directly into advertisers’ workflow.

What’s happening. Users are seeing promotional messages for AI Max for Search campaigns when they open campaign settings panels.

  • The notifications appear during routine account audits and updates.
  • It essentially serves as an internal advertisement for Google’s own tooling.

Why we care. The in-platform placement signals Google is pushing to accelerate AI adoption among advertisers, moving from optional rollouts to active promotion. While Google often introduces AI-driven features, promoting them directly within existing workflows marks a more aggressive adoption strategy.

What to watch. Whether this promotional approach expands to other Google Ads features — and how advertisers respond to marketing within their management interface.

First seen. Julie Bacchini, president and founder of Neptune Moon, spotted the notification and shared it on LinkedIn. She wrote: “Nothing like Google Ads essentially running an ad for AI Max in the settings area of a campaign.”

Bing Webmaster Tools officially adds AI Performance report

10 February 2026 at 21:34

Microsoft today launched AI Performance in Bing Webmaster Tools in beta. AI Performance lets you see where, and how often, your content is cited in AI-generated answers across Microsoft Copilot, Bing’s AI summaries, and select partner integrations, the company said.

  • AI Performance in Bing Webmaster Tools shows which URLs are cited, which queries trigger those citations, and how citation activity changes over time.
  • Search Engine Land first reported on Jan. 27 that Microsoft was testing the AI Performance report.

What’s new. AI Performance is a new, dedicated dashboard inside Bing Webmaster Tools. It tracks citation visibility across supported AI surfaces. Instead of measuring clicks or rankings, it shows whether your content is used to ground AI-generated answers.

  • Microsoft framed the launch as an early step toward Generative Engine Optimization (GEO) tooling, designed to help publishers understand how their content shows up in AI-driven discovery.

What it looks like. Microsoft shared an image of AI Performance in Bing Webmaster Tools.

What the dashboard shows. The AI Performance dashboard introduces metrics focused specifically on AI citations:

  • Total citations: How many times a site is cited as a source in AI-generated answers during a selected period.
  • Average cited pages: The daily average number of unique URLs from a site referenced across AI experiences.
  • Grounding queries: Sample query phrases AI systems used to retrieve and cite publisher content.
  • Page-level citation activity: Citation counts by URL, highlighting which pages are referenced most often.
  • Visibility trends over time: A timeline view showing how citation activity rises or falls across AI experiences.

These metrics only reflect citation frequency. They don’t indicate ranking, prominence, or how a page contributed to a specific AI answer.

Why we care. It’s good to know where and how your content gets cited, but Bing Webmaster Tools still won’t reveal how those citations translate into clicks, traffic, or any real business outcome. Without click data, publishers still can’t tell if AI visibility delivers value.

How to use it. Microsoft said publishers can use the data to:

  • Confirm which pages are already cited in AI answers.
  • Identify topics that consistently appear across AI-generated responses.
  • Improve clarity, structure, and completeness on indexed pages that are cited less often.

The guidance mirrors familiar best practices: clear headings, evidence-backed claims, current information, and consistent entity representation across formats.

What’s next. Microsoft said it plans to “improve inclusion, attribution, and visibility across both search results and AI experiences,” and continue to “evolve these capabilities.”

Microsoft’s announcement. Introducing AI Performance in Bing Webmaster Tools Public Preview 

How to make automation work for lead gen PPC

10 February 2026 at 21:00

B2B advertising faces a distinct challenge: most automation tools weren’t built for lead generation.

Ecommerce campaigns benefit from hundreds of conversions that fuel machine learning. B2B marketers don’t have that luxury. They deal with lower conversion volume, longer sales cycles, and no clear cart value to guide optimization.

The good news? Automation can still work.

Melissa Mackey, Head of Paid Search at Compound Growth Marketing, says the right strategy and signals can turn automation into a powerful driver of B2B leads. Below is a summary of the key insights and recommendations she shared at SMX Next.

The fundamental challenge: Why automation struggles with lead gen

Automation systems are built for ecommerce success, which creates three core obstacles for B2B marketers:

  • Customer journey length: Automation performs best with short journeys. A user visits, buys, and checks out within minutes. B2B journeys can last 18 to 24 months. Offline conversions only look back 90 days, leaving a large gap between early engagement and closed revenue.
  • Conversion volume requirements: Google’s automation works best with about 30 leads per campaign per month. Google says it can function with less, but performance is often inconsistent below that level. Ecommerce campaigns easily hit hundreds of monthly conversions. B2B lead gen rarely does.
  • The cart value problem: In ecommerce, value is instant and obvious. A $10 purchase tells the system something very different than a $100 purchase. Lead generation has no cart. True value often isn’t clear until prospects move through multiple funnel stages — sometimes months later.

The solution: Sending the right signals

Despite these challenges, proven strategies can make automation work for B2B lead generation.

Offline conversions: Your number one priority

Connecting your CRM to Google Ads or Microsoft Ads is essential for making automation work in lead generation. This isn’t optional. It’s the foundation. If you haven’t done this yet, stop and fix it first.

In Google Ads’ Data Manager, you’ll find hundreds of CRM integration options. The most common B2B setups include the following (a minimal upload sketch follows the list):

  • HubSpot and Salesforce: Both offer native, seamless integrations with Google Ads. Setup is simple. Once connected, customer stages and CRM data flow directly into the platform.
  • Other CRMs: If you don’t use HubSpot or Salesforce, you can build a custom data table with only the fields you want to share. Use connectors like Snowflake to send that data to Google Ads while protecting user privacy and still supplying strong automation signals.
  • Third-party integrations: If your CRM doesn’t integrate directly, tools like Zapier can connect almost anything to Google Ads. There’s a cost, but the performance gains typically pay for it many times over.
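
If you’re assembling the import by hand rather than through a native integration, the mechanics are simple: match each closed deal back to its stored click ID and write the rows in Google’s import format. Here’s a minimal sketch assuming a hypothetical CRM export; the column names follow Google’s click-conversion import template, but verify them (and your conversion action names) against the current template before uploading:

```python
import csv
from datetime import datetime

# Hypothetical CRM export: each closed-won deal kept the gclid
# captured on the original form fill.
closed_won = [
    {"gclid": "EXAMPLE_GCLID_1", "closed_at": datetime(2026, 2, 10, 14, 30), "value": 12000.0},
]

with open("offline_conversions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Google Click ID", "Conversion Name", "Conversion Time",
                     "Conversion Value", "Conversion Currency"])
    for deal in closed_won:
        writer.writerow([
            deal["gclid"],
            "Closed Won",  # must match a conversion action name in Google Ads
            deal["closed_at"].strftime("%Y-%m-%d %H:%M:%S"),  # check accepted formats in the template notes
            f"{deal['value']:.2f}",
            "USD",
        ])
```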

Embrace micro conversions with strategic values

Micro conversions signal intent. They show a “hand raiser” – someone engaged on your site who isn’t an MQL yet but is clearly interested.

The key is assigning relative value to these actions, even when you don’t know their exact revenue impact. Use a simple hierarchy to train automation what matters most:

  • Video views (value: 1): Shows curiosity, but qualification is unclear.
  • Ungated asset downloads (value: 10): Indicates stronger engagement and added effort.
  • Form fills (value: 100): Reflects meaningful commitment and willingness to share personal information.
  • Marketing qualified leads (value: 1,000): The highest-value signal and top optimization priority.

This value structure tells automation that one MQL matters more than 999 video views. Without these distinctions, campaigns chase impressive conversion rates driven by low-value actions — while real leads slip through the cracks.
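
As a concrete illustration, the hierarchy can be expressed as a simple value map applied when conversion events are sent to the ad platform. The event names here are hypothetical:

```python
# Illustrative value hierarchy -- tune the events and values to your funnel.
CONVERSION_VALUES = {
    "video_view": 1,
    "ungated_download": 10,
    "form_fill": 100,
    "mql": 1_000,
}

def conversion_value(event_type: str) -> int:
    """Value to attach to a conversion event (0 if the event isn't tracked)."""
    return CONVERSION_VALUES.get(event_type, 0)

print(conversion_value("mql"))  # 1000 -- one MQL outweighs 999 video views
```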

Making Performance Max work for lead generation

You might dismiss Performance Max (PMax) for lead generation — and for good reason. Run it on a basic maximize conversions strategy, and it usually produces junk leads and wastes budget.

But PMax can deliver exceptional results when you combine conversion values and offline conversion data with a Target ROAS bid strategy.

One real client example shows what’s possible. They tracked three offline conversion actions — leads, opportunities, and customers — and valued customers at 50 times a lead. The results were dramatic:

  • Leads increased 150%
  • Opportunities increased 350%
  • Closed deals increased 200%

Closed deals became the campaign’s top-performing metric because they reflected real, paying customers. The key difference? Using conversion values with a Target ROAS strategy instead of basic maximize conversions.

Campaign-specific goals: An underutilized feature

Campaign-specific goals let you optimize campaigns for different conversion actions, giving you far more control and flexibility.

You can set conversion goals at the account level or make them campaign-specific. With campaign-specific goals, you can:

  • Run a mid-funnel campaign optimized only for lead form submissions using informational keywords.
  • Build audiences from those form fills to capture engaged prospects.
  • Launch a separate campaign optimized for qualified leads, targeting that warm audience with higher-value offers like demos or trials.

This approach avoids asking someone to “marry you on the first date.” It also keeps campaigns from competing against themselves by trying to optimize for conflicting goals.

Portfolio bidding: Reaching the data threshold faster

Portfolio bidding groups similar campaigns so you can reach the critical 30-conversions-per-month threshold faster.

For example, four separate campaigns might generate 12, 11, 0, and 15 conversions. On their own, none qualify. Grouped into a single portfolio, they total 38 conversions — giving automation far more data to optimize against.

You may still need separate campaigns for valid reasons — regional reporting, distinct budgets, or operational constraints. Portfolio bidding lets you keep that structure while still feeding the system enough volume to perform.

Bonus benefit: Portfolio bidding lets you set maximum CPCs. This prevents runaway bids when automation aggressively targets high-propensity users. This level of control is otherwise only available through tools like SA360.

First-party audiences: Powerful targeting signals

First-party audiences send strong signals about who you want to reach, which is critical for AI-powered campaigns.

If HubSpot or Salesforce is connected to Google Ads, you can import audiences and use them strategically:

  • Customer lists: Use them as exclusions to avoid paying for existing customers, or as lookalikes in Demand Gen campaigns.
  • Contact lists: Use them for observation to signal ideal audience traits, or for targeting to retarget engaged users.

Audiences make it much easier to trust broad match keywords and AI-driven campaign types like PMax or AI Max — approaches that often feel too loose for B2B without strong audience signals in place.

Leveraging AI for B2B lead generation

AI tools can significantly improve B2B advertising efficiency when you use them with intent. The key is remembering that most AI is trained on consumer behavior, not B2B buying patterns.

The essential B2B prompt addition

Always tell the AI you’re selling to other businesses. Start prompts with clear context, like: “You’re a SaaS company that sells to other businesses.” That single line shifts the AI’s lens away from consumer assumptions and toward B2B realities.

Client onboarding and profile creation

Use AI to build detailed client profiles by feeding it clear inputs, including:

  • What you sell and your core value.
  • Your unique selling propositions.
  • Target personas.
  • Ideal customer profiles.

Create a master template or a custom GPT for each client. This foundation sharpens every downstream AI task and dramatically improves accuracy and relevance.

Competitor research in minutes, not hours

Competitive analysis that once took 20–30 hours can now be done in 10–15 minutes. Ask AI to analyze your competitors and break down:

  • Current offers
  • Positioning and messaging
  • Value propositions
  • Customer sentiment
  • Social proof
  • Pricing strategies

AI delivers clean, well-structured tables you can screenshot for client decks or drop straight into Google Sheets for sorting and filtering. Use this insight to spot gaps, uncover opportunities, and identify clear strategic advantages.

Competitor keyword analysis

Use tools like Semrush or SpyFu to pull competitor keyword lists, then let AI do the heavy lifting. Create a spreadsheet with columns for each competitor’s keywords alongside your client’s keywords. Then ask the AI to:

  • Identify keywords competitors rank for that you don’t to uncover gaps to fill.
  • Identify keywords you own that competitors don’t to surface unique advantages.
  • Group keywords by theme to reveal patterns and inform campaign structure.

What once took hours of pivot tables, filtering, and manual cleanup now takes AI about five minutes.
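
If you want to sanity-check the AI’s output, the underlying gap analysis is plain set arithmetic. A minimal sketch with placeholder keyword lists:

```python
# Hypothetical keyword lists pulled from a tool like Semrush or SpyFu.
client_keywords = {"b2b crm software", "sales pipeline tool", "lead scoring"}
competitor_keywords = {"b2b crm software", "crm pricing", "crm comparison"}

gaps = competitor_keywords - client_keywords        # they rank, you don't
advantages = client_keywords - competitor_keywords  # you rank, they don't
shared = client_keywords & competitor_keywords      # head-to-head terms

print("Gaps to fill:", sorted(gaps))
print("Unique advantages:", sorted(advantages))
print("Shared battleground:", sorted(shared))
```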

Automating routine tasks

  • Negative keyword review: Create an AI artifact that learns your filtering rules and decision logic. Feed it search query reports, and it returns clear add-or-ignore recommendations. You spend time reviewing decisions instead of doing first-pass analysis, which makes SQR reviews faster and easier to run more often.
  • Ad copy generation: Tools like RSA generators can produce headlines and descriptions from sample keywords and destination URLs. Pair them with your custom client GPT for even stronger starting points. Always review AI-generated copy, but refining solid drafts is far faster than writing from scratch.

Experiments: testing what works

The Experiments feature is widely underused. Put it to work by testing:

  • Different bid strategies, including portfolio vs. standard
  • Match types
  • Landing pages
  • Campaign structures

Google Ads automatically reports performance, so there’s no manual math. It even includes insight summaries that tell you what to do next — apply the changes, end the experiment, or run a follow-up test.

Solutions: Pre-built scripts made easy

Solutions are prebuilt Google Ads scripts that automate common tasks, including:

  • Reporting and dashboards
  • Anomaly detection
  • Link checking
  • Flexible budgeting
  • Negative keyword list creation

Instead of hunting down scripts and pasting code, you answer a few setup questions and the solution runs automatically. Use caution with complex enterprise accounts, but for simpler structures, these tools can save a significant amount of time.

Key takeaways

Automation wasn’t built for lead generation, but with the right strategy, you can still make it work for B2B.

  • Send the right signals: Offline conversions with assigned values aren’t optional. First-party audiences add critical targeting context. Together, these signals make AI-driven campaigns work for B2B.
  • AI is your friend: Use AI to automate repetitive work — not to replace people. Take 50 search query reports off your team’s plate so they can focus on strategy instead of tedious analysis.
  • Leverage platform tools: Experiments, Solutions, campaign-specific goals, and portfolio bidding are powerful features many advertisers ignore. Use what’s already built into your ad platforms to get more out of every campaign.

Watch: It’s time to embrace automation for B2B lead gen 

Automation isn’t just for ecommerce. Learn how to drive more leads, cut costs, improve quality, and save time with AI-powered campaigns.

Why governance maturity is a competitive advantage for SEO

10 February 2026 at 19:00
How SEO governance shifts teams from reaction to prevention

Let me guess: you just spent three months building a perfectly optimized product taxonomy, complete with schema markup, internal linking, and killer metadata. 

Then, the product team decided to launch a site redesign without telling you. Now half your URLs are broken, the new templates strip out your structured data, and your boss is asking why organic traffic dropped 40%.

Sound familiar?

Here’s the thing: this isn’t an SEO failure, but a governance failure. It’s costing you nights and weekends trying to fix problems that should never have happened in the first place.

This article covers why weak governance keeps breaking SEO, how AI has raised the stakes, and how a visibility governance maturity model helps SEO teams move from firefighting to prevention.

Governance isn’t bureaucracy – it’s your insurance policy

I know what you’re thinking. “Great, another framework that means more meetings and approval forms.” But hear me out.

The Visibility Governance Maturity Model (VGMM) isn’t about creating red tape. It’s about establishing clear ownership, documented processes, and decision rights that prevent your work from being accidentally destroyed by teams who don’t understand SEO.

Think of it this way: VGMM is the difference between being the person who gets blamed when organic traffic tanks versus being the person who can point to documentation showing exactly where the process broke down – and who approved skipping the SEO review.

This maturity model:

  • Protects your work from being undone by releases you weren’t consulted on.
  • Documents your standards so you’re not explaining canonical tags for the 47th time.
  • Establishes clear ownership so you’re not expected to fix everything across six different teams.
  • Gets you a seat at the table when decisions affecting SEO are being made.
  • Makes your expertise visible to leadership in ways they understand.

The real problem: AI just made everything harder

Remember when SEO was mostly about your website and Google? Those were simpler times.

Now you’re trying to optimize for:

  • AI Overviews that rewrite your content.
  • ChatGPT citations that may or may not link back.
  • Perplexity summaries that pull from competitors.
  • Voice assistants that only cite one source.
  • Knowledge panels that conflict with your site.

And you’re still dealing with:

  • Content teams who write AI-generated fluff.
  • Developers who don’t understand crawl budget.
  • Product managers who launch features that break structured data.
  • Marketing directors who want “just one small change” that tanks rankings.

Without governance, you’re the only person who understands how all these pieces fit together. 

When something breaks, everyone expects you to fix it – usually yesterday. When traffic is up, it’s because marketing ran a great campaign. When it’s down, it’s your fault.

You become the hero the organization depends on, which sounds great until you realize you can never take a real vacation, and you’re working 60-hour weeks.

Dig deeper: Why most SEO failures are organizational, not technical

What VGMM actually measures – in terms you care about

VGMM doesn’t care about your keyword rankings or whether you have perfect schema markup. It evaluates whether your organization is set up to sustain SEO performance without burning you out. Below are the five maturity levels that translate to your daily reality:

Level 1: Unmanaged (your current nightmare)

  • Nobody knows who’s responsible for SEO decisions.
  • Changes happen without SEO review.
  • You discover problems after they’ve tanked traffic.
  • You’re constantly firefighting.
  • Documentation doesn’t exist or is ignored.

Level 2: Aware (slightly better)

  • Leadership admits SEO matters.
  • Some standards exist but aren’t enforced.
  • You have allies but no authority.
  • Improvements happen but get reversed next quarter.
  • You’re still the only one who really gets it.

Level 3: Defined (getting somewhere)

  • SEO ownership is documented.
  • Standards exist, and some teams follow them.
  • You’re consulted before major changes.
  • QA checkpoints include SEO review.
  • You’re working normal hours most weeks.

Level 4: Integrated (the dream)

  • SEO is built into release workflows.
  • Automated checks catch problems before they ship.
  • Cross-functional teams share accountability.
  • You can actually take a vacation without a disaster.
  • Your expertise is respected and resourced.

Level 5: Sustained (unicorn territory)

  • SEO survives leadership changes.
  • Governance adapts to new AI surfaces automatically.
  • Problems are caught before they impact traffic.
  • You’re doing strategic work, not firefighting.
  • The organization values prevention over reaction.

Most organizations sit at Level 1 or 2. That’s not your fault – it’s a structural problem that VGMM helps diagnose and fix.

Dig deeper: SEO’s future isn’t content. It’s governance

How VGMM works: The less boring explanation

VGMM coordinates multiple domain-specific maturity models. Think of it as a health checkup that looks at all your vital signs, not just one metric.

It evaluates maturity across domains like:

  • SEO governance: Your core competency.
  • Content governance: Are writers following standards?
  • Performance governance: Is the site actually fast?
  • Accessibility governance: Is the site inclusive?
  • Workflow governance: Do processes exist and work?

Each domain gets scored independently, then VGMM looks at how they work together. Because excellent SEO maturity doesn’t matter if the performance team deploys code that breaks the site every Tuesday or if the content team publishes AI-generated nonsense that tanks your E-E-A-T signals.

VGMM produces a 0–100% score based on:

  • Domain scores: How mature is each area?
  • Weighting: Which domains matter most for your business?
  • Dependencies: Are weaknesses in one area breaking strengths in another?
  • Coherence: Do decision rights and accountability actually align?

The final score isn’t about effort – it’s about whether governance actually works.
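
To make those mechanics concrete, here’s a minimal sketch of a weighted, dependency-aware score. The domains, weights, and penalty rule are illustrative assumptions, not VGMM’s published scoring:

```python
# Hypothetical domain maturity levels (1-5) and business weights.
domains = {"seo": 3, "content": 2, "performance": 4, "accessibility": 2, "workflow": 1}
weights = {"seo": 0.35, "content": 0.25, "performance": 0.20, "accessibility": 0.10, "workflow": 0.10}

# Weighted average of maturity, normalized to a 0-100% score.
base = sum(weights[d] * (level / 5) for d, level in domains.items()) * 100

# Dependency penalty: one badly broken domain drags the whole score down,
# because strengths elsewhere can't compensate for it.
penalty = 10 if min(domains.values()) <= 1 else 0

print(f"Governance maturity: {max(base - penalty, 0):.0f}%")  # 43% for these inputs
```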

What this means for your daily life

Before VGMM-style governance:

  • Product launches a redesign → You find out when traffic drops.
  • Content team uses AI → You discover thin content in Search Console.
  • Dev changes URL structure → You spend a week fixing redirects.
  • Marketing wants “quick changes” → You explain why it’s not quick (again).
  • Site goes down → Everyone asks why you didn’t catch it.

After governance maturity improves:

  • Product can’t launch without SEO sign-off.
  • Content AI usage has review checkpoints.
  • URL changes require documented SEO approval.
  • Marketing requests go through defined workflows.
  • Site monitoring includes automated SEO health checks.

You move from reactive firefighting to proactive prevention. Your weekends become yours again.

The supporting models: What they actually check

VGMM doesn’t score you on technical SEO execution. It checks whether the organization has processes in place to prevent SEO disasters.

SEO Governance Maturity Model (SEOGMM) asks:

  • Are there documented SEO standards?
  • Who can override them, and how?
  • Do templates enforce SEO requirements?
  • Are there QA checkpoints before releases?
  • Can SEO block launches that will cause problems?

Content Governance Maturity Model (CGMM) asks:

  • Are content quality standards documented?
  • Is AI-generated content reviewed?
  • Are writers trained on SEO basics?
  • Is there a process for updating outdated content?

Website Performance Maturity Model (WPMM) asks:

  • Are Core Web Vitals monitored?
  • Can releases be rolled back if they break performance?
  • Is there a performance budget?
  • Are third-party scripts governed?

You get the idea. Each domain has its own checklist, and VGMM shows leadership where gaps create risk.

Dig deeper: SEO execution: Understanding goals, strategy, and planning

How to pitch this to your boss

You don’t need to explain VGMM theory. You need to connect it to problems leadership already knows exist.

  • Frame it as risk reduction: “We’ve had three major traffic drops this year from changes that SEO didn’t review. VGMM helps us identify where our process breaks down so we can prevent this.”
  • Frame it as efficiency: “I’m spending 60% of my time firefighting problems that could have been prevented. VGMM establishes processes so I can focus on growth opportunities instead.”
  • Frame it as a competitive advantage: “Our competitors are getting cited in AI Overviews, and we’re not. VGMM evaluates whether we have the governance structure to compete in AI-mediated search.”
  • Frame it as scalability: “Right now, our SEO capability depends entirely on me. If I get hit by a bus tomorrow, nobody knows how to maintain what we’ve built. VGMM establishes documentation and processes that make our SEO sustainable.”
  • The ask: “I’d like to conduct a VGMM assessment to identify where our processes need strengthening.”

What success actually looks like

Organizations with higher VGMM maturity experience measurably better outcomes:

  • Fewer unexplained traffic drops because changes are reviewed.
  • More stable AI citations because content quality is governed.
  • Less rework after launches because SEO is built into workflows.
  • Clearer accountability because ownership is documented.
  • Better resource allocation because gaps are visible to leadership.

But the real win for you personally: 

  • You stop being the hero who saves the day and become the strategist who prevents disasters. 
  • Your expertise is recognized and properly resourced. 
  • You can take actual vacations. 
  • You work normal hours most of the time.

Your job becomes about building and improving, not constantly fixing.

Getting started: Practical next steps

Step 1: Self-assessment

Look at the five maturity levels. Where is your organization honestly sitting? If you’re at Level 1 or 2, you have evidence for why governance matters.

Step 2: Document current-state pain

Make a list of the last six months of SEO incidents:

  • Changes that weren’t reviewed.
  • Traffic drops from preventable problems.
  • Time spent fixing avoidable issues.
  • Requests that had to be explained multiple times.

This becomes your business case.

Step 3: Start with one domain

You don’t need to implement full VGMM immediately. Start with SEOGMM:

  • Document your standards.
  • Create a review checklist.
  • Establish who can approve exceptions.
  • Get stakeholder sign-off on the process.

Step 4: Show results 

Track prevented problems. When you catch an issue before it ships, document it. When a process prevents a regression, quantify the impact. Build your case for expanding governance.

Step 5: Expand systematically

Once SEOGMM is working, expand to related domains (content, performance, accessibility). Show how integrated governance catches problems that individual domain checks miss.

Why governance determines whether SEO survives

Governance isn’t about making your job harder. It’s about making your organization work better so your job becomes sustainable.

VGMM gives you a framework for diagnosing why SEO keeps getting undermined by other teams and a roadmap for fixing it. It translates your expertise into language that leadership understands. It protects your work from accidental destruction.

Most importantly, it moves you from being the person who’s always fixing emergencies to being the person who builds systems that prevent them.

You didn’t become an SEO professional to spend your career firefighting. VGMM helps you get back to doing the work that actually matters – the strategic, creative, growth-focused work that attracted you to SEO in the first place.

If you’re tired of watching your best work get undone by teams who don’t understand SEO, if you’re exhausted from being the only person who knows how everything works, if you want your expertise to be recognized and protected – start the VGMM conversation with your leadership.

The framework exists. What’s missing is someone in your organization saying, “We need to govern visibility like we govern everything else that matters.”

That someone is you.

Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts

Why PPC measurement feels broken (and why it isn’t)

10 February 2026 at 18:00
Why PPC measurement works differently in a privacy-first world

If you’ve been managing PPC accounts for any length of time, you don’t need a research report to tell you something has changed. 

You see it in the day-to-day work: 

  • GCLIDs missing from URLs.
  • Conversions arriving later than expected.
  • Reports that take longer to explain while still feeling less definitive than they used to.

When that happens, the reflex is to assume something broke – a tracking update, a platform change, or a misconfiguration buried somewhere in the stack.

But the reality is usually simpler. Many measurement setups still assume identifiers will reliably persist from click to conversion, and that assumption no longer holds consistently.

Measurement hasn’t stopped working. The conditions it depends on have been shifting for years, and what once felt like edge cases now show up often enough to feel like a systemic change.

Why this shift feels so disorienting

I’ve been close to this problem for most of my career. 

Before Google Ads had native conversion tracking, I built my own tracking pixels and URL parameters to optimize affiliate campaigns. 

Later, while working at Google, I was involved in the acquisition of Urchin as the industry moved toward standardized, comprehensive measurement.

That era set expectations that nearly everything could be tracked, joined, and attributed at the click level. Google made advertising feel measurable, controllable, and predictable. 

As the ecosystem now shifts toward more automation, less control, and less data, that contrast can be jarring.

It has been for me. Much of what I once relied on to interpret PPC data no longer applies in the same way. 

Making sense of today’s measurement environment requires rethinking those assumptions, not trying to restore the old ones. This is how I think about it now.

Dig deeper: How to evolve your PPC measurement strategy for a privacy-first future

The old world: click IDs and deterministic matching

For many years, Google Ads measurement followed a predictable pattern. 

  • A user clicked an ad. 
  • A click ID, or gclid, was appended to the URL. 
  • The site stored it in a cookie. 
  • When a conversion fired, that identifier was sent back and matched to the click.

This produced deterministic matches, supported offline conversion imports, and made attribution relatively easy to explain to stakeholders. 

As long as the identifier survived the journey, the system behaved in ways most advertisers could reason about. 

We could literally see what happened with each click and which ones led to individual conversions.
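
In code terms, that deterministic join was trivial, which is exactly why it felt so certain. A minimal sketch with made-up click data:

```python
from urllib.parse import parse_qs, urlparse

# Step 1: the click lands with a gclid appended to the URL.
landing_url = "https://example.com/demo?gclid=EXAMPLE_GCLID"
gclid = parse_qs(urlparse(landing_url).query).get("gclid", [None])[0]

# Step 2: the site stores it (a dict standing in for a first-party cookie).
cookie_jar = {"gclid": gclid}

# Step 3: when the conversion fires, the stored ID is sent back and
# matched one-to-one against the original click record.
clicks = {"EXAMPLE_GCLID": {"campaign": "brand-search", "cost": 4.20}}
conversion = {"gclid": cookie_jar["gclid"], "value": 500.0}

print(clicks.get(conversion["gclid"]))  # exact attribution -- if the ID survived
```

Every step assumes the identifier survives the journey, which is precisely the assumption that no longer holds.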

That reliability depended on a specific set of conditions.

  • Browsers needed to allow parameters through. 
  • Cookies had to persist long enough to cover the conversion window. 
  • Users had to accept tracking by default. 

Luckily, those conditions were common enough that the model worked really well.

Why that model breaks more often now

Browsers now impose tighter limits on how identifiers are stored and passed.

Apple’s Intelligent Tracking Prevention, enhanced tracking protection, private browsing modes, and consent requirements all reduce how long tracking data persists, or whether it’s stored at all.

URL parameters may be stripped before a page loads. Cookies set via JavaScript may expire quickly. Consent banners may block storage entirely.

Click IDs sometimes never reach the site, or they disappear before a conversion occurs.

This is expected behavior in modern browser environments, not an edge case, so we have to account for it.

Trying to restore deterministic click-level tracking usually means working against the constant push toward more privacy and the resulting browser behaviors.

This is another of the many evolutions of online advertising we simply have to get on board with, and I’ve found that designing systems to function with partial data beats fighting the tide.

The adjustment isn’t just technical

On my own team, GA4 is a frequent source of frustration. Not because it’s incapable, but because it’s built for a world where some data will always be missing. 

We hear the same from other advertisers: the data isn’t necessarily wrong, but it’s harder to reason about.

This is the bigger challenge. Moving from a world where nearly everything was observable to one where some things are inferred requires accepting that measurement now operates under different conditions. 

That mindset shift has been uneven across the industry because measurement lives at the periphery of where many advertisers spend most of their time: inside the ad platforms.

A lot of effort goes into optimizing ad platform settings when the better use of time is often fixing broken data so better decisions can be made.

Dig deeper: Advanced analytics techniques to measure PPC

What still works: Client-side and server-side approaches

So what approaches hold up under current constraints? The answer involves both client-side and server-side measurement.

Pixels still matter, but they have limits

Client-side pixels, like the Google tag, continue to collect useful data.

They fire immediately, capture on-site actions, and provide fast feedback to ad platforms, whose automated bidding systems rely on this data.

But these pixels are constrained by the browser. Scripts can be blocked, execution can fail, and consent settings can prevent storage. A portion of traffic will never be observable at the individual level.

When pixel tracking is the only measurement input, these gaps affect both reporting and optimization. Pixels haven’t stopped working. They just no longer cover every case.

Changing how pixels are delivered

Some responses to declining pixel data focus on the mechanics of how pixels are served rather than measurement logic.

Google Tag Gateway changes where tag requests are routed, sending them through a first-party, same-origin setup instead of directly to third-party domains.

This can reduce failures caused by blocked scripts and simplify deployment for teams using Google Cloud.

What it doesn’t do is define events, decide what data is collected, or correct poor tagging choices. It improves delivery reliability, not measurement logic.

This distinction matters when comparing Tag Gateway and server-side GTM.

  • Tag Gateway focuses on routing and ease of setup.
  • Server-side GTM enables event processing, enrichment, and governance. It requires more maintenance and technical oversight, but it provides more control.

The two address different problems.

Here’s the key point: better infrastructure affects how data moves, not what it means.

Event definitions, conversion logic, and consistency across systems still determine data quality.

A reliable pipeline delivers whatever it’s given: garbage in still comes back out as garbage.

Offline conversion imports: Moving measurement off the browser

Offline conversion imports take a different approach, moving measurement away from the browser entirely. Conversions are recorded in backend systems and sent to Google Ads after the fact.

Because this process is server to server, it’s less affected by browser privacy restrictions. It works for longer sales cycles, delayed purchases, and conversions that happen outside the site. 

This is why Google commonly recommends running offline imports alongside pixel-based tracking. The two cover different parts of the journey. One is immediate, the other persists.

Offline imports also align with current privacy constraints. They rely on data users provide directly, such as email addresses during a transaction or signup.

The data is processed server-side and aggregated, reducing reliance on browser identifiers and short-lived cookies.

Offline imports don’t replace pixels. They reduce dependence on them.

Dig deeper: Offline conversion tracking: 7 best practices and testing strategies

How Google fills the gaps

Even with pixels and offline imports working together, some conversions can’t be directly observed.

Matching when click IDs are missing

When click IDs are unavailable, Google Ads can still match conversions using other inputs.

This often begins with deterministic matching through hashed first-party identifiers such as email addresses, when those identifiers can be associated with signed-in Google users.

This is what Enhanced Conversions help achieve.
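
As a rough illustration of what “hashed first-party identifiers” means in practice, here’s the normalize-then-hash step such matching relies on. Normalization rules vary by platform, so treat this as a sketch:

```python
import hashlib

def hash_email(email: str) -> str:
    """Normalize, then SHA-256 hash, an email for privacy-safe matching.

    Trim-and-lowercase is the common baseline; platforms may apply
    extra rules, so check their specs before sending anything.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

print(hash_email("  Jane.Doe@Example.com "))
```

The raw address never leaves your systems; only the hash is sent, and a match occurs only where the platform holds the same hashed identifier for a signed-in user.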

When deterministic matching isn’t possible, the system relies on aggregated and validated signals rather than reconstructing individual click paths.

These can include session-level attributes and limited, privacy-safe IP information, combined with timing and contextual constraints.

This doesn’t recreate the old click-level model, but it allows conversions to be associated with prior ad interactions at an aggregate level.

One thing I’ve noticed: adding these inputs typically improves matching before it affects bidding.

Bidding systems account for conversion lag and validate new signals over time, which means imported or modeled conversions may appear in reporting before they’re fully weighted in optimization.

Matching, attribution, and bidding are related but separate processes. Improvements in one don’t immediately change the others.

Modeled conversions as a standard input

Modeled conversions are now a standard part of Google Ads and GA4 reporting.

They’re used when direct observation isn’t possible, such as when consent is denied or identifiers are unavailable.

These models are constrained by available data and validated through consistency checks and holdback experiments.

When confidence is low, modeling may be limited or not applied. Modeled data should be treated as an expected component of measurement rather than an exception.

Dig deeper: Google Ads pushes richer conversion imports

Boundaries still matter

Tools like Google Tag Gateway or Enhanced Conversions for Leads help recover measurement signal, but they don’t override user intent. 

Routing data through a first-party domain doesn’t imply consent. Ad blockers and restrictive browser settings are explicit signals. 

Overriding them may slightly increase the measured volume, but it doesn’t align with users’ expectations regarding how your organization uses their data.

Legal compliance and user intent aren’t the same thing. Measurement systems can respect both, but doing so requires deliberate choices.

Designing for partial data

Missing signals are normal. Measurement systems that assume full visibility will continue to break under current conditions.

Redundancy helps: pixels paired with hardened delivery, offline imports paired with enhanced identifiers, and multiple incomplete signals instead of a single complete one.

But here’s where things get interesting. Different systems will see different things, and this creates a tension many advertisers now face daily.

Some clients tell us their CRM data points clearly in one direction, while Google Ads automation, operating on less complete inputs, nudges campaigns another way.

In most cases, neither system is wrong. They’re answering different questions with different data, on different timelines. Operating in a world of partial observability means accounting for that tension rather than trying to eliminate it.

Dig deeper: Auditing and optimizing Google Ads in an age of limited data

Making peace with partial observability

The shift toward privacy-first measurement changes how much of the user journey can be directly observed. That changes our jobs.

The goal is no longer perfect reconstruction of every click, but building measurement systems that remain useful when signals are missing, delayed, or inferred.

Different systems will continue to operate with different views of reality, and alignment comes from understanding those differences rather than trying to eliminate them.

In this environment, durable measurement depends less on recovering lost identifiers and more on thoughtful data design, redundancy, and human judgment.

Measurement is becoming more strategic than ever.

How SEO leaders can explain agentic AI to ecommerce executives

10 February 2026 at 17:00

Agentic AI is increasingly appearing in leadership conversations, often accompanied by big claims and unclear expectations. For SEO leaders working with ecommerce brands, this creates a familiar challenge.

Executives hear about autonomous agents, automated purchasing, and AI-led decisions, and they want to know what this really means for growth, risk, and competitiveness.

What they don’t need is more hype. They need clear explanations, grounded thinking, and practical guidance. 

This is where SEO leaders can add real value, not by predicting the future, but by helping leadership understand what is changing, what isn’t, and how to respond without overreacting. Here’s how.

Start by explaining what ‘agentic’ actually means

A useful first step is to remove the mystery from the term itself. Agentic systems don’t replace customers; they act on behalf of customers. The intent, preferences, and constraints still come from a person.

What changes is who does the work.

Discovery, comparison, filtering, and sometimes execution are handled by software that can move faster and process more information than a human can.

When speaking to executive teams, a simple framing works best:

  • “We’re not losing customers, we’re adding a new decision-maker into the journey. That decision-maker is software acting as a proxy for the customer.” 

Once this is clear, the conversation becomes calmer and more practical, and the focus moves away from fear and toward preparation.

Keep expectations realistic and avoid the hype

Another important role for SEO leaders is to slow the conversation down. Agentic behavior will not arrive everywhere at the same time. Its impact will be uneven and gradual.

Some categories will see change earlier because their products are standardized and data is already well structured. Others will move more slowly because trust, complexity, or regulation makes automation harder.

This matters because leadership teams often fall into one of two traps:

  1. Panic, where plans are rewritten too quickly, budgets move too fast, and teams chase futures that may still be some distance away. 
  2. Dismissal, where nothing changes until performance clearly drops, and by then the response is rushed.

SEO leaders can offer a steadier view. Agentic AI accelerates trends that already exist. Personalized discovery, fewer visible clicks, and more pressure on data quality are not new problems. 

Agents simply make them more obvious. Seen this way, agentic AI becomes a reason to improve foundations, not a reason to chase novelty.

Dig deeper: Are we ready for the agentic web?

Change the conversation from rankings to eligibility

One of the most helpful shifts in executive conversations is moving away from rankings as the main outcome of SEO. In an agent-led journey, the key question isn’t “do we rank well?” but “are we eligible to be chosen at all?”

Eligibility depends on clarity, consistency, and trust. An agent needs to understand what you sell, who it is for, how much it costs, whether it is available, and how risky it is to choose you on behalf of a user. This is a strong way to connect SEO to commercial reality.

Questions worth raising include whether product information is consistent across systems, whether pricing and availability are reliable, and whether policies reduce uncertainty or create it. Framed this way, SEO becomes less about chasing traffic and more about making the business easy to select.
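One concrete way to answer those questions for machines is structured data. Here’s an illustrative sketch that emits schema.org Product markup as JSON-LD; the product details are hypothetical, and real markup should mirror your live catalog:

    # Sketch: machine-readable "eligibility" signals as schema.org JSON-LD.
    # All product details are hypothetical placeholders.
    import json

    product_jsonld = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": "Trail Runner X",  # what you sell
        "description": "Lightweight trail running shoe for wet terrain.",  # who it's for
        "offers": {
            "@type": "Offer",
            "price": "129.00",                             # how much it costs
            "priceCurrency": "USD",
            "availability": "https://schema.org/InStock",  # whether it's available
        },
    }
    print(json.dumps(product_jsonld, indent=2))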

Explain why SEO no longer sits only in marketing

Many executives still see SEO as a marketing channel, but agentic behavior challenges that view.

Selection by an agent depends on factors that sit well beyond marketing. Data quality, technical reliability, stock accuracy, delivery performance, and payment confidence all play a role.

SEO leaders should be clear about this. This isn’t about writing more content. It’s about making sure the business is understandable, reliable, and usable by machines.

Positioned correctly, SEO becomes a connecting function that helps leadership see where gaps in systems or data could prevent the brand from being selected. This often resonates because it links SEO to risk and operational health, not just growth.

Dig deeper: How to integrate SEO into your broader marketing strategy

Be clear that discovery will change first

For most ecommerce brands, the earliest impact of agentic systems will be at the top of the funnel. Discovery becomes more conversational and more personal.

Users describe situations, needs, and constraints instead of typing short search phrases, and the agent then turns that context into actions.

This reduces the value of simply owning category head terms. If an agent knows a user’s budget, preferences, delivery expectations, and past behavior, it doesn’t behave like a first-time visitor. It behaves like a well-informed repeat customer.

This creates a reporting challenge. Some SEO work will no longer look like direct demand creation, even though it still influences outcomes. Leadership teams need to be prepared for this shift.

Reframe consideration as filtering, not persuasion

The middle of the funnel also changes shape. Today, consideration often involves reading reviews, comparing options, and seeking reassurance.

In an agent-led journey, consideration becomes a filtering process, where the agent removes options it believes the user would reject and keeps those that fit.

This has clear implications. Generic content becomes less effective as a traffic driver because agents can generate summaries and comparisons instantly. Trust signals become structural, meaning claims need to be backed by consistent and verifiable information.

In many cases, a brand may be chosen without the user being consciously aware of it. That can be positive for conversion, but risky for long-term brand strength if recognition isn’t built elsewhere.

Dig deeper: How to align your SEO strategy with the stages of buyer intent

Set honest expectations about measurement

Executives care about measurement, and agentic AI makes this harder. As more discovery and consideration happen inside AI systems, fewer interactions leave clean attribution trails. Some impact will show up as direct traffic, and some will not be visible at all.

SEO leaders should address this early. This isn’t a failure of optimization. It reflects the limits of today’s analytics in a more mediated world.

The conversation should move toward directional signals and blended performance views, rather than precise channel attribution that no longer reflects how decisions are made.

Promote a proactive, low-risk response

The most important part of the leadership discussion is what to do next. The good news is that most sensible responses to agentic AI are low risk.

Improving product data quality, reducing inconsistencies across platforms, strengthening reliability signals, and fixing technical weaknesses all help today, regardless of how quickly agents mature.

Investing in brand demand outside search also matters. If agents handle more of the comparison work, brands that users already trust by name are more likely to be selected.

This reassures leaders that action doesn’t require dramatic change, only disciplined improvement.

Agentic AI changes the focus, not the fundamentals

For SEO leaders, agentic AI changes the focus of the role. The work shifts from optimizing pages to protecting eligibility, from chasing visibility to reducing ambiguity, and from reporting clicks to explaining influence.

This requires confidence, clear communication, and a willingness to challenge hype. Agentic AI makes SEO more strategic, not less important.

Agentic AI should not be treated as an immediate threat or a guaranteed advantage. It’s a shift in how decisions are made.

For ecommerce brands, the winners will be those that stay calm, communicate clearly, and adapt their SEO thinking from driving clicks to earning selection.

That is the conversation SEO leaders should be having now.

Dig deeper: The future of search visibility: What 6 SEO leaders predict for 2026

What repeated ChatGPT runs reveal about brand visibility

10 February 2026 at 16:00

We know AI responses are probabilistic – if you ask an AI the same question 10 times, you’ll get 10 different responses.

But how different are the responses?

That’s the question Rand Fishkin explored in some interesting research.

And it has big implications for how we should think about tracking AI visibility for brands.

In his research, he tested prompts asking for recommendations across all sorts of products and services, from chef’s knives to cancer care hospitals to Volvo dealerships in Los Angeles.

Basically, he found that:

  • AIs rarely recommend the same list of brands in the same order twice.
  • For a given topic (e.g., running shoes), AIs recommend a certain handful of brands far more frequently than others.

For my research, as always, I’m focusing exclusively on B2B use cases. Plus, I’m building on Fishkin’s work by addressing these additional questions:

  • Does prompt complexity affect the consistency of AI recommendations?
  • Does the competitiveness of the category affect the consistency of recommendations?

Methodology

To explore those questions, I first designed 12 prompts:

  • Competitive vs. niche: Six of the prompts are about highly competitive B2B software categories (e.g., accounting software), and the other six are about less crowded categories (e.g., user entity behavior analytics (UEBA) software). I identified the categories using Contender’s database, which tracks how many brands ChatGPT associates with 1,775 different software categories.
  • Simple vs. nuanced prompts: Within both the “competitive” and “niche” sets, half of the prompts are simple (“What’s the best accounting software?”) and the other half are nuanced, including a persona and use case (“For a Head of Finance focused on ensuring financial reporting accuracy and compliance, what’s the best accounting software?”).

I ran each of the 12 prompts 100 times through the logged-out, free version of ChatGPT at chatgpt.com (i.e., not the API), using a different IP address for each of the 1,200 interactions to simulate 1,200 different users starting new conversations.

Limitations: This research only covers responses from ChatGPT. But given the patterns in Fishkin’s results and the similar probabilistic nature of LLMs, you can probably generalize the directional (not absolute value) findings below to most/all AIs.
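If you want to replicate the tallying step, the sketch below shows the general shape. Two assumptions to flag: it calls the OpenAI API rather than the logged-out web UI used in this study (so results won’t match exactly), and it counts mentions against a predefined watchlist instead of extracting brand names from free text.

    # Sketch: run one prompt N times and tally brand mentions.
    # pip install openai; assumes OPENAI_API_KEY is set.
    from collections import Counter
    from openai import OpenAI

    client = OpenAI()
    PROMPT = "What's the best accounting software?"
    WATCHLIST = ["QuickBooks", "Xero", "Wave", "FreshBooks", "Zoho", "Sage"]
    N = 100

    mentions = Counter()
    for _ in range(N):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; any chat model works
            messages=[{"role": "user", "content": PROMPT}],
        ).choices[0].message.content
        for brand in WATCHLIST:
            if brand.lower() in reply.lower():
                mentions[brand] += 1

    for brand, count in mentions.most_common():
        print(f"{brand}: {count}/{N} responses")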


Findings

So what happens when 100 different people submit the same prompt to ChatGPT, asking for product recommendations?

How many ‘open slots’ in ChatGPT responses are available to brands?

On average, ChatGPT will mention 44 brands across 100 different responses. But one of the response sets included as many as 95 brands – it really depends on the category.

[Chart: How many brands does ChatGPT draw from, on average?]

Competitive vs. niche categories

On that note, for prompts covering competitive categories, ChatGPT mentions about twice as many brands per 100 responses as it does for prompts covering “niche” categories. (This lines up with the criteria I used to select the categories I studied.)

Simple vs. nuanced prompts

On average, ChatGPT mentioned slightly fewer brands in response to nuanced prompts. But this wasn’t a consistent pattern – for any given software category, sometimes nuanced questions ended up with more brands mentioned, and sometimes simple questions did.

This was a bit surprising, since I expected more specific requests (e.g., “For a SOC analyst needing to triage security alerts from endpoints efficiently, what’s the best EDR software?”) to consistently yield a narrower set of potential solutions from ChatGPT.

I suspect ChatGPT can’t tailor a list of solutions to a specific use case because it doesn’t have a deep understanding of most brands. (More on this data in an upcoming note.)

Return of the ’10 blue links’

In each individual response, ChatGPT will, on average, mention only 10 brands.

There’s quite a range, though – a minimum of 6 brands per response and a maximum of 15 when averaging across response sets.

[Chart: How many brands per response, on average?]

But a single response typically names about 10 brands regardless of category or prompt type.

The big difference is in how much the pool of brands rotates across responses – competitive categories draw from a much deeper bench, even though each individual response names a similar count.

Everything old (in SEO) truly is new again (in GEO/AEO). It reminds me of trying to get a placement in one of Google’s “10 blue links”.

Dig deeper: How to measure your AI search brand visibility and prove business impact

How consistent are ChatGPT’s brand recommendations?

When you ask ChatGPT for a B2B software recommendation 100 different times, there are only ~5 brands, on average, that it’ll mention 80%+ of the time.

To put that in context, that’s just 11% of the ~44 brands it mentions at all across those 100 responses.

[Chart: ChatGPT knows ~44 brands in your category]

So it’s quite competitive to become one of the brands ChatGPT consistently mentions whenever someone asks for recommendations in your category.

As you’d expect, these “dominant” brands tend to be big, established brands with strong recognition. For example, the dominant brands in the accounting software category are QuickBooks, Xero, Wave, FreshBooks, Zoho, and Sage.

If you’re not a big brand, you’re better off being in a niche category:

[Chart: It’s easier to get good AI visibility in niche categories]

When you operate in a niche category, not only are you literally competing with fewer companies, but there are also more “open slots” available to you to become a dominant brand in ChatGPT’s responses.

In niche categories, 21% of all the brands ChatGPT mentions are dominant brands, getting mentioned 80%+ of the time.

Compare this to just 7% of all brands being dominant in competitive categories, where the majority of brands (72%) are languishing in the long tail, getting mentioned less than 20% of the time.

The responses to nuanced prompts are harder to dominate

A nuanced prompt doesn’t dramatically change the long tail of little-seen brands (with <20% visibility), but it does change the “winner’s circle.” Adding persona context to a prompt makes it a bit more difficult to reach the dominant tier – you can see the steeper “cliff” a brand has to climb in the “nuanced prompts” graph above.

This makes intuitive sense: when someone asks “best accounting software for a Head of Finance,” ChatGPT has a more specific answer in mind and commits a bit more strongly to fewer top picks.

Still, it’s worth noting that the overall pool doesn’t shrink much – ChatGPT mentions ~42 brands in 100 responses to nuanced prompts, just a handful fewer than the ~46 mentioned in response to simple prompts. If nuanced prompts make the winner’s circle a bit more exclusive, why don’t they also narrow the total field?

Partly, it could be that the “nuanced” questions we fed it weren’t meaningfully narrower or more specific than what the simple questions already implied.

But, based on other data I’m seeing, I think this is partly about ChatGPT not knowing enough about most brands to be more selective. I’ll share more on this in an upcoming note.

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

What does this mean for B2B marketers?

If you’re not a dominant brand, pick your battles – niche down

It’s never been more important to differentiate. 21% of mentioned brands reach dominant status in niche categories vs. 7% in competitive ones.

Without time and a lot of money for brand marketing, an upstart tech company isn’t going to become a dominant brand in a broad, established category like accounting software.

But the field is less competitive when you lean into your unique, differentiating strengths. ChatGPT is more likely to treat you like a dominant brand if you work to make your product known as “the best accounting software for commercial real estate companies in North America.”

Most AI visibility tracking tools are grossly misleading

Given the inconsistency of ChatGPT’s recommendations, a single spot-check for any given prompt is nearly meaningless. Unfortunately, checking each prompt just once per time period is exactly what most AI visibility tracking tools do.

If you want anything approaching a statistically significant visibility score for any given prompt, you need to run the prompt at least dozens of times – even 100+ times, depending on how precise you need the data to be.

But that’s obviously not practical for most people, so my suggestion is this: for the key, bottom-of-funnel prompts you’re tracking, run each ~5 times whenever you pull data.

That’ll at least give you a reasonable sense of whether your brand tends to show up most of the time, some of the time, or never.

Your goal should be to have a confident sense of whether your brand is in the little-seen long tail, the visible middle, or the dominant top-tier for any given prompt. Whether you use my tiers of ‘under 20%’, ‘20–80%’, and ‘80%+’, or your own thresholds, this is the approach that follows the data and common sense.
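That bucketing is trivial to automate. A tiny sketch using my thresholds (swap in your own):

    # Sketch: classify a brand's mention rate into visibility tiers.
    def visibility_tier(mentioned: int, runs: int) -> str:
        rate = mentioned / runs
        if rate >= 0.8:
            return "dominant top-tier"
        if rate >= 0.2:
            return "visible middle"
        return "little-seen long tail"

    # A brand seen in 4 of 5 spot-check runs:
    print(visibility_tier(4, 5))  # dominant top-tier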


What’s next?

In future newsletters and LinkedIn posts, I’m going to build on these findings with new research:

  • How does ChatGPT talk about the brands it consistently recommends? Is it indicative of how much ChatGPT “knows” about brands?
  • Do different prompts with the same search intent tend to produce the same set of recommendations?
  • How consistent is “rank” in the responses? Do dominant brands tend to get mentioned first?

This article was originally published on Visible on beehiiv (as Most AI visibility tracking is misleading (here’s my new data)) and is republished with permission.

Reddit says 80 million people now use its search weekly

9 February 2026 at 22:56

Eighty million people use Reddit search every week, Reddit said on its Q4 2025 earnings call last week. The increase followed a major change: Reddit merged its core search with its AI-powered Reddit Answers and began positioning the platform as a place where users can start — and finish — their searches.

  • Executives framed the move as a response to changing behavior. People are increasingly researching products and making decisions by asking questions within communities rather than relying solely on traditional search engines.
  • Reddit is betting it can keep more of that intent on-platform, rather than acting mainly as a source of links for elsewhere.

Why we care. Reddit is becoming a place where people start — and complete — their searches without ever touching Google. For brands, that means visibility on Reddit now matters as much as ranking in traditional and AI search for many buying decisions.

Reddit’s search ambitions. CEO Steve Huffman said Reddit made “significant progress” in Q4 by unifying keyword search with Reddit Answers, its AI-driven Q&A experience. Users can now move between standard search results and AI answers in a single interface, with Answers also appearing directly inside search results.

  • “Reddit is already where people go to find things,” Huffman said, adding the company wants to become an “end-to-end search destination.”
  • More than 80 million people searched Reddit weekly in Q4, up from 60 million a year earlier, as users increasingly come to the platform to research topics — not just scroll feeds or click through from Google.

Reddit Answers is growing. Reddit Answers is driving much of that growth. Huffman said Answers queries jumped from about 1 million a year ago to 15 million in Q4, while overall search usage rose sharply in parallel.

  • He said Answers performs best for open-ended questions—what to buy, watch, or try—where people want multiple perspectives instead of a single factual answer. Those queries align naturally with Reddit’s community-driven discussions.
  • Reddit is also expanding Answers beyond text. Huffman said the company is piloting “dynamic agentic search results” that include media formats, signaling a more interactive and immersive search experience ahead.

Search is a ‘big one’ for Reddit. Huffman said the company is testing new app layouts that give search prominent placement, including versions with a large, always-visible search bar at the top of the home screen.

  • COO Jennifer Wong said search and Answers represent a major opportunity, even though monetization remains early on some surfaces.
  • Wong described Reddit search behavior as “incremental and additive” to existing engagement and often tied to high-intent moments, such as researching purchases or comparing options.

AI answers make Reddit more important. Huffman also linked Reddit’s search push to its partnerships with Google and OpenAI. He said Reddit content is now the most-cited source in AI-generated answers, highlighting the platform’s growing influence on how people find information.

  • Reddit sees AI summaries as an opportunity — to move users from AI answers into Reddit communities, where they can read discussions, ask follow-up questions, and participate.
  • If someone asks “what the best speaker is,” he said, Reddit wants users to discover not just a summary, but the community where real people are actively debating the topic.

Reddit earnings. Reddit Reports Fourth Quarter and Full Year 2025 Results; Announces $1 Billion Share Repurchase Program

OpenAI starts testing ChatGPT ads

9 February 2026 at 22:09

OpenAI confirmed today that it’s rolling out its first live test of ads in ChatGPT, showing sponsored messages directly inside the app for select users.

The details. The ads will appear in a clearly labeled section beneath the chat interface, not inside responses, keeping them visually separate from ChatGPT’s answers.

  • OpenAI will show ads to logged-in users on the free tier and its lower-cost Go subscription.
  • Advertisers won’t see user conversations or influence ChatGPT’s responses, even though ads will be tailored based on what OpenAI believes will be helpful to each user, the company said.

How ads are selected. During the test, OpenAI matches ads to conversation topics, past chats, and prior ad interactions.

  • For example: A user researching recipes might see ads for meal kits or grocery delivery. If multiple advertisers qualify, OpenAI shows the most relevant option first.

User controls. Users get granular controls over the experience. They can dismiss ads, view and delete separate ad history and interest data, and toggle personalization on or off.

  • Turning personalization off limits ads to the current chat.
  • Free users can also opt out of ads in exchange for fewer daily messages or upgrade to a paid plan.

Why we care. ChatGPT is one of the world’s largest consumer AI platforms. Even a limited ad rollout could mark a major shift in how conversational AI gets monetized — and how brands reach users.

Bottom line. OpenAI is officially moving into ads inside ChatGPT, testing how sponsored content can coexist with conversational AI at massive scale.

OpenAI’s announcement. Testing ads in ChatGPT (OpenAI)

Google AI Mode doesn’t favor above-the-fold content: Study

9 February 2026 at 21:43

Google’s AI Mode isn’t more likely to cite content that appears “above the fold,” according to a study from SALT.agency, a technical SEO and content agency.

  • After analyzing more than 2,000 URLs cited in AI Mode responses, researchers found no correlation between how high text appears on a page and whether Google’s AI selects it for citation.

Pixel depth doesn’t matter. AI Mode cited text from across entire pages, including content buried thousands of pixels down.

  • Citation depth showed no meaningful relationship to visibility.
  • Average depth varied by vertical, from about 2,400 pixels in travel to 4,600 pixels in SaaS, with many citations far below the traditional “above the fold” area.

Page layout affects depth, not visibility. Templates and design choices influenced how far down the cited text appeared, but not whether it was cited.

  • Pages with large hero images or narrative layouts pushed cited text deeper, while simpler blog or FAQ-style pages surfaced citations earlier.
  • No layout type showed a visibility advantage in AI Mode.

Descriptive subheadings matter. One consistent pattern emerged: AI Mode frequently highlighted a subheading and the sentence that followed it.

  • This suggests Google uses heading structures to navigate content, then samples opening lines to assess relevance – a behavior consistent with long-standing search practices, according to SALT.
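As a toy illustration of that pattern – not Google’s actual mechanism – here’s how you might pull heading-plus-opening-line fragments from a page:

    # Sketch: pair each subheading with the first sentence that follows it.
    # pip install beautifulsoup4; the HTML below stands in for a real page.
    from bs4 import BeautifulSoup

    html = """
    <h2>How offline imports work</h2>
    <p>Conversions are uploaded after the sale closes. Details follow.</p>
    <h2>Why pixels still matter</h2>
    <p>Pixels capture on-site behavior in real time. More below.</p>
    """

    soup = BeautifulSoup(html, "html.parser")
    for heading in soup.find_all(["h2", "h3"]):
        paragraph = heading.find_next("p")
        if paragraph:
            first_sentence = paragraph.get_text(strip=True).split(". ")[0]
            print(f"{heading.get_text(strip=True)} -> {first_sentence}")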

What Google is likely doing. SALT believes AI Mode relies on the same fragment indexing technology Google has used for years. Pages are broken into sections, and the most relevant fragment is retrieved regardless of where it appears on the page.

What they’re saying. While the study examined only one structural factor and one AI model, the takeaway is clear: there’s no magic formula for AI Mode visibility. Dan Taylor, partner and head of innovation (organic and AI) at SALT.agency, said:

  • “Our study confirms that there is no magic template or formula for increased visibility in AI Mode responses – and that AI Mode is not more likely to cite text from ‘above the fold.’ Instead, the best approach mirrors what’s worked in search for years: create well-structured, authoritative content that genuinely addresses the needs of your ideal customers.
  • “…the data clearly debunks the idea that where the information sits within a page has an impact on whether it will be cited.”

Why we care. The findings challenge the idea that AI-specific templates or rigid page structures drive better AI Mode visibility. Chasing “AI-optimized” layouts may distract from work that actually matters.

About the research. SALT analyzed 2,318 unique URLs cited in AI Mode responses for high-value queries across travel, ecommerce, and SaaS. Using a Chrome bookmarklet and a 1920×1080 viewport, researchers recorded the vertical pixel position of the first highlighted character in each AI-cited fragment. They also cataloged layouts and elements, such as hero sections, FAQs, accordions, and tables of contents.
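For anyone who wants to approximate that measurement, here’s a rough Playwright sketch. The URL and fragment are hypothetical, and SALT’s actual tooling was a Chrome bookmarklet, not this script:

    # Sketch: measure how far down a page a cited text fragment appears.
    # pip install playwright && playwright install chromium
    from playwright.sync_api import sync_playwright

    URL = "https://example.com/cited-page"   # hypothetical cited URL
    FRAGMENT = "a sentence AI Mode cited"    # hypothetical cited fragment

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page(viewport={"width": 1920, "height": 1080})
        page.goto(URL)
        element = page.get_by_text(FRAGMENT).first
        # Absolute depth = viewport-relative top + current scroll offset.
        depth = element.evaluate("el => el.getBoundingClientRect().top + window.scrollY")
        print(f"Cited fragment first appears ~{depth:.0f}px down the page")
        browser.close()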

The study. Research: Does Structuring Your Content Improve the Chances of AI Mode Surfacing?
