
Why surface-level SEO tactics won’t build lasting AI search visibility

13 March 2026 at 18:00
Google search monolith crumbling

A recent Harvard Business Review piece echoes the shift we’re seeing in the SEO industry: at a macro level, LLMs and Google’s AI-powered SERP features, such as AI Overviews, aren’t just creating a zero-click environment, but also changing user journeys and behavior.

They’re collapsing what used to be multi-touch customer journeys into a single synthesized answer.

To put it in a more vivid metaphor: the monolith of “Search” is crumbling.

When that happens, brands lose many of the touchpoints they once owned, and your marketing strategy must change accordingly. HBR captures this moment well, arguing that marketing now has a new audience and that algorithms increasingly shape first impressions.

That said, while the article points in the right direction on the broader trend, its tactical advice is generic and falls back on shallow tactics.

Much of the guidance returns to familiar marketing playbook ideas that sound strategic and innovative but lack real operational depth. That gap matters for the longevity and sustainability of visibility.

The narrative may be easy for you to understand and repeat at the executive level, but it glosses over the deeper structural changes you must actually make to adapt to the new search ecosystem.

The problem with flock tactics

The HBR article centers on schema, authorship signals, and branded concepts. These recommendations risk becoming what I call “flock tactics.”

These ideas spread quickly because they’re easy to explain, but they offer little lasting competitive advantage once everyone adopts them.

Schema 

Schema has been one of the most debated topics in LLM and AI optimization. Microsoft Bing confirmed it uses schema for its LLMs, but the relationship between Google’s models and third-party LLMs isn’t as straightforward.

While it isn’t necessarily wrong to recommend schema as part of your overall search optimization activities (SEO and AI), positioning it as a table-stakes tactic ignores diminishing returns once competitors implement similar markup and it becomes standard.
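To make the "table stakes" point concrete: schema markup is typically embedded as JSON-LD, and generating it is a few lines of code, which is precisely why it confers little durable advantage once everyone does it. A minimal sketch in Python; the organization name, URL, and Wikidata link are placeholder values, not recommendations:

```python
import json

def organization_jsonld(name: str, url: str, same_as: list[str]) -> str:
    """Build a minimal schema.org Organization block as JSON-LD.

    `sameAs` links the entity to external profiles (e.g., a Wikidata
    item), which helps disambiguate the brand as an entity.
    """
    data = {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        "sameAs": same_as,
    }
    return json.dumps(data, indent=2)

# Placeholder values for illustration only:
markup = organization_jsonld(
    "Example Corp",
    "https://www.example.com",
    ["https://www.wikidata.org/wiki/Q0"],  # hypothetical Wikidata entry
)
```

The ease of producing this block is the argument in miniature: markup like this is hygiene, not a moat.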

Another gap is the role of external knowledge systems, such as Wikidata or authoritative publishers. Much of the information LLMs rely on comes from those sources rather than a single company’s website.

This is harder to understand, explain, and demonstrate as a single line item on an activity tracker, but these are nuances you now have to deal with, whether you like it or not.

What’s also missing is any exploration of, or even a nod to, how models ingest and prioritize structured data compared with the many unstructured signals they rely on.


E-E-A-T — shallow authorship signals

Attaching the names, credentials, and biographies of real experts follows familiar E-E-A-T logic and represents reasonable hygiene.

The problem is that the treatment remains superficial. It risks pushing you to focus on cosmetic signals such as bios, headshots, and credential lists without strengthening the underlying expertise pipeline.

There is a meaningful difference between placing an author bio on a page and cultivating a genuine expert entity whose work appears in conferences, third-party publications, standards committees, or academic collaborations.

Only the latter produces signals that models are more likely to recognize and trust.

Vanity concepts

The article also suggests creating branded frameworks or concepts — for example, something like “The Acme Index” — to help models associate ideas with your company. In theory this sounds appealing, but in practice it’s extremely difficult to execute.

Unless those ideas spread into the trusted datasets LLMs tend to prioritize, they rarely gain traction.

You need those concepts and frameworks adopted and discussed by entities other than yourself, including academic journals, technical standards, widely used software ecosystems, and other prominent entities in your category.

What often results instead is a proliferation of branded labels that remain largely invisible to the models they were meant to influence.

The structural blind spots

Beyond these tactical issues, the analysis overlooks deeper structural challenges. It treats AI primarily as an external platform shift.

The implication is that you must simply adapt to it rather than actively shaping your own environment.

Internalizing AI infrastructure

HBR never seriously considers the possibility of building AI into your own infrastructure. You can deploy assistants, RAG systems, and domain-specific agents within your own products and customer experiences.

These systems operate in logged-in, transactional contexts where first-party data and controlled interfaces still matter enormously.

In those environments, traditional concerns such as site architecture, structured data, and product design remain deeply relevant, though they operate differently from public search optimization.

It’s not just SEO

The discussion also frames SEO primarily as a page-ranking problem tied to discovery.

That perspective misses the broader shift toward entity-level knowledge management (things, not strings).

Visibility within LLMs increasingly depends on how well you structure entities, taxonomies, and knowledge graphs, and on how those systems connect with external data sources.
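The “things, not strings” shift can be made tangible with even a toy knowledge graph: entities get stable identifiers, types, and explicit relationships, instead of existing only as keyword strings on pages. A minimal sketch, with invented entity IDs and relations:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A node in a tiny knowledge graph: a thing with a stable ID."""
    entity_id: str    # e.g., an internal ID or a Wikidata QID
    name: str
    entity_type: str  # "Organization", "Product", "Person", ...
    relations: dict[str, list[str]] = field(default_factory=dict)

class KnowledgeGraph:
    def __init__(self) -> None:
        self.entities: dict[str, Entity] = {}

    def add(self, entity: Entity) -> None:
        self.entities[entity.entity_id] = entity

    def relate(self, subject_id: str, predicate: str, object_id: str) -> None:
        """Add a typed edge, e.g. brand --makes--> product."""
        self.entities[subject_id].relations.setdefault(predicate, []).append(object_id)

    def neighbors(self, entity_id: str, predicate: str) -> list[str]:
        """Resolve related entity names along one predicate."""
        return [self.entities[o].name
                for o in self.entities[entity_id].relations.get(predicate, [])]

# Hypothetical brand and product entities:
kg = KnowledgeGraph()
kg.add(Entity("Q_BRAND", "Acme", "Organization"))
kg.add(Entity("Q_PROD", "Acme Analytics", "Product"))
kg.relate("Q_BRAND", "makes", "Q_PROD")
```

The point of the exercise: once your brand and products are modeled this way internally, they can be kept consistent with external systems like Wikidata, which is the kind of structural work the article skips past.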

Most LLMs don’t process data at the petabyte scale Google uses to understand entity relationships. When something ranks well on Google, third-party LLMs often appear to “trust” Google’s guidance on which brands to show, for what, and when.

HBR’s phrase “engineering recall” points directly to this deeper data engineering work, yet the article never expands on the implications.

LLM model heterogeneity

Another major omission is the diversity of AI systems themselves.

Different AI assistants and models rely on different training datasets, refresh cycles, retrieval mechanisms, and safety layers.

That heterogeneity means you can’t assume a single optimization strategy will work across all AI surfaces.

The article also doesn’t explore the risk of broad-stroke approaches. If you try to increase visibility within AI models without accounting for safety filters, attribution errors, or hallucinations, you may gain visibility in ways that are inaccurate or reputationally damaging.


Surface-level tactics won’t build AI visibility

HBR’s article works well as a high-level explanation of how AI is changing marketing. It helps you understand that traditional SEO alone is no longer enough and that you must consider how AI systems see and describe your brand.

As a practical guide, however, the advice is thin. Most recommendations focus on surface-level tactics that many companies will quickly copy, reinforcing the echo chamber of flock tactics that are easy to sell and quantify, but risk narrowing your focus to short-term wins at the expense of longer-term strategy.

The real challenge is deeper. You need clear entity definitions, structured knowledge systems, reliable data in trusted sources AI models use, testing across how different models represent you, and AI-powered experiences within your own products.

“Winning” in the AI era will depend less on cosmetic SEO improvements and more on the harder structural work behind the scenes.


The infinite tail: When search demand moves beyond keywords

10 March 2026 at 18:00

When people speak naturally, their language flows. It’s often messy, incomplete, and not especially coherent. The Google search bar, however, required something different. Users had to compress their needs into short phrases or slightly longer queries — what’s traditionally classified as short-tail or long-tail.

To make that work, users stacked queries across a journey, moving through a funnel from A to B and refining as they went. In the process, users often stripped out personalized nuance to match what they believed the search engine could understand. In response, SEO professionals built systems around that constraint, grouping queries by search volume, categorizing them by a limited set of intents, and measuring competitiveness.

That dynamic is changing. SEOs need to understand the behavioral change that’s emerging. Google is promoting Gemini, and phone manufacturers like Samsung are marketing AI-enabled features as product USPs. Alongside this product marketing, there’s also a level of education happening. Users are being encouraged to be more expressive with their queries, personalize their searches, and describe what they’re looking for in greater depth.

Long-tail query on Google search bar

Moving from keyword research to prompt research

This is where we need to move away from the notion of keyword research to prompt research. Keyword research traditionally assumes that demand can be quantified, that variations can be listed and grouped, and that optimization happens at a phrase level or a cluster level. In the new hybrid AI and organic search world, demand is much more of a generative concept. Prompts can be written in countless ways while preserving the same underlying need. 

This doesn’t make keyword research obsolete, but it does change its focus. Instead of extracting keywords from tools as we’ve done, we also need to start understanding and modeling journeys. Instead of grouping by volume alone, we need to group by decision stage and the type and level of uncertainty the user has.

The output of this process isn’t simply a keyword map, but a task map that accurately reflects the real pressures and constraints experienced by the audience. This is an evolution from short-tail and long-tail keyword research to an infinite tail of prompt research.
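One way to picture the shift from a keyword map to a task map: prompts are grouped not by volume, but by the underlying decision stage they express. A toy sketch, where the cue lists, stage labels, and example prompts are all invented, and the keyword matching is a deliberately naive stand-in for an embedding or LLM classifier:

```python
# Map open-ended prompts to decision stages using simple keyword cues.
# Order matters: earlier stages are checked first.
STAGE_CUES = {
    "research": ["what is", "how does", "explain"],
    "compare": ["vs", "versus", "compare", "alternative"],
    "decide": ["best", "should i", "worth it", "pricing"],
}

def classify_prompt(prompt: str) -> str:
    """Assign a prompt to a decision stage via naive cue matching."""
    p = prompt.lower()
    for stage, cues in STAGE_CUES.items():
        if any(cue in p for cue in cues):
            return stage
    return "research"  # default to the earliest stage

def build_task_map(prompts: list[str]) -> dict[str, list[str]]:
    """Group prompts by decision stage: a task map, not a keyword map."""
    task_map: dict[str, list[str]] = {}
    for prompt in prompts:
        task_map.setdefault(classify_prompt(prompt), []).append(prompt)
    return task_map
```

The output shape is the point: a finite set of tasks and stages, each fed by an effectively infinite variety of phrasings.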

Dig deeper: Why AI optimization is just long-tail SEO done right

The infinite tail as a behavioral shift

You can describe the infinite tail as an expansion of the long tail. But that underestimates what’s actually changing. It’s not just about more niche phrases or longer query strings. It’s about the level of personalization that’s been layered into each request.

As users add context, constraints, and preferences, prompts become unique combinations of a multitude of factors. The number of possible combinations effectively becomes infinite, even if the underlying tasks remain finite. AI systems respond by evaluating the given prompts and probabilistically predicting the next tokens rather than using exact-match strings.

It’s less about how you rank for a specific keyword or whether you’re visible in AI for a specific phrase. It becomes whether your content has the highest probability of satisfying the situation being described. That’s a different optimization problem altogether. You’re not competing on phrasing. You’re competing on task completion.

This part of the journey is where “fuzzy searches” happen, meaning the path isn’t a straight line. Success isn’t just about finishing a task. It’s about making sure the user actually found what they were looking for. Since every user moves differently, the process is flexible rather than a set of rigid steps.

Dig deeper: From search to answer engines: How to optimize for the next era of discovery

Fan-out and grounding queries

One of the most important mechanics in AI search is query fan-out. When a complex prompt is submitted, the system doesn’t treat it as a single string. Instead, it decomposes a request into a network of subquestions, classifications, and checks that together form a broader evaluation framework.

From an SEO perspective, this means your content moves beyond evaluation against a single phrase or specific document matches. Instead, it’s assessed across a network of related questions, with a collective determination of whether it can satisfy a broader task. 

In a fan-out world, you win by supporting the entire decision cluster that surrounds that term. If your content addresses only one narrow dimension of the task, it becomes fragile. If it supports multiple layers of the decision, it becomes resilient. Fan-out rewards structural coverage and contextual relevance rather than repetition of specific phrases.
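The fan-out idea can be sketched mechanically: a prompt is decomposed into sub-questions, and a document is scored on what fraction of that decision cluster it addresses. The term-overlap matcher below is a simplistic stand-in for model-driven decomposition and semantic retrieval, included only to show the shape of the evaluation:

```python
import re

def _terms(text: str) -> set[str]:
    """Lowercase word tokens longer than three characters."""
    return {t for t in re.findall(r"[a-z0-9]+", text.lower()) if len(t) > 3}

def covers(document: str, question: str, threshold: float = 0.5) -> bool:
    """Naive term-overlap check standing in for semantic matching."""
    q = _terms(question)
    return bool(q) and len(q & _terms(document)) / len(q) >= threshold

def coverage_score(document: str, sub_questions: list[str]) -> float:
    """Fraction of the fan-out sub-questions the document addresses."""
    return sum(covers(document, q) for q in sub_questions) / len(sub_questions)
```

A page that answers only one sub-question scores low against the cluster, which is exactly the fragility described above; a page that supports multiple layers of the decision scores high.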

Grounding queries give the LLM a level of confidence in the answers produced through its fan-out queries: AI systems generate answers and then attempt to validate them.

They’re used to check whether a proposed answer is supported elsewhere, whether claims are consistent across sources, and whether the entity behind the information is reputable. If an AI system includes your brand in a summarized response, it needs a level of confidence to defend that choice if challenged by alternative information.

This changes the meaning of authority. In traditional SEO, ranking could be achieved through technical tweaks, links, and other forms of manipulation. In AI search, selection also depends on how easily your content can be corroborated against a broader consensus of sources. This can involve factors tied to entity clarity, including structure, consistent data and messaging, and external validation. These signals reduce uncertainty for the system. You’re not just trying to appear. You’re trying to be selected and defended.
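Grounding can likewise be sketched as a corroboration count: before a claim about a brand is surfaced, check how many independent sources support it. The source names and claims below are invented, and real systems would use retrieval plus entailment models rather than exact matching; the sketch only illustrates the decision shape:

```python
def is_grounded(claim: str,
                sources: dict[str, list[str]],
                min_sources: int = 2) -> bool:
    """A claim is 'grounded' if enough distinct sources support it.

    `sources` maps a source name to the claims it contains.
    """
    supporting = [name for name, claims in sources.items() if claim in claims]
    return len(supporting) >= min_sources
```

This is why external validation matters so much: a claim that exists only on your own site fails the corroboration test, no matter how well the page is optimized.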

Dig deeper: The authority era: How AI is reshaping what ranks in search

Designing for hybrid search

Organic search isn’t disappearing. Ranking still influences discovery, technical SEO still shapes crawlability, and architecture still determines how well a site and its content are understood. 

But now, AI layers sit on top, synthesizing information and influencing which brands are surfaced within conversational responses. In this hybrid environment, organic visibility feeds AI selection. The two aren’t mutually exclusive, but they aren’t fully codependent either.

AI selection can reinforce brand perception, and fan-out rewards depth of coverage. Grounding then rewards trust and consistency. This is where the infinite tail rewards genuine audience understanding and the creation of websites and content systems that support it.

This is a shift from keyword research to prompt research, and not just a cosmetic renaming of the process. Success will depend on understanding why people search, the decisions they’re making, the uncertainties they face, and the evidence they need before committing. Search increasingly revolves around satisfying situations rather than matching strings. Designing for the infinite tail means designing for people and the tasks they’re trying to complete.

Dig deeper: How to use AI response patterns to build better content
